can't start domU after resizing zfs volume

Roger Pau Monné roger.pau at citrix.com
Wed Sep 23 14:54:01 UTC 2015


On 23/09/15 at 16:36, Michael Reifenberger wrote:
> 
> Quoting Roger Pau Monné <roger.pau at citrix.com>:
> 
>> On 18/09/15 at 19:41, Michael Reifenberger wrote:
>>> Hi,
>>> today I hit my first real Xen dom0 error:
>>>
>>> I had a 20G ZFS volume with Windows installed (Windows has the PV
>>> drivers installed).
>>> The disk section of the cfg looks like:
>>> ...
>>> disk =  [
>>>         '/dev/zvol/zdata/VM/win81/root,raw,hda,rw',
>>>         '/VM/ISO/W81.PRO.X64.MULTi8.ESD.Apr2015.iso,raw,hdc:cdrom,r'
>>>         ]
>>> boot="d"
>>> ...
>>>
>>>
>>> This worked until I shut down the domU and extended the volume from
>>> 20G to 40G:
>>>
>>> zfs set volsize=40G zdata/VM/win81/root
>>>
>>> Now trying to start the guest I get:
>>>
>>> (vm)(root) # xl create win81.cfg
>>> Parsing config from win81.cfg
>>> libxl: error: libxl_device.c:950:device_backend_callback: unable to add
>>> device with path /local/domain/0/backend/vbd/6/768
>>> libxl: error: libxl_device.c:950:device_backend_callback: unable to add
>>> device with path /local/domain/0/backend/vbd/6/5632
>>> libxl: error: libxl_create.c:1153:domcreate_launch_dm: unable to add
>>> disk devices
>>> libxl: error: libxl_dm.c:1595:kill_device_model: unable to find device
>>> model pid in /local/domain/6/image/device-model-pid
>>> libxl: error: libxl.c:1608:libxl__destroy_domid:
>>> libxl__destroy_device_model failed for 6
>>> libxl: error: libxl_device.c:950:device_backend_callback: unable to
>>> remove device with path /local/domain/0/backend/vbd/6/768
>>> libxl: error: libxl_device.c:950:device_backend_callback: unable to
>>> remove device with path /local/domain/0/backend/vbd/6/5632
>>> libxl: error: libxl.c:1645:devices_destroy_cb: libxl__devices_destroy
>>> failed for 6
>>> libxl: info: libxl.c:1691:devices_destroy_cb: forked pid 2306 for
>>> destroy of domain 6
>>>
>>> Since I saw in syslog that GEOM made some automatic modifications to
>>> the disk, I ran:
>>>
>>> `gpart commit zvol/zdata/VM/win81/root` on the dom0,
>>> and `gpart resize -i 2 zvol/zdata/VM/win81/root`
>>> but this didn't make the above failure go away.
>>
>> The Handbook's advice for bhyve when using ZVOLs is to create them with:
>>
>> # zfs create -V16G -o volmode=dev zroot/linuxdisk0
>>
>> Note the volmode=dev, which prevents GEOM from sniffing the partition
>> table.
>>
> 
> That's at least a workaround!
> Sometimes it would be nice to be able to access/pre-fill domU
> slices/partitions on dom0 as well...

I agree. I've never used volmode=dev and never had problems on a Xen
dom0, but I've also never tried to resize a volume. I will try to
reproduce this on my setup in order to figure out what's going on, but
it won't be today :(.
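
In the meantime, if it happens again before you reboot, it might be
worth dumping the backend entries to see whether a stale vbd is left
behind (just a sketch; the domid 6 is taken from your log):

# xenstore-ls /local/domain/0/backend/vbd/6

If the old entries are still there with a "state" node that never
reached closed, that would point at xen-blkback being stuck.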

> 
>>> Only after a reboot can the guest be started, so there must be a
>>> mismatch of cached data somewhere...
>>>
>>> Any clues?
>>
>> From my own experience, xen-blkback sometimes doesn't recover from
>> errors and ends up in some kind of locked state, waiting for a
>> device to disconnect. I'm not sure if that's the case here, but I
>> wouldn't be surprised.
> 
> How does xen-blkback construct this Path:
> /local/domain/0/backend/vbd/6/768 or
> /local/domain/0/backend/vbd/6/5632?

Those are created and mostly populated by the toolstack (xl);
xen-blkback only writes the set of features it supports there, while
the physical disk information is provided by the toolstack.
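
FWIW, the trailing number is the virtual device ID; for IDE-style disk
names it encodes the classic Linux major/minor pair, i.e.
(major << 8) | minor. A quick sketch:

# hda is Linux major 3, hdc is major 22; minor 0 means the whole disk
echo $(( (3 << 8) | 0 ))     # 768  -> .../backend/vbd/6/768  (hda)
echo $(( (22 << 8) | 0 ))    # 5632 -> .../backend/vbd/6/5632 (hdc)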

> Is the volmode=dev changable after creation or only at creation time?

I would expect so, but I'm no ZFS guru, so it's probably best to check
the man pages. As a workaround, at least until the cause of this is
better understood, I guess it's better to set volmode=dev and only
unset it when you wish to access the partitions inside the ZVOL.
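
Something along these lines should work (untested on my side; the
dataset name is taken from your config, and IIRC the change only takes
effect once the zvol is re-created, e.g. after a reboot or pool
re-import):

# zfs set volmode=dev zdata/VM/win81/root     # hide the partition table from GEOM
# zfs inherit volmode zdata/VM/win81/root     # revert to the default to access partitions again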

> 
> BTW: Many thanks for supporting Xen dom0 under FreeBSD/ZFS.
> So far it works surprisingly stably (except some minor glitches like
> the above) :-)

Thanks for testing it!

Roger.


