ZFS Snapshot problems

Peter Maloney peter.maloney at brockmann-consult.de
Sun Feb 12 13:10:50 UTC 2012


On 12.02.2012 13:36, Matthew Seaman wrote:
> On 12/02/2012 11:56, Peter Maloney wrote:
>> I had a problem where I could not delete, rename, send, etc., a
>> snapshot (possibly caused by a kernel panic during a zfs
>> replication). Maybe yours is related. I did not try viewing the contents
>> of my snapshot.
>>
>> If it is related, the solution is:
>>
>> zdb -d poolname | grep %
>>
>> Expect the command to take a long time; the output should include all
>> the clones you have made, plus some "Input/output error" messages for
>> others. Pay attention to the ones with errors, delete those, and then
>> try to access your snapshot again.
> Interesting.  I guess this is not the expected output?
>
> lucid-nonsense:/usr/home/matthew:# zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> zroot   448G  42.9G   405G     9%  1.20x  ONLINE  -
> lucid-nonsense:/usr/home/matthew:# zdb -d zroot | grep %
> zdb: can't open 'zroot': No such file or directory
>
> Running truss(1) on zdb shows:
>
> open("/dev/gpt/disk0",O_RDONLY,00)		 ERR#2 'No such file or directory'
> open("/dev/gpt/disk2",O_RDONLY,00)		 ERR#2 'No such file or directory'
>
> which is true -- /dev/gpt is empty.  
I've had the same problem too... it is very annoying. It seems that
after the first time you import a pool using another boot system (USB
stick, DVD, etc.), the gpt labels stop being used and gptid is used for
everything after that. Before this you have both gpt and gptid labels,
but after this you only have gptid.
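
A quick way to check which label directories the kernel currently
provides (just a diagnostic; harmless to run):

ls /dev/gpt /dev/gptid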

You could try adding:
kern.geom.label.gptid.enable=0
to /boot/loader.conf and rebooting, which will then show the /dev/gpt/...
labels everywhere again and will make the /dev/gptid directory
empty/disappear. (But you get the gptid labels back if you remove that
setting and reboot again.)
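
For example (just a sketch; back up loader.conf before editing it):

# append the tunable; it takes effect on the next boot
echo 'kern.geom.label.gptid.enable=0' >> /boot/loader.conf
shutdown -r now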

I don't know what side effects that change has, though. You can usually
assume that ZFS will figure out the pool regardless of labels (because
it uses its own label metadata; see the zdb output for the other ids),
but your case is apparently something special, since you are getting
actual errors instead of only wrong names.
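
If you want to look at that on-disk metadata yourself, something like
this should work (the device path is only an example taken from your
zpool status output; substitute one of your pool's partitions):

zdb -l /dev/gptid/848287e9-5f8e-11df-808e-e0cb4e266481 | grep -E 'guid|path'

That prints the pool/vdev GUIDs and the last recorded device path from
the ZFS label, independent of whatever /dev name the partition has.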

In my experience there are no strange side effects, but maybe there
would be if you inserted other disks carrying the same gpt/ labels.


And another long-shot idea: you could also try booting from a DVD,
importing with "-o cachefile=.... -o altroot=...", and then copying the
cachefile over your current one (/boot/zfs/zpool.cache, I think) to see
whether it then has the right names when you reboot again.
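
Roughly like this (a sketch only; the cachefile path, mountpoint, and
pool name are assumptions, so adjust them to your setup):

# from the DVD/live environment
zpool import -o cachefile=/tmp/zpool.cache -o altroot=/mnt zroot
cp /tmp/zpool.cache /mnt/boot/zfs/zpool.cache
# then reboot into the installed system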


And again, I don't know if your data is at risk using any of my
suggestions. I always play around with things like that in test virtual
machines first.
> But disk0 and disk1 should show up:
>
> lucid-nonsense:/usr/home/matthew:# gpart show -l /dev/ad0
> =>       34  976773101  ad0  GPT  (465G)
>          34        128    1  (null)  (64k)
>         162   33554432    2  swap0  (16G)
>    33554594  943218541    3  disk0  (449G)
>
> lucid-nonsense:/usr/home/matthew:# gpart show -l /dev/ad2
> =>       34  976773101  ad2  GPT  (465G)
>          34        128    1  (null)  (64k)
>         162   33554432    2  swap2  (16G)
>    33554594  943218541    3  disk2  (449G)
>
> ... and come to think of it: the disk0 and disk2 labels used to show up
> here too:
>
> lucid-nonsense:/usr/home/matthew:# zpool status zroot
>   pool: zroot
>  state: ONLINE
>   scan: scrub repaired 0 in 3h35m with 0 errors on Sun Feb 12 11:14:10 2012
> config:
>
> 	NAME                                            STATE     READ WRITE CKSUM
> 	zroot                                           ONLINE       0     0     0
> 	  mirror-0                                      ONLINE       0     0     0
> 	    gptid/848287e9-5f8e-11df-808e-e0cb4e266481  ONLINE       0     0     0
> 	    gptid/a6d0bec4-5f8e-11df-808e-e0cb4e266481  ONLINE       0     0     0
>
> errors: No known data errors
>
> 	Cheers,
>
> 	Matthew
>


