[BUG] cache corruption on incremental snapshot receive
Martin Matuska
mm@FreeBSD.org
Tue Aug 9 21:15:35 UTC 2011
Hi, a user described this bug incorrectly in a bug report (kern/156933),
but has contacted me recently.
I wrote a script that reproduces the bug; it has nothing to do with
readonly=on.
The following script corrupts the filesystem cache on incremental
snapshot receive (not reproducible on Solaris):
#!/bin/sh
zpool destroy tpool
dd if=/dev/zero of=/tmp/poolfile bs=1m count=64
zpool create tpool /tmp/poolfile
zfs create tpool/ds1
zfs create tpool/ds2
echo "Test line 1" > /tpool/ds1/test.txt
zfs snapshot tpool/ds1@s1
zfs send tpool/ds1@s1 | zfs recv -F tpool/ds2
echo "Test line 2" >> /tpool/ds1/test.txt
zfs snapshot tpool/ds1@s2
# Reading the file here causes a corrupted FS cache after the 2nd recv
tail /tpool/ds2/test.txt
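# (assumption: the tail caches the pre-recv file contents, and the
# incremental recv below fails to invalidate that cache)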
#
zfs send -i @s1 tpool/ds1@s2 | zfs recv -F tpool/ds2
md5 /tpool/ds1/test.txt
md5 /tpool/ds2/test.txt
If you umount + mount tpool/ds2, the file is correct again.
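For reference, the workaround spelled out (a minimal sketch, assuming the
datasets and mountpoints from the script above):

# Remount the received dataset to drop the stale cached data
zfs umount tpool/ds2
zfs mount tpool/ds2
# The checksums now match
md5 /tpool/ds1/test.txt
md5 /tpool/ds2/test.txt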
--
Martin Matuska
FreeBSD committer
http://blog.vx.sk