zfs issue - disappearing data
Mike Carlson
mike at bayphoto.com
Sat May 4 00:03:34 UTC 2013
Interesting.
Is that why zdb shows so many objects?
Is this a configuration mistake, and would it lead to data loss?
Can I provide any additional information?
Mike C
On 5/3/2013 3:41 PM, Adam Nowacki wrote:
> Looks like we have a leak with extended attributes:
>
> # zfs create -o mountpoint=/test root/test
> # touch /test/file1
> # setextattr user test abc /test/file1
> # zdb root/test
>     Object  lvl   iblk   dblk  dsize  lsize   %full  type
>          8    1    16K    512      0    512    0.00  ZFS plain file
>          9    1    16K    512     1K    512  100.00  ZFS directory
>         10    1    16K    512    512    512  100.00  ZFS plain file
>
> object 8 - the file,
> object 9 - extended attributes directory,
> object 10 - value of the 'test' extended attribute
>
> # rm /test/file1
> # zdb root/test
>
>     Object  lvl   iblk   dblk  dsize  lsize   %full  type
>         10    1    16K    512    512    512  100.00  ZFS plain file
>
> objects 8 and 9 are deleted, object 10 is still there (leaked).
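>
> A quick way to check whether the leak accumulates, as a sketch that
> reuses only the commands shown above (the scratch dataset root/test and
> the loop count are illustrative):
>
>   #!/bin/sh
>   # Create a scratch dataset and note the baseline object count.
>   zfs create -o mountpoint=/test root/test
>   zdb -d root/test
>   # Create, tag with an extended attribute, and delete 100 files.
>   for i in $(jot 100); do
>       touch /test/f$i
>       setextattr user test abc /test/f$i
>       rm /test/f$i
>   done
>   # Push the pending transactions to disk so zdb sees the deletes.
>   sync
>   # If the object count grew by roughly 100, the xattr value objects
>   # were leaked.
>   zdb -d root/test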
>
> On 2013-05-03 19:43, Mike Carlson wrote:
>> We had a critical issue last night with a ZFS server that exports
>> shares via Samba (3.5).
>>
>> system info:
>> uname -a
>>
>> FreeBSD zfs-1.discdrive.bayphoto.com 9.1-RELEASE FreeBSD 9.1-RELEASE
>> #0 r243825: Tue Dec 4 09:23:10 UTC 2012
>> root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
>>
>> zpool history:
>>
>> History for 'data':
>> 2013-02-25.17:11:37 zpool create data raidz /dev/gpt/disk1.nop
>> /dev/gpt/disk2.nop /dev/gpt/disk3.nop /dev/gpt/disk4.nop
>> 2013-02-25.17:11:41 zpool add data raidz /dev/gpt/disk5.nop
>> /dev/gpt/disk6.nop /dev/gpt/disk7.nop /dev/gpt/disk8.nop
>> 2013-02-25.17:11:47 zpool add data raidz /dev/gpt/disk9.nop
>> /dev/gpt/disk10.nop /dev/gpt/disk11.nop /dev/gpt/disk12.nop
>> 2013-02-25.17:11:53 zpool add data raidz /dev/gpt/disk13.nop
>> /dev/gpt/disk14.nop /dev/gpt/disk15.nop /dev/gpt/disk16.nop
>> 2013-02-25.17:11:57 zpool add data raidz /dev/gpt/disk17.nop
>> /dev/gpt/disk18.nop /dev/gpt/disk19.nop /dev/gpt/disk20.nop
>> 2013-02-25.17:12:02 zpool add data raidz /dev/gpt/disk21.nop
>> /dev/gpt/disk22.nop /dev/gpt/disk23.nop /dev/gpt/disk24.nop
>> 2013-02-25.17:12:08 zpool add data spare /dev/gpt/disk25.nop
>> /dev/gpt/disk26.nop
>> 2013-02-25.17:12:15 zpool add data log /dev/gpt/log.nop
>> 2013-02-25.17:12:19 zfs set checksum=fletcher4 data
>> 2013-02-25.17:12:22 zfs set compression=lzjb data
>> 2013-02-25.17:12:25 zfs set aclmode=passthrough data
>> 2013-02-25.17:12:30 zfs set aclinherit=passthrough data
>> 2013-02-25.17:13:25 zpool export data
>> 2013-02-25.17:15:33 zpool import -d /dev/gpt data
>> 2013-03-01.12:31:58 zpool add data cache /dev/gpt/cache.nop
>> 2013-03-15.12:22:22 zfs create data/XML_WORKFLOW
>> 2013-03-27.12:05:42 zfs create data/IMAGEQUIX
>> 2013-03-27.13:32:54 zfs create data/ROES_ORDERS
>> 2013-03-27.13:32:59 zfs create data/ROES_PRINTABLES
>> 2013-03-27.13:33:21 zfs destroy data/ROES_PRINTABLES
>> 2013-03-27.13:33:26 zfs create data/ROES_PRINTABLE
>>
>> We had a directory tree disappear:
>>
>> /data/XML_WORKFLOW/XML_ORDERS/
>>
>> around 5/2/2013 @ 17:00
>>
>> That directory contained a few thousand subdirectories (each holding
>> images and a couple of metadata text/XML files).
>>
>> What is odd is that running du in the parent XML_WORKFLOW directory
>> only reports ~130MB:
>>
>> # find . -type f |wc -l
>> 86
>> # du -sh .
>> 130M .
>>
>>
>> however, df reports 1.5GB:
>>
>> # df -h .
>> Filesystem           Size    Used   Avail Capacity  Mounted on
>> data/XML_WORKFLOW     28T    1.5G     28T     0%    /data/XML_WORKFLOW
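>>
>> (To confirm the space is charged to live objects in this dataset rather
>> than to snapshots or child datasets, this should show where the usage
>> sits; a sketch, not output from this system:)
>>
>> # zfs list -o space data/XML_WORKFLOW
>> # zfs get usedbysnapshots,usedbydataset,usedbychildren data/XML_WORKFLOW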
>>
>> zdb -d shows:
>>
>> # zdb -d data/XML_WORKFLOW
>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>> 212812 objects
>>
>> Digging further with zdb shows that the path is missing for most of
>> those objects:
>>
>> # zdb -ddddd data/XML_WORKFLOW 635248
>> Dataset data/XML_WORKFLOW [ZPL], ID 139, cr_txg 339633, 1.53G,
>> 212812 objects, rootbp DVA[0]=<5:b274264000:2000>
>> DVA[1]=<0:b4d81a8000:2000> [L0 DMU objset] fletcher4 lzjb LE
>> contiguous unique double size=800L/200P birth=1202311L/1202311P
>> fill=212812
>> cksum=16d24fb5aa:6c2e0aff6bc:129af90fe2eff:2612f938c5292b
>>
>>     Object  lvl   iblk   dblk  dsize  lsize   %full  type
>>     635248    1    16K    512  6.00K    512  100.00  ZFS plain file
>>                                          168   bonus  System attributes
>> dnode flags: USED_BYTES USERUSED_ACCOUNTED
>> dnode maxblkid: 0
>> path ???<object#635248>
>> uid 11258
>> gid 10513
>> atime Thu May 2 17:31:26 2013
>> mtime Thu May 2 17:31:26 2013
>> ctime Thu May 2 17:31:26 2013
>> crtime Thu May 2 17:13:58 2013
>> gen 1197180
>> mode 100600
>> size 52
>> parent 635247
>> links 1
>> pflags 40800000005
>> Indirect blocks:
>> 0 L0 3:a9da05a000:2000 200L/200P F=1 B=1197391/1197391
>>
>> segment [0000000000000000, 0000000000000200) size 512
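>>
>> A rough way to count how many objects are in this path-less state, as a
>> sketch (it assumes the "path ???" marker shown above and will be slow
>> with ~212k objects):
>>
>> # zdb -dddd data/XML_WORKFLOW | grep -c '???<object#'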
>>
>> The application that writes to this volume runs on a Windows client.
>> So far, it has exhibited identical behavior across two ZFS servers,
>> but not on a generic Windows Server 2003 network share.
>>
>> The question is: what is happening to the data? Is it a Samba issue? Is
>> it ZFS? I've enabled the Samba full_audit module to track file
>> deletions, so I should have more information on that side.
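>>
>> For reference, a full_audit setup along those lines looks roughly like
>> this (a sketch of an smb.conf share section; the share name and syslog
>> facility are illustrative, not the exact config in use here):
>>
>> [XML_WORKFLOW]
>>     path = /data/XML_WORKFLOW
>>     vfs objects = full_audit
>>     full_audit:prefix = %u|%I|%S
>>     full_audit:success = unlink rmdir rename
>>     full_audit:failure = none
>>     full_audit:facility = LOCAL5
>>     full_audit:priority = NOTICE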
>>
>> If anyone has seen similar behavior, please let me know.
>>
>> Mike C
>