Binary file corruption in raidz pool
Thorsten Schlich
thorsten.schlich at wetteronline.de
Thu Jun 26 14:24:53 UTC 2014
Good morning,
we are seeing file corruption in two of our zpools in our archive
system.
Our archive system, which stores our historical files, consists of two
servers with a raidz zpool where the data is stored. The file-sending
servers are based on ZFS too, but they have only one disk in the pool.
The archive servers run FreeBSD 9.2; the other servers are currently on
FreeBSD 8.3.
In one case we store 15682 files every day (meteorological data) in our
own binary format. But when the files are copied to the archive, between
9000 and 10000 of them differ every day. When I compare the values of
the original data with the copied data, I can see various small
differences.
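For concreteness, a comparison along these lines lists exactly which files differ between the two sides; the paths below are placeholders, not the real mountpoints from this setup:

```shell
# Placeholder paths -- substitute the real source and archive directories.
SRC=${SRC:-/tmp/cmp/src}
DST=${DST:-/tmp/cmp/dst}
mkdir -p "$SRC" "$DST"

# Build a sorted checksum list on each side; cksum is POSIX, so the same
# command works on both the FreeBSD 8.3 senders and the 9.2 archive hosts.
( cd "$SRC" && find . -type f -exec cksum {} + | sort -k 3 ) > /tmp/src.sums
( cd "$DST" && find . -type f -exec cksum {} + | sort -k 3 ) > /tmp/dst.sums

# Any diff output names a file whose content changed between the two pools.
diff /tmp/src.sums /tmp/dst.sums
```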
The data is only copied if the access time is older than 2 days, so
nobody is modifying the data during the transfer.
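The two-day rule can be expressed with find's -atime test; the directory below is a placeholder for the real export path:

```shell
# Placeholder path -- substitute the directory the transfer job reads from.
EXPORT=${EXPORT:-/tmp/export}
mkdir -p "$EXPORT"

# -atime +2 matches files whose last access is more than two days old,
# i.e. only files that nothing has touched recently are transferred.
find "$EXPORT" -type f -atime +2
```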
We have tested various scenarios at this point (copying to UFS, copying
to several other servers) but could not reproduce the behaviour. Only
these two servers with a raidz pool show it. Disabling compression
changed nothing.
I am asking for a helping hand in investigating the problem. Is there a
way to get more information out of ZFS, or some kind of debug logging?
Do you have any other ideas?
Below you can find the configuration of the raidz pools. If I can
provide more info, please let me know how.
Thanks in advance.
Regards,
Thorsten
NAME  PROPERTY               VALUE                SOURCE
tank  size                   21,8T                -
tank  capacity               69%                  -
tank  altroot                -                    default
tank  health                 ONLINE               -
tank  guid                   4365850585010436054  default
tank  version                -                    default
tank  bootfs                 -                    default
tank  delegation             on                   default
tank  autoreplace            off                  default
tank  cachefile              -                    default
tank  failmode               wait                 default
tank  listsnapshots          off                  default
tank  autoexpand             on                   local
tank  dedupditto             0                    default
tank  dedupratio             1.00x                -
tank  free                   6,66T                -
tank  allocated              15,1T                -
tank  readonly               off                  -
tank  comment                -                    default
tank  expandsize             0                    -
tank  freeing                0                    default
tank  feature@async_destroy  enabled              local
tank  feature@empty_bpobj    enabled              local
tank  feature@lz4_compress   enabled              local
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            aacd0   ONLINE       0     0     0
            aacd1   ONLINE       0     0     0
            aacd2   ONLINE       0     0     0
            aacd3   ONLINE       0     0     0
            aacd4   ONLINE       0     0     0
            aacd5   ONLINE       0     0     0
NAME  PROPERTY              VALUE                  SOURCE
tank  type                  filesystem             -
tank  creation              Fri Feb 14  8:13 2014  -
tank  used                  12,5T                  -
tank  available             5,25T                  -
tank  referenced            683M                   -
tank  compressratio         1.74x                  -
tank  mounted               yes                    -
tank  quota                 none                   default
tank  reservation           none                   default
tank  recordsize            128K                   default
tank  mountpoint            /space                 local
tank  sharenfs              off                    default
tank  checksum              on                     default
tank  compression           gzip-9                 local
tank  atime                 on                     default
tank  devices               on                     default
tank  exec                  on                     default
tank  setuid                on                     default
tank  readonly              off                    default
tank  jailed                off                    default
tank  snapdir               hidden                 default
tank  aclmode               discard                default
tank  aclinherit            restricted             default
tank  canmount              on                     default
tank  xattr                 off                    temporary
tank  copies                1                      default
tank  version               5                      -
tank  utf8only              off                    -
tank  normalization         none                   -
tank  casesensitivity       sensitive              -
tank  vscan                 off                    default
tank  nbmand                off                    default
tank  sharesmb              off                    default
tank  refquota              none                   default
tank  refreservation        none                   default
tank  primarycache          all                    default
tank  secondarycache        all                    default
tank  usedbysnapshots       0                      -
tank  usedbydataset         683M                   -
tank  usedbychildren        12,5T                  -
tank  usedbyrefreservation  0                      -
tank  logbias               latency                default
tank  dedup                 off                    default
tank  mlslabel              -
tank  sync                  standard               default
tank  refcompressratio      1.94x                  -
tank  written               683M                   -
tank  logicalused           21,7T                  -
tank  logicalreferenced     1,28G                  -