[Bug 204661] Either zdb doesn't correctly report block size of zfs root files, or ZFS isn't applying recordsize to files in the zfs root
bugzilla-noreply at freebsd.org
Wed Nov 18 16:50:31 UTC 2015
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=204661
            Bug ID: 204661
           Summary: Either zdb doesn't correctly report block size of zfs
                    root files, or ZFS isn't applying recordsize to files
                    in the zfs root
           Product: Base System
           Version: 10.2-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Many People
          Priority: ---
         Component: misc
          Assignee: freebsd-bugs at FreeBSD.org
          Reporter: chris at acsi.ca
Hi,

This is an issue on FreeBSD, but not Solaris; it works as I expect on a
Solaris 11.3-BETA machine.

A file in the root of a ZFS dataset doesn't seem to take the configured
recordsize, yet if I create a sub-dataset, it does. Perhaps I am not using
zdb correctly to examine the file in the root, since it displays as a DSL
directory rather than a ZFS plain file; but if that's the case, then zdb's
switches on FreeBSD differ from those on Solaris.

This behaviour has existed for a while (for all of the 10.x releases, I'm
quite sure) and still happens today on a 10.2-RELEASE-p7 machine.
Example:
# zpool create pool92_1 da1 da11 da9 da12 da13 da14
# zfs set recordsize=64k pool92_1
# zfs get recordsize pool92_1
NAME      PROPERTY    VALUE  SOURCE
pool92_1  recordsize  64K    local
# cd /pool92_1
# dd if=/dev/random of=./test_file bs=1M count=12
12+0 records in
12+0 records out
12582912 bytes transferred in 0.325594 secs (38645997 bytes/sec)
# ls -i
8 test_file
# zdb -dd pool92_1 8
Dataset mos [META], ID 0, cr_txg 4, 144K, 45 objects

    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         8    1   16K   512      0    512   0.00  DSL directory
# zfs get recordsize pool92_1
NAME      PROPERTY    VALUE  SOURCE
pool92_1  recordsize  64K    local
# zfs create pool92_1/folder
# zfs get recordsize pool92_1/folder
NAME             PROPERTY    VALUE  SOURCE
pool92_1/folder  recordsize  64K    inherited from pool92_1
# cd folder
# dd if=/dev/random of=./test_file bs=1M count=12
12+0 records in
12+0 records out
12582912 bytes transferred in 0.384501 secs (32725305 bytes/sec)
# ls -i
8 test_file
# zdb -dd pool92_1/folder 8
Dataset pool92_1/folder [ZPL], ID 49, cr_txg 44, 12.1M, 8 objects
    Object  lvl  iblk  dblk  dsize  lsize   %full  type
         8    3   16K   64K  12.0M  12.0M  100.00  ZFS plain file
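As a possible cross-check that doesn't go through zdb's object resolution at all (this sketch is not from the original report): FreeBSD's stat(1) prints st_blksize with the %k format, and I believe ZFS fills that in from the file's current block size. If that assumption holds, and the root-dataset file really ignored the 64K recordsize, the two commands below should report different block sizes:

```shell
# Hypothetical cross-check, assuming st_blksize on ZFS reflects the
# file's actual block size: the first file would report 131072 (the
# 128K default) if the 64K recordsize was ignored, while the second
# should report 65536.
stat -f '%k %N' /pool92_1/test_file
stat -f '%k %N' /pool92_1/folder/test_file
```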
The full dumps, if you're interested, are:
# zdb -dddd pool92_1 8
Dataset mos [META], ID 0, cr_txg 4, 126K, 52 objects, rootbp
DVA[0]=<2:446000:1000> DVA[1]=<3:443800:200> DVA[2]=<4:408000:200> [L0 DMU
objset] fletcher4 lz4 LE contiguous unique triple size=800L/200P birth=60L/60P
fill=52 cksum=be53af6aa:46998543bc9:d8c388544dcd:1cb577de44fcab

    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         8    1   16K   512      0    512   0.00  DSL directory
                                    256  bonus  DSL directory
        dnode flags:
        dnode maxblkid: 0
                creation_time = Wed Nov 18 11:59:47 2015
                head_dataset_obj = 0
                parent_dir_obj = 2
                origin_obj = 0
                child_dir_zapobj = 10
                used_bytes = 0
                compressed_bytes = 0
                uncompressed_bytes = 0
                quota = 0
                reserved = 0
                props_zapobj = 9
                deleg_zapobj = 0
                flags = 1
                used_breakdown[HEAD] = 0
                used_breakdown[SNAP] = 0
                used_breakdown[CHILD] = 0
                used_breakdown[CHILD_RSRV] = 0
                used_breakdown[REFRSRV] = 0
# zdb -dddd pool92_1/folder 8
Dataset pool92_1/folder [ZPL], ID 49, cr_txg 44, 12.1M, 8 objects, rootbp
DVA[0]=<2:443000:1000> DVA[1]=<3:442800:200> [L0 DMU objset] fletcher4 lz4 LE
contiguous unique double size=800L/200P birth=60L/60P fill=8
cksum=b55f2d725:46568e61db8:e07315731014:1eacd8bf5cb44a

    Object  lvl  iblk  dblk  dsize  lsize   %full  type
         8    3   16K   64K  12.0M  12.0M  100.00  ZFS plain file
                                    168  bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 191
        path    /test_file
        uid     0
        gid     0
        atime   Wed Nov 18 12:03:10 2015
        mtime   Wed Nov 18 12:03:10 2015
        ctime   Wed Nov 18 12:03:10 2015
        crtime  Wed Nov 18 12:03:10 2015
        gen     47
        mode    100644
        size    12582912
        parent  4
        links   1
        pflags  40800000004
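As a sanity check on the sub-dataset dump above (this arithmetic is mine, not part of the original report): the 12 MiB file divides exactly into 192 records of 64K, which matches the reported maxblkid of 191, since block ids are zero-based:

```shell
# Sizes taken from the transcript above: dd wrote 12 MiB, and the
# pool's recordsize is 64K. A 12 MiB file therefore holds exactly
# 192 records, so the highest zero-based block id (maxblkid) is 191.
filesize=12582912
recordsize=$((64 * 1024))
records=$((filesize / recordsize))
echo "records=$records maxblkid=$((records - 1))"
```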
On a Solaris 11.3-BETA machine, here's what I see:
root at solaris175:~# zpool create pool175 c2t1d0
root at solaris175:~# zfs set recordsize=64k pool175
root at solaris175:/pool175# dd if=/dev/random of=./test_file bs=1073741824 count=120
root at solaris175:/pool175# ls -all
total 267
drwxr-xr-x   2 root     root           3 Nov 18 16:39 .
drwxr-xr-x  27 root     sys           30 Nov 18 16:37 ..
-rw-r--r--   1 root     root      124800 Nov 18 16:39 test_file
root at solaris175:/pool175# ls -i
10 test_file
root at solaris175:/pool175# zdb -dd pool175 10
Dataset pool175 [ZPL], ID 18, cr_txg 1, 161K, 8 objects

    Object  lvl  iblk  dblk  dsize  lsize   %full  type
        10    2   16K   64K   130K   128K  100.00  ZFS plain file
Yes, there are two differences between the FreeBSD and Solaris runs (I
couldn't use bs=1M for dd on Solaris, and I only used one disk), but I feel
the comparison is still valid enough to illustrate my point.

I do believe something more than just the reporting is off, as workloads
that benefit from recordsize=64k are slower in the root dataset than in a
sub-dataset.