gptzfsboot and 4k sector raidz
Daniel Mayfield
dan at 3geeks.org
Thu Sep 1 17:46:16 UTC 2011
>> I noticed that the free data space was also bigger. I tried it with
>> raidz on the 512B sectors and it claimed to have only 5.3T of space.
>> With 4KB sectors, it claimed to have 7.25T of space. Seems like
>> something is wonky in the space calculations?
>
> Hmmmm. It didn't occur to me that the space calculations might be wonky. That could explain why I was seeing disk usage much higher with 4K sectors than with 512-byte sectors across all my zfs datasets. Here's my zpool/zfs output with 512-byte sectors (4-disk raidz):
>
> [root@flanker/ttypts/0(~)#] zpool list tank
> NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> tank  7.12T   698G  6.44T   9%  1.16x  ONLINE  -
> [root@flanker/ttypts/0(~)#] zfs list tank
> NAME  USED  AVAIL  REFER  MOUNTPOINT
> tank  604G  4.74T  46.4K  legacy
>
> It's a raidz1-0 of four 2TB disks, so the space available should be (4-1=3)*2TB = 6TB? Although I presume that's 6 marketing-terabytes, which translates to 6000000000000/(1024^4) ≈ 5.46T. And I've got a 64k boot, 8G swap, and 16G scratch on each drive *before* the tank, so eh, I guess 4.74T sounds about right.
>
> The 7.12T reported by zpool doesn't seem to take into account the space lost to raidz parity. *shrug*
>
> Enough about sizes; what's your read/write performance like between 512-byte/4K? I didn't think to test performance in the 4K configuration; I really wish I had, now.
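The gap between the two figures is actually expected: `zpool list` reports raw pool capacity including parity, while `zfs list` reports space usable after parity. A quick sanity check of the numbers above (a sketch; it assumes four 2 TB marketing-size drives and ignores the boot/swap/scratch slices):

```shell
# Rough raidz1 space math for the pool above (assumption: four 2 TB
# drives, one drive's worth of parity).
disks=4
disk_bytes=2000000000000          # 2 "marketing" TB = 2*10^12 bytes
raw_tib=$(awk "BEGIN {printf \"%.2f\", $disks * $disk_bytes / 1024^4}")
usable_tib=$(awk "BEGIN {printf \"%.2f\", ($disks - 1) * $disk_bytes / 1024^4}")
echo "raw:    ${raw_tib} TiB"     # close to zpool's 7.12T, before slice overhead
echo "usable: ${usable_tib} TiB"  # before metadata and the slice overhead
```

The raw figure (~7.28 TiB) lands near zpool's 7.12T once the boot/swap/scratch slices on each drive are subtracted, and the usable figure (~5.46 TiB) similarly shrinks toward the 4.74T that zfs reports.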
I didn't test performance; I'm doing all the work from the mfsBSD boot disc. I'm not sure a simple 'dd' is a good test, but if you have suggestions, I'm open to them.
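For a rough 512-byte vs 4K comparison, a sequential dd pass is at least repeatable, even if it only exercises one streaming reader/writer (something like bonnie++ or iozone would cover more realistic patterns). A sketch, with TESTDIR as an assumption to be pointed at a dataset on the pool under test (it defaults to /tmp so the script runs as-is):

```shell
# Rough sequential throughput check with dd -- not a real benchmark.
# TESTDIR is a placeholder; point it at the pool being tested.
TESTDIR=${TESTDIR:-/tmp}
TESTFILE=$TESTDIR/dd.test
SIZE_MB=64   # bump this well past RAM size so ARC caching doesn't dominate

# Sequential write: 1 MiB blocks, well above either sector size.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=$SIZE_MB 2>&1 | tail -1

# Sequential read. Export/import the pool (or reboot) between the write
# and read passes, otherwise the read is served from the ARC, not the disks.
dd if="$TESTFILE" of=/dev/null bs=1048576 2>&1 | tail -1
rm "$TESTFILE"
```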
daniel