gptzfsboot and 4k sector raidz
Trent Nelson
trent at snakebite.org
Thu Sep 1 17:18:05 UTC 2011
On 01-Sep-11 12:30 PM, Daniel Mayfield wrote:
>
> On Sep 1, 2011, at 7:56 AM, Trent Nelson wrote:
>
>> On 01-Sep-11 2:11 AM, Daniel Mayfield wrote:
>>> I just set this up on an Athlon64 machine I have w/ 4 WD EARS
>>> 2TB disks. I followed the instructions here:
>>> http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/,
>>> but just building a single pool, so three partitions per disk (boot,
>>> swap and zfs). I'm using the mfsBSD image to do the boot code.
>>> When I reboot to actually come up from ZFS, the loader spins for
>>> half a second and then the machine reboots. I've seen a number
>>> of bug reports on gptzfsboot and 4k sector pools, but I never saw
>>> one fail so early. What data would the ZFS people need to help
>>> fix this?
>>
>> FWIW, I experienced the exact same issue about a week ago with four
>> new WD EARS 2TB disks. I contemplated looking into fixing it,
>> until I noticed the crazy disk usage with 4K sectors. On my old
>> box, my /usr/src dataset was ~450MB (mirrored 512-byte drives); on
>> the new box with the 2TB 4k sector drives, /usr/src was
>> 1.5-something GB. Exact same settings.
>
> I noticed that the free data space was also bigger. I tried it with
> raidz on the 512B sectors and it claimed to have only 5.3T of space.
> With 4KB sectors, it claimed to have 7.25T of space. Seems like
> something is wonky in the space calculations?
Hmmmm. It didn't occur to me that the space calculations might be
wonky. That could explain why I was seeing disk usage so much higher on 4K
sectors than on 512-byte sectors for all my zfs datasets. Here's my zpool/zfs output w/
512-byte sectors (4-disk raidz):
[root@flanker/ttypts/0(~)#] zpool list tank
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  7.12T   698G  6.44T     9%  1.16x  ONLINE  -
[root@flanker/ttypts/0(~)#] zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   604G  4.74T  46.4K  legacy
It's a raidz1-0 of four 2TB disks, so the space available should be
(4-1=3)*2TB=6TB? Although I presume that's 6-marketing-terabytes, which
translates to ... 6000000000000/(1024^4) ≈ 5.46 TiB. And I've got 64k boot, 8G
swap, 16G scratch on each drive *before* the tank, so eh, I guess 4.74T
sounds about right.
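Back of the envelope (treating each drive as exactly 2 * 10^12 bytes, which is
probably a slight underestimate for these drives):

    # per-disk data partition: ~2TB minus 64k boot, 8G swap, 16G scratch;
    # raidz1 across four disks leaves three disks' worth of data, in TiB:
    echo 'scale=2; 3 * (2*10^12 - 64*1024 - 8*2^30 - 16*2^30) / 2^40' | bc
    5.38

That's before ZFS metadata and allocation overhead, and USED + AVAIL above
comes to roughly 5.3T, so the zfs numbers line up.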
The 7.12T reported by zpool doesn't seem to take the raidz parity
overhead into account. *shrug*
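Sure enough, 7.12T is in the same ballpark as the raw size of all four data
partitions with parity still counted (same illustrative drive size as above):

    # raw capacity of all four data partitions, parity included, in TiB:
    echo 'scale=2; 4 * (2*10^12 - 64*1024 - 8*2^30 - 16*2^30) / 2^40' | bc
    7.18

So zpool list apparently reports the pool's physical capacity and leaves the
parity deduction to zfs list.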
Enough about sizes; what's your read/write performance like with
512-byte versus 4K sectors? I didn't think to test performance in the 4K
configuration; I really wish I had, now.
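If you get a chance, even a crude sequential test on both layouts would be a
useful data point; something along these lines (the file path and size are
just placeholders, make the file comfortably bigger than RAM):

    # rough sequential write then read; drop the file afterwards
    dd if=/dev/zero of=/tank/testfile bs=1m count=16384
    dd if=/tank/testfile of=/dev/null bs=1m
    rm /tank/testfile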
Trent.