large disk > 8 TB
Lan Tran
lan at hangwithme.com
Wed Dec 12 07:41:24 PST 2007
Ivan Voras wrote:
> Michael Fuckner wrote:
>
>> Lan Tran wrote:
>>
>>> I have a Dell PERC 6/E controller connected to an external Dell MD1000
>>> storage, which I set up RAID 6 for. The RAID BIOS reports 8.5 TB. I
>>> installed 7BETA4 amd64, and sysinstall/dmesg.boot detect this correctly:
>>> mfid1: <MFI Logical Disk> on mfi1
>>> mfid1: 8578560MB (17568890880 sectors) RAID volume 'raid6' is optimal
>>>
>>> However, after I created a ZFS zpool on this device it only shows 185
>>> GB:
>>> # zpool create tank /dev/mfid1s1d
>>> # zpool list
>>> NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
>>> tank   185G   111K   185G   0%  ONLINE  -
>>>
>>> also with 'df -h':
>>> # df -h tank
>>> Filesystem    Size  Used  Avail  Capacity  Mounted on
>>> tank          182G    0B   182G        0%  /tank
>>>
>>>
>> The main purpose of ZFS is doing software RAID (which is even faster
>> than hardware RAID nowadays).
>>
>> You should export all disks separately to the OS, and then you don't
>> have the 4GB limit wrapping the size to 185GB.
>>
>
> This is the wrong way around. Why would something wrap drive sizes at a
> 32-bit limit? The driver and the GEOM systems are 64-bit clean, if this
> is a problem in ZFS, it's a serious one.
>
> I don't have the drive capacity to create a large array, but I assume
> someone has tested ZFS on large arrays (Pawel?)
>
> Can you run "diskinfo -v" on the large array (the 8.5 TB one) and
> verify the system sees it all?
>
# diskinfo -v mfid1
mfid1
        512             # sectorsize
        8995272130560   # mediasize in bytes (8.2T)
        17568890880     # mediasize in sectors
        1093612         # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
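[Editorial note: the 185 GB figure is consistent with a 32-bit truncation. An MBR (fdisk) slice entry stores its size as a 32-bit sector count, so a slice covering 17568890880 sectors wraps modulo 2^32 -- which is why sysinstall cannot handle slices past 2 TB. A quick arithmetic check, using Python only for the calculation:]

```python
SECTOR = 512                     # bytes per sector, from diskinfo
sectors = 17568890880            # mediasize in sectors, from diskinfo

# Full capacity as the driver reports it
print(sectors * SECTOR / 2**40)  # ~8.18 TiB, matching diskinfo's "8.2T"

# An MBR slice entry holds only a 32-bit sector count, so a
# slice created by sysinstall wraps the count modulo 2**32
wrapped = sectors % 2**32        # 389021696 sectors
print(wrapped * SECTOR / 2**30)  # ~185.5 GiB -- the 185G the zpool reported
```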
After some searching, I realized that sysinstall cannot handle fdisk/disklabel
slices larger than 2 TB, so it is not a ZFS issue. I deleted the slice and
re-created the filesystem on the raw device with the 'newfs /dev/mfid1'
command, and I can see the 8 TB slice now.

I went back to hardware RAID because, while testing ZFS raidz2, the hot spare
did not kick in when one of the disks was pulled out of the bay. I think that
is because it's a RAID controller rather than a JBOD card, so each disk is
exported to the OS as a RAID 0. The Dell PERC 6/E reorders the "disk groups"
when a disk is missing: there are 15 disk groups for the 15 "virtual disks",
labeled disk group 1 through 15, and disk group 1 is mapped to virtual disk 1,
and so on. After pulling out disk 13, for example, the disk group to virtual
disk mappings are shifted and mismatched. A JBOD card would work nicely with
ZFS, but I don't see an option in the card BIOS to make it act as a JBOD.
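[Editorial note: on a controller or HBA that can expose the disks
individually, the raidz2-plus-hot-spare layout described above would look
roughly like this. This is only a sketch; the mfidN device names are
hypothetical stand-ins for whatever the controller actually exports:]

```shell
# Sketch: assumes 15 disks visible to the OS as mfid0 through mfid14.
# Twelve disks form a double-parity raidz2 vdev; three are hot spares.
zpool create tank \
    raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 \
           mfid6 mfid7 mfid8 mfid9 mfid10 mfid11 \
    spare  mfid12 mfid13 mfid14

zpool status tank   # shows the raidz2 vdev and the available spares
```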
Thanks for all your responses. I'm a happy camper to see all the space
in one big fat slice :).
Lan
More information about the freebsd-hardware
mailing list