large RAID volume partition strategy
kometen at gmail.com
Fri Aug 17 15:10:57 PDT 2007
> I have a shiny new big RAID array. 16x500GB SATA 300+NCQ drives
> connected to the host via 4Gb fibre channel. This gives me 6.5Tb of
> raw disk.
> I've come up with three possibilities for organizing this disk. My
> needs are really for a single 1Tb file system on which I will run
> postgres. However, in the future I'm not sure what I'll really need.
> I don't plan to ever connect any other servers to this RAID unit.
> The three choices I've come up with so far are:
> 1) Make one RAID volume of 6.5Tb (in a RAID6 + hot spare
> configuration), and make one FreeBSD file system on the whole partition.
> 2) Make one RAID volume of 6.5Tb (in a RAID6 + hot spare
> configuration), and make 6 FreeBSD partitions with one file system each.
> 3) Make 6 RAID volumes and expose them to FreeBSD as multiple drives,
> then make one partition + file system on each "disk". Each RAID
> volume would span across all 16 drives, and I could make the volumes
> of differing RAID levels, if needed, but I'd probably stick with RAID6
> I'm not keen on option 1 because of the potentially long fsck times
> after a crash.
If you want to avoid the long fsck times, your remaining options are a
journaling filesystem or ZFS; either requires an upgrade from FreeBSD
6.2. I have used ZFS and we had a server stop due to a power outage in
our area. Our ZFS Samba server came back up fine with no data
corruption. So I would suggest FreeBSD 7.0 with ZFS.
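As a sketch of what that could look like (the da0..da15 device names and
the pool/dataset names are assumptions; adjust to whatever the fibre
channel controller actually exposes): if the controller can pass the 16
drives through as JBOD, ZFS raidz2 gives double parity like RAID6, and
the hot spare can be managed by ZFS as well:

```shell
# Hypothetical device names -- adjust to what the controller exposes.
# Two raidz2 (double-parity) vdevs across 15 drives, one ZFS hot spare:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
                  raidz2 da8 da9 da10 da11 da12 da13 da14
zpool add tank spare da15

# Per-application filesystems are then cheap to create later,
# which sidesteps the "how many partitions" question entirely:
zfs create tank/postgres
zfs set mountpoint=/var/db/pgsql tank/postgres
```

If the controller only exports LUNs, the same zpool/zfs commands work on
the exported devices; you just lose ZFS's own redundancy handling.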
Short fsck times and UFS2 don't go well together. I know there is
background fsck, but for me that is not an option.
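For what it's worth, if you stay on UFS2, background fsck can be turned
off so a full foreground check runs at boot instead (this is the stock
FreeBSD rc.conf knob):

```shell
# /etc/rc.conf
background_fsck="NO"   # force a full foreground fsck after a crash
```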
When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.
More information about the freebsd-stable mailing list