fjwcash at gmail.com
Sat Oct 11 20:54:27 UTC 2008
On 10/11/08, Danny Braniss <danny at cs.huji.ac.il> wrote:
>> > I'm asking, because I want to deploy some zfs fileservers soon, and so
>> > far the solution is either PXE boot, or keep one disk UFS (or boot off a
>> > USB)
For the servers we're deploying FreeBSD+ZFS on, mainly large backup
systems with 24 drives, we're putting / onto either CompactFlash
(using IDE adapters) or USB sticks (using internal connectors), using
gmirror to provide fail-over for /. That way, we can boot off UFS,
have full access to single-user mode and /rescue, and use every bit of
each disk for ZFS. Works quite nicely.
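For reference, a minimal sketch of that gmirror-for-/ setup; the device
names (da0/da1) and the "boot" label are illustrative, assuming two small
internal USB/CF devices:

```shell
# Sketch only: mirror / across two small boot devices.
# da0/da1 are hypothetical USB/CF device nodes; adjust to your hardware.
gmirror label -v boot /dev/da0 /dev/da1
newfs -U /dev/mirror/boot
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# /etc/fstab then points / at the mirror:
#   /dev/mirror/boot  /  ufs  rw  1  1
```

If one stick dies, the mirror keeps running degraded and the dead member
can be replaced with "gmirror forget" + "gmirror insert".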
>> > Today's /(root+usr) is somewhere between 0.5 and 1 GB (kernel+debug+src),
>> > and is read-only, so dedicating a whole UFS disk to it seems a pity.
/ by itself (no /usr, /home, /tmp, or /var) is under 300 MB on our
systems (FreeBSD 7-STABLE from August, amd64). Definitely not worth
dedicating an entire 500 GB drive to it, or even a single slice or
partition. By putting / onto separate media (CF, USB, whatever), you
can dedicate all your hard drive space to ZFS.
> Initially, I was not thrilled with ZFS, but once you cross the
Once you start using ZFS features, especially snapshots, it's really
hard to move back to non-pooled storage; even LVM on Linux becomes
awkward by comparison. There's just no easier way to manage multi-TB
setups spanning 10+ drives.
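To illustrate the snapshot workflow (the "tank/home" dataset name here is
hypothetical):

```shell
# Snapshots are instant and initially consume no extra space.
zfs snapshot tank/home@before-upgrade
zfs list -t snapshot                   # list existing snapshots
zfs rollback tank/home@before-upgrade  # revert the dataset if needed
```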
Even for smaller systems with only 3 drives, it's so much nicer
working with pooled storage systems like ZFS. My home server uses a 2
GB USB stick for / with 3x 120 GB drives for ZFS, with separate
filesystems for /usr, /usr/ports, /usr/src, /usr/obj, /usr/local,
/home, /var, and /tmp. No fussing around with partition sizes ahead
of time is probably the single greatest feature, with
instant/unlimited snapshots a very close second.
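That layout could be created along these lines; the pool name "tank" and
the ad4/ad6/ad8 device names are placeholders:

```shell
# Sketch: one raidz pool over three disks, one filesystem per tree.
zpool create tank raidz ad4 ad6 ad8
zfs set mountpoint=none tank
zfs create -o mountpoint=/usr tank/usr
# children inherit their mountpoints under /usr
for fs in ports src obj local; do zfs create tank/usr/$fs; done
zfs create -o mountpoint=/home tank/home
zfs create -o mountpoint=/var  tank/var
zfs create -o mountpoint=/tmp  tank/tmp
```

None of these filesystems needs a size up front; they all draw from the
shared pool, and quotas or reservations can be set later per dataset.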
>> I think (hope?) you can use the "remaining" (e.g. non-UFS/non-gmirror)
>> part of the 2nd disk for ZFS as well, otherwise the space would go
>> to waste. The "Root on ZFS configuration" FreeBSD ZFS Wiki page
>> seems to imply you can.
I did this for awhile. 3x 120 GB drives configured as:
10 GB slice for /
2 GB slice for swap
108 GB slice to ZFS
The first slice was configured as a 3-way gmirror, and the last slice
as a raidz pool. But performance wasn't that great. I moved / to a
USB stick and dedicated the entire drives to the zpool, and things
have been a lot smoother since.
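Spelled out, that earlier slice scheme looked roughly like this (ad4/ad6/ad8
are illustrative; each disk carries the 10 GB, 2 GB, and 108 GB slices
described above):

```shell
# Sketch of the three-slice layout across three disks.
gmirror label root /dev/ad4s1 /dev/ad6s1 /dev/ad8s1   # 3-way mirror for /
newfs -U /dev/mirror/root
zpool create tank raidz ad4s3 ad6s3 ad8s3             # raidz over the large slices
# swap lives on ad4s2/ad6s2/ad8s2 via /etc/fstab entries
```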