ZFS: root pool considerations, multiple pools on the same disk

krad kraduk at gmail.com
Tue Dec 20 12:54:00 UTC 2011


On 20 December 2011 07:57, Peter Jeremy <peterjeremy at acm.org> wrote:

> On 2011-Dec-19 12:46:22 +0000, Hugo Silva <hugo at barafranca.com> wrote:
> >I've been thinking about whether it makes sense to separate the rpool
> >from the data pool(s)..
>
> I think it does.  I have 6 1TB disks with 8GB carved off the front of
> each disk for root & swap.  I initially used a separate (gmirrored)
> UFS root (including /usr/src and /usr/obj) because I didn't completely
> trust ZFS.  I've since moved to a 3-way mirrored ZFS root, with the
> "root" area of the remaining 3 disks basically spare (I use them for
> upgrades).  The bulk of the disks form a 6-way RAIDZ2 data pool.
>
> I still think having a separate root makes sense because it should
> simplify recovery if everything goes pear-shaped.
>
> >One idea would be creating a 4-way mirror on small partitions for the
> >rpool (sturdier), and a zfs raid-10 on the remaining larger partition.
>
> I'd recommend having two 2-way mirrored root pools that you update
> alternately.  There are a couple of failure modes where it can be
> difficult to get back to a known working state without
> a second boot/root.
>
> >I'm curious about the performance implications (if any) of having >1
> >zpools on the same disks (considering that during normal usage, it'll be
> >the data pool seeing 99.999% of the action) and whether anyone has
> >thought the same and/or applied this concept in production.
>
> I haven't done any performance comparisons but would expect this to
> be similar to having multiple UFS filesystems on one disk.
>
> --
> Peter Jeremy
>
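Roughly, that alternating-root-pool setup might look something like
this (pool and device names here are invented, just for illustration):

# two small 2-way mirrored root pools on the "root" slices of four disks
zpool create rpool0 mirror ada0p2 ada1p2
zpool create rpool1 mirror ada2p2 ada3p2
zfs create rpool1/ROOT

# install the new world into whichever pool is idle, then point the
# loader at it; the untouched pool remains as a known-good fallback
zpool set bootfs=rpool1/ROOT rpool1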

An even easier option might just be to boot off a flash drive with a
minimal installation on it, then mount all the writable parts of the
system from the pool (/tmp, /var, /home, etc.) along with any meatier
bits of the installation, e.g. databases.
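Something along these lines, assuming a data pool called "tank" (all
names here are just examples):

# the minimal OS lives on the flash drive; the writable parts are
# mounted from the data pool
zfs create -o mountpoint=/tmp  tank/tmp
zfs create -o mountpoint=/var  tank/var
zfs create -o mountpoint=/home tank/home
# plus the meatier bits, e.g. a database's data directory
zfs create -o mountpoint=/var/db/mysql tank/mysql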

Having said all that, unless you're doing lots of logging it's unlikely
the main OS will actually cause many reads/writes of the binaries on
ZFS. If you have a decent amount of stuff going on, you will find most
of the frequently used stuff will be in the ARC, so you might be better
off having the OS on the main pool for simplicity. After all, it's the
data that's the important part of the system, not the OS. OS configs are
easy to back up, and having a USB stick as a live recovery OS is not a
hard thing to do, so OS recovery is easy. Data recovery is another
matter, though. If you do put it on the same pool, I would separate the
OS off into its own hierarchy, something along the lines of this:

As you can see, I create a new root fs every time I make world, so
rolling back is fairly easy:

system-4k/be                              29.4G   120G   264K  /system-4k/be
system-4k/be/root20110930                 1.73G   120G  1.31G  legacy
system-4k/be/root20111011                 2.03G   120G  1.69G  legacy
system-4k/be/root20111023                 1.98G   120G  1.68G  /system-4k/be/root20111023
system-4k/be/root20111028                 2.00G   120G  1.68G  /system-4k/be/root20111028
system-4k/be/root20111112                 2.08G   120G  1.76G  /system-4k/be/root20111112
system-4k/be/root20111125                 2.56G   120G  2.16G  /system-4k/be/root20111125
system-4k/be/tmp                           372K   122G   372K  /tmp
system-4k/be/usr-local                    3.32G   120G  3.32G  /usr/local/
system-4k/be/usr-obj                       731M   120G   731M  /usr/obj
system-4k/be/usr-ports                    2.34G   120G  1.71G  /usr/ports
system-4k/be/usr-ports/distfiles           641M   120G   641M  /usr/ports/distfiles
system-4k/be/usr-src                       705M   120G   705M  /usr/src
system-4k/be/var                          2.34G   126G   875M  /var
system-4k/be/var/log                      1.46G   126G  1.46G  /var/log
system-4k/be/var/mysql                    34.0M   126G  34.0M  /var/db/mysql
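Each new root is roughly a snapshot-and-clone of the previous one,
something like this (the new dataset name is just an example):

# snapshot the running root and clone it as the next boot environment
zfs snapshot system-4k/be/root20111125@fork
zfs clone system-4k/be/root20111125@fork system-4k/be/root20111220

# install the freshly built world/kernel into the clone, then make
# it the root filesystem for the next boot
zpool set bootfs=system-4k/be/root20111220 system-4k

Rolling back is then just a matter of pointing bootfs at a previous
root again.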

