ZFS: separate pools
spork at bway.net
Tue May 4 02:16:59 UTC 2010
On Sun, 2 May 2010, Wes Morgan wrote:
> On Sun, 2 May 2010, Eric Damien wrote:
>> Hello list.
>> I am taking my first steps with ZFS. In the past, I used to have two UFS
>> slices: one dedicated to the o.s. partitions, and the second to data (/home,
>> etc.). I read that it was possible to recreate that logic with zfs, using
>> separate pools.
>> Considering the example of
>> any idea how I can adapt that to my needs? I am concerned about all the
>> different mountpoints.
> Well, you need not create all those filesystems if you don't want them.
> The pool and FreeBSD will function just fine.
> However, as far as storage is concerned, there is no disadvantage to
> having additional mount points. The only limits each filesystem will have
> are those you explicitly impose. There are many advantages, though. Some
> datasets are inherently compressible or incompressible. For other datasets
> you may not want to schedule snapshots, or you may want to disable
> execution or setuid, or tune checksumming or block sizes, you name it (as
> the examples in the wiki show).
> Furthermore, each pool requires its own vdev. If you create slices on a
> drive and then make each slice its own pool, I would wonder if zfs's
> internal queuing would understand the topology and be able to work as
> efficiently. Just a thought, though.
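The per-dataset tuning mentioned above can be sketched with a few zfs(8) commands. The pool and dataset names here (tank/home, tank/scratch, tank/db) are hypothetical, and the auto-snapshot property name depends on which snapshot tool you use:

```shell
# Per-dataset properties: each filesystem gets only the behavior it needs.
zfs create tank/home
zfs set compression=on tank/home          # compressible user data

zfs create tank/scratch
zfs set exec=off tank/scratch             # no executables allowed here
zfs set setuid=off tank/scratch           # no setuid binaries either
# Convention used by some snapshot schedulers; name varies by tool.
zfs set com.sun:auto-snapshot=false tank/scratch

zfs create tank/db
zfs set recordsize=8K tank/db             # block size tuned for a database
```

Properties are inherited by child datasets unless overridden, so a setting made near the top of the hierarchy applies everywhere below it.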
I have two boxes set up where ZFS is on top of slices like that. One has a
small zpool across 3 disks; the rest of those disks and 3 other disks of
the same size make up another zpool. The hardware is old, so performance
just is not spectacular (old 8-port 3Ware PATA card). I can't tell whether
this config is contributing to the somewhat anemic (by today's standards)
r/w speeds or not.
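A layout like that first box might look something like the following. The device names (ad4 through ad9), partition sizes, labels, and raidz geometry are all assumptions for illustration; the original post does not state them:

```shell
# Each of ad4-ad6 carries two partitions: a small one for "ospool",
# the remainder for "datapool", which also uses whole disks ad7-ad9.
gpart create -s gpt ad4
gpart add -t freebsd-zfs -s 20G -l os0 ad4
gpart add -t freebsd-zfs -l data0 ad4
# ... repeat for ad5 (os1/data1) and ad6 (os2/data2) ...

zpool create ospool raidz gpt/os0 gpt/os1 gpt/os2
zpool create datapool raidz gpt/data0 gpt/data1 gpt/data2 ad7 ad8 ad9
```

Note that both pools share head movement on ad4-ad6, which is the queuing concern raised above.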
Another has 4 drives with a gmirror set up on two of the drives for the OS
(20G out of 1TB). This box performs extremely well (bonnie++ shows
123MB/s writes, 142MB/s reads).
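A sketch of that 4-drive box, under assumptions the post doesn't spell out (device names ada0-ada3, GPT labels, and raidz for the pool geometry):

```shell
# gmirror for the OS on the first 20G of two drives, ZFS on everything else.
gpart create -s gpt ada0
gpart add -t freebsd-ufs -s 20G -l os0 ada0
gpart add -t freebsd-zfs -l tank0 ada0
# ... same layout on ada1 (os1/tank1) ...

gmirror label -v gm0 /dev/gpt/os0 /dev/gpt/os1   # mirrored OS device
newfs /dev/mirror/gm0

# Pool spans the large partitions plus the two whole remaining disks.
zpool create tank raidz gpt/tank0 gpt/tank1 ada2 ada3
```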
Just some random data points. When I was reading about ZFS I did come
across some vague notion that ZFS wanted the entire drive to better deal
with queueing; I'm not sure if that was official Sun docs or some random blog.