ZFS & GEOM with many odd drive sizes

Doug Rabson dfr at rabson.org
Thu Jul 19 20:07:07 UTC 2007


On Thursday 19 July 2007, Mark Powell wrote:
> On Thu, 19 Jul 2007, Pawel Jakub Dawidek wrote:
> > On Thu, Jul 19, 2007 at 11:19:08AM +0100, Mark Powell wrote:
> >>   What I want to know is, does the new volume have to be the same
> >> actual device name or can it be substituted with another?
> >>   i.e. can I remove, for example, one of the 448GB gconcats e.g.
> >> gc1 and replace that with a new 750GB drive e.g. ad6?
> >>   Eventually so that once all volumes are replaced the zpool could
> >> be, for example, 4x750GB or 2.25TB of usable storage.
> >>   Many thanks for any advice on these matters which are new to me.
> >
> > All you described above should work.
>
> Thanks Pawel, for your response and even more so for all your time
> spent working on ZFS.
>
> Should I expect much greater CPU usage with ZFS?
>    I previously had a geom raid5 array which barely broke a sweat on
> benchmarks, i.e. simple large dd reads and writes. With ZFS on the same
> hardware I notice that 50-60% system CPU usage is usual during such
> tests. Before, the network was the bottleneck, but now it's the ZFS
> array. I expected it would have to do a bit more 'thinking', but is
> such a dramatic increase normal?
>
>    Many thanks again.
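
On the replacement question quoted above: Pawel's answer covers it, but 
roughly the procedure would look like the following. This is only a 
sketch -- I'm assuming your pool is called "tank"; substitute your own 
pool and device names (gc1 and ad6 are the ones you mentioned):

  # replace one gconcat volume with the new 750GB drive
  zpool replace tank gc1 ad6
  # wait for the resilver to finish before touching the next volume
  zpool status tank

Repeat that for each volume, one at a time. Once the last one is 
replaced, the pool should pick up the extra space (an export/import may 
be needed before you see it).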

ZFS checksums every block it reads from disk, which may be your problem. 
In normal usage this isn't a big deal, because many reads are satisfied 
from the cache.
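
If you want to see how much of that CPU goes to checksumming, one rough 
test (just a sketch -- the dataset name is made up, default mountpoints 
are assumed, and you give up corruption detection on that dataset, so 
only put throwaway data on it):

  # create a scratch dataset with checksums turned off
  zfs create tank/nocksum
  zfs set checksum=off tank/nocksum
  # write and read back a file larger than RAM so the read isn't
  # served from the ARC
  dd if=/dev/zero of=/tank/nocksum/bigfile bs=1m count=8192
  dd if=/tank/nocksum/bigfile of=/dev/null bs=1m

Compare the system CPU during those runs against the same dd on a normal 
dataset; the difference is roughly what the checksums cost you.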

