ZFS & GEOM with many odd drive sizes

Mark Powell M.S.Powell at salford.ac.uk
Thu Jul 19 17:19:18 UTC 2007


On Thu, 19 Jul 2007, Pawel Jakub Dawidek wrote:

> On Thu, Jul 19, 2007 at 11:19:08AM +0100, Mark Powell wrote:
>>   What I want to know is, does the new volume have to be the same actual
>> device name or can it be substituted with another?
>>   i.e. can I remove, for example, one of the 448GB gconcats e.g. gc1 and
>> replace that with a new 750GB drive e.g. ad6?
>>   Eventually so that once all volumes are replaced the zpool could be, for
>> example, 4x750GB or 2.25TB of usable storage.
>>   Many thanks for any advice on these matters which are new to me.
>
> All you described above should work.
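For reference, the drive-by-drive replacement described above would look roughly like this. The pool name "tank" is an assumption; the device names gc1 and ad6 come from the example in the question:

```shell
# Hypothetical pool name "tank"; gc1 is the old 448GB gconcat,
# ad6 the new 750GB drive. Replace one device at a time and wait
# for the resilver to complete before touching the next one.
zpool replace tank gc1 ad6

# Watch resilver progress until it reports completion.
zpool status tank

# Once every device has been swapped for a larger one, the pool can
# use the extra capacity (on ZFS versions of this era an
# export/import of the pool may be needed before it shows up).
```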

Thanks, Pawel, for your response and even more so for all your time spent 
working on ZFS.

Should I expect much greater CPU usage with ZFS?
   I previously had a geom raid5 array which barely broke a sweat on 
benchmarks, i.e. simple large dd reads and writes. With ZFS on the same 
hardware I notice 50-60% system CPU usage is usual during such tests. 
Previously the network was the bottleneck, but now it's the ZFS array. I 
expected it would have to do a bit more 'thinking', but is such a dramatic 
increase normal?
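For what it's worth, the kind of sequential test meant above is a sketch along these lines (the file path and sizes are illustrative, not from the original; on FreeBSD dd the block size would be written bs=1m, the numeric form below is portable):

```shell
# Write a large file sequentially onto the ZFS filesystem
# (path is hypothetical), then read it back, watching system
# CPU in top(1) or systat(1) in another terminal.
dd if=/dev/zero of=/tank/testfile bs=1048576 count=1024
dd if=/tank/testfile of=/dev/null bs=1048576
rm /tank/testfile
```

Note that a /dev/zero write test understates ZFS checksumming cost less than it would with compression enabled, since zeros compress trivially; a file of random data gives a more honest read-side number.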

   Many thanks again.

-- 
Mark Powell - UNIX System Administrator - The University of Salford
Information Services Division, Clifford Whitworth Building,
Salford University, Manchester, M5 4WT, UK.
Tel: +44 161 295 4837  Fax: +44 161 295 5888  www.pgp.com for PGP key
