ZFS & GEOM with many odd drive sizes

Mark Powell M.S.Powell at salford.ac.uk
Thu Jul 19 10:45:59 UTC 2007


Hi,
   I'd like to experiment with ZFS. To that end I want to put together
a running array from a rather ad hoc collection of old drives:

  3x250GB
  3x200GB
  1x400GB


I planned to arrange them in 3 pairs of 250+200. That way I'd end up 
with effectively 4 drives:

  3x450GB
  1x400GB

I'd use gmirror to make a small 2GB root and swap from the extra 50GB 
on each of the 3 pairs, then gconcat to join the remaining 448GB from 
each pair into a single volume. Apparently root is possible on ZFS with 
a small ufs to boot from:

http://wiki.freebsd.org/ZFSOnRoot
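
   Something like the following is what I have in mind (all device and
partition names here are made up for illustration; say the 250GB drives
are ad0-ad2, each carrying a 2GB 'a' partition and a 248GB 'd'
partition, and the 200GB drives are ad3-ad5):

   # 3-way gmirror for the small ufs root/swap, from the 2GB partitions
   gmirror label -v gm0 /dev/ad0a /dev/ad1a /dev/ad2a

   # join each leftover 248GB partition with a whole 200GB drive
   gconcat label gc0 /dev/ad0d /dev/ad3
   gconcat label gc1 /dev/ad1d /dev/ad4
   gconcat label gc2 /dev/ad2d /dev/ad5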

   Then make a ZFS raidz from the 3x448GB + 1x400GB, effectively giving 
a zpool with 1200GB of real storage. The extra 3x48GB on the concats 
won't be accessible, since raidz only uses as much of each member as the 
smallest one, here the 400GB drive.
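
   If I've got that right, creating the pool would be something like
this (the pool name 'tank' and the 400GB drive as ad7 are again just
made-up examples):

   # raidz across the three concats plus the bare 400GB drive
   zpool create tank raidz gc0 gc1 gc2 ad7

   # raidz usable capacity is (N - 1) x smallest member:
   #   (4 - 1) x 400GB = 1200GB
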
   I want to be able to increase the size of this volume later, by 
replacing drives when they fail, or it becomes economical to do so.
   I know removing a volume from a zpool and replacing it with a larger 
one is possible; the zpool will resilver (self-heal) the data onto the 
new volume. Eventually, when the final volume is replaced by a larger 
one, the extra space becomes available for use. That's correct, right?
   What I want to know is: does the new volume have to appear under the 
same device name, or can a differently-named device be substituted?
   i.e. can I remove, for example, one of the 448GB gconcats (e.g. gc1) 
and replace it with a new 750GB drive (e.g. ad6)?
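
   In other words, using the made-up names from above, would something
like this just work, with ZFS resilvering onto the differently-named
device:

   # swap the gc1 concat for the new 750GB drive
   zpool replace tank gc1 ad6

   # watch the resilver progress
   zpool status tank
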
   Eventually, once all the volumes have been replaced, the zpool could 
be, for example, 4x750GB, i.e. 2.25TB of usable storage.
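
   By the same capacity rule as above:

   usable = (N - 1) x smallest member
          = (4 - 1) x 750GB
          = 2250GB, i.e. 2.25TB
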
   Many thanks for any advice on these matters, which are all new to me.

-- 
Mark Powell - UNIX System Administrator - The University of Salford
Information Services Division, Clifford Whitworth Building,
Salford University, Manchester, M5 4WT, UK.
Tel: +44 161 295 4837  Fax: +44 161 295 5888  www.pgp.com for PGP key

