ZFS resize disk vdev
zbeeble at gmail.com
Mon Dec 15 12:39:42 PST 2008
On Tue, Dec 9, 2008 at 11:04 AM, Bryan Alves <bryanalves at gmail.com> wrote:
> On Tue, Dec 9, 2008 at 3:22 AM, James R. Van Artsdalen <
> james-freebsd-fs2 at jrv.org> wrote:
> > I'm not sure how ZFS reacts to an existing disk drive suddenly becoming
> > larger. Real disk drives don't do that and ZFS is intended to use real
> > disks. There are some uberblocks (pool superblocks) at the end of the
> > disk and ZFS probably won't be able to find them if the uberblocks at
> > the front of the disk are clobbered and the "end of the disk" has moved
> > out away from the remaining uberblocks.
Very well, in fact: one way to "grow" a RAID-Z1 or Z2 pool is to replace
each disk with a larger one. When the last one has finished resilvering,
you will have more space.
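A minimal sketch of that replace-and-resilver cycle (the pool name "tank" and the da* device names are assumptions, not from the original message):

```shell
# Hypothetical pool/device names: swap one old disk for a larger one,
# then wait for the resilver to finish before touching the next disk.
zpool replace tank da0 da6
zpool status tank          # watch until resilvering completes

# Repeat for each remaining disk. The extra space appears once the last
# replacement has resilvered; on newer ZFS releases you may also need
# "zpool set autoexpand=on tank" (or an export/import) to see it.
```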
> My reason for wanting to use my hardware controller isn't for speed, it's
> for the ability to migrate in place. I'm currently using 5 750GB drives,
> and I would like the flexibility to be able to purchase a 6th and grow my
> array by 750GB in place. If I could achieve something, anything, similar
> ZFS (namely, buy an amount of disks smaller than the number of total disks
> in the array and see a gain in storage capacity), I would use ZFS.
You can't add one disk... but you can add several (easily). There are two
ways ZFS grows, and both are well documented.
The first is to add another set of disks (at least 2 for mirroring, 3 for Z1,
and 4 for Z2). ZFS recommends no more than 9 disks per RAID group anyway.
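Adding a vdev looks roughly like this (pool and device names are hypothetical; the disk counts match the minimums above):

```shell
# Hypothetical pool/device names. Attach a second raidz1 vdev of three
# new disks to the existing pool; ZFS then stripes new writes across
# both vdevs.
zpool add tank raidz1 da6 da7 da8

# Confirm the new vdev appears alongside the original one:
zpool status tank
```

Note that `zpool add` is one-way: a top-level vdev cannot be removed from the pool afterwards on the ZFS versions of this era.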
In my case, I have 6 750G drives in my array. They're pretty much full...
so I'm looking at adding another 6 1T drives shortly. This is transparent,
and the "industry" would call this RAID50... that is, two RAID 5 (Z1) groups
striped together.
The second way to add space is to replace disks with larger ones
(one by one). Let's say, down the road, that my disks are full again and 4T
disks are common and cheap. I replace each 750G disk with a 4T disk and let
things resilver. My array would have been 8.75T (3.75T from the 750s
and 5T from the 1T drives) and it would suddenly be 25T (20T from the 4T
drives and 5T from the 1T drives). This increase in space occurs when the
last drive is resilvered.
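The arithmetic above follows from raidz1 keeping one disk's worth of parity per vdev, so a vdev of n disks holds roughly (n - 1) disks of data. A quick sketch of the numbers in the message:

```python
# Approximate usable capacity of a pool of raidz1 vdevs, in TB.
# A raidz1 vdev of n disks stores roughly (n - 1) * disk_size of data
# (one disk's worth goes to parity).

def raidz1_usable(n_disks, disk_tb):
    """Approximate usable capacity of one raidz1 vdev, in TB."""
    return (n_disks - 1) * disk_tb

# Before: one 6 x 750 GB raidz1 vdev plus one 6 x 1 TB raidz1 vdev.
before = raidz1_usable(6, 0.75) + raidz1_usable(6, 1.0)  # 3.75 + 5.0

# After replacing every 750 GB disk with a 4 TB disk and resilvering:
after = raidz1_usable(6, 4.0) + raidz1_usable(6, 1.0)    # 20.0 + 5.0

print(before, after)  # 8.75 25.0
```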
This last step is good because at some point drives are not worth the power
to run. I turned off my array of 18G SCSI drives a couple of years ago ---
it wasn't worth the power. In the ZFS realm... instead of transferring the
data and turning off the system, you upgrade.
More information about the freebsd-fs mailing list