ZFS resize disk vdev
bryanalves at gmail.com
Mon Dec 15 13:12:09 PST 2008
On Mon, Dec 15, 2008 at 3:39 PM, Zaphod Beeblebrox <zbeeble at gmail.com>wrote:
> On Tue, Dec 9, 2008 at 11:04 AM, Bryan Alves <bryanalves at gmail.com> wrote:
>> On Tue, Dec 9, 2008 at 3:22 AM, James R. Van Artsdalen <
>> james-freebsd-fs2 at jrv.org> wrote:
>> > I'm not sure how ZFS reacts to an existing disk drive suddenly becoming
>> > larger. Real disk drives don't do that and ZFS is intended to use real
>> > disks. There are some uberblocks (pool superblocks) at the end of the
>> > disk and ZFS probably won't be able to find them if the uberblocks at
>> > the front of the disk are clobbered and the "end of the disk" has moved
>> > out away from the remaining uberblocks.
> Very well, in fact: one way to "grow" a RAIDZ1 or RAIDZ2 pool is to
> replace each disk with a larger one. When the last one finishes
> resilvering, you will have more space.
> My reason for wanting to use my hardware controller isn't for speed, it's
>> for the ability to migrate in place. I'm currently using 5 750GB drives,
>> and I would like the flexibility to be able to purchase a 6th and grow my
>> array by 750GB in place. If I could achieve something, anything, similar
>> with ZFS (namely, buy fewer disks than the total number already in the
>> array and see a gain in storage capacity), I would use ZFS.
> You can't add one disk... but you can add several (easily). There are two
> ways ZFS grows and both are well documented.
> The first is to add another set of disks (at least 2 for mirroring, 3 for Z1
> and 4 for Z2). ZFS recommends not more than 9 disks per RAID group anyway.
> In my case, I have 6 750G drives in my array. They're pretty much full...
> so I'm looking at adding another 6 1T drives shortly. This is transparent
> and the "industry" would call this RAID50... that is two raid 5 (Z1) groups
> striped together.
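That "add another RAIDZ group" path is a single command. A minimal sketch, assuming a pool named tank and six new disks da6 through da11 (both the pool name and the device names are hypothetical):

```shell
# Add a second RAIDZ1 vdev to an existing pool; ZFS stripes writes
# across both vdevs from then on (the "RAID50" shape described above).
zpool add tank raidz da6 da7 da8 da9 da10 da11

# Confirm the new vdev and the larger pool size.
zpool status tank
zpool list tank
```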
> The second way to add space is to replace disks with larger ones
> (one-by-one). Let's say, down the road, that my disks are full again and 4T
> disks are common and cheap. I replace each 750G disk with a 4T disk and let
> things resilver. My array would have been 8.75T (3.75T from the 750s
> and 5T from the 1T drives) and it would suddenly be 25T (20T from the 4T
> drives and 5T from the 1T drives). This increase in space occurs when the
> last drive is resilvered.
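The one-by-one upgrade goes through `zpool replace` for each member in turn. A hedged sketch with the same hypothetical pool and device names; note that on newer ZFS versions the growth after the last resilver is gated behind the `autoexpand` pool property, while older versions expand automatically:

```shell
# Swap one 750G member for a 4T disk and let it resilver.
zpool replace tank da0 da12
zpool status tank            # watch resilver progress before the next swap

# Repeat for each remaining 750G disk. On newer ZFS, enable
# automatic expansion so the pool grows after the last resilver:
zpool set autoexpand=on tank
zpool list tank              # verify the new capacity
```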
> This last step is good because at some point drives are not worth the power
> to run. I turned off my array of 18G SCSI drives a couple of years ago ---
> it wasn't worth the power. In the ZFS realm... instead of transferring the
> data and turning off the system, you upgrade.
In the case of option one, after this stripe of two raidz's is created,
those old drives can't be pulled from the array, can they? More
specifically, after we "upgrade" to what would be termed RAID50, we can't
"downgrade" back to RAID5, right?