ZFS: Can't repair raidz2 (Cannot replace a replacing device)

Steven Schlansker stevenschlansker at gmail.com
Thu Dec 24 01:02:37 UTC 2009



On Dec 23, 2009, at 4:32 PM, Rich wrote:

> That's fascinating - I'd swear it used to be the case (in
> Solaris-land, at least) that resilvering with a smaller vdev resulted
> in it shrinking the available space on other vdevs as though they were
> all as large as the smallest vdev available.

Pretty sure that this doesn't exist for raidz.  I haven't tried it, though,
and Sun's bug database's search blows chunks.  I remember seeing
a bug filed on it before, but I can't for the life of me find it.
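Anyone who wants to check the raidz behavior cheaply could do it with
file-backed vdevs rather than real disks.  Something like this (paths and
sizes are made up; zpool needs root) should show whether replace accepts a
slightly smaller device:

```shell
# Scratch raidz pool on sparse files (illustrative names/sizes)
truncate -s 256m /tmp/v1 /tmp/v2 /tmp/v3
zpool create testz raidz /tmp/v1 /tmp/v2 /tmp/v3

# Try to swap one member for a file that is 1 MB smaller
truncate -s 255m /tmp/v4
zpool replace testz /tmp/v1 /tmp/v4
# On FreeBSD this reportedly fails with "device is too small" /
# "insufficient space" rather than shrinking the pool to fit.

# Clean up
zpool destroy testz
rm -f /tmp/v1 /tmp/v2 /tmp/v3 /tmp/v4
```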

> 
> In particular, I'd swear I've done this with some disk arrays I have
> laying around with 7x removable SCA drives, which I have in 2, 4.5, 9,
> and 18 GB varieties...
> 
> But maybe I'm just hallucinating, or this went away a long time ago.
> (This was circa b70 in Solaris.)

Shrinking of mirrored drives seems like it might work.
Again, Sun's bug database isn't at all clear about what can and
can't be shrunk - maybe I should get a Solaris bootdisk and see
if I can shrink it from there...
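The same file-vdev trick would answer the mirror question, at least for
FreeBSD's ZFS: replace both sides of a mirror with smaller files, one at a
time, and see whether the pool size ever goes down or whether the replace is
refused outright (again, names and sizes are illustrative and this needs
root):

```shell
# Two-way mirror on sparse files
truncate -s 256m /tmp/m1 /tmp/m2
zpool create testm mirror /tmp/m1 /tmp/m2

# Replace each side with a smaller file in turn
truncate -s 200m /tmp/m3 /tmp/m4
zpool replace testm /tmp/m1 /tmp/m3
zpool replace testm /tmp/m2 /tmp/m4
zpool list testm        # check the SIZE column afterwards

zpool destroy testm
rm -f /tmp/m1 /tmp/m2 /tmp/m3 /tmp/m4
```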

> 
> I know you can't do this in FreeBSD; I've also run into the
> "insufficient space" problem when trying to replace with a smaller
> vdev.
> 
> - Rich
> 
> On Wed, Dec 23, 2009 at 7:29 PM, Steven Schlansker
> <stevenschlansker at gmail.com> wrote:
>> 
>> On Dec 22, 2009, at 3:15 PM, Miroslav Lachman wrote:
>> 
>>> Steven Schlansker wrote:
>>>> As a corollary, you may notice some funky concat business going on.
>>>> This is because I have drives which are very slightly different in size (<  1MB)
>>>> and whenever one of them goes down and I bring the pool up, it helpfully (?)
>>>> expands the pool by a whole megabyte then won't let the drive back in.
>>>> This is extremely frustrating... is there any way to fix that?  I'm
>>>> eventually going to keep expanding each of my drives one megabyte at a time
>>>> using gconcat and space on another drive!  Very frustrating...
>>> 
>>> You can avoid it by partitioning the drives to a well-known 'minimal' size (the size of the smallest disk) and using the partition instead of the raw disk.
>>> For example, ad12s1 instead of ad12 (if you create slices with fdisk)
>>> or ad12p1 (if you create partitions with gpart).
>> 
>> 
>> Yes, this makes sense.  Unfortunately, I didn't do this when I first made the array
>> as the documentation says you should use whole disks so that it can enable the write
>> cache, which I took to mean you shouldn't use a partition table.  And now there's no
>> way to fix it after the fact, as you can't shrink a zpool even by a single
>> MB :(
>> 
>> 
>> _______________________________________________
>> freebsd-fs at freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
>> 
> 
> 
> 
> -- 
> 
> [We] use bad software and bad machines for the wrong things. -- R. W. Hamming
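For the record, the fixed-size-partition approach Miroslav describes would
look roughly like this with gpart (device names and the size are made up -
round the size down to comfortably fit your smallest disk, and repeat the
partitioning for each drive):

```shell
# Give ZFS identically-sized partitions instead of raw disks, so a
# replacement drive that is a megabyte smaller still fits.
gpart create -s GPT ad12
gpart add -t freebsd-zfs -s 930g ad12    # fixed size, below the smallest disk

# ... same for ad14, ad16, etc., then build the pool on the partitions:
zpool create tank raidz2 ad12p1 ad14p1 ad16p1
```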


