"zpool remove" failing
jdelisle
jdelisle@gmail.com
Sat Jan 19 16:39:19 UTC 2019
I'm grasping at straws here, but could this functionality be incomplete in
11.2-RELEASE? Would connecting my disks to a 12.0-RELEASE system help?
It's easy for me to do that, but is it safe to move them to a 12.0-RELEASE
system, run the "zpool remove", then move them back to my 11.2-RELEASE
system?
Is there any diagnostic output I could share that would help?
I'm still confused by the error "zpool remove" reports. I'm attempting to
remove a top-level mirror vdev, and all of the disks report the same
512-byte sector size.
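As I understand it, the "same sector size" check in "zpool remove" compares
each top-level vdev's ashift (the pool's allocation shift), not the drives'
reported logical sector size, so a 512-byte-sector SSD pair that advertises a
4K stripesize can end up as ashift=12 while the older mirrors are ashift=9.
One way to compare them (a sketch, assuming zdb can read the pool's cachefile
at /data/zfs/zpool.cache, per "zpool get cachefile"):

```shell
# Dump the cached pool config and keep only the vdev type lines and their
# ashift values; a mismatch between mirror-8 and the other mirrors would
# explain the "same sector size" error from zpool remove.
zdb -U /data/zfs/zpool.cache -C nebula | grep -E "mirror|ashift"
```

If mirror-8 shows ashift: 12 while the other mirrors show ashift: 9, that
mismatch, rather than the physical sector size, would be what the error is
complaining about.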
On Thu, Jan 17, 2019 at 5:34 PM jdelisle <jdelisle@gmail.com> wrote:
> Hello,
>
> I've run into a problem and I'm hoping the knowledgeable folks here can
> help.
>
> I've accidentally added a mirror vdev to a pool, and would like to remove
> it. Note "mirror-8" in the output down below, it's what I'd like to remove.
>
> When I attempt to remove it, I get the following message:
>
> [root@omega ~]# zpool remove nebula mirror-8
> cannot remove mirror-8: invalid config; all top-level vdevs must have the
> same sector size and not be raidz.
>
> The disks in my pool are all 512 byte sector disks.
>
> Any idea what I'm doing wrong?
>
> ##### My system's uname -a - it's FreeNAS.
> [root@omega ~]# uname -a
> FreeBSD omega.nebula.pw 11.2-STABLE FreeBSD 11.2-STABLE #0
> r325575+fc3d65faae6(HEAD): Thu Dec 20 16:12:30 EST 2018
> root@nemesis.tn.ixsystems.com:/freenas-releng-final/freenas/_BE/objs/freenas-releng-final/freenas/_BE/os/sys/FreeNAS.amd64
> amd64
>
> ##### The pool I accidentally added a mirror vdev to (mirror-8):
> [root@omega ~]# zpool status -v nebula
> pool: nebula
> state: ONLINE
> scan: none requested
> config:
>
>   NAME                                            STATE     READ WRITE CKSUM
>   nebula                                          ONLINE       0     0     0
>     mirror-0                                      ONLINE       0     0     0
>       gptid/2fdc125f-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/30816c63-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-1                                      ONLINE       0     0     0
>       gptid/31403c46-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/31f1f182-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-2                                      ONLINE       0     0     0
>       gptid/32a4cfa3-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/3356fcab-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-3                                      ONLINE       0     0     0
>       gptid/341311e6-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/34c9952c-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-4                                      ONLINE       0     0     0
>       gptid/3580b3ad-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/364188ba-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-5                                      ONLINE       0     0     0
>       gptid/370908e5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/37cf00a5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-6                                      ONLINE       0     0     0
>       gptid/388e6fef-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/3945fee7-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-7                                      ONLINE       0     0     0
>       gptid/3a08fb45-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>       gptid/3ad7a643-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
>     mirror-8                                      ONLINE       0     0     0
>       da33p1                                      ONLINE       0     0     0
>       da34p1                                      ONLINE       0     0     0
>
> errors: No known data errors
> [root@omega ~]#
>
>
> ##### My ZFS features etc.
> [root@omega ~]# zpool get all nebula
> NAME    PROPERTY                       VALUE                  SOURCE
> nebula  size                           21.8T                  -
> nebula  capacity                       64%                    -
> nebula  altroot                        /mnt                   local
> nebula  health                         ONLINE                 -
> nebula  guid                           9027025247116299332    default
> nebula  version                        -                      default
> nebula  bootfs                         -                      default
> nebula  delegation                     on                     default
> nebula  autoreplace                    off                    default
> nebula  cachefile                      /data/zfs/zpool.cache  local
> nebula  failmode                       continue               local
> nebula  listsnapshots                  off                    default
> nebula  autoexpand                     on                     local
> nebula  dedupditto                     0                      default
> nebula  dedupratio                     1.00x                  -
> nebula  free                           7.76T                  -
> nebula  allocated                      14.0T                  -
> nebula  readonly                       off                    -
> nebula  comment                        -                      default
> nebula  expandsize                     -                      -
> nebula  freeing                        0                      default
> nebula  fragmentation                  2%                     -
> nebula  leaked                         0                      default
> nebula  bootsize                       -                      default
> nebula  checkpoint                     -                      -
> nebula  feature@async_destroy          enabled                local
> nebula  feature@empty_bpobj            active                 local
> nebula  feature@lz4_compress           active                 local
> nebula  feature@multi_vdev_crash_dump  enabled                local
> nebula  feature@spacemap_histogram     active                 local
> nebula  feature@enabled_txg            active                 local
> nebula  feature@hole_birth             active                 local
> nebula  feature@extensible_dataset     enabled                local
> nebula  feature@embedded_data          active                 local
> nebula  feature@bookmarks              enabled                local
> nebula  feature@filesystem_limits      enabled                local
> nebula  feature@large_blocks           enabled                local
> nebula  feature@sha512                 enabled                local
> nebula  feature@skein                  enabled                local
> nebula  feature@device_removal         enabled                local
> nebula  feature@obsolete_counts        enabled                local
> nebula  feature@zpool_checkpoint       enabled                local
>
> ###### The two disks in the mirrored vdev I wish to remove:
> [root@omega ~]# diskinfo -v /dev/da33
> /dev/da33
> 512 # sectorsize
> 120034123776 # mediasize in bytes (112G)
> 234441648 # mediasize in sectors
> 4096 # stripesize
> 0 # stripeoffset
> 14593 # Cylinders according to firmware.
> 255 # Heads according to firmware.
> 63 # Sectors according to firmware.
> ATA INTEL SSDSA2CW12 # Disk descr.
> CVPR1132000R120LGN # Disk ident.
> id1,enc@n50050cc1020371ce/type@0/slot@13 # Physical path
> Yes # TRIM/UNMAP support
> 0 # Rotation rate in RPM
> Not_Zoned # Zone Mode
>
> [root@omega ~]# diskinfo -v /dev/da34
> /dev/da34
> 512 # sectorsize
> 120034123776 # mediasize in bytes (112G)
> 234441648 # mediasize in sectors
> 4096 # stripesize
> 0 # stripeoffset
> 14593 # Cylinders according to firmware.
> 255 # Heads according to firmware.
> 63 # Sectors according to firmware.
> ATA INTEL SSDSA2CW12 # Disk descr.
> CVPR119300D5120LGN # Disk ident.
> id1,enc@n50050cc1020371ce/type@0/slot@7 # Physical path
> Yes # TRIM/UNMAP support
> 0 # Rotation rate in RPM
> Not_Zoned # Zone Mode
>
> [root@omega ~]#
>
>
> ##### This is an example of one of the many disks in my mirrored pairs.
> They're all the same make/model of disk, with 512-byte sectors just like
> the SSDs above.
> [root@omega ~]# diskinfo -v /dev/da10
> /dev/da10
> 512 # sectorsize
> 3000592982016 # mediasize in bytes (2.7T)
> 5860533168 # mediasize in sectors
> 0 # stripesize
> 0 # stripeoffset
> 364801 # Cylinders according to firmware.
> 255 # Heads according to firmware.
> 63 # Sectors according to firmware.
> HITACHI HUS72303CLAR3000 # Disk descr.
> YHJEXGHD # Disk ident.
> id1,enc@n50050cc1020371ce/type@0/slot@f # Physical path
> No # TRIM/UNMAP support
> 7200 # Rotation rate in RPM
> Not_Zoned # Zone Mode
>
> [root@omega ~]#
>
More information about the freebsd-fs mailing list