zfs disk replace issue

Bakul Shah
bakul at bitblocks.com
Mon May 28 07:17:13 UTC 2012

I have a zpool with two mirrors of two disks each. Three of the
disks are 1TB: I replaced three of the original 300GB disks with
1TB disks and there were no problems.
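For context, each swap was done along the usual lines, roughly
as sketched below (I didn't record the exact commands; device
names are illustrative and OLDDEV is a placeholder for the disk
being replaced):

$ gpart create -s gpt ada4          # partition the new 1TB disk
$ gpart add -t freebsd-zfs ada4     # one freebsd-zfs partition -> ada4p1
$ zpool replace z OLDDEV ada4p1     # resilver onto the new partition
$ zpool status z                    # wait for the resilver to finish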
Recently I upgraded to a new machine and transferred the old
ZFS disks to it, and everything was OK.
I then replaced the final 300GB disk with a 1TB disk. I
noticed that after the resilver finished (in two hours), "zpool
status" kept showing a 'replacing-0' vdev with both the old and
the new disk in the pool. I thought it would take the old disk
out automatically?  So I manually "zpool detach"ed the old
disk, but the size of the mirror has not changed.
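The detach was just the single command below, where OLD stands
for whatever name the old 300GB disk showed under the replacing
vdev in "zpool status":

$ zpool detach z OLD    # OLD = the old disk's entry in the replacing vdev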
Is this a bug, or did I miss some step? I'd appreciate any help
to make the extra space usable! This pool is the root pool, so
it was mounted when I did this. Maybe that was the problem?
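One possibly relevant detail from zpool(8): vdev expansion is
deferred unless the pool's autoexpand property is on, and
"zpool online -e" can expand a device to use all available
space. I haven't tried the following yet, and I don't know if
this is the missing step; the device names are those of the
small mirror in the transcript below:

$ zpool get autoexpand z            # expansion is deferred while this is off
$ zpool online -e z ada1p1 ada4p1   # ask ZFS to grow onto the full 931G partitions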
Thanks,
Bakul
Rough transcript follows:
$ zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
z            832G   330G     49     98   887K   471K
  mirror     603G   327G     37     64   800K   329K
    ada2p1      -      -     19     20   425K   330K
    ada3p1      -      -     19     20   409K   330K
  mirror     229G  3.39G     12     33  86.9K   142K
    ada4p1      -      -      6     33  47.0K   143K
    ada1p1      -      -      6     33  64.7K   143K
----------  -----  -----  -----  -----  -----  -----
$  gpart list ada1 ada2 ada3 ada4 | grep -A2 p1
1. Name: ada1p1
   Mediasize: 1000204851712 (931G)
   Sectorsize: 512
--
1. Name: ada2p1
   Mediasize: 1000204851712 (931G)
   Sectorsize: 512
--
1. Name: ada3p1
   Mediasize: 1000204851712 (931G)
   Sectorsize: 512
--
1. Name: ada4p1
   Mediasize: 1000204851712 (931G)
   Sectorsize: 512