ZFS RAID 10 capacity expansion and uneven data distribution

InterNetX - Juergen Gotteswinter jg at internetx.com
Tue May 12 14:19:26 UTC 2015



On 12.05.2015 at 15:58, Kai Gallasch wrote:
> Hello list.
> 
> What is the preferred way to expand a mirrored or RAID 10 zpool with
> additional mirror pairs?
> 
> On one server I am currently using a four-disk RAID 10 zpool:
> 
> 	zpool              ONLINE       0     0     0
> 	  mirror-0         ONLINE       0     0     0
> 	    gpt/zpool-da2  ONLINE       0     0     0
> 	    gpt/zpool-da3  ONLINE       0     0     0
> 	  mirror-1         ONLINE       0     0     0
> 	    gpt/zpool-da4  ONLINE       0     0     0
> 	    gpt/zpool-da5  ONLINE       0     0     0
> 
> Originally the pool consisted of only one mirror (zpool-da2 and zpool-da3)
> 
> I then used "zpool add" to add mirror-1 to the pool
> 
> Directly after adding the new mirror I had all the old data physically
> sitting on the old mirror and no data on the new disks.

yep, this is the expected result. zpool add never touches existing
data - blocks that are already written stay on mirror-0, only new
writes can land on mirror-1.
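
you can see it directly in the per-vdev numbers (assuming the pool is
really named "zpool", as in your status output):

	zpool list -v zpool

right after the add, ALLOC and CAP on mirror-0 sit far above
mirror-1; the FRAG column there is also reported per vdev.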

> 
> So there is much imbalance in the data distribution across the RAID 10.
> The effect is that the IOPS are not evenly distributed across all
> devs of the pool. For example, "gstat -p" showed, while the server was
> very busy, that the old mirror pair maxes out at 100% I/O usage while
> the other one is almost idle.

right, works as designed. reads have to come from wherever the blocks
were originally written, so the old mirror ends up carrying almost all
of the load.
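
zpool itself can show the same imbalance per vdev, no gstat needed
(the 5-second interval below is just an example):

	zpool iostat -v zpool 5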

> 
> I also noted that the old mirror pair had a FRAG of about 50%, while
> the new one was only at 3%.
> 

same here. FRAG is free-space fragmentation, so a vdev that has seen a
lot of write/free churn will always show a higher value than a freshly
added, nearly empty one (compare the FRAG column in the zpool list -v
output above).

> So is it generally not a good idea to expand a mirrored pool or RAID 10
> pool with new mirror pairs?
> 

depends. if the pool sees constant write/delete turnover, the
allocator favors the emptier vdev and things even out over time. if
the data is mostly static, the imbalance stays until you rewrite it.

> Or by which procedure can the existing data in the pool be evenly
> distributed across all devices inside the pool?
> 

destroy / recreate and restore from backup. zfs has no rebalance
command; the only way to redistribute existing data is to rewrite it.
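
if taking the pool down is not an option, rewriting the data in place
gets you most of the way, since fresh writes are spread across all
vdevs (biased toward the emptier one). a minimal sketch, assuming a
dataset named zpool/data (placeholder) and enough free space for a
temporary second copy:

	zfs snapshot zpool/data@rebalance
	zfs send zpool/data@rebalance | zfs receive zpool/data.new
	# verify the copy, then swap it in
	zfs destroy -r zpool/data
	zfs rename zpool/data.new zpool/data
	zfs destroy zpool/data@rebalance

mountpoints and other properties may need fixing up afterwards, and
anything writing to the dataset has to be stopped while you do this -
treat it as a sketch, not a drop-in script.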

> Any hint appreciated.
> 
> Regards,
> Kai.
> 

