ZFS RAID 10 capacity expansion and uneven data distribution

Kai Gallasch k at free.de
Tue May 12 14:05:36 UTC 2015


Hello list.

What is the preferred way to expand a mirrored or RAID 10 zpool with
additional mirror pairs?

On one server I am currently using a four-disk RAID 10 zpool:

	zpool              ONLINE       0     0     0
	  mirror-0         ONLINE       0     0     0
	    gpt/zpool-da2  ONLINE       0     0     0
	    gpt/zpool-da3  ONLINE       0     0     0
	  mirror-1         ONLINE       0     0     0
	    gpt/zpool-da4  ONLINE       0     0     0
	    gpt/zpool-da5  ONLINE       0     0     0

Originally the pool consisted of only one mirror (zpool-da2 and
zpool-da3).

I then used "zpool add" to add mirror-1 to the pool.
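
The command was along these lines (pool name "zpool" and GPT labels as
shown in the status output above):

	# add a second mirror vdev to the existing pool
	zpool add zpool mirror gpt/zpool-da4 gpt/zpool-da5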

Directly after adding the new mirror, all of the old data was still
physically sitting on the old mirror pair and no data was on the new
disks.

So the data distribution across the RAID 10 is heavily imbalanced. As a
result, IOPS are not evenly distributed across all devices of the pool:
when the server is very busy, "gstat -p" shows the old mirror pair
maxing out at 100% I/O usage while the new one is almost idle.
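
The same imbalance can also be observed at the ZFS level with per-vdev
statistics, e.g. (pool name as above):

	# sample per-vdev read/write operations every 5 seconds
	zpool iostat -v zpool 5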

I also noted that the old mirror pair shows a FRAG value of about 50%,
while the new one shows only 3%.
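
For reference, the per-vdev FRAG values show up in the verbose pool
listing:

	# per-vdev size, allocation and fragmentation
	zpool list -v zpool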

So is it generally a bad idea to expand a mirrored or RAID 10 pool with
new mirror pairs?

Or is there a procedure by which the existing data in the pool can be
redistributed evenly across all devices?
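
The only workaround I can think of so far is to rewrite the data so
that new allocations get spread across both mirrors, e.g. a local
send/receive round trip like the sketch below (the dataset name
"zpool/data" is made up). I am not sure this is the recommended way:

	# rewrite a dataset so its blocks get reallocated across all vdevs
	zfs snapshot zpool/data@rebalance
	zfs send zpool/data@rebalance | zfs receive zpool/data.new
	# after verifying the copy, swap the datasets
	zfs destroy -r zpool/data
	zfs rename zpool/data.new zpool/data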

Any hints appreciated.

Regards,
Kai.

-- 
PGP-KeyID = 0x70654D7C4FB1F588
The technician has been informed.


