How is data written to a pool with multiple VDevs
Danny Carroll
fbsd at dannysplace.net
Wed Sep 1 00:19:03 UTC 2010
Hello All,
I'm in the process of upgrading my home file server and at the same time
I'm going to reorganise the pool.
Currently it looks like this:
nas# zpool status
  pool: areca
 state: ONLINE
 scrub: scrub completed after 12h59m with 0 errors on Sun Aug 29 16:09:09 2010
config:

        NAME        STATE     READ WRITE CKSUM
        areca       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da11    ONLINE       0     0     0
When I am done I will be using larger drives, and I want the pool to look like
this (I did not know about the six-drive-per-vdev recommendation when I
created the array originally):
nas# zpool status
  pool: areca
 state: ONLINE
 scrub: scrub completed after 12h59m with 0 errors on Sun Aug 29 16:09:09 2010
config:

        NAME        STATE     READ WRITE CKSUM
        areca       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da11    ONLINE       0     0     0
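(For reference, if all twelve drives were free at once, a layout like the
above could be built directly. This is only a sketch; the pool name and the
da0-da11 device names are taken from the status output above and would need
to match the actual hardware.)

```shell
# Create the pool with a first 6-disk raidz1 top-level vdev...
zpool create areca raidz1 da0 da1 da2 da3 da4 da5

# ...then add a second 6-disk raidz1 vdev; ZFS stripes new
# allocations across both top-level vdevs.
zpool add areca raidz1 da6 da7 da8 da9 da10 da11
```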
I plan to do the migration in three steps:
1. Put 6 of the new drives into a spare machine I have and create a
6-disk raidz array.
2. Copy the data across.
3. Import those 6 drives into the old server and add the remaining 6 to
the pool as a second raidz vdev.
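As a rough sketch, the three steps might look like the following. This is
only one way to do it; the temporary pool name "newpool", the hostname
"spare", the snapshot name, and the device names are all assumptions, and
the copy could just as easily be done with rsync instead of zfs send.

```shell
# Step 1: on the spare machine, create a 6-disk raidz1 pool
# out of the new drives.
zpool create newpool raidz1 da0 da1 da2 da3 da4 da5

# Step 2: on the old server, snapshot everything recursively and
# send it across to the spare machine.
zfs snapshot -r areca@migrate
zfs send -R areca@migrate | ssh spare zfs receive -d newpool

# Step 3: export the new pool, physically move the 6 drives to the
# old server, import the pool under the old name, and add the
# remaining 6 drives as a second raidz1 vdev.
zpool export newpool            # on the spare machine
zpool import newpool areca      # on the old server, renaming on import
zpool add areca raidz1 da6 da7 da8 da9 da10 da11
```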
I believe that data is striped across the two vdevs much like a RAID0
array stripes across two drives.
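(That matches my understanding: ZFS spreads new allocations across its
top-level vdevs. One way to watch how writes are actually being
distributed between the two raidz1 vdevs is the per-vdev iostat view:)

```shell
# Show per-vdev I/O statistics for the pool, refreshing every 5
# seconds; the write columns show how new allocations are spread
# across the two raidz1 top-level vdevs.
zpool iostat -v areca 5
```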
My question is: since the data will initially be on only one of the
raidz vdevs, when I add the second one, will ZFS reorganise the existing
data across the whole pool to balance it, or will only newly written
data be striped?
If only new data gets striped, then I will try to find a way to build the
new array fully on a temporary machine first, before copying the data.
-D
More information about the freebsd-fs
mailing list