Change block size on ZFS pool

Matthias Fechner idefix at fechner.net
Thu Jun 19 12:22:14 UTC 2014


On 12.05.2014 18:55, Trond Endrestøl wrote:
>>
>> Be very careful!

OK, I tried it now. The recreation of the pool worked fine, but the rename
of the pool failed: the pool got mounted twice after the reboot, and that
destroyed everything. The problem is that you will not see it immediately,
only after your next reboot.
Luckily I had a backup I could use.

Here is what I did, maybe someone sees the problem:

Adjust the sector size to 4k
With the upgrade to FreeBSD 10 I now see the following error message:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/504acf1f-5487-11e1-b3f1-001b217b3468  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/disk1                                   ONLINE       0     0   330  block size: 512B configured, 4096B native
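
To double-check that the drives really have 4k physical sectors, diskinfo
can be queried first (ada2 here is only an example device name; a 4k drive
reports a stripesize of 4096):

diskinfo -v ada2 | grep -E 'sectorsize|stripesize'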
We would like to align the partitions to 4k sectors and recreate the
zpool with a 4k block size without losing data or having to restore it
from a backup. Type gpart show ada0 to see if the partition alignment is
fine. This layout is fine:

=>        34  3907029101  ada2  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40         128     1  freebsd-boot  (64K)
         168     8388608     2  freebsd-swap  (4.0G)
     8388776  3898640352     3  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5K)
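
The partition-creation steps themselves are not repeated here; as a rough
sketch only, a 4k-aligned layout like the one above could be produced with
something along these lines (the device ada1, the sizes and the labels
swap0/disk0 are just examples, adjust them to your setup):

gpart create -s gpt ada1
gpart add -a 4k -t freebsd-boot -s 64k ada1
gpart add -a 4k -t freebsd-swap -s 4g -l swap0 ada1
gpart add -a 4k -t freebsd-zfs -l disk0 ada1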
Create the partitions as explained above; here we only cover the steps to
convert the zpool to 4k block size. Make sure you have a bootable USB
stick with mfsbsd. Boot from it, log in as root with password mfsroot,
and try to import your pool:

zpool import -f -o altroot=/mnt zroot
If it can import your pool and you see your data in /mnt, you can reboot
again and boot the normal system. Now make a backup of your pool; if
anything goes wrong you will need it. I used rsync to copy all important
data to another pool where I had enough space for it (a small sketch
follows below). I also had zfs-snapshot-mgmt running, which stopped
working with the new ZFS layout in FreeBSD 10, so I first had to remove
all auto snapshots, as they would have made it impossible to copy the
pool (I had over 100000 snapshots on the system).

zfs list -H -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r
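
For the rsync backup mentioned above, something along these lines should
be enough (the source and destination paths are just examples; /backup is
assumed to be a dataset on the other pool with enough space):

rsync -avH /usr/home/ /backup/home/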
Detach one of the mirrors:

zpool detach zroot gptid/504acf1f-5487-11e1-b3f1-001b217b3468
My disk was labeled disk0 but it did not show up as /dev/gpt/disk0, so I
had to reboot. As we removed the first disk, you may have to tell your
BIOS to boot from the second hard disk. Clear the ZFS label:

zpool labelclear /dev/gpt/disk0
Create a gnop(8) device emulating 4k disk blocks:

gnop create -S 4096 /dev/gpt/disk0
Create a new single disk zpool named zroot1 using the gnop device as the
vdev:

zpool create zroot1 gpt/disk0.nop
Export the zroot1:

zpool export zroot1
Destroy the gnop device:

gnop destroy /dev/gpt/disk0.nop
Reimport the zroot1 pool, searching for vdevs in /dev/gpt:

zpool import -d /dev/gpt zroot1
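
At this point it is worth verifying that the new pool really uses 4k
blocks, i.e. that it reports ashift: 12. One way to check (a sketch; if
zdb cannot find the pool, pointing it at the cache file with
-U /boot/zfs/zpool.cache may be needed):

zdb -C zroot1 | grep ashift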
Create a snapshot:

zfs snapshot -r zroot@transfer
Transfer the snapshot from zroot to zroot1, preserving every detail,
without mounting the destination filesystems:

zfs send -R zroot@transfer | zfs receive -duv zroot1
Verify that zroot1 has indeed received all datasets:

zfs list -r -t all zroot1
Now boot mfsbsd from the USB stick again. Import your pools:

zpool import -fN zroot
zpool import -fN zroot1
Make a second snapshot and copy it incrementally:

zfs snapshot -r zroot@transfer2
zfs send -Ri zroot@transfer zroot@transfer2 | zfs receive -Fduv zroot1
Correct the bootfs option:

zpool set bootfs=zroot1/ROOT/default zroot1
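
Just as a sanity check, the property can be read back to make sure it took
effect:

zpool get bootfs zroot1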
Edit the loader.conf:

mkdir -p /zroot1
mount -t zfs zroot1/ROOT/default /zroot1
vi /zroot1/boot/loader.conf
vfs.root.mountfrom="zfs:zroot1/ROOT/default"
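
After the edit, the relevant lines in /zroot1/boot/loader.conf should look
roughly like this (assuming the usual ZFS-on-root setup; the zfs_load line
is normally already present):

zfs_load="YES"
vfs.root.mountfrom="zfs:zroot1/ROOT/default"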
Destroy the old zroot:

zpool destroy zroot
Reboot again into your new pool and make sure everything is mounted
correctly (a quick check follows below). Then attach the second disk to
the pool to recreate the mirror:

zpool attach zroot1 gpt/disk0 gpt/disk1
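
For the mount check mentioned above, listing the mounted property of each
dataset is usually enough (just a quick sketch):

zfs list -o name,mountpoint,mounted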
I reinstalled the GPT bootloader; this is not necessary, but I wanted to
be sure a current version of it is on both disks:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
Wait until the newly attached disk has resilvered completely. You can
check the status with:

zpool status zroot1
(With the old alignment the resilver took me about 7 days; with the 4k
alignment it now takes only about 2 hours at a speed of about 90 MB/s.)
After the resilver has finished, you may want to remove the snapshots:

zfs destroy -r zroot1@transfer
zfs destroy -r zroot1@transfer2
!!!!! WARNING: THIS RENAME FAILED FOR ME AND ALL DATA WAS LOST !!!!! If
you still want to rename the pool back to zroot, boot again from the USB
stick:

zpool import -fN zroot1 zroot
Edit the loader.conf:

mkdir -p /zroot
mount -t zfs zroot/ROOT/default /zroot
vi /zroot/boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"




Regards
Matthias

-- 

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the universe trying to
produce bigger and better idiots. So far, the universe is winning." --
Rich Cook

