ZFS migration - New pool lost after reboot
Sebastian Wolfgarten
sebastian at wolfgarten.com
Mon May 2 20:42:59 UTC 2016
Hi,
just to follow up on my own email from earlier: I managed to get the new pool booting by amending /boot/loader.conf as follows:
root at vm:~ # cat /boot/loader.conf
vfs.root.mountfrom="zfs:newpool/ROOT/default"
kern.geom.label.gptid.enable="2"
zfs_load="YES"
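As a sanity check, these settings can be cross-checked on the running system (a sketch, using the dataset names shown above; output comments are expectations, not captured output):

```shell
# Confirm the loader variable the kernel actually booted with
kenv vfs.root.mountfrom        # expect: zfs:newpool/ROOT/default

# Confirm the pool's bootfs property points at the same dataset
zpool get bootfs newpool

# Show which dataset is currently mounted as /
mount | head -1
```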
However, when rebooting I can see it is now using the new pool, but I am running into issues as it can't seem to find some essential files in /usr:
Mounting local file systems
eval: zfs not found
eval: touch not found
/etc/rc: cannot create /dev/null: No such file or directory
/etc/rc: date: not found
Here is what "zfs list" looks like:
root at vm:~ # zfs list
NAME                  USED   AVAIL  REFER  MOUNTPOINT
newpool               385M   5.41G    19K  /mnt/zroot
newpool/ROOT          385M   5.41G    19K  /mnt
newpool/ROOT/default  385M   5.41G   385M  /mnt
newpool/tmp            21K   5.41G    21K  /mnt/tmp
newpool/usr            76K   5.41G    19K  /mnt/usr
newpool/usr/home       19K   5.41G    19K  /mnt/usr/home
newpool/usr/ports      19K   5.41G    19K  /mnt/usr/ports
newpool/usr/src        19K   5.41G    19K  /mnt/usr/src
newpool/var           139K   5.41G    19K  /mnt/var
newpool/var/audit      19K   5.41G    19K  /mnt/var/audit
newpool/var/crash      19K   5.41G    19K  /mnt/var/crash
newpool/var/log        44K   5.41G    44K  /mnt/var/log
newpool/var/mail       19K   5.41G    19K  /mnt/var/mail
newpool/var/tmp        19K   5.41G    19K  /mnt/var/tmp
zroot                 524M   26.4G    96K  /zroot
zroot/ROOT            522M   26.4G    96K  none
zroot/ROOT/default    522M   26.4G   522M  /
zroot/tmp            74.5K   26.4G  74.5K  /tmp
zroot/usr             384K   26.4G    96K  /usr
zroot/usr/home         96K   26.4G    96K  /usr/home
zroot/usr/ports        96K   26.4G    96K  /usr/ports
zroot/usr/src          96K   26.4G    96K  /usr/src
zroot/var             580K   26.4G    96K  /var
zroot/var/audit        96K   26.4G    96K  /var/audit
zroot/var/crash        96K   26.4G    96K  /var/crash
zroot/var/log         103K   26.4G   103K  /var/log
zroot/var/mail         96K   26.4G    96K  /var/mail
zroot/var/tmp        92.5K   26.4G  92.5K  /var/tmp
I am assuming I have to amend the ZFS mountpoint properties, but I can't seem to figure out what's wrong. I tried things like:
zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/var newpool/var
Unfortunately this did not solve the issue. Any ideas?
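One thing that stands out in the "zfs list" output above: newpool/ROOT shows a mountpoint of /mnt (i.e. / under the altroot), while on the working pool zroot/ROOT is set to none, so only zroot/ROOT/default mounts at /. A sketch of restoring that layout, assuming newpool is still imported with -R /mnt (this is a guess at the cause, not a confirmed fix):

```shell
# Mirror the layout of the working zroot pool: the ROOT container
# does not mount at all, only ROOT/default mounts at /.
zfs set mountpoint=none newpool/ROOT
zfs set mountpoint=/ newpool/ROOT/default

# Under the altroot the child datasets should then display as
# /mnt/usr, /mnt/var, ... which become /usr, /var, ... once booted.
zfs list -o name,mountpoint -r newpool

# Export so the next import/boot starts from a clean state
zpool export newpool
```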
Many thanks.
Best regards
Sebastian
> On 02.05.2016 at 21:43, Sebastian Wolfgarten <sebastian at wolfgarten.com> wrote:
>
> Hi Matthias,
> dear list,
>
> I have built a new VM to test this further without affecting my live machine. After performing all these steps (including the amendment of loader.conf on the new pool), my system still boots up with the old pool. Any ideas why?
>
> Here is what I did:
>
> 1) Create required partitions on temporary hard disk ada2
> gpart create -s GPT ada2
> gpart add -t freebsd-boot -s 128 ada2
> gpart add -t freebsd-swap -s 4G -l newswap ada2
> gpart add -t freebsd-zfs -l newdisk ada2
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
>
> 2) Create new pool (newpool)
>
> zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk
>
> 3) Create snapshot of existing zroot pool and copy it over to new pool
> zfs snapshot -r zroot at movedata
> zfs send -vR zroot at movedata | zfs receive -vFd newpool
> zfs destroy -r zroot at movedata
>
> 4) Make the new pool bootable
>
> zpool set bootfs=newpool/ROOT/default newpool
>
> 5) Mount new pool and prepare for reboot
>
> cp /tmp/zpool.cache /tmp/newpool.cache
> zpool export newpool
> zpool import -c /tmp/newpool.cache -R /mnt newpool
> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
> in /mnt/boot/loader.conf, the value of kern.geom.label.gptid.enable was changed from "0" to "2"
> zfs set mountpoint=/ newpool/ROOT
> reboot
>
> After the reboot, the machine is still running off the old ZFS striped mirror, but I can mount the newpool without any problems:
>
> root at vm:~ # cat /boot/loader.conf
> kern.geom.label.gptid.enable="0"
> zfs_load="YES"
> root at vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
> root at vm:~ # cd /mnt
> root at vm:/mnt # ls -la
> total 50
> drwxr-xr-x 19 root wheel 26 May 2 23:33 .
> drwxr-xr-x 18 root wheel 25 May 2 23:37 ..
> -rw-r--r-- 2 root wheel 966 Mar 25 04:52 .cshrc
> -rw-r--r-- 2 root wheel 254 Mar 25 04:52 .profile
> -rw------- 1 root wheel 1024 May 2 01:45 .rnd
> -r--r--r-- 1 root wheel 6197 Mar 25 04:52 COPYRIGHT
> drwxr-xr-x 2 root wheel 47 Mar 25 04:51 bin
> -rw-r--r-- 1 root wheel 9 May 2 23:27 bla
> drwxr-xr-x 8 root wheel 47 May 2 01:44 boot
> drwxr-xr-x 2 root wheel 2 May 2 01:32 dev
> -rw------- 1 root wheel 4096 May 2 23:21 entropy
> drwxr-xr-x 23 root wheel 107 May 2 01:46 etc
> drwxr-xr-x 3 root wheel 52 Mar 25 04:52 lib
> drwxr-xr-x 3 root wheel 4 Mar 25 04:51 libexec
> drwxr-xr-x 2 root wheel 2 Mar 25 04:51 media
> drwxr-xr-x 2 root wheel 2 Mar 25 04:51 mnt
> drwxr-xr-x 2 root wheel 2 May 2 23:33 newpool
> dr-xr-xr-x 2 root wheel 2 Mar 25 04:51 proc
> drwxr-xr-x 2 root wheel 147 Mar 25 04:52 rescue
> drwxr-xr-x 2 root wheel 7 May 2 23:27 root
> drwxr-xr-x 2 root wheel 133 Mar 25 04:52 sbin
> lrwxr-xr-x 1 root wheel 11 Mar 25 04:52 sys -> usr/src/sys
> drwxrwxrwt 6 root wheel 7 May 2 23:33 tmp
> drwxr-xr-x 16 root wheel 16 Mar 25 04:52 usr
> drwxr-xr-x 24 root wheel 24 May 2 23:21 var
> drwxr-xr-x 2 root wheel 2 May 2 01:32 zroot
> root at vm:/mnt # cd boot
> root at vm:/mnt/boot # cat loader.conf
> kern.geom.label.gptid.enable="2"
> zfs_load="YES"
>
> My question is: How do I make my system permanently boot off the newpool such that I can destroy the existing zroot one?
>
> Many thanks for your help, it is really appreciated.
>
> Best regards
> Sebastian
>
>> On 29.04.2016 at 10:25, Matthias Fechner <idefix at fechner.net> wrote:
>>
>> On 28.04.2016 at 23:14, Sebastian Wolfgarten wrote:
>>> 5) Mount new pool and prepare for reboot
>>>
>>> cp /tmp/zpool.cache /tmp/newpool.cache
>>> zpool export newpool
>>> zpool import -c /tmp/newpool.cache -R /mnt newpool
>>> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
>>> zfs set mountpoint=/ newpool/ROOT
>>> reboot
>>
>> I think you forgot to adapt vfs.root.mountfrom= in /boot/loader.conf on the new pool?
>>
>>
>>
>> Regards
>> Matthias
>>
>> --
>>
>> "Programming today is a race between software engineers striving to
>> build bigger and better idiot-proof programs, and the universe trying to
>> produce bigger and better idiots. So far, the universe is winning." --
>> Rich Cook
>
> _______________________________________________
> freebsd-questions at freebsd.org mailing list