ZFS: amd64, devd, root file system.
se at FreeBSD.org
Sat Apr 14 20:03:26 UTC 2007
Pawel Jakub Dawidek wrote:
> On Sat, Apr 14, 2007 at 11:21:37AM +0200, Stefan Esser wrote:
>> It is amazingly simple to get a test setup going and it worked fine
>> in my initial simple test cases. But now I've run into problems that
>> probably are not technical but caused by a lack of understanding ...
thanks for the reply, I got it working with some effort, see below ...

> This is not the first report that it doesn't work as it should. One was
> that /boot/defaults/loader.conf wasn't fresh enough, and there were no:
> lines at the end. Can you verify you have them?

These are defined in /boot/defaults/loader.conf ...
This is apparently implied by zfs_load="YES" and redundant.
> Can you send me log of full boot process?
I even performed a "boot -v" but did not see anything useful. But
a "zpool status" gave:
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
 scrub: none requested

        NAME        STATE     READ WRITE CKSUM
        test        UNAVAIL      0     0     0  insufficient replicas
          ad0s2     UNAVAIL      0     0     0  cannot open
This could be fixed by exporting and then importing the pool (with -f).
Thereafter the pool could be mounted and I could manually set up the
complete file system hierarchy. I verified that "/boot/zfs/zpool.cache"
was updated during the import (written to the boot partition), but the
next reboot failed again with the same error status as shown above.
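The recovery sequence, as a sketch (pool name from my setup; this has
to be run after booting from a UFS root, since the ZFS root itself
cannot be mounted at this point):

```shell
# Force-import the pool that failed to open at boot; the import
# also rewrites /boot/zfs/zpool.cache on the boot partition.
zpool export test
zpool import -f test
# Mount the file systems of the imported pool by hand.
zfs mount -a
```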
I made an attempt to fix it by creating another pool on a swap
partition (unused during these tests) of my "normal" UFS system disk
(which I had made an IDE slave for these tests). After copying the
necessary files over to the newly created "test2" pool on the swap
partition I got a system that mounted "zfs:test2" and just worked ...
Not working: zpool create test ad0s2
Working: zpool create test2 ad1s1b
(I.e. "test2" could be mounted automatically, while "test" required me
to boot with a UFS root and to export/import the pool before it could
be mounted manually.)
Well, after some more testing I destroyed the pool "test" and created
it on "ad0s2c" instead of "ad0s2", and voila, my problem was solved.
It appears that a zpool can be mounted manually if it resides on ad0s2,
but in order to make the kernel accept it during boot, it must be in a
BSD partition. Does that make sense? (I did not want to try again with
another pool in a slice, since I did not want to give up what I had just
achieved with so much effort ;-)
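If I reconstruct the working steps correctly, they amounted to the
following (a sketch from memory; note that "zpool destroy" wipes the
old pool, and the bsdlabel invocation assumes a default label, where
partition 'c' spans the whole slice):

```shell
# Destroy the pool that lived directly on the slice ...
zpool destroy test
# ... write a standard BSD label into the slice (partition 'c'
# covers the whole slice by convention) ...
bsdlabel -w ad0s2
# ... and recreate the pool inside the BSD partition.
zpool create test ad0s2c
```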
>> Hmmm, there are a few points that I do not fully understand:
>> It seems that ZFS "legacy" mounts are not supported under FreeBSD,
>> is this correct? (E.g. if I enter "zfs set mountpoint=legacy test"
>> then "test" can not be mounted with "zfs mount test" and there is
>> no other way to mount it since we do not have a "mount_zfs", yet?)
> They are supported. "legacy" means that you no longer use 'zfs mount' to
> mount them, but simply mount(8) (or /etc/fstab). There is no mount_zfs
> and there won't be one, because we are moving away from such commands.
> You should use 'mount -t zfs' instead.
Hmmm, didn't work during some of my tests, but does work now ...
This is very nice!
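For the archives, the "legacy" workflow that now works for me looks
like this (a sketch; the mount point /mnt is chosen arbitrarily):

```shell
# Mark the file system as "legacy": ZFS stops managing its mount point.
zfs set mountpoint=legacy test
# From now on, plain mount(8) (or an /etc/fstab entry) is used:
mount -t zfs test /mnt
umount /mnt
```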
>> I tried to set the mountpoint of my to-be root file system to "/"
>> with "zfs set mountpoint=/ test" but I'm under the impression that
>> this does not really work. Setting it to "//" does appear to have
>> the desired effect, though, but may lead to a panic during shutdown.
>> (Sorry, I've got no core-dumps but could try producing one later
>> if there is interest. The panic is because of a ref count becoming
>> negative but I did not write down the message.)
> The mount point can be set to whatever you like, but you can still mount
> it using different mount point by hand (via mount(8)).
> The most proper way is probably to set mountpoint to "legacy".
Ok, this will come next when I have some more spare time (next weekend,
I guess). For now I'm testing the system and then I'll decide whether
I'll keep it running with ZFS or reconnect the UFS disk that is
currently stored in a safe place. I think we need a handbook section
for ZFS, and it could suggest reasonable choices for FreeBSD.
>> I decided to have multiple zfs file systems (test/var, test/usr ...)
>> and can see them with zfs list. What is the correct way to get them
>> mounted automatically? (Assuming I get the problem to have the kernel
>> automatically mount the ZFS root solved ...)
> zfs_enable="YES" in your /etc/rc.conf.
Yes, that one I got (together with zfs_load="YES" in loader.conf).
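So the minimal configuration seems to be the following (a sketch; the
vfs.root.mountfrom line reflects my understanding of how the kernel is
told which root to mount, and the pool name is from my test setup):

```
# /boot/loader.conf
zfs_load="YES"                   # load zfs.ko early
vfs.root.mountfrom="zfs:test"    # tell the kernel to mount the ZFS root

# /etc/rc.conf
zfs_enable="YES"                 # rc.d/zfs mounts the other ZFS file systems
```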
>> Do I need fstab entries for for ZFS file systems (e.g. "test/usr")
>> or does ZFS mount them automatically when the pool "test" is mounted?
> They are mounted via the rc.d/zfs script.
Oh well, I should have looked there instead of asking ;-)
Hmmm, I assume that "zfs mount -a" will ignore file systems that are
marked as "legacy", and those will instead be mounted together with the
other local file systems?
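If so, a legacy file system would get an ordinary fstab line like the
following (a sketch, assuming "test/usr" had been set to
mountpoint=legacy):

```
# Device     Mountpoint  FStype  Options  Dump  Pass#
test/usr     /usr        zfs     rw       0     0
```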
>> Or do I need a fstab line for each of them?
>> What's supposed to go into /etc/zfs, besides the ZFS exports file?
> For now only the exports file. zpool.cache used to be there as well,
> but we need it in /boot/zfs/ to be able to have root-on-ZFS.
Yes, I see. It might be useful to make zpool.cache available in /etc/zfs
via a symlink, but this might also cause confusion or inconsistencies
and I see good reasons to maintain that file in /boot/zfs.
Ok, it took me quite a few hours to get ZFS installed the way I wanted
it, and it seems that ad0s2 and ad0s2c are quite different with regard
to their suitability to hold ZFS pools. Was this to be expected?
Or is the diagnosis wrong, and something else is responsible for it
working now?