Removing log device from ZFS pool
Steven Hartland
killing at multiplay.co.uk
Tue May 20 09:58:43 UTC 2014
Simply don't: that will break the world. Log devices must be persistent.
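
For example (a minimal sketch; gpt/slog0 and gpt/slog1 are hypothetical GPT
labels on dedicated SSD partitions, not devices from this thread), a
persistent and ideally mirrored log would be added with:

# zpool add tank log mirror gpt/slog0 gpt/slog1

A log vdev backed by real disks can later be removed cleanly with
'zpool remove tank <vdev>' (using the name zpool status reports, e.g.
mirror-1) while the pool and the log device are healthy.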
Regards
Steve
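
As a general note on recovery, untested against the exact setup quoted below:
a pool that is stuck because its log device disappeared can usually be
exported and re-imported with -m, which tolerates a missing log, and the
stale log entry can then be dropped by its GUID (the number zpool status
reports for the missing device):

# zpool export -f tank
# zpool import -m tank
# zpool remove tank 6324139563861643487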
----- Original Message -----
From: "Anthony Ananich" <anton.ananich at inpun.com>
To: <zfs-devel at freebsd.org>
Sent: Tuesday, May 20, 2014 10:46 AM
Subject: Removing log device from ZFS pool
> Hi!
>
> Here is what I tried to do:
>
> 1) create zfs pool (two hard disks)
> 2) add log device to the pool
> 3) add cache device to the pool
> 4) reboot server
>
> In this scenario log device dies during the reboot.
>
> -----
> # zpool list tank
> NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> tank   928G   274G   654G  29%  1.00x  ONLINE  -
>
> # zpool status tank
> pool: tank
> state: ONLINE
> scan: none requested
> config:
> NAME           STATE    READ WRITE CKSUM
> tank           ONLINE      0     0     0
>   mirror-0     ONLINE      0     0     0
>     gpt/disk1  ONLINE      0     0     0
>     gpt/disk2  ONLINE      0     0     0
> errors: No known data errors
>
> # mdconfig -a -t swap -s 128m -u 1
> # zpool add tank log /dev/md1
> # zpool status tank
> pool: tank
> state: ONLINE
> scan: none requested
> config:
> NAME           STATE    READ WRITE CKSUM
> tank           ONLINE      0     0     0
>   mirror-0     ONLINE      0     0     0
>     gpt/disk1  ONLINE      0     0     0
>     gpt/disk2  ONLINE      0     0     0
> logs
>   md1          ONLINE      0     0     0
> errors: No known data errors
> -----
>
> Since I'm using a volatile device (/dev/md1) in this example, it is
> destroyed during the reboot.
>
> According to the documentation this is not critical: I can just ignore the
> unsaved data and discard the uncommitted log entries.
>
> However, in practice it does not work for me:
>
> -----
> # zpool status tank
> pool: tank
> state: FAULTED
> status: An intent log record could not be read.
>         Waiting for administrator intervention to fix the faulted pool.
> action: Either restore the affected device(s) and run 'zpool online',
>         or ignore the intent log records by running 'zpool clear'.
> see: http://illumos.org/msg/ZFS-8000-K4
> scan: none requested
> config:
>
> NAME                    STATE    READ WRITE CKSUM
> tank                    FAULTED     0     0     0
>   mirror-0              ONLINE      0     0     0
>     gpt/disk1           ONLINE      0     0     0
>     gpt/disk2           ONLINE      0     0     0
> logs
>   6324139563861643487   UNAVAIL     0     0     0  was /dev/md1
>
> # zpool clear tank
> cannot clear errors for tank: one or more devices is currently unavailable
>
> # zpool remove tank 6324139563861643487
> cannot open 'tank': pool is unavailable
>
> # zpool online tank md1
> cannot open 'tank': pool is unavailable
>
> -----
>
> I wonder if I'm doing something wrong and this is expected behaviour,
> or whether that's just a bug?
>
> I'm using ZFS v5000 on FreeBSD 9.2-RELEASE.
>
> Regards,
> Anthony
> _______________________________________________
> zfs-devel at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/zfs-devel
> To unsubscribe, send any mail to "zfs-devel-unsubscribe at freebsd.org"