Removing log device from ZFS pool

Anthony Ananich anton.ananich at inpun.com
Tue May 20 09:52:44 UTC 2014


Hi!

Here is what I tried to do:

1) create zfs pool (two hard disks)
2) add log device to the pool
3) add cache device to the pool
4) reboot server

In this scenario, the log device dies during the reboot.
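In shell terms the sequence was roughly the following (the pool creation and the cache step are reconstructed from memory, and the cache device name is illustrative):

-----
# zpool create tank mirror gpt/disk1 gpt/disk2
# mdconfig -a -t swap -s 128m -u 1
# zpool add tank log /dev/md1
# zpool add tank cache gpt/cache0
# shutdown -r now
-----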

-----
# zpool list tank
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank      928G   274G   654G    29%  1.00x  ONLINE  -

# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:
NAME                    STATE     READ WRITE CKSUM
tank                    ONLINE       0     0     0
  mirror-0              ONLINE       0     0     0
    gpt/disk1           ONLINE       0     0     0
    gpt/disk2           ONLINE       0     0     0
errors: No known data errors

# mdconfig -a -t swap -s 128m -u 1
# zpool add tank log /dev/md1
# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:
NAME                    STATE     READ WRITE CKSUM
tank                    ONLINE       0     0     0
  mirror-0              ONLINE       0     0     0
    gpt/disk1           ONLINE       0     0     0
    gpt/disk2           ONLINE       0     0     0
logs
  md1                   ONLINE       0     0     0
errors: No known data errors
-----

Since /dev/md1 is a volatile memory disk in this example, it is
destroyed during the reboot.
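A quick way to confirm this is mdconfig -l, which lists the configured
memory disks; after the reboot it comes back empty:

-----
# mdconfig -l
-----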

According to the documentation this is not critical: I can simply
accept the loss of the unsaved data and discard the uncommitted log
entries.

However, it does not work for me in practice:

-----
# zpool status tank
  pool: tank
 state: FAULTED
status: An intent log record could not be read.
Waiting for adminstrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
or ignore the intent log records by running 'zpool clear'.
  see: http://illumos.org/msg/ZFS-8000-K4
 scan: none requested
config:

NAME                    STATE     READ WRITE CKSUM
tank                    FAULTED      0     0     0
  mirror-0              ONLINE       0     0     0
    gpt/disk1           ONLINE       0     0     0
    gpt/disk2           ONLINE       0     0     0
logs
  6324139563861643487   UNAVAIL      0     0     0  was /dev/md1

# zpool clear tank
cannot clear errors for tank: one or more devices is currently unavailable

# zpool remove tank 6324139563861643487
cannot open 'tank': pool is unavailable

# zpool online tank md1
cannot open 'tank': pool is unavailable

-----
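The only other recovery path I can find in zpool(8) is to export the
pool (which may need -f on a faulted pool) and re-import it with -m,
which is documented to allow importing with a missing log device. I
have not tried this yet:

-----
# zpool export -f tank
# zpool import -m tank
-----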

I wonder whether I'm doing something wrong and this is expected
behaviour, or whether it's just a bug?

I'm using ZFS v5000 on FreeBSD 9.2-RELEASE.

Regards,
Anthony

