Removing log device from ZFS pool
Anthony Ananich
anton.ananich at inpun.com
Tue May 20 11:00:00 UTC 2014
Hi, Steven!
Thank you very much, it solved my problem!
# zpool export tank
# zpool import -m tank
# zpool status tank
pool: tank
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: none requested
config:
NAME                    STATE     READ WRITE CKSUM
tank                    DEGRADED     0     0     0
  mirror-0              ONLINE       0     0     0
    gpt/disk1           ONLINE       0     0     0
    gpt/disk2           ONLINE       0     0     0
logs
  6324139563861643487   UNAVAIL      0     0     0  was /dev/md1
cache
  gpt/disk3             ONLINE       0     0     0
errors: No known data errors
# zpool clear tank
# zpool remove tank 6324139563861643487
# zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME           STATE     READ WRITE CKSUM
tank           ONLINE       0     0     0
  mirror-0     ONLINE       0     0     0
    gpt/disk1  ONLINE       0     0     0
    gpt/disk2  ONLINE       0     0     0
cache
  gpt/disk3    ONLINE       0     0     0
errors: No known data errors
Kind regards,
Anthony
On Tue, May 20, 2014 at 1:31 PM, Steven Hartland
<killing at multiplay.co.uk> wrote:
> Try importing with -m (Enables import with missing log devices.)
>
> Regards
>
> Steve
>
> ----- Original Message ----- From: "Anthony Ananich"
> <anton.ananich at inpun.com>
> To: "Steven Hartland" <killing at multiplay.co.uk>
> Cc: "zfs-devel" <zfs-devel at freebsd.org>
> Sent: Tuesday, May 20, 2014 11:21 AM
> Subject: Re: Removing log device from ZFS pool
>
>
>
> Hi, Steve,
>
> Thanks for the quick reply. So if I got you right, the data in the
> pool is lost, right?
>
> This doc asserts that the log device is not critical:
> http://docs.oracle.com/cd/E19253-01/819-5461/ghbxs/
>
> Disks die from time to time. That's a reality. However, in this
> particular case I'm almost certain that all the transactions were
> committed to the persistent disks and the ZIL was empty, simply due to
> the specifics of the scenario.
>
> The only problem is that I'm using ZFS v5000 and cannot use Oracle
> Solaris to restore the pool, since Solaris does not support v5000.
>
> Do I have to patch the sources to work around this problem? Or is
> there an easier way?
>
> Thanks,
> Anthony
>
>
>
>
>
> On Tue, May 20, 2014 at 12:58 PM, Steven Hartland
> <killing at multiplay.co.uk> wrote:
>>
>> Simply don't: that will break the world, as log devices must be persistent.
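>>
>> If you do want a separate log device, back it with storage that survives
>> a reboot, e.g. a dedicated disk or SSD partition. A minimal sketch
>> (gpt/slog is only a placeholder label):
>>
>> # zpool add tank log gpt/slog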
>>
>> Regards
>> Steve
>> ----- Original Message ----- From: "Anthony Ananich"
>> <anton.ananich at inpun.com>
>> To: <zfs-devel at freebsd.org>
>> Sent: Tuesday, May 20, 2014 10:46 AM
>> Subject: Removing log device from ZFS pool
>>
>>
>>> Hi!
>>>
>>> Here is what I tried to do:
>>>
>>> 1) create zfs pool (two hard disks)
>>> 2) add log device to the pool
>>> 3) add cache device to the pool
>>> 4) reboot server
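>>>
>>> Roughly, the commands behind steps 1-3 would have been something like
>>> the following (a sketch only; step 2 appears verbatim further below,
>>> and gpt/disk3 as the cache device matches the pool status quoted at
>>> the top of this thread):
>>>
>>> # zpool create tank mirror gpt/disk1 gpt/disk2
>>> # zpool add tank log /dev/md1
>>> # zpool add tank cache gpt/disk3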
>>>
>>> In this scenario the log device dies during the reboot.
>>>
>>> -----
>>> # zpool list tank
>>> NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
>>> tank   928G   274G  654G  29%  1.00x  ONLINE  -
>>>
>>> # zpool status tank
>>> pool: tank
>>> state: ONLINE
>>> scan: none requested
>>> config:
>>> NAME           STATE     READ WRITE CKSUM
>>> tank           ONLINE       0     0     0
>>>   mirror-0     ONLINE       0     0     0
>>>     gpt/disk1  ONLINE       0     0     0
>>>     gpt/disk2  ONLINE       0     0     0
>>> errors: No known data errors
>>>
>>> # mdconfig -a -t swap -s 128m -u 1
>>> # zpool add tank log /dev/md1
>>> # zpool status tank
>>> pool: tank
>>> state: ONLINE
>>> scan: none requested
>>> config:
>>> NAME           STATE     READ WRITE CKSUM
>>> tank           ONLINE       0     0     0
>>>   mirror-0     ONLINE       0     0     0
>>>     gpt/disk1  ONLINE       0     0     0
>>>     gpt/disk2  ONLINE       0     0     0
>>> logs
>>>   md1          ONLINE       0     0     0
>>> errors: No known data errors
>>> -----
>>>
>>> Since I'm using the volatile device /dev/md1 in this example, it is
>>> destroyed during the reboot.
>>>
>>> According to the documentation this is not critical: I can just ignore
>>> the unsaved data and discard the uncommitted log entries.
>>>
>>> However, in reality this does not work for me:
>>>
>>> -----
>>> # zpool status tank
>>> pool: tank
>>> state: FAULTED
>>> status: An intent log record could not be read.
>>> Waiting for adminstrator intervention to fix the faulted pool.
>>> action: Either restore the affected device(s) and run 'zpool online',
>>> or ignore the intent log records by running 'zpool clear'.
>>> see: http://illumos.org/msg/ZFS-8000-K4
>>> scan: none requested
>>> config:
>>>
>>> NAME                    STATE     READ WRITE CKSUM
>>> tank                    FAULTED      0     0     0
>>>   mirror-0              ONLINE       0     0     0
>>>     gpt/disk1           ONLINE       0     0     0
>>>     gpt/disk2           ONLINE       0     0     0
>>> logs
>>>   6324139563861643487   UNAVAIL      0     0     0  was /dev/md1
>>>
>>> # zpool clear tank
>>> cannot clear errors for tank: one or more devices is currently
>>> unavailable
>>>
>>> # zpool remove tank 6324139563861643487
>>> cannot open 'tank': pool is unavailable
>>>
>>> # zpool online tank md1
>>> cannot open 'tank': pool is unavailable
>>>
>>> -----
>>>
>>> I wonder: am I doing something wrong and this is expected behaviour,
>>> or is that just a bug?
>>>
>>> I'm using ZFS v5000 on FreeBSD 9.2-RELEASE.
>>>
>>> Regards,
>>> Anthony
>>> _______________________________________________
>>> zfs-devel at freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/zfs-devel
>>> To unsubscribe, send any mail to "zfs-devel-unsubscribe at freebsd.org"
>>
>>
>>
>