everlasting log device
John
jwd at slowblink.com
Fri Jul 8 04:43:01 UTC 2011
----- Jason Hellenthal's Original Message -----
>
>
> On Fri, Jul 08, 2011 at 12:22:01AM +0300, Volodymyr Kostyrko wrote:
> > Hi all.
> >
> > When I got my hands on an SSD device I tried to set up a log/cache
> > partition for my pools. Everything worked fine until one day I realized
> > that I had a better place to stick this SSD in. I upgraded the system
> > from RELENG_8_2 to RELENG_8 and tried to remove the devices. Of my two
> > pools one was successfully freed from its log/cache devices, yet the
> > other refuses to live without its log device:
> >
> > # zpool upgrade
> > This system is currently running ZFS pool version 28.
> >
> > All pools are formatted using this version.
> >
> > # zfs upgrade
> > This system is currently running ZFS filesystem version 5.
> >
> > All filesystems are formatted with the current version.
> >
> > # zpool status
> > pool: utwig
> > state: DEGRADED
> > status: One or more devices could not be opened. Sufficient replicas exist for
> >         the pool to continue functioning in a degraded state.
> > action: Attach the missing device and online it using 'zpool online'.
> > see: http://www.sun.com/msg/ZFS-8000-2Q
> > scan: resilvered 0 in 0h21m with 0 errors on Sat Jul 2 15:07:35 2011
> > config:
> >
> >         NAME                                            STATE     READ WRITE CKSUM
> >         utwig                                           DEGRADED     0     0     0
> >           mirror-0                                      ONLINE       0     0     0
> >             gptid/ecb17af1-9119-11df-bb0b-00304f4e6d80  ONLINE       0     0     0
> >             gptid/03aed1f5-95a3-11df-bb0b-00304f4e6d80  ONLINE       0     0     0
> >         logs
> >           gptid/231b9002-a4a5-11e0-a114-3f386a87752c    UNAVAIL      0     0     0  cannot open
> >
> > errors: No known data errors
> >
> > pool: utwig-sas
> > state: ONLINE
> > scan: none requested
> > config:
> >
> > NAME STATE READ WRITE CKSUM
> > utwig-sas ONLINE 0 0 0
> > mirror-0 ONLINE 0 0 0
> > aacd1 ONLINE 0 0 0
> > aacd2 ONLINE 0 0 0
> >
> > errors: No known data errors
> >
> > # zpool remove utwig gptid/231b9002-a4a5-11e0-a114-3f386a87752c && echo good
> > good
> >
> > And nothing changes - the system still needs that partition.
> >
> > One more weird thing.
> >
> > # zpool iostat -v utwig
> >                                                   capacity     operations    bandwidth
> > pool                                           alloc   free   read  write   read  write
> > --------------------------------------------  -----  -----  -----  -----  -----  -----
> > utwig                                           284G   172G     41     70   272K   793K
> >   mirror                                        284G   172G     41     70   272K   793K
> >     gptid/ecb17af1-9119-11df-bb0b-00304f4e6d80     -      -      8     27   456K   794K
> >     gptid/03aed1f5-95a3-11df-bb0b-00304f4e6d80     -      -      8     27   459K   794K
> >   gptid/231b9002-a4a5-11e0-a114-3f386a87752c    148K  3,97G      0      0      0      0
> > --------------------------------------------  -----  -----  -----  -----  -----  -----
> >
> > The system claims that this log device holds 148K of data. Is this the
> > size of unwritten data? The number is still the same when booting into
> > single user mode and doesn't change at all.
> >
> > Can I remove this log device? Should I recreate the pool to get rid of
> > this behavior?
> >
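For context, adding and removing a dedicated log device normally looks
roughly like the sketch below; the pool and gptid names are taken from the
output above, and the exact commands originally used are an assumption. On
a healthy pool the 'logs' vdev disappears from zpool status right after
the remove.

# zpool add utwig log gptid/231b9002-a4a5-11e0-a114-3f386a87752c
# zpool remove utwig gptid/231b9002-a4a5-11e0-a114-3f386a87752c
# zpool status utwig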
>
> If you have the possibility to re-create the pool then I'd definitely
> suggest it.
>
> If you remove this device (physically) your pool will not be operable.
> Unfortunately there is still something missing that would allow SLOGs to
> be removed from a running pool; what that might be is beyond me at this
> time. You might try to export the pool, then boot into single user mode,
> re-import the pool and try the removal procedure, but I really don't
> think that will help you.
>
> Good luck.
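For reference, the export / single-user re-import attempt described above
would look roughly like this on the pool from the thread. The -m flag to
zpool import (import despite missing log devices) exists only in some ZFS
versions, so treat it as an assumption; the rest are plain zpool commands.

# zpool export utwig
(reboot into single user mode)
# zpool import utwig          (or 'zpool import -m utwig' if the flag is supported)
# zpool remove utwig gptid/231b9002-a4a5-11e0-a114-3f386a87752c
# zpool status utwig          (check whether the 'logs' vdev is really gone)

As Jason says, this may well not clear the stuck log vdev; it only makes
the attempt repeatable.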
I have the same issue. It's easy to ignore most of the time, and really
annoying at others. I haven't figured out a way to fix or avoid it yet.
This is on a current system just a few days old. It's been around for a
while though.
# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
pool1 4.95G 971G 1 0 4.40K 803
raidz1 4.95G 971G 1 0 4.40K 803
da0 - - 0 0 3.19K 287
da1 - - 0 0 3.17K 287
da2 - - 0 0 3.20K 279
da3 - - 0 0 3.17K 279
da4 - - 0 0 3.20K 287
da5 - - 0 0 3.20K 299
da6 - - 0 0 3.17K 289
hast/md0 0 250M 0 0 0 0
hast/md1 4K 250M 0 0 0 0
-John
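If re-creating the pool turns out to be the only way out, a rough and
untested migration along the lines of Jason's suggestion might look like
the sketch below. The snapshot name 'migrate' and the utwig-sas/utwig-copy
staging dataset are made up for illustration; double-check the
send/receive round trip before destroying anything.

# zfs snapshot -r utwig@migrate
# zfs send -R utwig@migrate | zfs receive utwig-sas/utwig-copy
# zpool destroy utwig
# zpool create utwig mirror gptid/ecb17af1-9119-11df-bb0b-00304f4e6d80 gptid/03aed1f5-95a3-11df-bb0b-00304f4e6d80
# zfs send -R utwig-sas/utwig-copy@migrate | zfs receive -F utwig

Re-creating the mirror without a log vdev avoids the stuck entry entirely;
the SSD can then be repurposed as originally planned.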