ZFS with errors
Trond Endrestøl
Trond.Endrestol at fagskolen.gjovik.no
Wed Apr 13 13:56:30 UTC 2016
On Wed, 13 Apr 2016 15:41+0200, Trond Endrestøl wrote:
> On Wed, 13 Apr 2016 14:52+0200, Luciano Mannucci wrote:
>
> >
> > Hello all,
> >
> > I'm testing ZFS, so please forgive me for my dumb questions...
> > I have a pool with a ZFS filesystem that shows errors. I tried to
> > get rid of them by removing the files listed by zpool status -v and
> > restoring them, but nothing changed.
> > I tried zpool scrub <mypool> and only got more errors.
> > My situation now is:
> > root at vodka:~ # zpool status -v
> > pool: expool1
> > state: ONLINE
> > status: One or more devices has experienced an error resulting in data
> > corruption. Applications may be affected.
> > action: Restore the file in question if possible. Otherwise restore the
> > entire pool from backup.
> > see: http://illumos.org/msg/ZFS-8000-8A
> > scan: scrub repaired 0 in 7h29m with 9 errors on Fri Apr 8 20:09:47 2016
> > config:
> >
> > NAME                                          STATE   READ WRITE CKSUM
> > expool1                                       ONLINE     0     0    10
> >   gptid/8ccea78c-05ef-4a3a-9502-4106ca736958  ONLINE     0     0     8
> >   gptid/864c27ea-ecd2-4cf9-9450-1afad9065fa1  ONLINE     0     0     0
> >   diskid/DISK-WD-WCC4M2780110s3               ONLINE     0     0     0
> >   diskid/DISK-WD-WCC4M2780110s1               ONLINE     0     0    12
> >
> > errors: Permanent errors have been detected in the following files:
> >
> > expool1/mirrors:<0x600>
> > /var/spool/mirrors/mageia/distrib/4/x86_64/media/core/release/kde4-style-bespin-icons-0.1-0.1649svn.1.mga4.noarch.rpm
> > /var/spool/mirrors/mageia/distrib/4/x86_64/media/core/release/lilypond-doc-2.18.0-1.mga4.noarch.rpm
> > /var/spool/mirrors/mageia/distrib/4/x86_64/media/core/updates/wesnoth-data-1.10.7-2.1.mga4.noarch.rpm
> > expool1/mirrors:<0x6dba>
> > root at vodka:~ #
> >
> >
> > is there a way to see which "9 errors" the scrub found?
> > what am I supposed to do to clear the situation?
>
> Try:
>
> zpool clear expool1
> zpool scrub expool1
>
> Let the scrub finish.
>
> If the same three files reappear, then refetch those files.
> The lines ending with a hexadecimal number represent deleted files,
> as far as I know.
>
> Once you have mended your files, run "clear" and "scrub" one more
> time.
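As a command-line sketch, the cycle above might look like this (the pool name expool1 is taken from your status output; run as root, and let each scrub finish before judging the result):

```shell
# Sketch of the suggested recovery cycle for the pool from this thread.
zpool clear expool1       # reset the pool's error counters
zpool scrub expool1       # re-read and checksum-verify every block
# ...wait for the scrub to complete, then:
zpool status -v expool1   # list any files that are still damaged
```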
I noticed you have four disks in a striped configuration, aka RAID 0.
There's no redundancy in this pool, so ZFS can detect corruption but,
with no redundant copy to read from, it cannot repair your files
automatically.
Maybe you should destroy your pool and recreate it in a mirrored
configuration, e.g. mirroring disks 1 & 2 and disks 3 & 4 (note the two
occurrences of the mirror keyword):

zpool create expool1 \
    mirror gptid/8ccea78c-05ef-4a3a-9502-4106ca736958 \
           gptid/864c27ea-ecd2-4cf9-9450-1afad9065fa1 \
    mirror diskid/DISK-WD-WCC4M2780110s3 \
           diskid/DISK-WD-WCC4M2780110s1
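A full migration might be sketched as follows. The snapshot name and the backup location /backup/expool1.zfs are assumptions for illustration; the destroy step is irreversible, so verify the backup stream before running it:

```shell
# Hypothetical migration sketch; adjust names and paths to your system.
zfs snapshot -r expool1@migrate                     # recursive snapshot of all datasets
zfs send -R expool1@migrate > /backup/expool1.zfs   # stream the whole pool to a file
zpool destroy expool1                               # irreversible!
zpool create expool1 \
    mirror gptid/8ccea78c-05ef-4a3a-9502-4106ca736958 \
           gptid/864c27ea-ecd2-4cf9-9450-1afad9065fa1 \
    mirror diskid/DISK-WD-WCC4M2780110s3 \
           diskid/DISK-WD-WCC4M2780110s1
zfs receive -F expool1 < /backup/expool1.zfs        # restore the datasets
```

Note that two two-way mirrors give you half the raw capacity of the old four-disk stripe, so check that your data still fits first.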
Some proper disk labelling, together with setting the
kern.geom.label.gptid.enable tunable to 0 in /boot/loader.conf, might
make it easier for you to identify each disk.
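As a sketch, GPT labels can be assigned with gpart(8); the device names and partition indices below are assumptions for illustration, so adjust them to your hardware:

```shell
# Assumed devices ada0/ada1 and partition index 1; adjust as needed.
gpart modify -l zdisk0 -i 1 ada0   # give partition 1 on ada0 the GPT label "zdisk0"
gpart modify -l zdisk1 -i 1 ada1   # likewise on ada1
# Prefer the /dev/gpt/<label> names over the raw gptid names at boot:
echo 'kern.geom.label.gptid.enable="0"' >> /boot/loader.conf
```

After a reboot the pool's vdevs should then show up under their /dev/gpt labels in zpool status.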
--
+-------------------------------+------------------------------------+
| Vennlig hilsen, | Best regards, |
| Trond Endrestøl, | Trond Endrestøl, |
| IT-ansvarlig, | System administrator, |
| Fagskolen Innlandet, | Gjøvik Technical College, Norway, |
| tlf. mob. 952 62 567, | Cellular...: +47 952 62 567, |
| sentralbord 61 14 54 00. | Switchboard: +47 61 14 54 00. |
+-------------------------------+------------------------------------+
More information about the freebsd-questions mailing list