Academic exercise: trying to recover a corrupted pool

Borja Marcos borjam at sarenet.es
Mon Jan 15 16:01:43 UTC 2018



> On 8 Jan 2018, at 15:25, Borja Marcos <borjam at sarenet.es> wrote:
> 
> 
> Hi,
> 
> ONLY AS AN ACADEMIC EXERCISE, WARNING :)
> 
> I have a broken ZFS pool and I’m wondering whether it should be readable. The pool was made with four
> apparently troublesome OCZ SSD drives pulled from other systems. They are connected to a LSI2008 adapter.
> 
> The pool was created as a raidz2, so it’s supposed to survive the loss of two drives. It has lost two of them
> and I am unable to import it.
> 
> I have lost no useful data, I was using it just for testing. Now it has become an interesting study subject though :)
> 
> Any ideas? I have tried to recover even doing the “radical thing” (zdb -Z -AAA -e -p /dev poolname). No success.


Now this is interesting. I copied the two surviving drives to data files on another system using “dd”.
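
Something along these lines (the source device names here are just placeholders, the real ones depend on how the drives show up on the controller; conv=noerror,sync keeps dd going past read errors and pads short reads):

dd if=/dev/da2 of=/pool/disk1 bs=1m conv=noerror,sync
dd if=/dev/da3 of=/pool/disk2 bs=1m conv=noerror,sync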

Then I used mdconfig to create file-backed memory disks from those files.

mdconfig -a -f /pool/disk1
mdconfig -a -f /pool/disk2
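
As a quick sanity check that the devices came up (mdconfig prints the unit it assigns, md0 and md1 here assuming no other md devices are configured):

mdconfig -l -v

zpool import scans /dev by default, so it should pick the md devices up without an explicit -d.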

Trying an import with

zpool import -R /mnt -N -m -f -F -X poolname

I got a panic.

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 02
fault virtual address   = 0x188
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff81381901
stack pointer           = 0x28:0xfffffe046bf2b250
frame pointer           = 0x28:0xfffffe046bf2b270
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 0 (zio_read_intr_6_0)
trap number             = 12
panic: page fault
cpuid = 1
KDB: stack backtrace:
#0 0xffffffff806e3c17 at kdb_backtrace+0x67
#1 0xffffffff806a0176 at vpanic+0x186
#2 0xffffffff8069ffe3 at panic+0x43
#3 0xffffffff809953bd at trap_fatal+0x34d
#4 0xffffffff80995419 at trap_pfault+0x49
#5 0xffffffff80994c6a at trap+0x29a
#6 0xffffffff80979bb1 at calltrap+0x8
#7 0xffffffff81380fba at vdev_queue_io_to_issue+0x23a
#8 0xffffffff81380d33 at vdev_queue_io+0x103
#9 0xffffffff813a3bbc at zio_vdev_io_start+0x24c
#10 0xffffffff813a05bc at zio_execute+0xac
#11 0xffffffff8139ff0b at zio_nowait+0xcb
#12 0xffffffff8138205c at vdev_raidz_io_start+0x48c
#13 0xffffffff813a3c1d at zio_vdev_io_start+0x2ad
#14 0xffffffff813a05bc at zio_execute+0xac
#15 0xffffffff8139ff0b at zio_nowait+0xcb
#16 0xffffffff813802f1 at vdev_mirror_io_done+0x1f1
#17 0xffffffff813a3f58 at zio_vdev_io_done+0x1c8
Uptime: 27d3h5m23s