ZFS pools in "trouble"
Peter Eriksson
pen at lysator.liu.se
Wed Feb 26 20:26:42 UTC 2020
What type of hardware are you using? Server, controllers and disks?
- Peter
> On 26 Feb 2020, at 18:09, Willem Jan Withagen <wjw at digiware.nl> wrote:
>
> Hi,
>
> I'm using my pools in perhaps a rather awkward way as underlying storage for my ceph cluster:
> 1 disk per pool, with log and cache on SSD
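> (For reference, each pool was created more or less along these lines;
> the device and partition names below are only illustrative, not the
> actual ones:)
> ----
> # one whole disk per pool, with log (SLOG) and cache (L2ARC)
> # partitions on a shared SSD
> zpool create osd_2 /dev/da2 \
>     log /dev/gpt/slog2 \
>     cache /dev/gpt/cache2
> ----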
>
> For one reason or another, one of the servers has crashed and does not really want to read several of the pools:
> ----
>   pool: osd_2
>  state: UNAVAIL
> Assertion failed: (reason == ZPOOL_STATUS_OK), file /usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool_main.c, line 5098.
> Abort (core dumped)
> ----
>
> The code there is like:
> ----
>         default:
>                 /*
>                  * The remaining errors can't actually be generated, yet.
>                  */
>                 assert(reason == ZPOOL_STATUS_OK);
> ----
> And this has already happened on 3 disks, so zpool_get_status() is apparently returning a reason for these pools that the switch does not handle yet.
> Running:
> FreeBSD 12.1-STABLE (GENERIC) #0 r355208M: Fri Nov 29 10:43:47 CET 2019
>
> Now this is a test cluster, so data loss is not really a problem here.
> And the Ceph cluster can probably rebuild everything, as long as I do not lose too many disks.
>
> But the problem also lies in the fact that not all disks are recognized by the kernel, and not all disks end up mounted. So I need to remove a pool first to get more disks online.
>
> Is there anything I can do to get them back online?
> Or is this a lost cause?
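> (Would a forced rewind import be worth a try here? Something along
> these lines, with the pool name as in the output above:)
> ----
> # detach whatever half-imported state is left, then retry the import
> zpool export -f osd_2
> # -f forces the import; -F rewinds to the last importable txg,
> # possibly discarding the last few seconds of writes
> zpool import -f -F osd_2
> ----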
>
> --WjW
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"