How to get rid of an unavailable pool?
ipluta at wp.pl
Sun May 3 14:06:53 UTC 2020
On 2020-05-02 at 10:03, Trond Endrestøl wrote:
> On Sat, 2 May 2020 06:15+0200, Ireneusz Pluta wrote:
>> Hi group,
>> (Sorry if this post appears twice. The first one, initially sent from another
>> email account, does not seem to appear.)
>> I have (or rather had) a pool like this:
>> $ sudo zpool status -v t
>> pool: t
>> state: UNAVAIL
>> status: One or more devices are faulted in response to IO failures.
>> action: Make sure the affected devices are connected, then run 'zpool clear'.
>> see: http://illumos.org/msg/ZFS-8000-HC
>> scan: none requested
>>         NAME                     STATE    READ WRITE CKSUM
>>         t                        UNAVAIL     0     0     0
>>           mirror-0               UNAVAIL     0     0     0
>>             4304281762335857859  REMOVED     0     0     0  was /dev/da5
>>             1909766900844089131  REMOVED     0     0     0  was /dev/da10
>> errors: Permanent errors have been detected in the following files:
>> That was a temporary test pool. I forgot to destroy or at least export the
>> pool before pulling these da5 and da10 drives out of the drivebay of the
>> server. Now it can't be exported or destroyed, the respective zpool operations
>> just hang. How can I now get rid of this pool, preferably without a reboot? da5
>> and da10 are no longer available to be put back, as they have already been
>> moved elsewhere and are now part of another pool.
>> I guess the pool got stuck at the time of running
>> /etc/periodic/security/100.chksetuid, when find operation within it tried to
>> traverse into the mountpoint of the pool.
>> The system is FreeBSD 11.2.
> The pool might still be listed in /boot/zfs/zpool.cache. The only way
> I can think of to get rid of the old pool is to delete this file and
> reboot. If you have pools other than your root pool, you should reboot
> to single-user mode, mount the root fs read-write, import the
> remaining pools, and either exit the single-user shell or reboot.
Thank you for your advice.
Yes, that state was unrecoverable without a reboot. Additionally, I found a little thread
whose last post helped me a lot in understanding what was going on under the hood, and why.
So I followed the procedure carefully, taking special care to first stop important applications
and unmount other big and valuable datasets. A forced hard reset was necessary; the reboot command
just froze. However, there was one exception: I skipped deleting /boot/zfs/zpool.cache, to avoid
dropping into single-user mode and having to import my pools manually (I felt very uncomfortable
doing that remotely, with that crappy IPMIView console redirection). The system booted cleanly with
all pools imported. The UNAVAIL pool got imported too; however, it did not get mounted, so there was
no chance of any I/O attempt to it. The first thing I did after login was `zpool destroy t`, which succeeded.
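The steps described above can be sketched roughly as follows. The service and dataset names are hypothetical placeholders, not the actual ones from my system:

```shell
# Stop services that might touch the stuck pool's mountpoint,
# then cleanly unmount other valuable datasets first.
service postgresql stop        # placeholder service name
zfs unmount tank/data          # placeholder dataset name
# "shutdown -r now" froze at this point; a hard reset via IPMI was needed.
# After reboot, the dead pool imports but stays unmounted:
zpool status t
# Finally remove it from the system (and from zpool.cache):
zpool destroy t
```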
Prior to doing all that, I reproduced the state and exercised the procedure on a virtual machine.
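One way to reproduce such a state on a FreeBSD VM, without real hot-swap drives, is to build the pool on swap-backed md(4) devices and then force-detach them. This is a sketch of that idea (sizes and unit numbers are arbitrary, and the `-o force` detach assumes nothing else depends on the devices):

```shell
# Create two throwaway memory disks (prints md0 and md1):
mdconfig -a -t swap -s 128m
mdconfig -a -t swap -s 128m
# Build a mirrored test pool on them:
zpool create t mirror /dev/md0 /dev/md1
# Simulate yanking both drives while the pool is still imported:
mdconfig -d -u 0 -o force
mdconfig -d -u 1 -o force
# I/O to the pool now fails; zpool status shows it UNAVAIL/REMOVED:
zpool status t
```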