ZFS-FreeBSD 10.1 -zpool degraded/5638733535486357169 REMOVED 0 0 0 was /dev/label/z3
motty.cruz at gmail.com
Fri Apr 24 21:36:08 UTC 2015
Yes, you're right: I am using a dual-controller JBOD (J2600sD).
I have two FreeBSD 10.1 machines connected through LSI SAS cards to the JBOD.
Both servers see the drives; however, the zpool is imported only on the master.
I can confirm that rebooting the slave machine causes the zpool on the master
machine to report a drive as "REMOVED", and the zpool goes into degraded mode.
To get the zpool back online and clear the errors, I had to run "zpool
clear tank"; however, I am concerned this could corrupt data.
I have a similar setup running FreeBSD 8.2, and that problem does not
exist in FreeBSD 8.2.
I am posting this information in case someone out there can help me fix it.
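The recovery steps described above amount to the following shell session (a sketch only; "tank" and /dev/label/z3 are taken from the report above, and the exact output will vary):

```shell
# Check which pool is unhealthy and why; expect label/z3 shown as REMOVED.
zpool status -x

# Clear the error counters so the pool can return to ONLINE.
# Caution: as noted above, clearing errors does not address the
# underlying cause of the drive dropping out.
zpool clear tank

# Confirm the pool state afterwards.
zpool status tank
```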
On 04/24/2015 02:13 PM, Ricky G wrote:
> Hey Motty,
> Reviewed your hardware and I'm still not quite sure how the
> setup works. From what I understand, this rack has two separate
> servers running with two controllers that are connected, providing
> shared access to the storage. Please correct me if I am mistaken.
> Someone more familiar with this may give you a better answer; I have
> never used nor seen a setup like this. However, I will still suggest
> what I suggested before: run hardware tests and make sure nothing is
> faulty. Swap this z3 drive with another drive and see whether the
> problem follows that particular drive or that particular port.
> Date: Fri, 24 Apr 2015 12:56:13 -0700
> From: motty.cruz at gmail.com
> To: ricky1252 at hotmail.com
> Subject: Re: ZFS-FreeBSD 10.1 -zpool degraded/5638733535486357169
> REMOVED 0 0 0 was /dev/label/z3
> Hello Ricky,
> thank you very much for your reply;
> The JBOD Promise Vess J2600sD has two SAS ports: port 1 is plugged into
> Machine A (master), and port 2 is plugged into Machine B (slave). Both
> machines see the drives; however, the zpool is only imported on Machine A
> (master). Also, I forgot to mention that running "zpool clear tank" fixes
> the issue. I am sure that rebooting the slave machine causes the zpool on
> the master machine to report that the drive was removed.
> any suggestions?
> Thanks again,
> On 04/24/2015 12:48 PM, Ricky G wrote:
> Hey there,
> This isn't really enough information to diagnose anything. What
> relation does machine B have with A? Is B a jail? Is it a DAS or
> SAN? Are you positive it is rebooting the other machine that causes
> the drive to drop, and not the drive/backplane/port?
> Based on the little info provided, I suggest moving the drive to
> another port/backplane and running a SMART test on the drive,
> assuming it's not an SSD.
> > Date: Fri, 24 Apr 2015 12:24:16 -0700
> > From: motty.cruz at gmail.com
> > To: freebsd-questions at freebsd.org; motty.cruz at gmail.com
> > Subject: ZFS-FreeBSD 10.1 -zpool degraded/5638733535486357169
> REMOVED 0 0 0 was /dev/label/z3
> > Hello,
> > I get the following error:
> > 5638733535486357169 REMOVED 0 0 0 was /dev/label/z3
> > I have two FreeBSD 10.1 64-bit machines, A (master) and B (slave). If,
> > for whatever reason, the slave machine reboots, Machine A reports the
> > zpool degraded, and usually the same disk (in this case z3) appears
> > to be "REMOVED".
> > Any ideas? The hardware is a JBOD Promise Vess J2000D Series and an
> > LSI SAS9200 card.
> > Thanks in advance!
> > -Motty
> > _______________________________________________
> > freebsd-questions at freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> > To unsubscribe, send any mail to
> > "freebsd-questions-unsubscribe at freebsd.org"
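The drive swap and SMART test suggested in the replies above could be carried out roughly as follows on FreeBSD (a sketch; smartmontools is assumed installed from ports/pkg, and da3 is a hypothetical device name, so resolve the real one from the ZFS label first):

```shell
# Map the ZFS label back to its underlying device (da3 here is hypothetical).
glabel status | grep z3

# Start a short SMART self-test on the suspect drive.
smartctl -t short /dev/da3

# After the test completes, review the results and the drive's error log.
smartctl -a /dev/da3
```

If the REMOVED error follows the drive to a new port, the drive itself is suspect; if it stays with the port, look at the backplane, cable, or controller instead.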