Server doesn't boot when 3 PCIe slots are populated
list1 at gjunka.com
Mon Jan 15 22:30:28 UTC 2018
On 15/01/2018 16:59, Mehmet Erol Sanliturk wrote:
> On Mon, Jan 15, 2018 at 7:31 PM, Valeri Galtsev
> <galtsev at kicp.uchicago.edu> wrote:
> >> The funny thing is that very often it's enough to pull out one of the
> >> cards and put it back in. Then the system boots fine with all three
> >> cards. I had that a few times. Once it's booted it works; I can
> >> restart the system and it boots every time. As soon as I power off,
> >> unplug from the power main, wait a few minutes, and power it on again,
> >> the issue comes back - it can't boot because the NVMe can't be
> >> enumerated.
> >> I thought it might be caused by the hardware being too cold. I once
> >> left the server on overnight but it didn't boot up; it was trying and
> >> restarting the whole night.
> > The above explanation brings to mind the "impedance mismatch in
> > electronics" problem.
> Hm, I wouldn't say so. First of all, I seriously doubt that sane cards
> are out of spec as far as impedance is concerned.
> But before going further, let's make sure we are talking about the same
> thing. I assume "impedance mismatch" means that the impedance of the
> load attached to a transmission line differs from the impedance of the
> transmission line itself. In that case part of the transmitted signal is
> reflected from the load back into the transmission line. This can make a
> mess, as the transmitted signal gets mixed with reflections from the
> loads at different positions along the same transmission line. One has
> to have a really significant mismatch (at least 20%) for that to matter.
> Many of us remember this in at least two computer-related cases: 1. we
> used terminators at the end of SCSI cables (or attached a
> "self-terminating" SCSI device to the end of the line); 2. on some
> system boards whose memory buses had no termination, the manual would
> say to populate slots beginning from the one farthest away from the CPU
> (to defeat reflection from the open end of the memory bus lines).
> I have never heard of anything like that on the PCI Express bus. If I am
> wrong, could you give me some pointer so I can read about it?
> Thanks in advance for pointers! (I know: you learn something every day -
> which I bet I am about to ;-)
> > ( Please search for
> > impedance mismatch in electronics
> > impedance matching in electronics
> > on the Internet if you want explanations of them. )
> > When all of these cards are inserted into slots simultaneously, the
> > accumulated electronic effect may distort the behaviour of your
> > motherboard circuits or the attached card circuit(s).
> > Therefore, if you can find another NVMe and/or network card, please
> > test their effect.
> > Such tests may be inconclusive, because motherboard circuits may be
> > affected negatively by "properly" operating add-on cards when they
> > are inserted together.
> > If it is feasible for you, you may use USB-attached network card(s)
> > to eliminate the network card attachment.
> > Or you may use one more capable NVMe card instead of two smaller NVMe
> > cards, or you may use only one of them, and/or select a SATA SSD.
> > Such a choice would save your investment and produce a working system
> > with a "little" loss when compared to "all".
> > Mehmet Erol Sanliturk
> > _______________________________________________
> > freebsd-questions at freebsd.org mailing list
> > https://lists.freebsd.org/mailman/listinfo/freebsd-questions
> > To unsubscribe, send any mail to
> > "freebsd-questions-unsubscribe at freebsd.org"
> Valeri Galtsev
> Sr System Administrator
> Department of Astronomy and Astrophysics
> Kavli Institute for Cosmological Physics
> University of Chicago
> Phone: 773-702-4247
> The problem of "impedance matching" occurs between any two interacting
> circuits: when a circuit gives its "output" to another circuit as
> "input", this problem exists irrespective of the subjects and kinds of
> circuits. Obviously, the behaviours are not exactly the same.
> If you search for the following phrase on the Internet, you will find a
> large number of links:
> impedance matching circuit design
> If we think about computer mainboard slots, the following may occur:
> Assume a slot has a voltage level for triggering input into an add-on
> card, i.e., the add-on card is affected when it senses a voltage level
> equal to or greater than that level. Lower voltage levels will not
> trigger the add-on card.
> Assume an add-on card is working.
> Assume a new add-on card is also working alone.
> When both of these add-on cards are inserted into slots, the power
> drawn will lower the voltage level of the surrounding circuit more
> than a single card would.
> If this lowered voltage level is less than the threshold level of the
> added cards (one of them, or both of them), it (they) will not sense
> the signals from the surrounding circuits. Therefore, it (they) will
> not respond to the action-requesting signals.
> In one of the previous messages, it is said that:
> I am observing a strange behavior where the system doesn't boot if all
> three PCIe slots are populated. It shows this message:
> nvme0: <Generic NVMe Device> mem 0xfd8fc000-0xfd8fffff irq 24 at device
> 0.0 on pci1
> nvme0: controller ready did not become 1 within 30000 ms
> nvme0: did not complete shutdown within 5 seconds of notification
> Then I see a kernel panic/dump and the system reboots after 15 seconds.
> If I remove one card, either one of the NVMe drives or the network card,
> the system boots fine.
> A good example may be the above message.
> Mehmet Erol Sanliturk
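Mehmet's voltage-threshold argument above can be put into toy numbers. All the values below (nominal rail voltage, effective source resistance, per-card current draw, and input threshold) are hypothetical, chosen only to illustrate how adding a third card could push a shared rail below the level a card needs to sense signals:

```python
# Toy model of supply droop on a shared rail: V = V_nominal - I_total * R_source.
# Every number here is hypothetical, for illustration only.
V_NOMINAL = 3.3      # volts, nominal rail
R_SOURCE = 0.05      # ohms, effective source/trace resistance (assumed)
I_PER_CARD = 2.5     # amps drawn per add-on card (assumed)
V_THRESHOLD = 3.0    # volts, assumed minimum a card needs to sense signals

def rail_voltage(n_cards):
    """Rail voltage after droop caused by n_cards identical loads."""
    return V_NOMINAL - n_cards * I_PER_CARD * R_SOURCE

for n in (1, 2, 3):
    v = rail_voltage(n)
    status = "OK" if v >= V_THRESHOLD else "below threshold"
    print(f"{n} card(s): {v:.3f} V -> {status}")
```

With these made-up numbers, one or two cards stay above the threshold but the third drops the rail just below it - which is the shape of the failure Mehmet describes, even if the real mechanism on this board may be something else entirely.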
I tried a different pair of NVMe cards (different adapters with
different SSD disks) and the result was exactly the same. Note that the
pair I tried was previously working in this motherboard without
problems for many months, so it's safe to assume that the addition of
the network card is causing this problem. But then again, the network
card with one of the NVMe drives works fine too.
Could be that all three cause some sort of impedance mismatch, but that's
kind of hard to believe - these are simple cards: there is almost no
circuitry on the NVMe adapter, and the network card is little more than a
single chipset.
I will have to look into other solutions, e.g. using SATA drives
instead, but neither card was cheap, especially the pair of NVMe drives,
so I am trying to figure out if there is anything I could do to make
them cooperate before giving up.
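On the 20% figure mentioned earlier in the thread: the fraction of signal amplitude reflected at a mismatched load is the standard reflection coefficient from transmission-line theory, gamma = (Z_L - Z_0) / (Z_L + Z_0). A quick sketch of the textbook formula (the impedance values are illustrative, not measurements of any of the hardware discussed here):

```python
# Reflection coefficient at a mismatched load: the fraction of the
# incident signal amplitude reflected back toward the source.
def reflection_coefficient(z_load, z_line):
    return (z_load - z_line) / (z_load + z_line)

# Illustrative values only: a 100-ohm line with a load 20% high.
# (High-speed differential pairs are commonly specified in roughly
# the 85-100 ohm range.)
z0 = 100.0
zl = 120.0
gamma = reflection_coefficient(zl, z0)
print(f"reflected amplitude fraction: {gamma:.3f}")
```

Even a 20% impedance error reflects only about 9% of the amplitude, which is why sane cards within spec rarely cause trouble on their own - consistent with Valeri's skepticism above.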
BTW, is there any way to verify from which group you received this thread,
so that I can remove either freebsd-drivers or freebsd-questions from the
recipients?