nfs lockd errors after NetApp software upgrade.

Rick Macklem rmacklem at uoguelph.ca
Wed Jan 8 23:12:17 UTC 2020


Switching to TCP should avoid the DRC crap. (Most systems except FreeBSD only do
a DRC for UDP.)

I assume that by "transaction ID", they are referring to the XID in the RPC header.
(I'll take a look at how it is maintained for UDP in the krpc. Btw, although their code
expecting it to change for a different RPC isn't surprising, the xid's behaviour is
"underspecified" in the Sun RPC RFC.)
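[As an illustration only (this is not the krpc code), a typical client-side XID allocator just seeds a 32-bit counter once and increments it per call. That is why a server's assumption that a new XID means a new request usually holds in practice, but the RPC spec never guarantees it:

```python
import itertools
import os
import struct
import time

class XidAllocator:
    """Sketch of a typical RPC XID allocator (hypothetical, not FreeBSD's krpc).

    The XID only has to let a client match replies to its calls; RFC 5531
    leaves its exact behaviour to the implementation, so a server cannot
    safely assume a changed XID implies a different request, or vice versa.
    """

    def __init__(self):
        # Seed from time and randomness so XID sequences differ across boots.
        seed = int(time.time()) ^ struct.unpack("<I", os.urandom(4))[0]
        self._counter = itertools.count(seed)

    def next_xid(self):
        # Wrap modulo 2**32, matching the 32-bit xid field in the RPC header.
        return next(self._counter) & 0xFFFFFFFF

alloc = XidAllocator()
a, b = alloc.next_xid(), alloc.next_xid()
# Consecutive calls get distinct XIDs, but nothing stops reuse after a
# wrap, a reboot with an unlucky seed, or a retransmission. --rick]
```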

rick

________________________________________
From: Daniel Braniss <danny at cs.huji.ac.il>
Sent: Wednesday, January 8, 2020 12:08 PM
To: Rick Macklem
Cc: Richard P Mackerras; Adam McDougall; freebsd-stable at freebsd.org
Subject: Re: nfs lockd errors after NetApp software upgrade.

top-posting NetApp's reply:
…
Here you can see transaction ID (0x5e15f77a) being used over port 886 and the NFS server successfully responds.

    4480695                2020-01-08 12:20:54       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                V4 UNLOCK Call (Reply In 4480696) FH:0x54b075a0 svid:13629 pos:0-0
    4480696                2020-01-08 12:20:54       132.65.60.56    132.65.116.111     NLM      0x5e15f77a (1578497914)             4045               V4 UNLOCK Reply (Call In 4480695)

Here you see that 2 minutes later the client uses the same transaction ID (0x5e15f77a) and the same port again, but the file handle is different, so the client is unlocking a different file.

    4591136                2020-01-08 12:22:54       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0
    4592588                2020-01-08 12:22:57       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0
    4598862                2020-01-08 12:23:03       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0
    4608871                2020-01-08 12:23:21       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0
    4635984                2020-01-08 12:23:59       132.65.116.111  132.65.60.56       NLM      0x5e15f77a (1578497914)             886                [RPC retransmission of #4480695]V4 UNLOCK Call (Reply In 4480696) FH:0xb14b75a8 svid:13629 pos:0-0

Transaction ID reuse is also seen for a number of other transaction IDs starting at the same time.

With ONTAP 9.3 we have changed the way our replay cache tracks requests by including a checksum of the RPC request. In both this and earlier releases ONTAP would cache the call in frame 4480695, but starting in 9.3 we then cache the checksum as part of that entry.

When the client sends the request in frame 4591136 it uses the same transaction ID (0x5e15f77a) and the same port again. Here the problem is that we already hold a checksum in cache for the “same transaction”
 …

This seems to happen after the client did not receive the response and retransmits the request.
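The behaviour NetApp describes can be sketched as a duplicate request cache keyed by (client address, port, XID), where 9.3 additionally stores a checksum of the request body. All names here are hypothetical; this illustrates the mechanism as described above, not actual ONTAP code:

```python
import hashlib

class ReplayCache:
    """Sketch of a DRC that also checksums the request body (hypothetical).

    Entries are keyed by (client_addr, port, xid).  The pre-9.3 style check
    treats any key hit as a retransmission and replays the cached reply.
    The 9.3-style check also compares a checksum of the request, so the
    same XID carrying a *different* request body (e.g. a different file
    handle) is flagged instead of being answered from cache.
    """

    def __init__(self):
        self._cache = {}  # (addr, port, xid) -> (digest, cached_reply)

    @staticmethod
    def _digest(request_body):
        return hashlib.sha256(request_body).digest()

    def lookup(self, addr, port, xid, request_body):
        entry = self._cache.get((addr, port, xid))
        if entry is None:
            return ("new", None)          # execute the call, then store()
        digest, reply = entry
        if digest == self._digest(request_body):
            return ("replay", reply)      # true retransmission: resend reply
        return ("mismatch", None)         # same XID, different request

    def store(self, addr, port, xid, request_body, reply):
        self._cache[(addr, port, xid)] = (self._digest(request_body), reply)

drc = ReplayCache()
# Frame 4480695: UNLOCK of FH 0x54b075a0, xid 0x5e15f77a, source port 886.
drc.store("132.65.116.111", 886, 0x5E15F77A, b"FH:0x54b075a0", b"NLM_GRANTED")
# Two minutes later: same xid and port, but a different file handle.
state, _ = drc.lookup("132.65.116.111", 886, 0x5E15F77A, b"FH:0xb14b75a8")
```

Under the pre-9.3 scheme the second UNLOCK would hit the cache and be answered with the reply for the wrong file; with the checksum it is detected as a mismatch instead, which matches the unanswered retransmissions seen in frames 4591136 through 4635984.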

danny


On 24 Dec 2019, at 5:02, Rick Macklem <rmacklem at uoguelph.ca> wrote:

Richard P Mackerras wrote:
Hi,

We had some bully type workloads emerge when we moved a lot of block
storage from old XIV to new all flash 3PAR. I wonder if your IMAP issue
might have emerged just because suddenly there was the opportunity with all
flash. QOS is good on 9.x ONTAP. If anyone says it’s not then they last
looked on 8.x. So I suggest you QOS the IMAP workload.

Nobody should be using UDP with NFS unless they have a very specific set
of circumstances. TCP was a real step forward.
Well, I can't argue with this, considering I did the first working implementation
of NFS over TCP. It was actually Mike Karels who suggested I try doing so.
There's a paper in a very old Usenix Conference Proceedings, but it is so old
that it isn't on the Usenix web page (around 1988 in Denver, if I recall). I don't
even have a copy myself, although I was the author.

Now, having said that, I must note that the Network Lock Manager (NLM) and
Network Status Monitor (NSM) were not NFS. They were separate stateful
protocols (poorly designed imho) that Sun never published.

NFS as Sun designed it (NFSv2 and NFSv3) was a "stateless server" protocol,
so that it could work reliably without server crash recovery.
However, the NLM was inherently stateful, since it was dealing with file locks.

So, you can't really lump the NLM with NFS (and you should avoid use of the
NLM over any transport imho).

NFSv4 tackled the difficult problem of having a "stateful server" and crash recovery,
which resulted in a much more complex protocol (compare the size of RFC-1813
vs RFC-5661 to get some idea of this).

rick

Cheers

Richard
_______________________________________________
freebsd-stable at freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe at freebsd.org"
