[Bug 277584] Can't connect with SSH after changing net.inet.udp.recvspace on FreeBSD 13.2
Date: Fri, 08 Mar 2024 19:23:03 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277584
Bug ID: 277584
Summary: Can't connect with SSH after changing
net.inet.udp.recvspace on FreeBSD 13.2
Product: Base System
Version: 13.2-STABLE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: conf
Assignee: bugs@FreeBSD.org
Reporter: claudejgilbert@gmail.com
When increasing net.inet.udp.recvspace on a FreeBSD 13.2 server, I noticed I
can no longer SSH into that server if I set the value higher than about 1.86 MB.
The existing SSH session stays alive, but if I initiate a second, simultaneous
SSH connection, I get the following error message:
kex_exchange_identification: Connection closed by remote host
Connection closed by 172.31.29.181 port 22
Also, if I close the current SSH session, I am locked out of the server.
In /var/log/messages, I notice the following:
Jan 11 16:57:23 server-1 sshd[14120]: fatal: bad addr or host: <NULL> (Name
does not resolve)
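For reference, the minimal sequence that reproduces this for me (2000000 is
just an example value above the ~1.86 MB threshold, and "user" is a
placeholder account):
# sysctl net.inet.udp.recvspace=2000000    (on the server, from the existing session)
$ ssh user@172.31.29.181                   (from another machine; fails as shown above)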
I then checked the ena0 interface:
#ifconfig ena0 -v
ifconfig: socket(family 2,SOCK_DGRAM): No buffer space available
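In hindsight, it would have been useful to compare the two socket-buffer
limits at this point, e.g.:
# sysctl kern.ipc.maxsockbuf net.inet.udp.recvspace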
Finally, I checked the network memory buffer space:
#netstat -m
3984/2376/6360 mbufs in use (current/cache/total)
0/1270/1270/1004997 mbuf clusters in use (current/cache/total/max)
0/1270 mbuf+clusters out of packet secondary zone in use (current/cache)
3982/1352/5334/502498 4k (page size) jumbo clusters in use
(current/cache/total/max)
0/0/0/148888 9k jumbo clusters in use (current/cache/total/max)
0/0/0/83749 16k jumbo clusters in use (current/cache/total/max)
16924K/8542K/25466K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
As far as I can tell, the network memory buffers are nowhere near exhausted.
I also found that if I increase kern.ipc.maxsockbuf to 3 MB, I am able to
raise net.inet.udp.recvspace to 2 MB and still SSH into the server. Apparently
kern.ipc.maxsockbuf must be set somewhat higher than net.inet.udp.recvspace,
which makes sense.
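Concretely, the combination that works for me (3 MB and 2 MB expressed in
bytes):
# sysctl kern.ipc.maxsockbuf=3145728
# sysctl net.inet.udp.recvspace=2097152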
The command I used to increase/decrease the UDP socket receive buffer space:
#sysctl net.inet.udp.recvspace=<value>
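To keep the values across reboots, the same settings could be added to
/etc/sysctl.conf (assuming kern.ipc.maxsockbuf stays above
net.inet.udp.recvspace):
kern.ipc.maxsockbuf=3145728
net.inet.udp.recvspace=2097152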
Server specs:
FreeBSD server-1 13.2-RELEASE-p8 FreeBSD 13.2-RELEASE-p8 GENERIC amd64
I'm not sure whether this is a bug or not, but I have no idea why this happens.
This refers to the following discussion on the FreeBSD forums:
https://forums.freebsd.org/threads/cant-connect-with-ssh-after-changing-net-inet-udp-recvspace.91874/
--
You are receiving this mail because:
You are the assignee for the bug.