Frequent hiccups on the networking layer

Rick Macklem rmacklem at uoguelph.ca
Tue Apr 28 21:06:10 UTC 2015


Mark Schouten wrote:
> Hi,
> 
> 
> I've got a FreeBSD 10.1-RELEASE box running with iscsi on top of ZFS.
> I've had some major issues with it where it would stop processing
> traffic for a minute or two, but that's 'fixed' by disabling TSO. I
> do have frequent iscsi errors, which are luckily fixed on the iscsi
> layer, but they do cause an occasional error message on both the
> iscsi client and server. I also see input errors on the FreeBSD
> server, but I'm unable to find out what those are. I do see a
> correlation between the iscsi error messages and the number of
> ethernet input errors on the server.
> 
> 
> I saw this message [1] which made me have a look at `vmstat -z`, and
> that shows me the following:
> 
> 
> vmstat -z | head -n 1; vmstat -z | sort -k 6 -t , | tail -10
> ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
> zio_data_buf_94208:   94208,      0,     162,       5,  135632,   0,   0
> zio_data_buf_98304:   98304,      0,     118,       9,  101606,   0,   0
> zio_link_cache:          48,      0,       6,   30870,24853549414,   0,   0
> 8 Bucket:                64,      0,     145,    2831,148672720,  11,   0
> 32 Bucket:              256,      0,     859,     731,231513474,  52,   0
> mbuf_jumbo_9k:         9216, 604528,    7230,    2002,11764806459,108298123,   0
> 64 Bucket:              512,      0,     808,     352,147120342,16375582,   0
> 256 Bucket:            2048,      0,     500,      50,307051808,189685088,   0
> vmem btag:               56,      0, 1671605, 1291509,198933250,36431,   0
> 128 Bucket:            1024,      0,     410,     106,65267164,772374,   0
> 
> 
> I am using jumbo frames. Could it be that both the input errors and
> my frequent hiccups come from all those failures to allocate 9k
> jumbo mbufs?
There have been email list threads discussing how allocating 9K jumbo
mbufs fragments the KVM (kernel virtual memory) used for mbuf cluster
allocation and causes grief. If your net device driver is one that
allocates 9K jumbo mbufs for receive instead of using a list of
smaller mbuf clusters, I'd guess this is what is biting you.
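As a quick check on your numbers, the FAIL column of that
mbuf_jumbo_9k line can be pulled out with awk. This is just a sketch
using the line you pasted as sample input; on the live box you would
feed it `vmstat -z | grep mbuf_jumbo_9k` instead:

```shell
# Sketch: extract the FAIL counter (6th comma-separated field) for
# 9k jumbo clusters, using the pasted line as sample input.
sample='mbuf_jumbo_9k:         9216, 604528,    7230,    2002,11764806459,108298123,   0'
fails=$(printf '%s\n' "$sample" | awk -F, '{gsub(/ /, "", $6); print $6}')
echo "9k jumbo allocation failures: $fails"
```

Watching that counter over time (rather than its absolute value) would
tell you whether the failures line up with your hiccups.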
As far as I know (just from email discussion; I've never used them
myself), you can either stop using jumbo packets or switch to a
different net interface whose driver doesn't allocate 9K jumbo mbufs
(i.e. one that receives jumbo packets into a list of smaller mbuf
clusters).

I remember Garrett Wollman arguing that 9K mbuf clusters shouldn't
ever be used. I've cc'd him, in case he wants to comment.

I don't know how to increase the KVM that the allocator can use for
9K mbuf clusters, nor do I know whether that would work as a
workaround.
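For what it's worth, the per-size cluster limits themselves are
tunable. A hedged sketch of the standard FreeBSD knobs (the values
below are made up for illustration, and raising the limit won't help
if KVM fragmentation is the real problem):

```
# /boot/loader.conf (boot-time), or at runtime via sysctl(8).
# Illustrative values only:
kern.ipc.nmbclusters=262144   # 2k clusters
kern.ipc.nmbjumbop=131072     # page-sized (4k) clusters
kern.ipc.nmbjumbo9=65536      # 9k clusters
```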

rick

> And can I increase the sysctls mentioned in [1] at will?
> 
> 
> Thanks
> 
> 
> 
> 
> 
> 
> [1]:
> https://lists.freebsd.org/pipermail/freebsd-questions/2013-August/252827.html
> 
> 
> Kind regards,
> 
> --
> Kerio Operator in the Cloud? https://www.kerioindecloud.nl/
> Mark Schouten  | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | info at tuxis.nl


More information about the freebsd-net mailing list