svn commit: r344188 - in head: lib/libc/sys sys/vm

Gleb Smirnoff glebius at
Fri Feb 15 23:36:24 UTC 2019

Author: glebius
Date: Fri Feb 15 23:36:22 2019
New Revision: 344188

  For 32-bit machines roll back the default number of vnode pager pbufs
  to the level before r343030.  For 64-bit machines reduce it slightly,
  too.  Together with r343030 I bumped the limit up to the value we use at
  Netflix to serve 100 Gbit/s of sendfile traffic, and it probably isn't a
  good default.

  Provide a loader tunable to change the vnode pager pbufs count.  Document it.
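
  For readers who want to adjust the new knob on their own systems, a
  minimal sketch of its use (the value 2048 below is purely illustrative,
  not a recommended setting):

```
# /boot/loader.conf
vm.vnode_pbufs="2048"
```

  The current value can then be inspected at runtime with
  `sysctl vm.vnode_pbufs`.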


Modified: head/lib/libc/sys/sendfile.2
--- head/lib/libc/sys/sendfile.2	Fri Feb 15 22:55:13 2019	(r344187)
+++ head/lib/libc/sys/sendfile.2	Fri Feb 15 23:36:22 2019	(r344188)
@@ -25,7 +25,7 @@
 .\" $FreeBSD$
-.Dd January 25, 2019
+.Dd February 15, 2019
@@ -48,6 +48,7 @@ The
 system call
 sends a regular file or shared memory object specified by descriptor
 .Fa fd
 out a stream socket specified by descriptor
 .Fa s .
@@ -224,6 +225,19 @@ implementation of
 .Fn sendfile
 is "zero-copy", meaning that it has been optimized so that copying of the file data is avoided.
+.Ss physical paging buffers
+.Fn sendfile
+uses the vnode pager to read file pages into memory.
+The pager uses a pool of physical buffers (pbufs) to run its I/O operations.
+When the system runs out of pbufs,
+.Fn sendfile
+will block and report the state
+.Dq Li zonelimit .
+The size of the pool can be tuned with the
+.Va vm.vnode_pbufs
+.Xr loader.conf 5
+tunable, and can be checked at runtime with the
+.Xr sysctl 8
+OID of the same name.
+.Ss sendfile(2) buffers
 On some architectures, this system call internally uses a special
 .Fn sendfile
@@ -279,9 +293,11 @@ buffers usage respectively.
 These values may also be viewed through
 .Nm netstat Fl m .
-If a value of zero is reported for
-.Va kern.ipc.nsfbufs ,
-your architecture does not need to use
+If the
+.Xr sysctl 8
+OID
+.Va kern.ipc.nsfbufs
+doesn't exist, your architecture does not need to use
 .Fn sendfile
 buffers because their task can be efficiently performed
 by the generic virtual memory structures.
@@ -363,11 +379,13 @@ does not support
 The socket peer has closed the connection.
+.Xr loader.conf 5 ,
 .Xr netstat 1 ,
 .Xr open 2 ,
 .Xr send 2 ,
 .Xr socket 2 ,
 .Xr writev 2 ,
+.Xr sysctl 8 ,
 .Xr tuning 7
 .%A K. Elmeleegy

Modified: head/sys/vm/vnode_pager.c
--- head/sys/vm/vnode_pager.c	Fri Feb 15 22:55:13 2019	(r344187)
+++ head/sys/vm/vnode_pager.c	Fri Feb 15 23:36:22 2019	(r344188)
@@ -115,13 +115,23 @@ SYSCTL_PROC(_debug, OID_AUTO, vnode_domainset, CTLTYPE
     &vnode_domainset, 0, sysctl_handle_domainset, "A",
     "Default vnode NUMA policy");
+static int nvnpbufs;
+SYSCTL_INT(_vm, OID_AUTO, vnode_pbufs, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
+    &nvnpbufs, 0, "number of physical buffers allocated for vnode pager");
 static uma_zone_t vnode_pbuf_zone;
 static void
 vnode_pager_init(void *dummy)
-	vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nswbuf * 8);
+#ifdef __LP64__
+	nvnpbufs = nswbuf * 2;
+#else
+	nvnpbufs = nswbuf / 2;
+#endif
+	TUNABLE_INT_FETCH("vm.vnode_pbufs", &nvnpbufs);
+	vnode_pbuf_zone = pbuf_zsecond_create("vnpbuf", nvnpbufs);
 SYSINIT(vnode_pager, SI_SUB_CPU, SI_ORDER_ANY, vnode_pager_init, NULL);

More information about the svn-src-all mailing list