svn commit: r251282 - head/sys/kern

Konstantin Belousov kostikbel at gmail.com
Sat Jun 15 10:43:10 UTC 2013


On Tue, Jun 04, 2013 at 06:14:49PM +1000, Bruce Evans wrote:
> On Tue, 4 Jun 2013, Konstantin Belousov wrote:
> 
> > On Mon, Jun 03, 2013 at 02:24:26AM -0700, Alfred Perlstein wrote:
> >> On 6/3/13 12:55 AM, Konstantin Belousov wrote:
> >>> On Sun, Jun 02, 2013 at 09:27:53PM -0700, Alfred Perlstein wrote:
> >>>> Hey Konstantin, shouldn't this be scaled against the actual amount of
> >>>> KVA we have instead of an arbitrary limit?
> >>> The commit changes the buffer cache to scale according to the available
> >>> KVA, making the scaling less dumb.
> >>>
> >>> I do not understand what exactly do you want to do, please describe the
> >>> algorithm you propose to implement instead of my change.
> >>
> >> Sure, how about deriving the hardcoded "32" from the maxkva a machine
> >> can have?
> >>
> >> Is that possible?
> > I do not see why this would be useful. Initially I thought about simply
> > capping nbuf at 100000 without referencing any "memory". Then I realized
> > that this would somewhat conflict with (unlikely) changes to the value
> > of BKVASIZE due to "factor".
> 
> The presence of BKVASIZE in 'factor' is a bug.  My version never had this
> bug (see below for a patch).  The scaling should be to maximize nbuf,
> subject to non-arbitrary limits on physical memory and kva, and now an
> arbitrary limit of about 100000 / (BKVASIZE / 16384) on nbuf.  Your new
> limit is arbitrary so it shouldn't affect nbuf depending on BKVASIZE.
I disagree with the statement that the goal is to maximize nbuf. The
buffer cache currently is nothing more than a header and an i/o record
for a set of wired pages. For non-metadata on UFS, buffers do not map
the pages into KVA, so a buffer becomes purely an array of pointers to
pages plus some additional bookkeeping.

I want to eventually break the coupling between the size of the buffer
map and nbuf. Right now, typical population of the buffer map is around
20%, which means that we waste >= 100MB of KVA on 32-bit machines, where
KVA is precious. I would also consider shrinking nbuf much lower, but
the cost of wiring and unwiring the pages on buffer creation and reuse
is the blocking point.

> 
> Expanding BKVASIZE should expand kva use, but on i386 this will soon
> hit a non-arbitrary kva limit so nbuf will not be as high as preferred.
> nbuf needs to be very large mainly to support file systems with small
> buffers.  Even 100000 only gives 50MB of buffering if the fs block
> size is 512.  This would shrink to only 12.5MB if BKVASIZE is expanded
> by a factor of 4 and the bug is not fixed.  If 25000 buffers after
> expanding BKVASIZE is enough, then that should be the arbitrary limit
> (independent of BKVASIZE) so as to save physical memory.
Yes, this is another reason to decouple the nbuf and buffer map.

> 
> On i386 systems with 1GB RAM, nbuf defaults to about 7000.  With an
> fs block size of 512, that can buffer 3.5MB.  Expanding BKVASIZE by a
> factor of 4 shrinks this to 0.875MB in -current.  That is ridiculously
> small.  VMIO limits the lossage from this.
> 
> BKVASIZE was originally 8KB.  I forget if nbuf was halved by not modifying
> the scale factor when it was expanded to 16KB.  Probably not.  I used to
> modify the scale factor to get twice as many as the default nbuf, but
> once the default nbuf expanded to a few thousand it became large enough
> for most purposes so I no longer do this.
Now, with the default UFS block size being 32KB, the effective number of
buffers is halved once more.


