svn commit: r251586 - head/sys/arm/ti

Konstantin Belousov kostikbel at gmail.com
Tue Jun 11 05:22:08 UTC 2013


On Tue, Jun 11, 2013 at 01:10:52AM +0200, Olivier Houchard wrote:
> On Mon, Jun 10, 2013 at 11:13:58PM +0200, Olivier Houchard wrote:
> > On Mon, Jun 10, 2013 at 10:37:36PM +0300, Konstantin Belousov wrote:
> > > On Mon, Jun 10, 2013 at 12:02:20PM -0500, Alan Cox wrote:
> > > > On 06/10/2013 06:08, Olivier Houchard wrote:
> > > > > On Mon, Jun 10, 2013 at 06:55:47AM +0300, Konstantin Belousov wrote:
> > > > >> On Sun, Jun 09, 2013 at 10:51:12PM +0000, Olivier Houchard wrote:
> > > > >>> Author: cognet
> > > > >>> Date: Sun Jun  9 22:51:11 2013
> > > > >>> New Revision: 251586
> > > > >>> URL: http://svnweb.freebsd.org/changeset/base/251586
> > > > >>>
> > > > >>> Log:
> > > > >>>   Increase the maximum KVM available on TI chips. Not sure why we suddenly need
> > > > >>>   that much, but that lets me boot with 1GB of RAM.
> > > > >> I suspect that the cause is the combination of limited KVA and
> > > > >> the lack of any limit on the buffer map.  I noted that ARM lacks
> > > > >> VM_BCACHE_SIZE_MAX after a report from mav about a similar (?)
> > > > >> problem a day ago.
> > > > >>
> > > > >> In essence, the buffer map is allowed to take up to ~330MB when no
> > > > >> upper limit from VM_BCACHE_SIZE_MAX is specified.
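A minimal standalone sketch of the clamp under discussion (the real
sizing logic lives in kern/subr_param.c and kern/vfs_bio.c and varies
across FreeBSD versions; the uncapped nbuf figure below is hypothetical,
picked only to land near the ~330MB estimate):

#include <stdio.h>

#define BKVASIZE	16384	/* KVA reserved per buffer header */

/*
 * Sketch: nbuf scales up with physical memory, and a nonzero
 * maxbcache (seeded from VM_BCACHE_SIZE_MAX at boot) clamps the
 * total buffer-map KVA, nbuf * BKVASIZE.
 */
static long
clamp_nbuf(long nbuf, long maxbcache)
{
	if (maxbcache != 0 && nbuf > maxbcache / BKVASIZE)
		nbuf = maxbcache / BKVASIZE;
	return (nbuf);
}

int
main(void)
{
	long nbuf = 21000;	/* hypothetical memory-scaled value */

	printf("uncapped: ~%ld MB of buffer-map KVA\n",
	    nbuf * BKVASIZE / (1024 * 1024));
	nbuf = clamp_nbuf(nbuf, 128L * 1024 * 1024);
	printf("capped:   ~%ld MB of buffer-map KVA\n",
	    nbuf * BKVASIZE / (1024 * 1024));
	return (0);
}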
> > > > >
> > > > > Hi Konstantin,
> > > > >
> > > > > Thanks for the hint!
> > > > > It seems only i386 and sparc64 set it.  What would be a good value:
> > > > > 200MB, as it is on i386?
> > > > >
> > > > 
> > > > Since there are many arm platforms with less than 1 GB of kernel virtual
> > > > address (KVA) space, VM_BCACHE_SIZE_MAX should be made to scale down
> > > > from 200 MB with the available KVA space.  See how VM_KMEM_SIZE_MAX is
> > > > currently defined on arm.
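For illustration only, a hypothetical KVA-scaled definition along the
lines Alan suggests; vm_max_kernel_address and VM_MIN_KERNEL_ADDRESS
are the real arm symbols, ulmin() comes from libkern, and the 1/8
ratio is an arbitrary placeholder rather than a tuned value:

/* Hypothetical sketch, not committed code: cap the buffer cache at
 * 200MB or a fixed fraction of the actual KVA window, whichever is
 * smaller. */
#ifndef VM_BCACHE_SIZE_MAX
#define	VM_BCACHE_SIZE_MAX					\
	ulmin(200ul * 1024 * 1024,				\
	    (vm_max_kernel_address - VM_MIN_KERNEL_ADDRESS) / 8)
#endif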
> > > 
> > > In fact, I think it does not make much sense to scale the buffer cache up.
> > > It is mostly wasted space now.  As I measured it, under a typical load
> > > only 10-20% of instantiated buffers are mapped.
> > > 
> > > Alexander Motin reported that he tested the equivalent of the following
> > > change.  With it committed, I think that r251586 could be reverted.
> > > 
> > > diff --git a/sys/arm/include/param.h b/sys/arm/include/param.h
> > > index 9ffb118..5c738c2 100644
> > > --- a/sys/arm/include/param.h
> > > +++ b/sys/arm/include/param.h
> > > @@ -128,6 +128,11 @@
> > >  #define USPACE_SVC_STACK_BOTTOM		(USPACE_SVC_STACK_TOP - 0x1000)
> > >  #define USPACE_UNDEF_STACK_TOP		(USPACE_SVC_STACK_BOTTOM - 0x10)
> > >  #define USPACE_UNDEF_STACK_BOTTOM	(FPCONTEXTSIZE + 10)
> > > +
> > > +#ifndef VM_BCACHE_SIZE_MAX
> > > +#define	VM_BCACHE_SIZE_MAX	(128 * 1024 * 1024)
> > > +#endif
> > > +
> > >  /*
> > >   * Mach derived conversion macros
> > >   */
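Once a kernel built with this define is running, the effective cap can
be double-checked from userland, since maxbcache is exported as a
read-only sysctl; a small sketch:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Print kern.maxbcache, which the kernel seeds from
 * VM_BCACHE_SIZE_MAX at boot. */
int
main(void)
{
	long maxbcache;
	size_t len = sizeof(maxbcache);

	if (sysctlbyname("kern.maxbcache", &maxbcache, &len, NULL, 0) != 0) {
		perror("sysctlbyname");
		return (1);
	}
	printf("kern.maxbcache = %ld bytes\n", maxbcache);
	return (0);
}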
> > 
> > 
> > I tested it with my changes reverted and it indeed works, so I'm fine
> > with this being committed and my changes being reverted.
> > 
> 
> In fact I spoke too soon.  It gets further, but I end up with
> vm_thread_new: kstack allocation failed
> probably because I have a local patch that aligns the kstack on 32kB,
> which is something we have to do if we want to store curthread on the
> kstack.  It will boot if I reduce VM_BCACHE_SIZE_MAX to 64MB, but that
> is probably not the best thing to do.
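For context, a sketch of the trick being described (hypothetical code,
not the actual local patch): if every kstack is 32kB-aligned, curthread
can be recovered by masking the stack pointer, but allocating aligned
kstacks out of a small kernel map fragments it and makes
vm_thread_new() failures more likely.

#include <stdint.h>

struct thread;

#define	KSTACK_ALIGN	(32 * 1024)

/* Read sp and mask down to the aligned stack base, where a pointer
 * to the owning thread is assumed to be stored. */
static inline struct thread *
arm_curthread(void)
{
	uintptr_t sp;

	__asm __volatile("mov %0, sp" : "=r" (sp));
	return (*(struct thread **)(sp & ~((uintptr_t)KSTACK_ALIGN - 1)));
}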

The other cause of increased KVA use is the vm radix trie used to keep
the collection of a vm object's pages.  When I profiled KVA use for PAE
on i386, which has a similar problem of exhausted KVA, the radix trie
popped up as the culprit.

IMO the current worst-case sizing of the trie is not attainable in any
practical situation.  Anyway, that is a separate issue.
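A back-of-envelope illustration of that claim, with assumed parameters
(4KB pages, 16-slot trie nodes, a 32-bit pindex space; the real node
width and node size are implementation details of sys/vm/vm_radix.c):

#include <stdio.h>

int
main(void)
{
	unsigned long pages = (1UL << 30) / 4096; /* pages in 1GB of RAM */
	int slots = 16;			/* assumed slots per trie node */
	int levels = 8;			/* ceil(32 / log2(slots)) */
	unsigned long dense, sparse;

	/* Densely populated object: interior nodes amortize well. */
	dense = pages / (slots - 1);
	/* Pathological sparse layout: nearly a full path per page. */
	sparse = pages * (levels - 1);
	printf("dense: ~%lu nodes, sparse worst case: ~%lu nodes\n",
	    dense, sparse);
	return (0);
}

Under these assumptions the sparse bound is two orders of magnitude
above what a dense trie over the same pages ever touches, which matches
the observation that the worst case never occurs in practice.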

I will commit the bcache limit change after make universe passes.