cvs commit: src/sys/amd64/conf GENERIC
mux at freebsd.org
Mon Jun 14 12:57:44 PDT 2004
Alfred Perlstein wrote:
> * Maxime Henrion <mux at freebsd.org> [040614 12:40] wrote:
> > Alfred Perlstein wrote:
> > > * John Baldwin <jhb at FreeBSD.org> [040614 11:57] wrote:
> > > >
> > > > I'm betting it is just triggering a race. When I first did the adaptive
> > > > mutexes I stress tested it (-j <bignum> buildworld loops) on an ultra60, an
> > > > alpha ds20, and a quad pii-xeon w/o any lockups.
> > >
> > > Just a side note, I think it's Berkeley DB's recent code that
> > > will spin a number of times based on the number of CPUs present
> > > in the system. Meaning, it might make sense to spin more on a
> > > quad than on a dual-proc machine. It might be worth checking this out.
> > What do you mean by spinning more? As far as I know, with adaptive
> > mutexes, if a thread tries to lock a blocking mutex that is already
> > held by another thread currently running on another CPU, it spins
> > instead of blocking, on the assumption that the other thread will
> > soon release the mutex. Obviously, this is more likely to happen
> > when there are more CPUs in the system, but I don't get what you
> > mean here.
> Specify that test-and-set mutexes should spin tas_spins times without
> blocking. The value defaults to 1 on uniprocessor systems and to 50
> times the number of processors on multiprocessor systems.
> The database environment's test-and-set spin count may also be set
> using the environment's DB_CONFIG file. The syntax of the entry in
> that file is a single line with the string "set_tas_spins", one or
> more whitespace characters, and the number of spins. Because the
> DB_CONFIG file is read when the database environment is opened, it
> will silently overrule configuration done before that time.
> Just an interesting idea.
Hmm, indeed. This would work if mutexes are generally held for very short
periods of time, and if the cost of a context switch exceeds the cost of
spinning while doing nothing else. I'm not entirely sure why this should
happen more often as the number of processors increases (which would
justify setting this value as described above). Maybe it is because, with
more CPUs, there is less chance that the thread holding the mutex gets
preempted (by an ithread, or by another thread in a fully preemptive
kernel), which would cause it to hold the lock for longer.
Anyway, this is quite interesting, and it would be nice if someone could
implement it and run benchmarks to see how much it helps.
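A bounded-spin acquire path of the kind discussed above might look like
the sketch below. The mutex layout, the spin bound, and the fallback are
assumptions for illustration only, not the kernel's actual adaptive mutex
code; in particular, real code would sleep on a turnstile instead of
calling sched_yield():

```c
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of a bounded-spin (adaptive) test-and-set lock: try the
 * test-and-set up to max_spins times before giving up and blocking.
 * "Blocking" is approximated here with sched_yield(). */
typedef struct {
	atomic_int locked;	/* 0 = free, 1 = held */
} tas_mtx;

static bool
tas_try(tas_mtx *m)
{
	int expected = 0;

	return (atomic_compare_exchange_strong(&m->locked, &expected, 1));
}

static void
tas_lock(tas_mtx *m, int max_spins)
{
	for (;;) {
		/* Spin: cheap if the holder releases the lock soon. */
		for (int i = 0; i < max_spins; i++)
			if (tas_try(m))
				return;
		/* Spinning failed; stop burning CPU and yield. */
		sched_yield();
	}
}

static void
tas_unlock(tas_mtx *m)
{
	atomic_store(&m->locked, 0);
}
```

Benchmarking would then amount to varying max_spins (e.g. 1 versus
50 * ncpus) under a contended workload and comparing throughput.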