svn commit: r298933 - in head: share/man/man9 sys/amd64/include sys/dev/acpica sys/dev/drm2 sys/dev/drm2/i915 sys/kern sys/sys sys/x86/acpica sys/x86/x86

Bruce Evans brde at optusnet.com.au
Thu May 5 12:21:51 UTC 2016


On Wed, 4 May 2016, John Baldwin wrote:

> On Tuesday, May 03, 2016 11:19:44 AM John Baldwin wrote:
>> On Wednesday, May 04, 2016 03:58:40 AM Bruce Evans wrote:
>>
>>> BTW, I don't like select's and bitset's use of longs.  Using unsigned
>>> for select is a historical mistake.  Bitset apparently copied select
>>> except it unimproved to signed long.  Bitstring uses unsigned char with
>>> no optimizations.  Sigset uses uint32_t with no obvious optimizations,
>>> but compilers do a good job with it due to its fixed size.  I doubt
>>> that the manual optimization of using a wider size is important.
>>
>> I agree, but cpuset_t is already part of the ABI in existing releases. :(
>> Changing it to uint32_t would break the ABI for big-endian platforms.
>
> How about this:

It seems reasonable.  I didn't check many details.
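
For reference, the word types being compared in the quoted paragraph look
roughly as sketched below.  This is paraphrased from memory of the FreeBSD
headers rather than checked against the tree, so treat the exact spellings
as approximate:

	#include <stdint.h>			/* the real header uses __uint32_t */

	typedef unsigned long	__fd_mask;	/* select(2): fd_set words */
	typedef unsigned char	bitstr_t;	/* old bitstring(3) words */

	struct bitset_like { long	__bits[1]; };	/* bitset: signed long words */
	struct sigset_like { uint32_t	__bits[4]; };	/* sigset: 4 fixed 32-bit words */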

> diff --git a/sys/sys/_bitset.h b/sys/sys/_bitset.h
> index 26a8848..89dd7b6 100644
> --- a/sys/sys/_bitset.h
> +++ b/sys/sys/_bitset.h
> @@ -36,26 +36,15 @@
>  * Macros addressing word and bit within it, tuned to make compiler
>  * optimize cases when SETSIZE fits into single machine word.
>  */
> -#define	_BITSET_BITS		(sizeof(long) * NBBY)
> +#define	_BITSET_BITS		(sizeof(long) * 8)
>
> -#define	__bitset_words(_s)	(howmany(_s, _BITSET_BITS))
> +#define	_howmany(x, y)	(((x) + ((y) - 1)) / (y))
>
> -#define	__bitset_mask(_s, n)						\
> -	(1L << ((__bitset_words((_s)) == 1) ?				\
> -	    (__size_t)(n) : ((n) % _BITSET_BITS)))
> -
> -#define	__bitset_word(_s, n)						\
> -	((__bitset_words((_s)) == 1) ? 0 : ((n) / _BITSET_BITS))
> +#define	__bitset_words(_s)	(_howmany(_s, _BITSET_BITS))

This still has a bogus underscore on _s.

Unfortunately, the problem pointed out by kib does apply here, and in
select.h too, contrary to what I said before.  Our select implementation
breaks user code like:

 	#include <sys/types.h>	/* for its undocumented nested pollution */

 	void foo(void) { void *_howmany(void); }	/* broken by the _howmany() macro */

Change it to use upper case instead of adding an underscore.  Lower case
should indicate a safe (function-like) macro.
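
As a sketch of that spelling: a name consisting of an underscore followed
by an upper-case letter is reserved for any use, so it cannot collide with
the block-scope declaration in the example above, whereas _howmany is only
reserved at file scope.  The exact name below is only my reading of the
suggestion, not anything from the patch:

	#define	_HOWMANY(x, y)		(((x) + ((y) - 1)) / (y))
	#define	__bitset_words(s)	(_HOWMANY((s), _BITSET_BITS))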

>
> #define	BITSET_DEFINE(t, _s)						\
> struct t {								\
>         long    __bits[__bitset_words((_s))];				\
> }
>
> -#define	BITSET_T_INITIALIZER(x)						\
> -	{ .__bits = { x } }
> -
> -#define	BITSET_FSET(n)							\
> -	[ 0 ... ((n) - 1) ] = (-1L)
> -

I didn't expect you to clean the layering too, but this reminds me that
I don't like the layering.  Little _foo.h headers may be the cleanest
way to reduce namespace pollution, but they are bad for readability and
efficiency.  New headers should be designed to be clean from the start
and not need multiple layers.  For bitsets, we now have _bitset.h,
bitset.h, _cpuset.h, cpuset.h and bitstring.h.  Since yesterday we have
had gibbs' improved bitstring.h.  It is too over-engineered for me, but
it is now just as optimal as bitset.h (perhaps more so).  gibbs used bitstrings
instead of bitsets since he thought bitsets were too kernel-ish.

We still have much larger messes of little headers for cleaning up the
time types.  This was done mainly to clean sys/stat.h, but that cleaning is
mostly broken now, and sys/time.h and time.h remain a mess of pollution.
sys/stat.h now uses timespecs unconditionally, but they only exist in
POSIX >= 2001.  Old versions used __timespec (which was declared in
the little headers) in the !_BSD_VISIBLE case.  We still have the little
headers -- _timespec.h, timespec.h and _timeval.h.  _timespec.h existed
solely to declare struct __timespec, and timespec.h existed solely to
declare struct timespec and some pollution.  Now _timespec.h exists
solely to declare struct timespec, and timespec.h grew more pollution.
_timeval.h always existed solely to declare struct timeval and
associated types.

The little time headers do serve the purpose of providing a single place
to declare one struct with minor collateral pollution, but since the
struct is so simple it is better to repeat its definition (ifdefed).
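
A minimal sketch of the repeated, ifdefed definition as it might appear in
each consumer header (the guard names are only my assumptions for
illustration; I haven't checked them against the tree):

	#ifndef _TIME_T_DECLARED
	typedef	__time_t	time_t;		/* __time_t from <sys/_types.h> */
	#define	_TIME_T_DECLARED
	#endif

	#ifndef _STRUCT_TIMESPEC_DECLARED
	#define	_STRUCT_TIMESPEC_DECLARED
	struct timespec {
		time_t	tv_sec;		/* seconds */
		long	tv_nsec;	/* and nanoseconds */
	};
	#endif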

> diff --git a/sys/sys/bitset.h b/sys/sys/bitset.h
> index d130522..f1c7bf8 100644
> --- a/sys/sys/bitset.h
> +++ b/sys/sys/bitset.h
> @@ -32,6 +32,13 @@
> #ifndef _SYS_BITSET_H_
> #define	_SYS_BITSET_H_
>
> +#define	__bitset_mask(_s, n)						\
> +	(1L << ((__bitset_words((_s)) == 1) ?				\
> +	    (__size_t)(n) : ((n) % _BITSET_BITS)))
> +

The longs should be u_longs.  Compilers would warn about shifting this
1L into the sign bit.  Maybe the expression is too complicated for them to
see that.
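
A minimal illustration of the difference, assuming 64-bit longs (the exact
warning flag depends on the compiler):

	#include <sys/types.h>		/* for u_long */

	long	sbit = 1L  << 63;	/* shifts a 1 into the sign bit: undefined for signed long */
	u_long	ubit = 1UL << 63;	/* well defined for the unsigned type */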

Bruce

