cvs commit: src/sys/pci agp.c

Bruce Evans bde at zeta.org.au
Fri Oct 27 08:49:39 UTC 2006


On Fri, 27 Oct 2006, Seigo Tanimura wrote:

> On Fri, 27 Oct 2006 05:16:37 +0900 (JST),
>  Hiroki Sato <hrs at FreeBSD.org> said:
> ...
> hrs>  I have doubts about this change, because int memsize -> u_int
> hrs>  memsize does not solve the problem directly; memsize itself never
> hrs>  actually wraps around, and an implicit cast to unsigned int just
> hrs>  makes the problem invisible.  The questionable code fragment in
> hrs>  agp.c is the following:
>
> hrs>  memsize = ptoa(Maxmem) >> 20;
> hrs>  for (i = 0; i < agp_max_size; i++) {
> hrs>          if (memsize <= agp_max[i][0])
> hrs>                  break;
> hrs>  }
>
> hrs>  ptoa(Maxmem) >> 20 will cause a wraparound problem when the
> hrs>  physical memory size is >= 2GB, so this part should be fixed
> hrs>  instead.  BTW, this should be a problem only on i386, since its
> hrs>  definition of ptoa() is "#define ptoa(x) ((x) << PAGE_SHIFT)".
> hrs>  The other platforms use a cast, like "#define ptoa(x)
> hrs>  ((unsigned long)(x) << PAGE_SHIFT)".

I added some casts like that, but the author of PAE (jake@) wanted to
avoid casts in macros.  I don't remember exactly why.  There are
arguments for both ways.
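
To make the wraparound concrete, here is a minimal userland sketch
(not the kernel code; PAGE_SHIFT, the types and the example value are
stand-ins) of the difference between the uncast i386-style ptoa() and
one that casts to a 64-bit type:

    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define ptoa_nocast(x)  ((x) << PAGE_SHIFT)            /* i386 style */
    #define ptoa_cast(x)    ((uint64_t)(x) << PAGE_SHIFT)  /* with a cast */

    int
    main(void)
    {
            /*
             * 0x100000 4K pages == 4GB.  uint32_t plays the role of a
             * 32-bit Maxmem; the real one is signed, which is worse,
             * since the overflow is then undefined instead of a wrap.
             */
            uint32_t maxmem = 0x100000;

            printf("no cast: %" PRIu32 " MB\n", ptoa_nocast(maxmem) >> 20);
            printf("cast:    %" PRIu64 " MB\n", ptoa_cast(maxmem) >> 20);
            return (0);
    }

The first line prints 0; the second prints the expected 4096.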

> hrs>  I think it can be solved by using "ptoa((unsigned long)Maxmem)" or
> hrs>  so, but I am not sure whether this is reasonable, because there are
> hrs>  more suitable abstract types like vm_paddr_t.  If "#define ptoa(x)
> hrs>  ((vm_paddr_t)(x) << PAGE_SHIFT)" works fine on all platforms, that
> hrs>  looks more reasonable to me, or do I have a misunderstanding?

Casting to vm_paddr_t is much better than casting to u_long.
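
Roughly (a sketch of the direction suggested above, not a committed
diff), that would look like:

    #define ptoa(x) ((vm_paddr_t)(x) << PAGE_SHIFT)

    /* ... and the agp.c fragment then stays in range on all arches: */
    memsize = ptoa(Maxmem) >> 20;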

I may have added a similar cast to btoc().  It is wrong.  btoc()
casts to vm_offset_t, but it should cast to vm_ooffset_t if anything,
so that the cast doesn't break the case where the arg is a large
physical byte count.  This is one of the arguments for not casting:
if we don't cast, then as long as the arg is an unsigned type or a
signed type with a positive value, at least the right shift will give
the correct value.  The cast, if any, should then be applied to the
result of the shift, to convert to the different type used for a page
count; however, such a cast would blindly truncate the value if it is
unrepresentable in the new type, so it probably doesn't belong in the
macro.

In my local version, there is no cast in btoc(), but there is a cast
to vm_size_t in ctob().  It is unclear whether btoc()/ctob() are supposed
to work with physical or virtual byte counts.
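
For illustration only (the exact spellings here are paraphrased, and
npages/len are hypothetical names), the local version and the style
of caller it expects look about like this:

    /* no cast in btoc(); cast to vm_size_t in ctob() */
    #define btoc(x) (((x) + PAGE_MASK) >> PAGE_SHIFT)
    #define ctob(x) ((vm_size_t)(x) << PAGE_SHIFT)

    /*
     * A caller that wants the page count in a narrower type does the
     * possibly-truncating conversion itself, where it can check the
     * value first.
     */
    npages = (int)btoc(len);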

I guess jake's argument for not casting in the macro even for left
shifts was similar: suppose we start with a page count and left shift
to a byte count as in ptoa().  Then the result is conceptually a
vm_paddr_t and might not fit in the arg's type.  The caller needs to
be aware that the result might be very large, and it is a small step
from there to casting the arg appropriately, and an explicit cast
provides some self-documentation that this case can happen.  In the
above code, the caller just assumes that ptoa() doesn't overflow and
that right shifting by 20 gives a size that doesn't overflow on
assignment to `int memsize'.
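
In that style, the agp.c fragment would make the widening explicit at
the call site, something like (a sketch, not the committed fix):

    u_int   memsize;

    /*
     * The cast documents that ptoa() can produce a value wider than
     * Maxmem's type; the result in MB then trivially fits.
     */
    memsize = ptoa((vm_paddr_t)Maxmem) >> 20;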

MD code for Maxmem in i386/machdep.c always casts it in calls to ptoa().
However, half the calls bogusly cast it to uintmax_t and then bogusly
assume that the result is still uintmax_t so that it can be printed using
%ju.  It is the results that must be cast to uintmax_t; the args must be
cast as necessary to avoid overflow.  One of the buggy calls is for
calculating the size in MB as above; it spells the right shift by 20 as
division by 1048576.
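
The two patterns look roughly like this (paraphrased, not exact quotes
of machdep.c):

    /*
     * Bogus: cast the arg to uintmax_t and rely on the result still
     * being uintmax_t for %ju; spell the right shift as a division.
     */
    printf("real memory  = %ju (%ju MB)\n", ptoa((uintmax_t)Maxmem),
        ptoa((uintmax_t)Maxmem) / 1048576);

    /*
     * What the above argues for: cast the arg just enough to avoid
     * overflow, and cast the result for printing.
     */
    printf("real memory  = %ju (%ju MB)\n",
        (uintmax_t)ptoa((vm_paddr_t)Maxmem),
        (uintmax_t)(ptoa((vm_paddr_t)Maxmem) >> 20));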

I added not-so-similar casts to btodb() and dbtob().  The one for
btodb() is bogus :-(.  It wants to promote to an unsigned type in case
the byte count is negative (the arg has type off_t so it can be negative
in theory), but that case shouldn't happen, and if it does then the
casts mess up the result further.  For efficiency, it wants to promote
to the smallest unsigned type possible, but it doesn't handle the case
where sizeof(unsigned) < sizeof(unsigned long).  It assumes that
sizeof(daddr_t) < sizeof(unsigned long long).  It became even more
bogus many years after it was written, when daddr_t was changed from
32 bits to 64 bits.  Now the optimized case is rarely used.
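
From memory (paraphrased, not an exact quote of sys/param.h), the
macros in question look something like:

    /*
     * btodb(): pick the promotion type by the size of the arg, so
     * that the cheaper long shift is used when it is believed to be
     * wide enough.
     */
    #define btodb(bytes) \
            (sizeof(bytes) > sizeof(long) ? \
                (daddr_t)((unsigned long long)(bytes) >> DEV_BSHIFT) : \
                (daddr_t)((unsigned long)(bytes) >> DEV_BSHIFT))

    #define dbtob(db)       ((off_t)(db) << DEV_BSHIFT)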

> ptoa() is machine-dependent; (unsigned long) should be OK.
> (vm_paddr_t) would mess up the namespace.

The machine-dependence and duplication of macros for converting between
bytes and pages is another bug.  For i386's, there are atop()/ptoa(),
i386_btop()/i386_ptob() which are "MD", and btoc()/ctob() which are
MI.  The difference between addresses and byte counts is subtle but
shouldn't be MD.  The difference between pages and clicks should be
large (clicks should no longer exist) and MI, but is currently subtle
and MD.  vm has different types for sizes, addresses and offsets and
for virtual/physical variants of some of these, but there aren't
enough macros for all the combinations and having a choice for only
some of the combinations is worse than not having a choice.
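
For example (paraphrasing the headers, not quoting them exactly):

    /* "MD", in <machine/param.h>: */
    #define atop(x)         ((x) >> PAGE_SHIFT)
    #define ptoa(x)         ((x) << PAGE_SHIFT)
    #define i386_btop(x)    ((x) >> PAGE_SHIFT)
    #define i386_ptob(x)    ((x) << PAGE_SHIFT)

    /* "MI", in <sys/param.h>, with clicks == pages: */
    #define btoc(x)         (((vm_offset_t)(x) + PAGE_MASK) >> PAGE_SHIFT)
    #define ctob(x)         ((x) << PAGE_SHIFT)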

(vm_offset_t) is still used for the bogus btoc() macro.  It is as MI
as vm_[p]addr_t (i.e., it is not very MI, but is MI enough, since we
only support machines with flat address spaces).  There is a minor
problem with vm_paddr_t only being declared in vm headers while it
is referenced in param.h.

> What hassles me is that Maxmem is *NOT* unsigned, grrr...

Using unsigned types to represent values just because the values are
never negative is another common type error.  It mainly gives sign
extension bugs for arithmetic on such values in expressions where
intermediate values are negative, and inhibits detection of overflows
via overflow traps.
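
A tiny (hypothetical) example of the class of bug:

    #include <stdio.h>

    int
    main(void)
    {
            unsigned int npages = 10;       /* page count stored unsigned */
            int reserve = 20;

            /*
             * npages - reserve is computed as unsigned int, so the
             * negative intermediate value wraps to a huge positive
             * number and the test is never true; with a signed count
             * it would just work, and an overflow trap (where enabled)
             * could catch real overflows instead of hiding them.
             */
            if (npages - reserve < 0)
                    printf("short by %d pages\n", reserve - (int)npages);
            else
                    printf("enough pages (apparently)\n");
            return (0);
    }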

Thus the signedness of Maxmem is a feature, but strictly, Maxmem should
be a vm_size_t, which happens to be uint32_t or uint64_t on supported
arches.  However, even vm is sloppy about page counts.  It uses struct
vmmeter for lots of fundamental page counts, and struct vmmeter uses
u_int for page counts.  vm also uses scale factors when it initializes
parameters based on these page counts, so it will break long before the
counts reach UINT_MAX.  Using signed integers for the page counts and
getting overflow traps when the scaling overflows would be a good way
of detecting the breakage.  Someone should fix this before machines with
a few TB of physical memory are common.
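
A back-of-the-envelope sketch of that breakage (the scale factor here
is hypothetical; the point is only that u_int scaling wraps long
before the page count itself reaches UINT_MAX):

    #include <stdio.h>

    int
    main(void)
    {
            /* ~1TB of 4K pages */
            unsigned int v_page_count = 268435456;

            /* scaling during initialization silently wraps */
            unsigned int scaled = v_page_count * 100;

            printf("scaled = %u (expected %llu)\n", scaled,
                (unsigned long long)v_page_count * 100);
            return (0);
    }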

Bruce

