libthr and 1:1 threading.
Jeff Roberson
jroberson at chesapeake.net
Wed Apr 2 14:42:11 PST 2003
On Wed, 2 Apr 2003, Daniel Eischen wrote:
> On Wed, 2 Apr 2003, Jeff Roberson wrote:
> > On Wed, 2 Apr 2003, Julian Elischer wrote:
> >
> > >
> > >
> > > On Wed, 2 Apr 2003, Juli Mallett wrote:
> > >
> > > > * From: Jeff Roberson <jroberson at chesapeake.net> [ Date: 2003-04-02 ]
> > > > [ Subject: Re: libthr and 1:1 threading. ]
> > > > > On Wed, 2 Apr 2003, Terry Lambert wrote:
> > > > > > Also, any ETA on the per-process signal mask handling bug in
> > > > > > libthr? Might not be safe to convert everything up front, in
> > > > > > a rush of eager enthusiasm...
> > > > >
> > > > > Which bug is that? I'm not aware of it.
> > > >
> > > > I think Terry is referring to the Uncertainty & Doubt raised by
> > > > the M:N group over the lack of a process sigmask (it moved into
> > > > the threads), as if it were a bug.
> > >
> > > I think this IS a problem. We need a per-process mask
> > > to block signals that no thread is interested in.
> > > Since M:N threads do not have a kernel thread for each userland thread,
> > > there is nowhere to store this info any more.
> > >
> >
> > Then set the mask to be the same on all threads in the process. The mask
> > is set in swapcontext, though, so it seems reasonable to me that it is
> > atomically updated when you schedule a new user thread on a KSE.
>
> Jeff, are you _listening_ to us? We've said multiple times
> that the UTS does not enter the kernel when performing thread
> switches. The UTS does NOT use setcontext(), getcontext(),
> or swapcontext().
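For reference, what the quoted swapcontext suggestion relies on is that
setcontext() and swapcontext() install the target context's uc_sigmask
atomically with the switch, so a per-thread mask travels with the context.
A minimal sketch of that behaviour (illustrative names only, not libthr or
libkse code):

    #include <signal.h>
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thr_ctx;
    static char thr_stack[64 * 1024];

    static void
    thr_func(void)
    {
            sigset_t cur;

            /* SIGUSR1 is blocked here because thr_ctx.uc_sigmask says so. */
            sigprocmask(SIG_BLOCK, NULL, &cur);
            printf("in thread: SIGUSR1 %sblocked\n",
                sigismember(&cur, SIGUSR1) ? "" : "not ");
            swapcontext(&thr_ctx, &main_ctx);
    }

    int
    main(void)
    {
            getcontext(&thr_ctx);
            thr_ctx.uc_stack.ss_sp = thr_stack;
            thr_ctx.uc_stack.ss_size = sizeof(thr_stack);
            thr_ctx.uc_link = &main_ctx;
            /* The per-thread signal mask is carried in the context. */
            sigemptyset(&thr_ctx.uc_sigmask);
            sigaddset(&thr_ctx.uc_sigmask, SIGUSR1);
            makecontext(&thr_ctx, thr_func, 0);

            /* The switch and the mask update happen as one operation. */
            swapcontext(&main_ctx, &thr_ctx);
            return (0);
    }

If the UTS never makes these calls on a thread switch, as stated above,
then nothing updates a kernel-visible mask at that point.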
I had not seen anyone mention this. If this is the case, then I suggest
the masks and pending sets be kept in user space. You can install blank
handlers for everything so that signals are kept pending until the UTS has
a chance to pick them up in the upcall.
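One way to read the blank-handler idea is a single catcher installed for
every signal that only records the signal in a userland pending set; the
UTS then drains that set at upcall time and hands each signal to a thread
that has it unblocked. A rough sketch, with illustrative names and none of
the atomicity a real implementation would need:

    #include <signal.h>

    /* Signals the process has taken but the UTS has not yet dispatched. */
    static sigset_t uts_pending;

    struct uthread {
            sigset_t ut_sigmask;    /* per-thread mask, userland only */
    };

    static void
    uts_sig_catcher(int sig)
    {
            /* Just mark it pending; delivery is deferred to the UTS. */
            sigaddset(&uts_pending, sig);
    }

    void
    uts_sig_init(void)
    {
            struct sigaction sa;
            int sig;

            sigemptyset(&uts_pending);
            sa.sa_handler = uts_sig_catcher;
            sa.sa_flags = 0;
            sigfillset(&sa.sa_mask);        /* keep the catcher from nesting */
            for (sig = 1; sig < NSIG; sig++)
                    (void)sigaction(sig, &sa, NULL); /* SIGKILL/SIGSTOP just fail */
    }

    /* Called from the upcall: hand over whatever this thread will accept. */
    void
    uts_sig_dispatch(struct uthread *ut, void (*deliver)(struct uthread *, int))
    {
            int sig;

            for (sig = 1; sig < NSIG; sig++) {
                    if (sigismember(&uts_pending, sig) &&
                        !sigismember(&ut->ut_sigmask, sig)) {
                            sigdelset(&uts_pending, sig);
                            deliver(ut, sig);
                    }
            }
    }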
If you really want a process-wide mask, allow me to do it. The signal code
is quite tricky and it has already been butchered enough. I think we should
discuss this a bit more first, though.
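As for the semantics Julian describes above, blocking only the signals that
no thread is interested in amounts to keeping the process mask equal to the
intersection of the per-thread masks, something along these lines (again
illustrative names only):

    #include <signal.h>

    struct uthread {
            sigset_t         ut_sigmask;
            struct uthread  *ut_next;
    };

    /*
     * Recompute the mask the kernel should apply to the whole process:
     * a signal stays blocked only if every thread has it blocked.
     */
    void
    uts_update_procmask(struct uthread *threads)
    {
            struct uthread *ut;
            sigset_t procmask;
            int sig;

            sigfillset(&procmask);
            for (ut = threads; ut != NULL; ut = ut->ut_next)
                    for (sig = 1; sig < NSIG; sig++)
                            if (!sigismember(&ut->ut_sigmask, sig))
                                    sigdelset(&procmask, sig);
            (void)sigprocmask(SIG_SETMASK, &procmask, NULL);
    }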
Cheers,
Jeff