sending messages, user process <--> kernel module

Jerry Toung jtoung at arc.nasa.gov
Fri Nov 7 14:41:03 PST 2003


Thank you very much for the input.

On Friday 07 November 2003 01:53 pm, Robert Watson wrote:
> On Fri, 7 Nov 2003, Jerry Toung wrote:
> > I am trying to do asynchronous send/receive between a user process that
> > I am writing and a kernel module that I am also writing.  I thought
> > about implementing something similar to the Unix routing socket, but I
> > would have to define a new domain and protosw.  Besides that idea, what
> > else would you suggest?
>
> This is actually somewhat of a FAQ, since it comes up with relative
> frequency.  I should dig up my most recent answer and forward that to you,
> but the quick answers off the top of my head are as follows (a rough
> sketch of each approach is appended after this message):
>
> (1) One frequent answer is a pseudo-device -- for example, /dev/log
>     buffers kernel log output for syslogd to pick up asynchronously.  Arla
>     and Coda both use pseudo-devices as a channel for local procedure
>     calls to/from userspace to support their respective file systems using
>     userspace cache managers.
>
> (2) Have the kernel open a file system FIFO and have the user process
>     read from that FIFO.  The client-side NFS locking code uses
>     /var/run/lock to ship locking events to a userspace rpc.lockd.
>     However, responses from rpc.lockd are then delivered to the kernel
>     synchronously via a system call from the user process, as opposed to
>     via a FIFO.
>
> (3) The routing socket approach can work quite well, especially if you
>     need multicast semantics for messages, not to mention well-defined
>     APIs for managing buffer size, etc. Another instance of this approach
>     is PF_KEY, used for IPsec key management.  As you point out, it
>     requires digging into other code and a fair amount of implementation
>     overhead.
>
> (4) You can have kernel code create and listen on sockets in existing
>     domains, including UNIX domain sockets and TCP/IP sockets.  The NFS
>     client and server code both make use of sockets directly in the
>     kernel for RPCs.
>
> One of the particularly nice benefits of (2) and (4) is that it's easy to
> implement userspace test code, since the FIFO/socket is just used as a
> rendezvous and doesn't care whether the other end is in the kernel or not.
> Likewise, the blocking/buffering/... semantics are quite well defined,
> which means you won't be tracking down wakeups, select semantics, thread
> behavior, synchronization, etc., as you might in (1).
>
> Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
> robert at fledge.watson.org      Network Associates Laboratories
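
Rough sketches of the four approaches above, for reference.  These are
untested outlines, written against post-5.x kernel interfaces (the cdevsw
layout and the socket entry points have shifted between releases), and
every device name, path, and port in them is invented for illustration.

(1) A pseudo-device: a minimal module that creates /dev/msg and lets a
user process read() and write() a small kernel buffer.  The locking and
sleep/wakeup machinery needed for real asynchronous delivery is omitted
for brevity.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/module.h>
    #include <sys/conf.h>
    #include <sys/uio.h>

    static struct cdev *msg_dev;
    static char msg_buf[256];       /* last message written */
    static size_t msg_len;

    static int
    msg_read(struct cdev *dev, struct uio *uio, int ioflag)
    {
        /*
         * Hand the buffered message to the user process.  A real
         * driver would track offsets, lock, and block when empty.
         */
        return (uiomove(msg_buf, MIN((size_t)uio->uio_resid, msg_len),
            uio));
    }

    static int
    msg_write(struct cdev *dev, struct uio *uio, int ioflag)
    {
        /* Stash the user process's message for the module to consume. */
        msg_len = MIN((size_t)uio->uio_resid, sizeof(msg_buf));
        return (uiomove(msg_buf, msg_len, uio));
    }

    static struct cdevsw msg_cdevsw = {
        .d_version = D_VERSION,
        .d_name    = "msg",
        .d_read    = msg_read,
        .d_write   = msg_write,
    };

    static int
    msg_modevent(module_t mod, int type, void *arg)
    {
        switch (type) {
        case MOD_LOAD:
            msg_dev = make_dev(&msg_cdevsw, 0, UID_ROOT, GID_WHEEL,
                0600, "msg");
            return (0);
        case MOD_UNLOAD:
            destroy_dev(msg_dev);
            return (0);
        default:
            return (EOPNOTSUPP);
        }
    }

    static moduledata_t msg_mod = { "msg", msg_modevent, NULL };
    DECLARE_MODULE(msg, msg_mod, SI_SUB_DRIVERS, SI_ORDER_MIDDLE);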

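(2) The FIFO approach, seen from userspace: a small daemon that blocks
reading event records from a FIFO, in the style of rpc.lockd draining
/var/run/lock.  The path /var/run/mymod.fifo is made up; the kernel side
would open the same FIFO by name and write event records into it.

    #include <sys/stat.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[512];
        ssize_t n;
        int fd;

        /* Create the rendezvous point; it may already exist. */
        (void)mkfifo("/var/run/mymod.fifo", 0600);

        if ((fd = open("/var/run/mymod.fifo", O_RDONLY)) == -1)
            err(1, "open");

        /* Block until the kernel (or a test program) writes an event. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            (void)write(STDOUT_FILENO, buf, n);
        return (0);
    }

Because the FIFO is only a rendezvous, this program can be exercised by a
plain userspace writer before the kernel side exists, which is exactly the
testing benefit noted above for (2) and (4).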

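(3) The routing-socket model, from the consumer's end.  A custom domain
would look much the same from userspace, but with its own PF_* constant
and message format; PF_ROUTE is used here only because it already exists
and demonstrates the multicast-to-all-listeners semantics.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <net/route.h>
    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[2048];
        ssize_t n;
        int s;

        if ((s = socket(PF_ROUTE, SOCK_RAW, 0)) == -1)
            err(1, "socket");

        /*
         * Each read() returns one whole routing message, and every
         * listener on a routing socket receives its own copy.
         */
        while ((n = read(s, buf, sizeof(buf))) > 0) {
            struct rt_msghdr *rtm = (struct rt_msghdr *)buf;
            printf("routing message type %d, %zd bytes\n",
                rtm->rtm_type, n);
        }
        return (0);
    }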

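(4) Kernel code using an ordinary socket, as the NFS client and server do
for RPC.  This sketches only the send path, a UDP datagram to a made-up
loopback port; the exact socreate()/sosend() signatures have changed
between releases, so check the tree you are building against.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>
    #include <sys/socket.h>
    #include <sys/socketvar.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    /* Send a datagram from the kernel to 127.0.0.1:9999 (made up). */
    static int
    ksock_send(struct thread *td, void *msg, size_t len)
    {
        struct socket *so;
        struct sockaddr_in sin;
        struct uio auio;
        struct iovec aiov;
        int error;

        error = socreate(AF_INET, &so, SOCK_DGRAM, IPPROTO_UDP,
            td->td_ucred, td);
        if (error != 0)
            return (error);

        bzero(&sin, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(9999);
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        /* Describe the in-kernel buffer for sosend(). */
        aiov.iov_base = msg;
        aiov.iov_len = len;
        auio.uio_iov = &aiov;
        auio.uio_iovcnt = 1;
        auio.uio_offset = 0;
        auio.uio_resid = len;
        auio.uio_segflg = UIO_SYSSPACE;
        auio.uio_rw = UIO_WRITE;
        auio.uio_td = td;

        error = sosend(so, (struct sockaddr *)&sin, &auio, NULL,
            NULL, 0, td);
        soclose(so);
        return (error);
    }

The receive side would call soreceive() the same way, either from a
dedicated kernel thread or driven by a socket upcall.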