IDMS : Weekly status report #1 of 14

David Chisnall theraven at
Mon Jul 1 08:36:37 UTC 2013

On 1 Jul 2013, at 08:02, Ambarisha B <b.ambarisha at> wrote:

> Hi,
> Sorry for the delayed response, I was away from my system for a couple of days.
> On Thu, Jun 27, 2013 at 6:42 PM, David Chisnall <theraven at> wrote:
>> The fetch utility has been the case study for a lot of the compartmentalisation work on Capsicum so far.  Have you considered how the download manager will handle exploitable bugs in, for example, the HTTP header parsing in libfetch?
> Actually I was not sure how much of libfetch can be used in the download manager service at all, because we're thinking of profiling the download speed etc. 

If functionality that you need is not available in libfetch, then it should probably be added there; for example, a callback that is invoked during the download to report status.

>> I note that your plan is to use a thread, rather than a forked process, for each request, which means that it can not run in sandboxed mode.
> I was not aware of the concerns with fetch that you pointed out. But I don't see any serious drawbacks with doing forked processes as opposed to threads. I don't think process creation overhead is a problem anyway, considering that there is a network transaction involved. Originally I thought forked processes were unnecessary because I was not aware of the sandboxing mode etc. Even now I'll have to take a closer look into it.

Thank you.

>> What privilege do you imagine the daemon running with?  One of the problems with fetch currently is that it is often invoked as root when downloading ports distfiles and so runs with ambient privilege of the root user.
> I think the daemon just needs to run as a separate "trusted" user (because it handles requests from various users; also consider the case where root requests the service for a file). So, even if there is a vulnerability in the daemon, it is contained (at least until root makes a request). What is the right way to design this?

Ideally, the daemon itself should run as an unprivileged user, and each worker should run in sandboxed (cap_enter()) mode.  The flow I would expect would be:

- Root (or other) user runs a command to get a distfile, passing the URL and a file descriptor opened for writing to the daemon.
- Daemon receives the message containing the URL from the command and parses the protocol and remote host.
- Daemon opens a socket for the connection.
- Daemon forks a worker.
- Worker calls cap_enter().
- Worker invokes libfetch to get the file from the remote server and write it to the file descriptor inherited from the parent process.
- Worker may provide status messages to the parent.
- Worker exits.
- Daemon or the original command validates the download's hash.

The most vulnerable part is the worker, as it is the only part talking directly to a remote server (which may be an untrusted and potentially compromised mirror).  If the server is compromised, it can inject headers or other control messages designed to exploit bugs in libfetch.  Such attacks are contained by the sandbox: all a compromised worker can do maliciously is write bad data to the file, which a compromised server could do anyway simply by serving bad data.

The validation of the download against the distfile hash is performed outside the worker, so a compromised worker cannot circumvent it.

We assume that the URL is not malicious, because someone who can provide a malicious URL can also provide a hash and URL for a malicious distfile and just give you a trojaned program to run later.  

Optionally, the daemon may chroot() itself to an empty directory somewhere before dropping privileges.  It only needs to be unsandboxed to be able to open network connections.


