Sequential disk IO saturates system

Wiktor Niesiobedzki bsd at vink.pl
Tue Sep 14 17:02:50 UTC 2010


Hi,

You may try playing with the kern.sched.preempt_thresh setting (as per
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=665455+0+archive/2010/freebsd-stable/20100905.freebsd-stable).
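
For reference, checking and changing it is just a sysctl; the value
below is only an illustrative example, not a recommendation from that
thread, so experiment with what works on your box:

    # show the current preemption threshold (ULE scheduler)
    sysctl kern.sched.preempt_thresh
    # raise it so more threads may preempt the busy kernel workers;
    # 224 here is just an example value
    sysctl kern.sched.preempt_thresh=224
    # persist the choice across reboots
    echo 'kern.sched.preempt_thresh=224' >> /etc/sysctl.conf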

Renicing the process doesn't give any improvement, as it is the g_eli*
threads that are consuming your CPU, and they run at a pretty high
priority.
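
If you want to confirm that yourself, the geli workers show up as
kernel threads; something along these lines (the exact column set is
just a suggestion) shows their priority and CPU use:

    # per-thread view, including system/kernel threads
    top -SH
    # or just the g_eli workers, with their priority and CPU share
    ps -axH -o pid,pri,pcpu,comm | grep g_eli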

Since my last update I don't see that much of the problem, but
previously a dd if=/dev/gzero.eli of=/dev/null bs=1M could cause CPU
starvation of all other processes. That doesn't happen anymore
(though I still see some performance drops during txg commits, e.g.
in network throughput).
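
(For anyone who wants to reproduce that test: it only needs the
geom_zero module and a throwaway onetime geli provider. The steps
below are a sketch of the idea, not exactly what I ran.)

    # create an all-zeroes provider and attach a temporary geli layer
    kldload geom_zero
    geli onetime /dev/gzero
    # read through geli as fast as it can decrypt; the count keeps the
    # run finite (~10 GB) -- watch top -SH in another terminal
    dd if=/dev/gzero.eli of=/dev/null bs=1M count=10240
    # clean up the temporary provider
    geli detach gzero.eli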

I've also changed vfs.zfs.txg.synctime to 1 second (the default is 5
seconds), so txg commits are shorter, though more frequent. This
helps alleviate my problems. YMMV.
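
Roughly how I set it; whether the sysctl is writable at runtime varies
between versions, so the loader.conf route is the safe one:

    # shorten the txg sync target from the default 5 seconds to 1
    echo 'vfs.zfs.txg.synctime="1"' >> /boot/loader.conf
    # or try it live, if your version allows writing it at runtime
    sysctl vfs.zfs.txg.synctime=1
    # the related commit interval, for reference
    sysctl vfs.zfs.txg.timeout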


Cheers,

Wiktor Niesiobedzki


2010/9/14 grarpamp <grarpamp at gmail.com>:
> We have [re]nice to deal with user processes.
>
> Is there no way to effectively rate limit the disk pipe? As it is
> now, this machine can't do any userland work because it's completely
> buried by the simple degenerate case of:
>  cp /fs_a/.../giga_size_files /fs_b/...
>
> Geli and zfs are in use, yet that doesn't seem to be an excuse for
> this behavior.
>
> I can read 60MB/s off the raw spindles without much issue.
>
> Yet add geli and I get around 15MB/s, which is completely fine as
> well, except that the box gets swamped in system time while doing it.
> And around 11MB/s off geli+zfs, with the same swamping caveat, of course.
>
> And although they perform at about the same MB/s rates, it's the
> bulk writes that seem to bog the system down far more thoroughly
> than the reads do. This one really hurts and removes all usability.
>
> Sure, maybe one could set some ancient PIO mode on the [s]ata/scsi
> channels [untested here]. But it seems far less than ideal as users
> commonly mix raw and geli+zfs partitions on the same set of spindles.
>
> Is there a description of the underlying issue available?
>
> And unless I'm missing something like an already existing insertable
> geom rate limit, or a way to renice kernel processes... is it right
> to say that FreeBSD needs these options and/or some equivalent work
> in this area?
>
> As I'm without an empty raw disk right now, I can only write to zfs
> and thus have yet to test writes to the raw spindle and to geli.
> Regardless, perhaps the proper solution lies with the right sort
> of future knob, as in the previous paragraph?

