Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

Jeremy Chadwick freebsd at jdc.parodius.com
Sun Jul 11 21:45:49 UTC 2010


On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
> On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
> > On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
> > > This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
> > > 
> > > The closest I found by Googling was this:
> > > http://forums.freebsd.org/showthread.php?t=9935
> > > 
> > > And it talks about all kinds of little tweaks, but in the end, the
> > > only thing that actually works is the stupid 1-line perl code that
> > > forces the kernel to free the memory allocated to (non-zfs) disk
> > > cache, which is the "Inact"ive memory in "top."
> > > 
> > > I have a 4-disk raidz pool, but that's unlikely to matter.
> > > 
> > > Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
> > > cache the data read from non-zfs disk in memory, and free memory will
> > > go down.  This is as expected, obviously.
> > > 
> > > Once there's very little free memory, one would expect whatever is
> > > more important to kick out the cached data (Inact) and make memory
> > > available.
> > > 
> > > But when almost all of the memory is taken by disk cache (of non-zfs
> > > file system), ZFS disks start thrashing like mad and the write
> > > throughput goes down in 1-digit MB/second.
> > > 
> > > I believe it should be extremely easy to duplicate.  Just plug in a
> > > big USB drive formatted in UFS (msdosfs will likely do the same), and
> > > copy large files from that USB drive to zfs pool.
> > > 
> > > Right after clean boot, gstat will show something like 20+MB/s
> > > movement from USB device (da*), and occasional bursts of activity on
> > > zpool devices at very high rate.  Once free memory is exhausted, zpool
> > > devices will change to constant low-speed activity, with disks
> > > thrashing about constantly.
> > > 
> > > I tried enabling/disabling prefetch, messing with vnode counts,
> > > zfs.vdev.min/max_pending, etc.  The only thing that works is that
> > > stupid perl 1-liner (perl -e '$x="x"x1500000000'), which returns the
> > > activity to that seen right after a clean boot.  It doesn't last very
> > > long, though, as the disk cache again consumes all the memory.
> > > 
> > > Copying files between zfs devices doesn't seem to affect anything.
> > > 
> > > I understand zfs subsystem has its own memory/cache management.
> > > Can a zfs expert please comment on this?
> > > 
> > > And is there a way to force the kernel to not cache non-zfs disk data?
> > 
> > I believe you may be describing two separate issues:
> > 
> > 1) ZFS using a lot of memory but not freeing it as you expect
> > 2) Lack of disk I/O scheduler
> > 
> > For (1), try this in /boot/loader.conf and reboot:
> > 
> > # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
> > # on 2010/05/24.
> > # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
> > vfs.zfs.zio.use_uma="0"
> > 
> > For (2), may try gsched_rr:
> > 
> > http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
> > 
> > -- 
> > | Jeremy Chadwick                                   jdc at parodius.com |
> > | Parodius Networking                       http://www.parodius.com/ |
> > | UNIX Systems Administrator                  Mountain View, CA, USA |
> > | Making life hard for others since 1977.              PGP: 4BD6C0CB |
> 
> vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
> touched it.

Okay, just checking, because the default did change at one point, as the
link in my /boot/loader.conf notes.  Here's further confirmation (from the
same thread); the first message confirms it on i386, the second on amd64:

http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html
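
For completeness, here's a quick way to double-check the runtime value and
to watch the ARC alongside Inact/Free while a copy is running.  The sysctl
names below are what I remember from 8.x, so treat this as a sketch and
adjust if your build differs:

  sysctl vfs.zfs.zio.use_uma
  sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
  # page counts, not bytes -- watch Inact vs. Free as the copy runs
  sysctl vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count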

> And in my case, Wired memory is stable at around 1GB.  It's
> the Inact memory that takes off, but only if reading from non-zfs file
> system.  Without other file systems, I can keep moving files around and
> see no adverse slowdown.  I can also scp huge files from another system
> into the zfs machine, and it doesn't affect memory usage (as reported by
> top), nor does it affect performance.

Let me get this straight:

The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
their own dedicated disks, and the UFS2 filesystems also have their own
disk (which appears to be USB-based).

When any sort of read I/O is done on the UFS2 filesystems, Inact
skyrockets, and as a result ZFS performance suffers.
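
If you can paste the output of something along these lines, it would
confirm the layout (nothing exotic assumed, just the usual suspects):

  zpool status -v
  mount -p
  # shows which disks sit behind umass (USB) vs. native SATA
  camcontrol devlist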

If this is correct: can you remove USB from the picture and confirm the
problem still happens?  This is the first I've heard of the UFS caching
mechanism "spiraling out of control".
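
Something along these lines should exercise the same path with a plain
SATA/UFS disk instead of USB.  The device, mount point, and pool names
below are placeholders, so substitute your own:

  # hypothetical names: ada4p1 = UFS partition, /tank = pool mountpoint
  mount /dev/ada4p1 /mnt/ufs
  dd if=/mnt/ufs/bigfile of=/tank/bigfile bs=1m &
  gstat -I 1s          # in another terminal: UFS disk vs. pool vdevs
  top -o res           # and watch Inact/Wired as the copy proceeds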

By the way, all the "stupid perl 1-liner" does is make a process with an
extremely large SIZE, and RES will grow to match it (more or less).  The
intention is to put the VM under enough pressure that it swaps out and
frees memory.  Using 'x1500000000', you'll find something like this:

  PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
82353 jdc          1  76    0  1443M  1068M STOP    1   0:01 10.69% perl5.10.1

With 'x9999999999', you'll see something like this (note SIZE):

  PID USERNAME   THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
82535 jdc          1  56    0  9549M   881M STOP    1   0:01  7.28% perl5.10.1
 
(I'm quite aware of what this does in perl, just noting that for
posterity).
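
If you want to watch the effect as it happens, a rough sketch (just a
habit of mine, nothing official) is to run the hog in the background and
poll the page counters until it exits:

  perl -e '$x="x"x1500000000' &
  while kill -0 $! 2>/dev/null; do
      # counts are in pages; Inact should drop and Free should rise
      sysctl -n vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count
      sleep 1
  done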

Be aware this can impact all processes on the machine: idle processes
will be swapped out, and you lose convenient things such as the display
of the fully-qualified path to the program and its environment (e.g. ARGV
arguments).  I believe you'll see such processes listed as "<process>"
(note the brackets).

> As for gsched_rr, I don't believe this is related.  There is only ONE
> access to the zfs devices (4 sata drives), which is purely a sequential
> write.

You're correct, my apologies.

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
