ZFS hang issue and prefetch_disable
Pawel Jakub Dawidek
pjd at FreeBSD.org
Wed Jul 23 07:50:39 UTC 2008
On Tue, Jul 22, 2008 at 01:57:27PM -0700, Matt Simerson wrote:
> Deadlocks under heavy IO load on the ZFS file system with
> prefetch_disable=0. Setting vfs.zfs.prefetch_disable=1 results in a
> stable system.
> Two machines. Identically built. Both exhibit identical behavior.
> 8 cores (2 x E5420) x 2.5GHz, 16 GB RAM, 24 x 1TB disks.
> FreeBSD 7.0 amd64
> dmesg: http://matt.simerson.net/computing/zfs/dmesg.txt
> Boot disk is a read only 1GB compact flash
> # cat /etc/fstab
> /dev/ad0s1a / ufs ro,noatime 2 2
> # df -h /
> Filesystem 1K-blocks Used Avail Capacity Mounted on
> /dev/ad0s1a 939M 555M 309M 64% /
> RAM has been boosted as suggested in ZFS Tuning Guide
> # cat /boot/loader.conf
> vm.kmem_size="1610612736"
> vm.kmem_size_max="1610612736"
> I haven't mucked much with the other memory settings as I'm using
> amd64 and according to the FreeBSD ZFS wiki, that isn't necessary.
> I've tried higher settings for kmem, but that resulted in a failed
> boot. I have ample RAM and would love to use as much of it as possible
> for network and disk I/O buffers, as that's principally all this system does.
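For reference, you can verify what the kernel actually picked up at
runtime with sysctl (a sketch; OID names as they exist in the FreeBSD
ZFS port, worth double-checking on 7.0):

# sysctl vm.kmem_size vm.kmem_size_max
# sysctl kstat.zfs.misc.arcstats.size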
> Disks & ZFS options
> Sun's "Best Practices" suggests limiting the number of disks in a
> raidz pool to no more than 6-10, IIRC. ZFS is configured as shown:
> I'm using all of the ZFS default properties except: atime=off,
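For reference, one way to follow that 6-10 disk guideline with 24 disks
would be three 8-disk raidz2 vdevs in a single pool. A sketch only; the
pool and device names below are hypothetical:

# zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23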
> I'm using these machines as backup servers. I wrote an application
> that generates a list of the thousands of VPS accounts we host. For
> each host, it generates an rsnapshot configuration file and backs up
> their VPS to these systems via rsync. The application manages
> concurrency and will spawn additional rsync processes if system I/O
> load is below a defined threshold. Which is to say, I can crank the
> amount of network and disk I/O the system sees up or down.
> With vfs.zfs.prefetch_disable=1, a hang will occur within a few hours
I guess you wanted '0' here?
> (no more than a day). If I keep the i/o load (measured via iostat)
> down to a low level (< 200 iops) then I still get hangs but less
> frequently (1-6 days). The only way I have found to prevent the hangs
> is by setting vfs.zfs.prefetch_disable=1.
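For anyone reproducing this: the tunable can be set persistently in
/boot/loader.conf, and (depending on the FreeBSD version) also flipped
at runtime via sysctl. A sketch:

# echo 'vfs.zfs.prefetch_disable=1' >> /boot/loader.conf
# sysctl vfs.zfs.prefetch_disable=1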
This is more or less a known problem. It is related to low memory/KVA
conditions. Alan Cox is working on the vm.kmem_size limitation. I saw Kris
using ZFS with a very large vm.kmem_size. Not sure if all the code is
already committed, but this is something you should definitely
try on your hardware. I also have the most recent ZFS version in perforce
that is being tested by a few other guys, and I'd like to commit it to
HEAD soon (depending on test results, of course). There are plenty of
improvements, and some may fix your problem too.
BTW. Do you find prefetch helpful for your workloads? I always turn it
off on my systems, because it has a negative impact on performance, but
maybe my hardware is too weak to take advantage of it.
One more thing. There was a small bug in prefetch code, but I've no idea
if it is related to hangs you are seeing. If that's not a problem for
you, can you try this patch:
If you want to play with tuning ZFS prefetch, you might find these
patches useful (taken from the perforce version):
Pawel Jakub Dawidek http://www.wheel.pl
pjd at FreeBSD.org http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!