ZFSKnownProblems - needs revision?
Freddie Cash
fjwcash at gmail.com
Wed Apr 8 17:19:57 UTC 2009
On April 8, 2009 5:30 am Ivan Voras wrote:
<snip>
> Specifically:
> * Are the issues on the list still there?
> * Are there any new issues?
> * Is somebody running ZFS in production (non-trivial loads) with
> success? What architecture / RAM / load / applications used?
> * How is your memory load? (does it leave enough memory for other
> services)
>
> Please also note are you using the "new" ZFS port (in 8-CURRENT) or the
> "old" one (in 7-STABLE).
I'm running the following three setups with ZFS:

Home file server:
  - generic P4 3.0 GHz system with 2 GB RAM
  - 2 GB USB stick for / and /usr
  - 3x 120 GB SATA HDs
  - onboard Marvell gigabit NIC
  - 32-bit FreeBSD 7.1-RELEASE
  - pool has a single 3-way raidz1 vdev

Work file servers 1 & 2:
  - 5U Chenbro case w/ 1350 Watt 4-way redundant PSU
  - Tyan h2000M motherboard
  - 2x dual-core Opteron 2200-series CPUs at 2.8 GHz
  - 8 GB ECC DDR2-SDRAM
  - 2x 2 GB CompactFlash using gmirror for / and /usr (server 1)
  - 2x 2 GB USB sticks using gmirror for / and /usr (server 2)
  - 3Ware 9550SXU PCI-X RAID controller
  - 3Ware 9650SE PCIe RAID controller
  - 24x 500 GB Western Digital SATA HDs
  - 4-port Intel PRO/1000 gigabit NIC configured using lagg(4)
  - 64-bit FreeBSD 7.1-RELEASE
  - pool on each server has 3x 8-way raidz2 vdevs
On my home box, it took a little bit of tuning to get it stable. The
hardest part was finding the right setting for vm.kmem_size_max and
vfs.zfs.arc_max. After about a month of tweaking, twiddling, crashing, and
rebooting, I hit upon 1G for kmem and 256M for the ZFS ARC. Since then, it's
been rock-solid. This box runs KDE 4.2.2, is used for watching movies,
downloading, office work, and sharing files out via Samba and NFS to the
rest of the house.
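For reference, those values live in /boot/loader.conf. The exact lines aren't
quoted above, so treat this as a sketch of what the home box's tuning would
look like:

```shell
# /boot/loader.conf -- sketch of the home-box tuning described above;
# the exact file contents aren't quoted in the post.
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="256M"
```

These are loader tunables, so they only take effect on the next boot.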
On the work servers, it took about 6 weeks to get the right settings for
loader.conf to make it stable. After much trial and error, we are using
1596M for kmem_size_max, and 512M for zfs_arc_max. These boxes do remote
backups for ~90 Linux and FreeBSD boxes using rsync. The backup script runs
parallel rsync processes for each remote site, doing sequential backups of
each server at the site. We wait 250s before starting the next site backup.
It takes just under 5 hours to do incremental backups for all 90 sites. We get
(according to MRTG) a sustained 80 MBytes/sec read/write during the backups.
It may be more, as we can't get the 64-bit disk counters to work, and have
to poll the 32-bit counters every 60 secs.
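The actual backup script isn't included in the post, but the structure
described (one background job per site, servers within a site backed up
sequentially, a 250-second stagger between site launches) can be sketched
like this; site and host names are made-up placeholders, and the rsync flags
are illustrative:

```shell
#!/bin/sh
# Sketch of the backup layout described above -- NOT the author's script.
# RSYNC and STAGGER are overridable so the loop structure can be exercised
# without touching real hosts.
RSYNC="${RSYNC:-rsync -aH --delete}"   # illustrative flags
STAGGER="${STAGGER:-250}"              # seconds between site launches
DEST="${DEST:-/backups}"

backup_site() {
    # Servers at one site are backed up one after another (sequentially).
    site="$1"; shift
    for host in "$@"; do
        $RSYNC "root@${host}:/" "${DEST}/${site}/${host}/"
    done
}

run_backups() {
    # Each site runs as its own background job, staggered by $STAGGER.
    backup_site siteA srv1 srv2 &
    sleep "$STAGGER"
    backup_site siteB srv3 srv4 &
    wait
}
```

The stagger keeps all the per-site jobs from hammering the pool (and the
WAN links) at the same instant.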
During the trial-and-error period, we did have a lot of livelocks,
deadlocks, and kernel panics. Things have been very stable on both boxes
for the past two months. We don't run into any out-of-memory issues.
We use swap on a ZVOL for all the systems listed above. So far, that hasn't
been an issue (knock wood). :)
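Setting up swap on a ZVOL uses the standard commands; pool name and size
here are made up for illustration:

```shell
# Create a 4 GB zvol and enable it as swap (names/sizes illustrative).
zfs create -V 4G tank/swap
swapon /dev/zvol/tank/swap
```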
iSCSI support works nicely as well, using the net/iscsi-target port. We've
only done minor desktop-style testing using a Debian Linux initiator.
We haven't had any issues sharing the ZFS filesystems via NFS either. We use
a couple of NFS shares for really old SCO boxes that refuse to install rsync.
Even when the full backup run is going, and these boxes are copying files
via NFS, we haven't hit any lockups.
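Exporting a dataset over NFS is a single property on the dataset; the
dataset name below is hypothetical:

```shell
# Export a dataset via NFS; instead of "on" you can pass mountd(8)
# export flags, e.g. "-maproot=root -network 192.168.1.0/24".
zfs set sharenfs=on tank/backups
```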
We run with vfs.zfs.prefetch_disable=1 and vfs.zfs.zil_disable=0 on all
systems.
We're really looking forward to FreeBSD 8 with its ZFS improvements,
especially the auto-tuning and the much higher kmem_max. We'd like to be
able to give ZFS 3-4 GB for the ARC.
We've also heavily modified /etc/sysctl.conf and upped a bunch of the
network-related sysctls. Doing so increased our SSH throughput from ~30
Mbits/sec across all connections to over 90 Mbits/sec per SSH connection.
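The post doesn't say which sysctls were changed; the usual network buffer
tunables from that era look something like this in /etc/sysctl.conf (values
illustrative, not the exact ones from these servers):

```shell
# /etc/sysctl.conf -- illustrative socket-buffer tuning
kern.ipc.maxsockbuf=2097152
net.inet.tcp.sendbuf_max=2097152
net.inet.tcp.recvbuf_max=2097152
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```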
So far, we've been very impressed with ZFS support on FreeBSD. It makes it
really hard to go back to LVM on our Linux systems. :)
--
Freddie
fjwcash at gmail.com