ZFS in production, 64 bit

Freddie Cash fjwcash at gmail.com
Mon Jul 6 21:24:34 UTC 2009


On Mon, Jul 6, 2009 at 7:27 AM, Tonix (Antonio Nati)
<tonix at interazioni.it> wrote:

> Is anyone using ZFS in a heavy production environment on AMD 64-bit?
>

We're using FreeBSD 7.2 on our backup servers.  The primary backup server
does remote backups for over 105 servers every night, and then pushes the
changes to the secondary backup server each day.

Both servers are:
  5U Chenbro case, with 24 hot-swappable SATA drive bays
  1350 watt, 4-way redundant PSU (yes, it's overkill)
  Tyan h2000M motherboard
  2x AMD Opteron 2220 CPUs @ 2.8 GHz (dual-core)
  3Ware 9550SXU-12ML PCI-X RAID controller
  3Ware 9650SE-12ML PCIe RAID controller
  Intel PRO/1000MT PCI-X quad-port gigabit NIC
  24x 500 GB SATA hard drives
  2x CompactFlash drives in CF-to-IDE or CF-to-SATA adapters

The CompactFlash drives are configured using gmirror, and hold / and /usr.
(/usr is there because we originally had some issues booting into single-user
mode and getting the zpool up and running.)
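
The mirror itself is nothing fancy; a rough sketch, assuming the CF cards
show up as ad0 and ad1 (adjust device names for your adapters):

  # rough sketch only; ad0/ad1 are assumptions for the CF cards
  gmirror label -v -b round-robin gm0 /dev/ad0
  gmirror insert gm0 /dev/ad1
  echo 'geom_mirror_load="YES"' >> /boot/loader.conf
  # /etc/fstab then mounts / and /usr from /dev/mirror/gm0s1a, gm0s1d, etc.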

The zpool is configured with 3x raidz2 vdevs, each using 8 hard drives,
which gives us ~10 TB of usable space in the pool.  Everything other than
/ and /usr is ZFS, including /usr/src, /usr/obj, /usr/ports, /var, /tmp,
/usr/local, /home, and so on.
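
For anyone curious, the pool layout is roughly the following (the pool name
and da0-da23 device names are illustrative, not our exact ones):

  # illustrative names; three 8-disk raidz2 vdevs in one pool
  zpool create storage \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23
  zfs create storage/home
  zfs create storage/var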

Over the course of a backup run, we average 80 MBytes/sec writes, which is
limited by the horrible upload performance of the remote ADSL sites.  We've
benchmarked the system maxing out at 550 MBytes/sec write and 5.5 GBytes/sec
read.

We had to do a lot of manual tuning when we started out, to limit
vm.kmem_size_max and vfs.zfs.arc_max, and to disable prefetch
(vfs.zfs.prefetch_disable=1), as we started with 7-STABLE shortly after 7.0
was released.
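
For the archives, the 7.0-era tuning was something along these lines in
/boot/loader.conf (the values below are illustrative, not our exact numbers;
size them to your RAM):

  # illustrative 7.0-era values only; size these to your RAM
  vm.kmem_size_max="1536M"
  vfs.zfs.arc_max="512M"
  vfs.zfs.prefetch_disable="1"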

With FreeBSD 7.2, we've removed the tuning, but left prefetch disabled (with
prefetch enabled, we'd lock up the system after about 5 hours of heavy rsync
usage ... no swap space left).
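
So on 7.2 the ZFS tuning in /boot/loader.conf boils down to just:

  vfs.zfs.prefetch_disable="1"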

Our backups are done using rsync.  We serialise the backups of the systems
at each remote site, but run the backups for multiple sites in parallel.
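
In shell terms the scheduling is basically the following (a simplified
sketch; the site names, host list files, and rsync options are placeholders,
not our actual configuration):

  #!/bin/sh
  # one background job per site; hosts within a site run one after another
  for site in site1 site2 site3; do
    (
      for host in $(cat /backups/conf/${site}.hosts); do
        rsync -aH --delete ${host}:/ /backups/${site}/${host}/
      done
    ) &
  done
  wait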

The only "non-standard" change we made was to switch to openssh-portable
from ports and enable the HPN patches.  After tuning the network sysctls,
we saw our rsync throughput over HPN-patched SSH go up by 30%.
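
The network sysctls in question are the usual TCP buffer suspects; something
like this in /etc/sysctl.conf (illustrative values, not our exact ones; size
them to your bandwidth-delay product):

  # illustrative values; tune to your bandwidth-delay product
  kern.ipc.maxsockbuf=2097152
  net.inet.tcp.sendbuf_max=2097152
  net.inet.tcp.recvbuf_max=2097152
  net.inet.tcp.sendbuf_auto=1
  net.inet.tcp.recvbuf_auto=1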

Other than an early attempt to use USB sticks instead of CompactFlash, the
initial tuning phase, and the prefetch experiments, the system has been rock
solid.

Our next big ZFS project will use similar hardware to build our own SAN,
exporting storage over iSCSI to a virtualisation cluster (Linux+KVM on the
processing nodes, FreeBSD+ZFS on the storage nodes).
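
The rough idea on the storage side is to carve zvols out of the pool and
export them as LUNs via an iSCSI target from ports; a sketch with
hypothetical names:

  # hypothetical pool/dataset names and sizes
  zfs create storage/vms
  zfs create -V 200G storage/vms/vm01-disk0
  # the zvol shows up as /dev/zvol/storage/vms/vm01-disk0 and gets
  # exported as an iSCSI LUN by the target daemon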

-- 
Freddie Cash
fjwcash at gmail.com

