ZFS performance gains real or imaginary?
Matt Simerson
matt at corp.spry.com
Thu Dec 18 10:19:37 PST 2008
Did I miss some major ZFS performance enhancements?
I upgraded the disks in my home file server to 1.5TB disks. Rather
than using gmirror as I did last time, I decided to use ZFS to mirror
them. The file server was running 7.0 and booted off a CF card, so it
was simply a matter of adding the extra disks, configuring them
with ZFS, and copying all the data over.
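The setup was nothing fancy; from memory it was roughly the following
(dataset names match the zfs get output at the end of this mail):

# zpool create tank mirror ad11 ad13
# zfs create tank/usr
# zfs create tank/usr/.snapshots
# zfs create tank/var
# zfs set compression=on tank
# zfs set mountpoint=/usr tank/usr
# zfs set mountpoint=/var tank/var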
[root at storage] ~ # zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad11    ONLINE       0     0     0
            ad13    ONLINE       0     0     0
ZFS under FreeBSD 7 is horrendously slow. It took almost two days to
copy 600GB of data (a bunch of MP3s, movies, and UFS backups of my
servers in data centers) to the ZFS volume. Once the copy completed, I
removed the old disks. The file system performance after switching to
ZFS is quite underwhelming; I notice it whenever I do any sort of
writes. This echoes my experience with ZFS on my production backup
servers at work. (All of those systems are multi-core Intel boxes with
4GB+ of RAM.)
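For what it's worth, on the 4GB boxes the standard tuning advice for
this era of ZFS is to clamp kmem and the ARC in /boot/loader.conf with
something like the lines below; the exact values here are only
illustrative:

vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"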
$ ssh back01 uname -a
FreeBSD back01.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #0: Fri Aug 15 16:42:36 PDT 2008     root at back01.int.spry.com:/usr/obj/usr/src/sys/BACK01  amd64
$ ssh back02 uname -a
FreeBSD back02.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #1: Wed Aug 13 13:57:19 PDT 2008     root at back02.int.spry.com:/usr/obj/usr/src/sys/BACK02-HEAD  amd64
On the two systems above (amd64 with 16GB of RAM and 24 1TB disks) I
get about 30 days of uptime before the system hangs with a ZFS error.
They write backups to disk 24x7 and never stop. I could not get
anywhere near that level of stability with back03 (below), which is
much older hardware maxed out at 4GB of RAM. I finally resolved the
stability issues on back03 by ditching ZFS and using geom_stripe
across the two hardware RAID arrays (sketched after its uname below).
$ ssh back03 uname -a
FreeBSD back03.int.spry.com 8.0-CURRENT FreeBSD 8.0-CURRENT #0: Tue Oct 28 16:54:22 PDT 2008     root at back03.int.spry.com:/usr/obj/usr/src/sys/GENERIC  amd64
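The geom_stripe setup on back03 was roughly the following; da0/da1 and
the /backups mountpoint are just placeholders for the two RAID array
volumes and wherever you mount them:

# gstripe label -v back3 /dev/da0 /dev/da1
# newfs -U /dev/stripe/back3
# mount /dev/stripe/back3 /backups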
Yesterday I did a cvsup to 8-HEAD and built a new kernel and world. I
installed the new kernel, then panicked slightly when I booted off it
and the ZFS utilities proved completely worthless in my attempts to
get /usr and /var mounted (both of which are on ZFS). It took a quick
Google search to remember the solution:
mount -t zfs tank/usr /usr
mount -t zfs tank/var /var
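On a normal boot the datasets should mount themselves, as long as ZFS
is enabled in /etc/rc.conf so that rc.d/zfs runs "zfs mount -a" at
startup:

zfs_enable="YES"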
After installing world and rebooting, the system is positively snappy.
File system interaction, which has been lethargic on every ZFS system
I've installed, seems much faster. I haven't benchmarked the I/O
performance, but something definitely changed; it's almost as if the
latency has decreased. Would changes committed between mid-August
(when I built my last ZFS servers from -HEAD + the patch) and now
explain this?
If so, then I really should be upgrading my production ZFS servers to
the latest -HEAD.
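If I do, it's just the usual source-upgrade dance, roughly as follows
(the supfile name is made up, and the KERNCONF is per box):

# cvsup my-head-supfile        (tag=. to track -HEAD)
# cd /usr/src
# make buildworld buildkernel KERNCONF=BACK01
# make installkernel KERNCONF=BACK01
(reboot; mount -t zfs tank/usr /usr etc. by hand if needed, as above)
# make installworld
# mergemaster
(reboot)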
Matt
PS: I am using compression and getting the following results:
[root at storage] ~ # zfs get compressratio
NAME                 PROPERTY       VALUE  SOURCE
tank                 compressratio  1.12x  -
tank/usr             compressratio  1.12x  -
tank/usr/.snapshots  compressratio  2.09x  -
tank/var             compressratio  2.13x  -
In retrospect, I wouldn't bother with compression on /usr. But
/usr/.snapshots holds my rsnapshot-based backups of my servers sitting
in remote data centers. Since the majority of changes between
snapshots are log files, the data is quite compressible and ZFS
compression is quite effective. It's also quite effective on /var, as
shown. ZFS compression is effectively getting me 1/3 more disk space
out of my 1.5TB disks.
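Since compression is a per-dataset property, the fix is easy enough;
next time I'd set something like:

# zfs set compression=off tank/usr
# zfs set compression=on tank/usr/.snapshots
# zfs set compression=on tank/var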