More zfs benchmarks

Jonathan Belson jon at witchspace.com
Sun Feb 14 17:42:38 UTC 2010


Hiya

After reading some earlier threads about zfs performance, I decided to test my own server.  I found the results rather surprising...

The machine is a Dell SC440, dual core 2GHz E2180, 2GB of RAM and ICH7 SATA300 controller.  There are three Hitachi 500GB drives (HDP725050GLA360) in a raidz1 configuration (version 13).  I'm running amd64 7.2-STABLE from 14th Jan.
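For reference, the pool is just the three drives in a single raidz1 vdev, created along the lines of the following (disk names omitted; 'zpool status tank' shows the real layout):

# zpool create tank raidz1 <disk1> <disk2> <disk3>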


First of all, I tried creating a 200MB file on / (the only non-zfs partition):

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 6.158355 secs (34053769 bytes/sec)

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 5.423107 secs (38670674 bytes/sec)

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 6.113258 secs (34304982 bytes/sec)


Next, I tried creating a 200MB file on a zfs partition:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 58.540571 secs (3582391 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 46.867240 secs (4474665 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 21.145221 secs (9917853 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 19.387938 secs (10816787 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 21.378161 secs (9809787 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 23.774958 secs (8820844 bytes/sec)

Ouch!  Ignoring the first result, the best zfs run (~10.8MB/sec) is still over three times slower than the non-zfs test (~34-38MB/sec).
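One caveat with using /dev/zero as the source: if compression were enabled on the dataset, all-zero blocks would compress to nothing and the write numbers would be meaningless.  It defaults to off, but it's worth double-checking with something like:

# zfs get compression,recordsize tank/test

Assuming compression really is off, the figures above should reflect real I/O.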


With a 2GB test file:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 547.901945 secs (3827605 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 595.052017 secs (3524317 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 517.326470 secs (4053827 bytes/sec)

Even worse :-(
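For what it's worth, the usual tuning advice for ZFS on 7.x/amd64 with only 2GB of RAM seems to be to cap the ARC and bump kmem in /boot/loader.conf, along these lines (commonly suggested starting values, not something I've measured):

vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"

so some of this may just be memory pressure on an untuned pool.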


Reading from a raw device (dd hit the end of ad4s1a after 1GB, despite asking for 2GB):

dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 13.914145 secs (77169084 bytes/sec)
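(Reading from the whole device rather than the partition would give a full 2GB sample, e.g.:

# dd if=/dev/ad4 of=/dev/null bs=1M count=2000

but the ~77MB/sec above should still be representative of sequential read speed from a single drive.)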

 
Reading 2GB from a zfs partition (unmounting each time):

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 29.905155 secs (70126772 bytes/sec)

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 32.557361 secs (64414066 bytes/sec)

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 34.137874 secs (61431828 bytes/sec)
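(The unmount between runs is just to make sure nothing is served from the ARC; each read was preceded by something like:

# zfs umount tank/test && zfs mount tank/test

so the file isn't cached.)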

For reading, there seems to be much less of a disparity: the zfs reads come in within roughly 10-20% of the raw device speed.

I notice that one drive is on atapci0 and the other two are on atapci1, but surely that shouldn't make this much of a difference to write speeds?
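If one drive or controller really is the bottleneck, it should show up while a write is running, e.g. with:

# zpool iostat -v tank 1

or with gstat(8), either of which shows per-disk throughput and whether one device is pegged while the others sit idle.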

Cheers,

--Jon


