An order of magnitude higher IOPS needed with ZFS than with UFS

Attila Nagy bra at fsn.hu
Tue Jun 11 21:08:35 UTC 2013


Hi,

I have two identical machines. Each has 14 disks hooked up to an HP 
Smart Array (SA from now on) controller.
Both machines have the same SA configuration and layout: the disks are 
organized into mirror pairs (HW RAID1).

On the first machine, these mirrors are formatted with UFS2+SU (default 
settings); on the second machine, each mirror is a separate zpool 
(please don't tell me that ZFS could do the mirroring itself, I know). 
Atime is turned off; otherwise nothing was changed (no zpool/zfs 
properties or sysctl parameters).
The file systems are loaded more or less evenly, serving files from a 
few kB up to a few MB.
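
For completeness, the two sides were set up roughly like this (a 
sketch, not the exact commands; the pool name is just a placeholder):

# UFS machine, per mirror (-U enables soft updates):
newfs -U /dev/da0

# ZFS machine, one pool per mirror:
zpool create pool0 da0
zfs set atime=off pool0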

The machines act as NFS servers, so there is one possibly important 
difference here: the UFS machine runs 8.3-RELEASE, while the ZFS one 
runs 9.1-STABLE at r248885.
They get the same type of load, and according to nfsstat and netstat, 
the load doesn't explain the big difference that can be seen in disk 
I/O. In fact, the UFS host seems to be the more loaded of the two...
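
In case it matters, this is roughly how I compared the loads on the 
two hosts (the interface name is just an example):

# per-second NFS server RPC rates:
nfsstat -s -w 1
# per-second interface traffic:
netstat -w 1 -I em0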

According to gstat on the UFS machine:
dT: 60.001s  w: 60.000s  filter: da
  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w %busy Name
     0     42     35    404    6.4      8    150  214.2 21.5| da0
     0     30     21    215    6.1      9    168  225.2 15.9| da1
     0     41     33    474    4.5      8    158  211.3 18.0| da2
     0     39     30    425    4.6      9    163  235.0 17.1| da3
     1     31     24    266    5.1      7     93  174.1 14.9| da4
     0     29     22    273    5.9      7     84  200.7 15.9| da5
     0     37     30    692    7.1      7    115  206.6 19.4| da6

and on the ZFS one:
dT: 60.001s  w: 60.000s  filter: da
  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w %busy Name
     0    228    201   1045   23.7     27    344   53.5 88.7| da0
     5    185    167    855   21.1     19    238   44.9 73.8| da1
    10    263    236   1298   34.9     27    454   53.3 99.9| da2
    10    255    235   1341   28.3     20    239   64.8 92.9| da3
    10    219    195    994   22.3     23    257   46.3 81.3| da4
    10    248    221   1213   22.4     27    264   55.8 90.2| da5
     9    231    213   1169   25.1     19    229   54.6 88.6| da6
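
To put numbers on it: averaging the seven disks in the samples above, 
the UFS box does (42+30+41+39+31+29+37)/7, about 36 ops/s per disk, 
while the ZFS box does (228+185+263+255+219+248+231)/7, about 233 
ops/s per disk: roughly 6-7x the operations. On top of that, the ZFS 
disks sit at 74-100% busy with queue lengths around 10, versus 15-22% 
busy and empty queues on UFS.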

I've seen a lot of cases where ZFS required more memory and CPU (and 
even I/O) to handle the same load, but they were nowhere near this 
bad: here it's often a 10x increase.

Any ideas?

BTW, the file systems are 77-78% full according to df, so ZFS actually 
holds more data, because the UFS file systems were created with -m 8 
(8% reserved space).

Thanks,

