ZFS slow write performance

Paul Mather paul at gromit.dlib.vt.edu
Wed Aug 5 15:34:08 UTC 2009


I have a system I intend to use to back up a remote system via rsync.   
It is running FreeBSD/i386 7.2-STABLE and has a ZFS raidz1 pool  
consisting of four 1 TB SATA drives.  The system has 768 MiB of RAM  
and a 2 GHz Pentium 4 CPU.
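
For reference, the pool was created along the usual lines (the pool
name here is illustrative; the drives are the four ad devices shown
in the dmesg output below):

zpool create tank raidz ad4 ad6 ad8 ad10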

Currently, I am just trying to rsync data locally from a read-only,
UFS2-mounted, USB-attached hard drive, and am getting (IMHO) poor write
speeds of only about 5 MiB/sec.  I can't figure out why this is so
low.
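
The copy itself is a plain archive-mode rsync, something along these
lines (paths illustrative):

rsync -a /mnt/usb/backup/ /tank/backup/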

Looking at gstat shows the source and destination drives cruising
along at an average of 30--50% busy (the destination drives averaging
between 1800 and 2000 kBps each in the gstat display).  Top shows an
average of ~20% system time and ~70% idle (though when I changed
"compression=on" to "compression=gzip-9" on the target file system,
system CPU load shot up to ~70%).  Memory usage is pretty static, with
~165 MiB wired and ~512--523 MiB inactive.
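
For the record, the monitoring and the property change above were done
with the standard tools (dataset name illustrative):

gstat -I 1s                             # per-device %busy and kBps
top -S                                  # include system processes
zfs set compression=gzip-9 tank/backup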

Given that nothing appears to be stressing the system, why isn't it
making more use of the available resources, in particular disk
bandwidth?  A dd of a large file from the source USB drive reports a
transfer rate of about 15 MiB/sec, so getting only about a third of
that when rsyncing to an otherwise idle ZFS pool is disappointing:
the source drive can obviously go faster than it is.  If I dd
/dev/zero to a file on the target ZFS file system, I get about 15
MiB/sec write speed with "compression=off" set and about 18 MiB/sec
with "compression=on" set, indicating that the target can go faster,
too.
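
The dd tests were along these lines (file names illustrative).  One
caveat: /dev/zero is a best case when compression is enabled, since
runs of zeros compress away to almost nothing before reaching the disks:

# read speed of the USB source
dd if=/mnt/usb/bigfile of=/dev/null bs=1m
# write speed of the pool (with compression=off, then compression=on)
dd if=/dev/zero of=/tank/backup/ddtest bs=1m count=2048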

Even though I am rsyncing from one local file system to another, could
the problem lie with rsync overheads?  Has anyone else encountered
poor rsync performance with ZFS, and can anyone offer tuning advice?
Otherwise, does anyone have advice for speeding up my local copy
performance?
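
One way to take rsync out of the picture would be to copy the same
tree with a plain tar pipe and compare the throughput (paths
illustrative):

tar -C /mnt/usb/backup -cf - . | tar -C /tank/backup -xpf -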

Here is some dmesg information about the attached hardware:

atapci0: <SiI SiI 3114 SATA150 controller> port 0xecf8-0xecff,0xecf0-0xecf3,0xece0-0xece7,0xecd8-0xecdb,0xecc0-0xeccf mem 0xff8ffc00-0xff8fffff irq 16 at device 7.0 on pci1
atapci0: [ITHREAD]
ata2: <ATA channel 0> on atapci0
ata2: [ITHREAD]
ata3: <ATA channel 1> on atapci0
ata3: [ITHREAD]
ata4: <ATA channel 2> on atapci0
ata4: [ITHREAD]
ata5: <ATA channel 3> on atapci0
ata5: [ITHREAD]
[...]
ehci0: <Intel 82801DB/L/M (ICH4) USB 2.0 controller> mem 0xffa00000-0xffa003ff irq 23 at device 29.7 on pci0
ehci0: [GIANT-LOCKED]
ehci0: [ITHREAD]
usb3: EHCI version 1.0
usb3: companion controllers, 2 ports each: usb0 usb1 usb2
usb3: <Intel 82801DB/L/M (ICH4) USB 2.0 controller> on ehci0
usb3: USB revision 2.0
uhub3: <Intel EHCI root hub, class 9/0, rev 2.00/1.00, addr 1> on usb3
uhub3: 6 ports with 6 removable, self powered
umass0: <Maxtor OneTouch, class 0/0, rev 2.00/1.21, addr 2> on uhub3
[...]
ad4: 953869MB <Seagate ST31000340AS SD15> at ata2-master SATA150
ad6: 953869MB <Seagate ST31000340AS SD15> at ata3-master SATA150
ad8: 953869MB <Seagate ST31000340AS SD15> at ata4-master SATA150
ad10: 953869MB <Seagate ST31000340AS SD15> at ata5-master SATA150
da0 at umass-sim0 bus 0 target 0 lun 0
da0: <Maxtor OneTouch 0121> Fixed Direct Access SCSI-4 device
da0: 40.000MB/s transfers
da0: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)


I have the following tuning in /boot/loader.conf:

vm.kmem_size="640M"
vm.kmem_size_max="640M"
vfs.zfs.arc_max="320M"
#vfs.zfs.vdev.cache.size="5M"
vfs.zfs.prefetch_disable="1"
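
The effective values can be double-checked at runtime via sysctl (the
ZFS ARC statistics are exported under kstat.zfs.misc.arcstats):

sysctl vm.kmem_size vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes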

Any help or advice is appreciated.

Cheers,

Paul.



