zfs zvol vs file i/o performance
John
jwd at FreeBSD.org
Fri Feb 8 22:08:30 UTC 2013
Hi Folks,
Recently I was chasing down some performance differences and thought
I'd post some results for general discussion.
The system is a Dell R620 with 128GB of RAM, two LSI 9207-8e HBAs, and a pair
of HP D2700 shelves.
Two test areas were created:
zfs create -b 128K -V 300G pool0/lun000004
vs
zfs create pool0/lun000004
truncate -s 300G /pool0/lun000004/fun000004
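As an aside, both variants should end up using 128K blocks (the zvol via -b 128K, the file via the default recordsize); if you want to confirm that on your own setup, the standard property queries look roughly like this:

```shell
# volblocksize applies when pool0/lun000004 is a zvol; recordsize
# applies when it is a filesystem dataset holding the test file.
# (-H: scripted output, -o value: print just the property value)
zfs get -H -o value volblocksize pool0/lun000004
zfs get -H -o value recordsize pool0/lun000004
```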
The dataset was destroyed and re-created between the two types of test
runs, hence the same name. Both areas were filled with random data before
the test runs (no null blocks).
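A minimal sketch of that fill step (not the author's exact commands; /dev/urandom is assumed as the randomness source, and it is slow, so copying from a pre-generated random file is a common substitute):

```shell
# Fill the test area with non-compressible random data.
# 300GiB = 307200 x 1MiB blocks (307200 * 1048576 = 322122547200 bytes,
# matching the dd totals below).
dd if=/dev/urandom of=/pool0/lun000004/fun000004 bs=1M count=307200

# Same for the zvol variant (after destroying and re-creating the
# dataset as a zvol):
dd if=/dev/urandom of=/dev/zvol/pool0/lun000004 bs=1M count=307200
```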
Some test numbers are below. In general, there is roughly an 800MB/sec
difference in sequential read rates. Please note the test size is 300GB,
roughly 2.5x the amount of RAM in the system, so the reads cannot be served
entirely from the ARC. The system is configured with defaults for
loader.conf and sysctl.conf.
I'd be curious if anyone else can replicate this.
Comments welcome.
Cheers,
John
For the file-based LUN: ~1.96GB/sec
# dd if=fun000004 of=/dev/null bs=64k
4915200+0 records in
4915200+0 records out
322122547200 bytes transferred in 165.930512 secs (1941309910 bytes/sec)
# dd if=fun000004 of=/dev/null bs=128k
2457600+0 records in
2457600+0 records out
322122547200 bytes transferred in 163.977835 secs (1964427371 bytes/sec)
# dd if=fun000004 of=/dev/null bs=256k
1228800+0 records in
1228800+0 records out
322122547200 bytes transferred in 163.109616 secs (1974883854 bytes/sec)
# dd if=fun000004 of=/dev/null bs=384k
819200+0 records in
819200+0 records out
322122547200 bytes transferred in 162.981242 secs (1976439392 bytes/sec)
# dd if=fun000004 of=/dev/null bs=768k
409600+0 records in
409600+0 records out
322122547200 bytes transferred in 163.756843 secs (1967078390 bytes/sec)
For the zvol-based LUN: ~1.1GB/sec
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=64k
4915200+0 records in
4915200+0 records out
322122547200 bytes transferred in 305.941880 secs (1052888043 bytes/sec)
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=128k
2457600+0 records in
2457600+0 records out
322122547200 bytes transferred in 270.188876 secs (1192212469 bytes/sec)
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=256k
1228800+0 records in
1228800+0 records out
322122547200 bytes transferred in 270.208030 secs (1192127959 bytes/sec)
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=384k
819200+0 records in
819200+0 records out
322122547200 bytes transferred in 271.366702 secs (1187037852 bytes/sec)
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=512k
614400+0 records in
614400+0 records out
322122547200 bytes transferred in 269.715238 secs (1194306075 bytes/sec)
# dd if=/dev/zvol/pool0/lun000004 of=/dev/null bs=768k
409600+0 records in
409600+0 records out
322122547200 bytes transferred in 269.289512 secs (1196194181 bytes/sec)
The pool config:
# zpool status
  pool: pool0
 state: ONLINE
  scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        pool0             ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            multipath/Z0  ONLINE       0     0     0
            multipath/Z1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            multipath/Z2  ONLINE       0     0     0
            multipath/Z3  ONLINE       0     0     0
          mirror-2        ONLINE       0     0     0
            multipath/Z4  ONLINE       0     0     0
            multipath/Z5  ONLINE       0     0     0
          mirror-3        ONLINE       0     0     0
            multipath/Z6  ONLINE       0     0     0
            multipath/Z7  ONLINE       0     0     0
          mirror-4        ONLINE       0     0     0
            multipath/Z8  ONLINE       0     0     0
            multipath/Z9  ONLINE       0     0     0
          mirror-5        ONLINE       0     0     0
            multipath/Z10 ONLINE       0     0     0
            multipath/Z11 ONLINE       0     0     0
          mirror-6        ONLINE       0     0     0
            multipath/Z12 ONLINE       0     0     0
            multipath/Z13 ONLINE       0     0     0
          mirror-7        ONLINE       0     0     0
            multipath/Z14 ONLINE       0     0     0
            multipath/Z15 ONLINE       0     0     0
          mirror-8        ONLINE       0     0     0
            multipath/Z16 ONLINE       0     0     0
            multipath/Z17 ONLINE       0     0     0
          mirror-9        ONLINE       0     0     0
            multipath/Z18 ONLINE       0     0     0
            multipath/Z19 ONLINE       0     0     0
          mirror-10       ONLINE       0     0     0
            multipath/Z20 ONLINE       0     0     0
            multipath/Z21 ONLINE       0     0     0
          mirror-11       ONLINE       0     0     0
            multipath/Z22 ONLINE       0     0     0
            multipath/Z23 ONLINE       0     0     0
          mirror-12       ONLINE       0     0     0
            multipath/Z24 ONLINE       0     0     0
            multipath/Z25 ONLINE       0     0     0
          mirror-13       ONLINE       0     0     0
            multipath/Z26 ONLINE       0     0     0
            multipath/Z27 ONLINE       0     0     0
          mirror-14       ONLINE       0     0     0
            multipath/Z28 ONLINE       0     0     0
            multipath/Z29 ONLINE       0     0     0
          mirror-15       ONLINE       0     0     0
            multipath/Z30 ONLINE       0     0     0
            multipath/Z31 ONLINE       0     0     0
          mirror-16       ONLINE       0     0     0
            multipath/Z32 ONLINE       0     0     0
            multipath/Z33 ONLINE       0     0     0
          mirror-17       ONLINE       0     0     0
            multipath/Z34 ONLINE       0     0     0
            multipath/Z35 ONLINE       0     0     0
          mirror-18       ONLINE       0     0     0
            multipath/Z36 ONLINE       0     0     0
            multipath/Z37 ONLINE       0     0     0
          mirror-19       ONLINE       0     0     0
            multipath/Z38 ONLINE       0     0     0
            multipath/Z39 ONLINE       0     0     0
          mirror-20       ONLINE       0     0     0
            multipath/Z40 ONLINE       0     0     0
            multipath/Z41 ONLINE       0     0     0
          mirror-21       ONLINE       0     0     0
            multipath/Z42 ONLINE       0     0     0
            multipath/Z43 ONLINE       0     0     0
          mirror-22       ONLINE       0     0     0
            multipath/Z44 ONLINE       0     0     0
            multipath/Z45 ONLINE       0     0     0
          mirror-23       ONLINE       0     0     0
            multipath/Z46 ONLINE       0     0     0
            multipath/Z47 ONLINE       0     0     0
        spares
          multipath/Z48   AVAIL
          multipath/Z49   AVAIL

errors: No known data errors
And what a disk looks like in the system:
# camcontrol inquiry da0
pass2: <HP EG0600FBDSR HPD4> Fixed Direct Access SCSI-5 device
pass2: Serial Number EA01PC91LPW91239
pass2: 600.000MB/s transfers, Command Queueing Enabled