IO Performance under VMware on LSI RAID controller

Guy Helmer guy.helmer at gmail.com
Thu Sep 19 16:25:46 UTC 2013


Normally I build VMware ESXi servers with enterprise-class WD SATA drives, and I/O performance in FreeBSD VMs on those servers is fine. Whenever I build a VMware ESXi server with a RAID controller, however, I/O performance in FreeBSD VMs is awful. I've previously seen this effect on ESXi servers with 3ware 9690SA-8I and 9650 RAID controllers, and now I'm seeing similarly poor performance with a Dell SAS 6/iR controller.

Any suggestions would be appreciated.

Guy

Details of the current environment: VMware ESXi 5.1 on a Dell R610 with 4GB RAM, a SAS 6/iR controller, 2x500GB disks in a RAID1 set (default stripe size), and 1x1TB disk (no RAID). From VMware's client, I see I/O rates in the sub-MB/s range and latencies occasionally peaking at 80 ms.
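
For finer detail than the client's charts, esxtop on the host shows per-device latency; a sketch of the measurement, in case it helps:

    # On the ESXi host (via SSH), run esxtop and press 'u' for the
    # disk-device view: DAVG/cmd is device latency, KAVG/cmd is time
    # spent in the VMkernel, and GAVG/cmd is what the guest sees.
    esxtop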

FreeBSD 9.2-RC2 amd64 in a VM with 2GB RAM assigned, virtual disks assigned from both the RAID1 set and the 1TB (no RAID) drive, and UFS+soft updates file systems on both.
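
A crude in-guest sequential baseline can be collected like this (the scratch path is arbitrary, and da1 is assumed to be the second virtual disk):

    # sequential write, then read back, on one of the UFS file systems
    dd if=/dev/zero of=/usr/ddtest bs=1m count=1024
    dd if=/usr/ddtest of=/dev/null bs=1m
    rm /usr/ddtest
    # non-destructive (read-only) transfer-rate test on the raw virtual disk
    diskinfo -t da1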

The virtual drives show up in FreeBSD attached to an mpt virtual controller:
mpt0: <LSILogic 1030 Ultra4 Adapter> port 0x1400-0x14ff mem 0xd0040000-0xd005ffff,0xd0020000-0xd003ffff irq 17 at device 16.0 on pci0
mpt0: MPI Version=1.2.0.0
I don't see anything else sharing the interrupt - vmstat -i shows:
irq17: mpt0                        77503         27
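
For completeness, the controller's generic device information is visible under its sysctl node (nothing mpt-specific assumed here):

    sysctl dev.mpt.0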

gstat is showing an abysmal 6 to 16 ops/s for requests on the virtual disks.
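
Those numbers come from watching gstat with an invocation along these lines (the filter regex is assumed, matching just the two virtual disks):

    # 1-second samples, restricted to da0 and da1
    gstat -I 1s -f '^da[01]$'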

I've used gpart to set up the GPT partition table on the virtual disk assigned from the 1TB drive, placing the first UFS partition at a 1MB boundary to keep the partitions aligned. gpart list shows:

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 268435422
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: d9e6e3e8-1bdb-11e3-b7c5-000c29cbf143
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gpboot
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da0p2
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e2
   rawuuid: fbd6cf40-1bdb-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: gprootfs
   length: 2147483648
   offset: 1048576
   type: freebsd-ufs
   index: 2
   end: 4196351
   start: 2048
3. Name: da0p3
   Mediasize: 4294967296 (4.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e1
   rawuuid: 0658208d-1bdc-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: gpswap
   length: 4294967296
   offset: 2148532224
   type: freebsd-swap
   index: 3
   end: 12584959
   start: 4196352
4. Name: da0p4
   Mediasize: 130995437056 (122G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2148532224
   Mode: r1w1e2
   rawuuid: 0ca5bc32-1bdc-11e3-b7c5-000c29cbf143
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: gpusrfs
   length: 130995437056
   offset: 6443499520
   type: freebsd-swap
   index: 4
   end: 268435422
   start: 12584960
Consumers:
1. Name: da0
   Mediasize: 137438953472 (128G)
   Sectorsize: 512
   Mode: r3w3e8
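
For reference, commands along these lines would reproduce that layout (sizes taken from the listing above; -a 1m rounds each partition start to a 1MB boundary):

    gpart create -s gpt da0
    gpart add -t freebsd-boot -s 512k -l gpboot da0
    gpart add -t freebsd-ufs  -a 1m -s 2g -l gprootfs da0
    gpart add -t freebsd-swap -a 1m -s 4g -l gpswap da0
    # the listing reports da0p4's type as freebsd-swap, but the gpusrfs
    # label suggests a UFS file system; freebsd-ufs assumed here
    gpart add -t freebsd-ufs  -a 1m -l gpusrfs da0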

sysctl vfs shows:
vfs.ufs.dirhash_reclaimage: 5
vfs.ufs.dirhash_lowmemcount: 179
vfs.ufs.dirhash_docheck: 0
vfs.ufs.dirhash_mem: 0
vfs.ufs.dirhash_maxmem: 3481600
vfs.ufs.dirhash_minsize: 2560
vfs.ufs.rename_restarts: 0
vfs.nfs.downdelayinitial: 12
vfs.nfs.downdelayinterval: 30
vfs.nfs.keytab_enctype: 1
vfs.nfs.skip_wcc_data_onerr: 1
vfs.nfs.nfs3_jukebox_delay: 10
vfs.nfs.reconnects: 0
vfs.nfs.bufpackets: 4
vfs.nfs.debuglevel: 0
vfs.nfs.callback_addr: 
vfs.nfs.realign_count: 0
vfs.nfs.realign_test: 0
vfs.nfs.nfs_directio_allow_mmap: 1
vfs.nfs.nfs_keep_dirty_on_error: 0
vfs.nfs.nfs_directio_enable: 0
vfs.nfs.clean_pages_on_close: 1
vfs.nfs.commit_on_close: 0
vfs.nfs.prime_access_cache: 0
vfs.nfs.access_cache_timeout: 60
vfs.nfs.diskless_rootpath: 
vfs.nfs.diskless_valid: 0
vfs.nfs.nfs_ip_paranoia: 1
vfs.nfs.defect: 0
vfs.nfs.iodmax: 20
vfs.nfs.iodmin: 0
vfs.nfs.iodmaxidle: 120
vfs.devfs.rule_depth: 1
vfs.devfs.generation: 113
vfs.nfsd.disable_checkutf8: 0
vfs.nfsd.server_max_nfsvers: 4
vfs.nfsd.server_min_nfsvers: 2
vfs.nfsd.nfs_privport: 0
vfs.nfsd.async: 0
vfs.nfsd.enable_locallocks: 0
vfs.nfsd.issue_delegations: 0
vfs.nfsd.commit_miss: 0
vfs.nfsd.commit_blks: 0
vfs.nfsd.mirrormnt: 1
vfs.nfsd.minthreads: 1
vfs.nfsd.maxthreads: 1
vfs.nfsd.threads: 0
vfs.nfsd.request_space_used: 0
vfs.nfsd.request_space_used_highest: 0
vfs.nfsd.request_space_high: 13107200
vfs.nfsd.request_space_low: 8738133
vfs.nfsd.request_space_throttled: 0
vfs.nfsd.request_space_throttle_count: 0
vfs.nfsd.fha.enable: 1
vfs.nfsd.fha.bin_shift: 22
vfs.nfsd.fha.max_nfsds_per_fh: 8
vfs.nfsd.fha.max_reqs_per_nfsd: 0
vfs.nfsd.fha.fhe_stats: No file handle entries.
vfs.pfs.trace: 0
vfs.pfs.vncache.misses: 0
vfs.pfs.vncache.hits: 0
vfs.pfs.vncache.maxentries: 0
vfs.pfs.vncache.entries: 0
vfs.acl_nfs4_old_semantics: 0
vfs.flushwithdeps: 0
vfs.unmapped_buf_allowed: 1
vfs.barrierwrites: 1
vfs.notbufdflashes: 0
vfs.flushbufqtarget: 100
vfs.mappingrestarts: 0
vfs.getnewbufrestarts: 337501
vfs.getnewbufcalls: 349444
vfs.hifreebuffers: 1524
vfs.lofreebuffers: 762
vfs.numfreebuffers: 13601
vfs.dirtybufthresh: 3084
vfs.hidirtybuffers: 3427
vfs.lodirtybuffers: 1713
vfs.numdirtybuffers: 11
vfs.recursiveflushes: 341175
vfs.altbufferflushes: 0
vfs.bdwriteskip: 0
vfs.dirtybufferflushes: 0
vfs.hirunningspace: 3538944
vfs.lorunningspace: 2359296
vfs.bufdefragcnt: 0
vfs.buffreekvacnt: 339965
vfs.bufreusecnt: 347292
vfs.hibufspace: 222625792
vfs.lobufspace: 222560256
vfs.maxmallocbufspace: 11131289
vfs.bufmallocspace: 0
vfs.maxbufspace: 223281152
vfs.unmapped_bufspace: 290652160
vfs.bufspace: 291602432
vfs.runningbufspace: 131072
vfs.vmiodirenable: 1
vfs.cache.numfullpathfound: 47
vfs.cache.numfullpathfail4: 0
vfs.cache.numfullpathfail2: 0
vfs.cache.numfullpathfail1: 0
vfs.cache.numfullpathcalls: 47
vfs.cache.numupgrades: 32
vfs.cache.numneghits: 2853
vfs.cache.numnegzaps: 16
vfs.cache.numposhits: 356750
vfs.cache.numposzaps: 282
vfs.cache.nummisszap: 10
vfs.cache.nummiss: 35615
vfs.cache.numchecks: 377860
vfs.cache.dotdothits: 19
vfs.cache.dothits: 146
vfs.cache.numcalls: 395710
vfs.cache.numcache: 29909
vfs.cache.numneg: 336
vfs.ncsizefactor: 2
vfs.ncnegfactor: 16
vfs.read_min: 1
vfs.read_max: 64
vfs.write_behind: 1
vfs.typenumhash: 1
vfs.lookup_shared: 1
vfs.usermount: 0
vfs.worklist_len: 3
vfs.timestamp_precision: 0
vfs.reassignbufcalls: 515051
vfs.vlru_allow_cache_src: 0
vfs.freevnodes: 27829
vfs.wantfreevnodes: 27833
vfs.numvnodes: 29507
vfs.ffs.doreallocblks: 1
vfs.ffs.doasyncfree: 1
vfs.ffs.compute_summary_at_mount: 0
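
Experiments I'm considering (untested guesses on my part, not known fixes):

    # raise UFS cluster read-ahead from the default of 64 blocks
    sysctl vfs.read_max=128
    # check how many tagged commands the virtual disk accepts,
    # then try a deeper queue
    camcontrol tags da0 -v
    camcontrol tags da0 -N 32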
