FreeBSD 7.1 disk performance issue on ESXi 3.5

Ivan Voras ivoras at
Wed Feb 11 11:33:30 PST 2009

2009/2/11 Antony Mawer <fbsd-performance at>:

> How would one go about gathering data on such a scenario to help improve
> this? We were planning a project involving VMware deployments with FreeBSD
> 7.1 systems in the near future, but if performance is that bad it is likely
> to be a show stopper.

I have now tested it under ESXi 3.5, and here's what I find:

In FreeBSD 7.1 amd64 with 4 vCPUs, dbench performance is:
1 proc: 155 MB/s, 2 proc: 175 MB/s, 4 proc: 188 MB/s
The same performance *as reported by VMware's Infrastructure Client*
("performance" tab): around 50 MB/s in all cases.
Visual inspection of the drives' LED indicators (a 2-drive 10k RPM
hardware RAID0 array) confirms constant activity.

In Ubuntu 8.10 amd64 with 4 vCPUs, dbench performance is:
1 proc: 375 MB/s, 2 proc: 660 MB/s, 4 proc: 1055 MB/s (sic!)
The same performance *as reported by VMware's Infrastructure Client*:
around 25 MB/s in all cases (sic!)
Visual inspection of the drives: very sporadic activity.

The maximum performance expected from this array is around 150 MB/s
*at peaks* - there is physically no way it can sustain more than that,
so I judge the Ubuntu in-guest measurements bogus.
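The plausibility argument can be sketched in a few lines; the 150 MB/s
ceiling is the estimate quoted above, and the function name is mine,
purely illustrative:

```python
# Hypothetical sanity check: a guest-reported sustained rate above the
# physical array's peak must be coming from a cache somewhere, not disks.

ARRAY_PEAK_MBPS = 150  # estimated peak of the 2-drive 10k RPM RAID0 array


def is_plausible(reported_mbps, peak=ARRAY_PEAK_MBPS):
    """True if the reported sustained throughput could be real disk I/O."""
    return reported_mbps <= peak


# The Ubuntu dbench figures from above:
for procs, mbps in [(1, 375), (2, 660), (4, 1055)]:
    verdict = "plausible" if is_plausible(mbps) else "cached/bogus"
    print("%d proc: %d MB/s -> %s" % (procs, mbps, verdict))
```

By this measure all three Ubuntu numbers, and none of the FreeBSD
ones, exceed what the hardware can do.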

This is all very strange. Something here is caching more than it
should, and it looks like it's VMware. It doesn't look like clock skew
in the guests, since "iostat 1" et al. tick at about 1 s of wallclock time.
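The clock-skew sanity check amounts to timing a nominal one-second
interval against the wallclock; here is a rough illustration of the
idea (the helper is mine, not from any tool):

```python
# Illustrative guest clock-skew check: time a nominal sleep interval
# against a monotonic wallclock. A badly skewed guest clock would make
# per-second iostat figures (and thus MB/s rates) misleading.
import time


def measure_interval_drift(nominal=1.0):
    """Return how far a nominal sleep deviates from wallclock, in seconds."""
    start = time.monotonic()
    time.sleep(nominal)
    return time.monotonic() - start - nominal


if __name__ == "__main__":
    print("drift: %.3f s" % measure_interval_drift())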
The "visual inspection" oddity inspired me to do another benchmark:

Bonnie++ reports:
For FreeBSD: write: 52 MB/s, rewrite: 21 MB/s, read: 45 MB/s

For Linux: write: 141 MB/s, rewrite: 55 MB/s, read: 168 MB/s

VMware's Infrastructure Client agrees with these performance
measurements in both cases, and the drive LEDs blink as expected.
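Doing the arithmetic on those bonnie++ numbers, the Linux advantage
works out to roughly 2.6-3.7x per workload (the dicts below just
restate the figures from the text):

```python
# Linux/FreeBSD throughput ratios from the bonnie++ results above (MB/s).
freebsd = {"write": 52, "rewrite": 21, "read": 45}
linux = {"write": 141, "rewrite": 55, "read": 168}

for op in freebsd:
    ratio = linux[op] / freebsd[op]
    print("%s: %.1fx faster on Linux" % (op, ratio))
```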

As I and others have previously demonstrated, Linux usually has
significantly better file system performance in the non-virtualized
case, so the difference could simply be amplified by the
