8-STABLE Slow Write Speeds on ESXi 4.0

Jeremy Chadwick freebsd at jdc.parodius.com
Tue Aug 10 04:05:21 UTC 2010


On Mon, Aug 09, 2010 at 11:59:46PM -0400, Joshua Boyd wrote:
> On Mon, Aug 9, 2010 at 12:11 PM, Jeremy Chadwick <freebsd at jdc.parodius.com> wrote:
> 
> > On Mon, Aug 09, 2010 at 05:12:21PM +0200, Ivan Voras wrote:
> > > On 9 August 2010 16:55, Joshua Boyd <boydjd at jbip.net> wrote:
> > > > On Sat, Aug 7, 2010 at 1:58 PM, Ivan Voras <ivoras at freebsd.org> wrote:
> > > >>
> > > >> On 7 August 2010 19:03, Joshua Boyd <boydjd at jbip.net> wrote:
> > > >> > On Sat, Aug 7, 2010 at 7:57 AM, Ivan Voras <ivoras at freebsd.org> wrote:
> > > >>
> > > >> >> It's unlikely they will help, but try:
> > > >> >>
> > > >> >> vfs.read_max=32
> > > >> >>
> > > >> >> for read speeds (but test using the UFS file system, not as a raw
> > > >> >> device
> > > >> >> like above), and:
> > > >> >>
> > > >> >> vfs.hirunningspace=8388608
> > > >> >> vfs.lorunningspace=4194304
> > > >> >>
> > > >> >> for writes. Again, it's unlikely but I'm interested in results you
> > > >> >> achieve.
> > > >> >>
> > > >> >
> > > >> > This is interesting. Write speeds went up to 40MBish. Still slow,
> > > >> > but 4x faster than before.
> > > >> > [root at git ~]# dd if=/dev/zero of=/var/testfile bs=1M count=250
> > > >> > 250+0 records in
> > > >> > 250+0 records out
> > > >> > 262144000 bytes transferred in 6.185955 secs (42377288 bytes/sec)
> > > >> > [root at git ~]# dd if=/var/testfile of=/dev/null
> > > >> > 512000+0 records in
> > > >> > 512000+0 records out
> > > >> > 262144000 bytes transferred in 0.811397 secs (323077424 bytes/sec)
> > > >> > So read speeds are up to what they should be, but write speeds are
> > > >> > still significantly below what they should be.
> > > >>
> > > >> Well, you *could* double the size of the "runningspace" tunables and
> > > >> try that :)
> > > >>
> > > >> Basically, in tuning these two settings we are cheating: we increase
> > > >> read-ahead (read_max) and in-flight write buffering (runningspace) in
> > > >> order to hand off as much I/O to the controller (in this case VMware)
> > > >> as early as possible, amortizing the costly I/O-induced context
> > > >> switches VMware incurs. That helps sequential performance, but
> > > >> nothing can help random I/O.
> > > >
> > > > Hmm. So what you're saying is that FreeBSD doesn't properly support
> > > > the ESXi controller?
> > >
> > > Nope, I'm saying you will never get raw disk-like performance with any
> > > "full" virtualization product, regardless of specifics. If you want
> > > performance, use OS-level virtualization (like jails) or some form of
> > > paravirtualization.
> > >
> > > > I'm going to try 7.3-RELEASE today, just to make sure that this isn't
> > > > a regression of some kind. It seems from reading other posts that this
> > > > used to work properly and satisfactorily.
> > >
> > > Nope, I've been messing around with VMware for a long time and the
> > > performance penalty was always there.
> >
> > I thought Intel VT-d was supposed to help address things like this?
> >
> 
> Our ESXI boxes are AMD rigs, so VT-d doesn't help here.

AMD offers the same technology; it's called AMD-Vi these days, and was
previously known as IOMMU.  I don't have any familiarity with it.
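
For reference, a sketch of how the tunables discussed upthread are applied on
FreeBSD. The values are the exact ones Ivan suggested, not recommendations;
vfs.read_max counts in filesystem blocks, while the runningspace values are in
bytes:

```shell
# Try the values from upthread at runtime (as root); they take effect
# immediately and are lost on reboot.
sysctl vfs.read_max=32              # cluster read-ahead, in blocks
sysctl vfs.hirunningspace=8388608   # in-flight write buffering high watermark (8 MB)
sysctl vfs.lorunningspace=4194304   # in-flight write buffering low watermark (4 MB)

# To make the settings persistent, append the same lines to /etc/sysctl.conf:
# vfs.read_max=32
# vfs.hirunningspace=8388608
# vfs.lorunningspace=4194304
```

Note also that the read benchmark above used dd's default 512-byte block size;
for a comparison that matches the write test, something like
"dd if=/var/testfile of=/dev/null bs=1M" keeps the block size consistent.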

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
