amr performance woes and a bright side

Sven Willenberger sven at dmv.com
Mon Mar 14 16:46:01 PST 2005


I have been testing a new box for ultimate use as a PostgreSQL server:
dual Opteron (2.2GHz), 8G RAM, LSI MegaRAID 320-2x (battery-backed
memory) with 2 single 73 GB drives and an 8x146GB RAID 0+1 array
(Hitachi U320 10k RPM). In doing so I have also tested the amd64
5.3-STABLE release against Gentoo x86_64 and Fedora FC3/x86_64.

First the bad news:

The Linux boxen were configured with the postgres data drives on the
RAID 0+1 using XFS, with a separate pg_xlog on a different drive. Both
Gentoo and FC3 were using 2.6.x kernels from the x86_64 distros.
pgbench was initialized with the default scaling factor (100,000 rows),
then scaling factor 10 (1 million rows) and 100 (10 million rows).
With no scaling the Linux boxen hit about 160 tps using 10 connections
and 1000-2000 transactions.
The BSD system hit 100-120 tps. That is a difference I could potentially
live with. Now enter the scaled tables:
the Linux systems hit marks of 450+ tps when pgbenching against millions
of rows, while the BSD box stayed at 100 tps or worse, dipping as low as
90 tps.
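For reference, runs like those above would be kicked off with something
along these lines (the exact flags and database name are my assumption;
I haven't reproduced the commands verbatim). pgbench creates 100,000 rows
in the accounts table per unit of scaling factor:

```shell
# Hypothetical pgbench invocations (database name "bench" is assumed):
#   pgbench -i -s 10 bench         # initialize at scaling factor 10
#   pgbench -c 10 -t 1000 bench    # 10 clients, 1000 transactions each
# Scaling factor -> rows in the accounts table (100,000 per unit):
scale=10
rows=$((scale * 100000))
echo "scale $scale = $rows rows"
```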

Bonnie benchmarks:
Linux:
Sequential output: Per Char = 65000 K/sec, Block = 658782 K/sec, Rewrite
= 639654 K/sec
Sequential input: Per Char = 66240 K/sec, Block = 1278993 K/sec
Sequential create: create 641/sec , read n/a, delete 205/sec
Random create: create 735/sec, read n/a, delete 126/sec

BSD:
Sequential output: Per Char = 370 K/sec (!!), Block = 132281 K/sec,
Rewrite = 124070 K/sec
Sequential input: Per Char = 756 K/sec, Block = 700402 K/sec
Sequential create: create 139/sec, read 6308/sec, delete n/a
Random create: create 137/sec, read 5877/sec, delete n/a

The bonnie tests were run several times with similar results.
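For what it's worth, the create/delete figures above come from a
bonnie++-style run; an invocation might look like the comment below
(directory, user, and flags are placeholders I'm assuming, not the
actual command used). The per-char sequential-output gap quoted above
works out to a striking ratio:

```shell
# Hypothetical bonnie++ invocation (file size should be at least 2x RAM,
# so 16g against the 8G box; directory and user are placeholders):
#   bonnie++ -d /raid/bench -s 16g -n 128 -u nobody
# The per-char sequential-output gap quoted above, as a rough ratio:
linux_kps=65000
bsd_kps=370
echo "per-char gap: $((linux_kps / bsd_kps))x"
```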

It would seem to me that the pgbench marks are being hampered by
comparatively poor I/O to the RAID array and disks under the amr
driver's control. I am hoping there are some tweaks I could make, or
perhaps some patches to the driver in -CURRENT that could be
applied/backported/MFC'ed, to try to improve this performance.
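In the meantime, the only knobs I know of are generic I/O and SysV
shared-memory sysctls, nothing amr-specific; something like the
following in /etc/sysctl.conf (the values are illustrative starting
points on my part, not tested recommendations):

```
# /etc/sysctl.conf -- generic tuning, NOT amr-specific (illustrative values):
vfs.read_max=16                # cluster read-ahead (default 8)
vfs.hirunningspace=1048576     # max bytes of outstanding write I/O
kern.ipc.shmmax=1073741824     # allow a larger PostgreSQL shared_buffers
kern.ipc.shmall=262144         # total shared memory, in 4k pages
```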

Oh, the "bright" side? FreeBSD is the only OS here that didn't kernel
oops due to memory allocation issues, or whatever caused them (the
backtrace showed kmalloc). That may be because of the XFS filesystem (I
didn't try ext3 or its kin), or because of issues between LSI and the
Linux kernel, or who knows what. I am hoping to get the stability and OS
performance of FreeBSD and the raw disk performance witnessed on the
Linux systems, all rolled up into one. Help?

Sven

More information about the freebsd-amd64 mailing list