8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

Dan Naumov dan.naumov at gmail.com
Mon Jan 25 07:34:53 UTC 2010


On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
<bfriesen at simple.dallas.tx.us> wrote:
> On Mon, 25 Jan 2010, Dan Naumov wrote:
>>
>> I've checked with the manufacturer and it seems that the Sil3124 in
>> this NAS is indeed a PCI card. More info on the card in question is
>> available at
>> http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
>> I have the card described later on the page, the one with 4 SATA ports
>> and no eSATA. Alright, so it being PCI is probably a bottleneck in
>> some ways, but that still doesn't explain the performance THAT bad,
>> considering that same hardware, same disks, same disk controller push
>> over 65MB/s in both reads and writes in Win2008. And again, I am
>> pretty sure that I've had "close to expected" results when I was
>
> The slow PCI bus and this card look like the bottleneck to me. Remember that
> your Win2008 tests were with just one disk, your zfs performance with just
> one disk was similar to Win2008, and your zfs performance with a mirror was
> just under 1/2 that.
>
> I don't think that your performance results are necessarily out of line for
> the hardware you are using.
>
> On an old Sun SPARC workstation with retrofitted 15K RPM drives on Ultra-160
> SCSI channel, I see a zfs mirror write performance of 67,317KB/second and a
> read performance of 124,347KB/second.  The drives themselves are capable of
> 100MB/second range performance. Similar to yourself, I see 1/2 the write
> performance due to bandwidth limitations.
>
> Bob

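Doing some rough arithmetic on that (assuming the card sits in a
plain 32-bit/33 MHz PCI slot):

    PCI 32-bit/33 MHz:  ~133 MB/s theoretical ceiling, less in practice
    2-disk mirror:      every written block crosses the bus twice
    sustaining ~65 MB/s of user writes would thus need ~130 MB/s of bus
    bandwidth, which the bus can't deliver, so mirrored write throughput
    dropping to around half the single-disk figure is roughly what the
    shared bus predicts.
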
There is a lot of very sweet irony in my particular situation.
Initially I was planning to use a single 80GB X25-M SSD on a
motherboard SATA port for the OS installation and to dedicate 50GB of
it to a designated L2ARC vdev for my ZFS mirrors. For some reason the
SSD attached to the motherboard port was only recognized as a SATA150
device, but I was still seeing 150MB/s throughput and sub-0.1 ms
latencies on that disk, simply because of how good the X25-M is.
However, I ended up having very bad issues with the Icydock
2.5"-to-3.5" converter bracket I was using to fit the SSD into the
system: under heavy load it would randomly drop write IO due to bad
connectors. Having finally figured out why my OS installations on the
SSD kept going belly up while applying updates, I decided to move the
SSD to my desktop and use it there instead, also thinking that perhaps
the SSD idea was overkill for what I need this system to do.
Ironically, now that I see how horrible the performance is when
operating on the mirror through this PCI card, I realize my idea was
actually pretty bloody brilliant, I just didn't really know why at the
time.
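
For anyone wanting to sanity-check an SSD the same way, something
along these lines does the job (the device name is just an example):

    # diskinfo -tv ad6      (seek time / transfer rate benchmark)
    # gstat -f ad6          (watch per-device latency: ms/r, ms/w)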

An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance
I would also need a dedicated SLOG (ZIL) device. The catch is that
while L2ARC devices can be removed from the pool at will (should the
device up and die all of a sudden), a dedicated ZIL cannot: currently
a "missing" ZIL device renders the pool it belongs to unimportable and
therefore inaccessible. There is work happening in Solaris to
implement removing a SLOG from a pool, but that work hasn't found its
way into FreeBSD yet.
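
Just to illustrate the asymmetry (the pool name "tank" and the SSD
partitions below are only placeholders):

    # zpool add tank cache ad6s2    (L2ARC; can later be dropped with
                                     "zpool remove tank ad6s2")
    # zpool add tank log ad6s3      (dedicated ZIL/SLOG; no matching
                                     remove today, and if it dies the
                                     pool won't import)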


- Sincerely,
Dan Naumov
