dead slow update servers

hw hw at
Mon Jul 15 04:26:33 UTC 2019

"Kevin P. Neal" <kpn at> writes:

> On Mon, Jul 15, 2019 at 01:23:43AM +0200, hw wrote:
>> Karl Denninger <karl at> writes:
>> > On 7/14/2019 00:10, hw wrote:
>> >> "Kevin P. Neal" <kpn at> writes:
>> >>
>> >>> On Sat, Jul 13, 2019 at 05:39:51AM +0200, hw wrote:
>> >>>> ZFS is great when you have JBODs while storage performance is
>> >>>> irrelevant.  I do not have JBODs, and in almost all cases, storage
>> >>>> performance is relevant.
>> >>> Huh? Is a _properly_ _designed_ ZFS setup really slower? A raidz
>> >>> setup of N drives gets you the performance of roughly 1 drive, but a
>> >>> mirror gets you the write performance of a titch less than one drive
>> >>> with the read performance of N drives. How does ZFS hurt performance?
>> >> Performance is hurt when you have N disks and only get the performance
>> >> of a single disk from them.
>> >
>> > There's no free lunch.  If you want two copies of the data (or one plus
>> > parity) you must write two copies.  The second one doesn't magically
>> > appear.  If you think it did you were conned by something that is
>> > cheating (e.g. said it had written something when in fact it was sitting
>> > in a DRAM chip) and, at a bad time, you're going to discover it was
>> > cheating.
>> >
>> > Murphy is a SOB.
>> I'm not sure what your point is.  Even RAID5 gives you better
>> performance than raidz because it doesn't limit you to a single disk.
> I don't see how this is possible. With either RAID5 or raidz enough
> drives have to be written to recover the data at a minimum. And since
> raidz1 uses the same number of drives as RAID5 it should have similar
> performance characteristics. So read and write performance of raidz1
> should be about the same as RAID5 -- about the speed of a single disk
> since the disks will be returning data roughly in parallel.

Well, if you follow [1], then in theory the performance could be the
same with no more than 4 disks.
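To put the rule of thumb into numbers, here's a back-of-the-envelope
model (my own simplification, not a benchmark; the per-disk IOPS figure
is an assumed value for a 7200 rpm disk): a raidz1/RAID5 vdev delivers
roughly the random IOPS of one disk, while a stripe of mirrors scales
reads with the number of disks and writes with the number of mirrors.

```sh
# Back-of-the-envelope model of random-I/O scaling (a simplification,
# not a benchmark).  DISK_IOPS=150 is an assumed figure for one
# 7200 rpm disk.
DISK_IOPS=150
N=4                                   # disks in the vdev/array

# raidz1/RAID5: every random I/O touches the whole stripe, so the
# vdev behaves roughly like a single disk.
RAIDZ1_IOPS=$DISK_IOPS

# Stripe of two-way mirrors: reads can come from any disk, writes
# must land on both halves of each mirror.
MIRROR_READ_IOPS=$((N * DISK_IOPS))
MIRROR_WRITE_IOPS=$((N / 2 * DISK_IOPS))

echo "raidz1 of $N disks:  ~$RAIDZ1_IOPS IOPS"
echo "mirror stripe reads: ~$MIRROR_READ_IOPS IOPS, writes: ~$MIRROR_WRITE_IOPS IOPS"
```

For sequential streaming I/O the gap narrows, which is why raidz often
looks fine in throughput tests but falls behind on random workloads.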


> What have you been testing RAID5 with? Bursty loads with large amounts
> of raid controller cache? Of course that's going to appear faster since
> you are writing to memory and not disk in the very short term. But a
> sustained amount of traffic will show raidz1 and RAID5 about the same.

I have been very happy with the overall system performance since I
switched from software RAID5 (mdraid) to a hardware RAID controller,
using the same disks.  The difference was like night and day, and the
cache on the controller was only 512MB.

I suspect that the mainboard I was using had trouble handling
concurrent data transfers to multiple disks, and that the CPU wasn't
great at it either.  That might explain why the system was so sluggish
before the change to hardware RAID.  It was used as a desktop with a
little bit of server stuff running, and just having it all running
seemed to create sluggishness even without much actual load.

Other than that, I'm finding ZFS disappointingly slow (on entirely
different hardware than what was used above), while hardware RAID has
always been nicely fast.
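Comparing "fast" and "slow" is easier with a sustained-write test that
defeats the controller's cache.  A minimal sketch using dd (TARGET is
a placeholder path; a dedicated tool like fio would give better
numbers, and FreeBSD's dd may want a lowercase bs=1m):

```sh
# Sustained sequential write test.  conv=fdatasync makes dd flush to
# disk before reporting, so a RAID controller's DRAM cache cannot
# make the result look faster than the disks really are.
# TARGET is a placeholder -- point it at the filesystem under test.
TARGET=${TARGET:-/tmp/ddtest.bin}
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1
rm -f "$TARGET"
```

Run long enough to exhaust the cache (well past 512MB in your case)
and the controller's apparent advantage on bursty loads should shrink.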

> Oh, and my Dell machines are old enough that I'm stuck with the hardware
> RAID controller. I use ZFS and have raid0 arrays configured with single
> drives in each. I _hate_ it. When a drive fails the machine reboots and
> the controller hangs the boot until I drive out there and dump the card's
> cache. It's just awful.

That doesn't sound like a good setup.  Usually, nothing reboots when a
drive fails.

Would it be a disadvantage to put all the drives into a single RAID10
(or half of them into each of two) and run ZFS on top of it (or them)
if you want to keep ZFS?
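For illustration, such a pool would just stripe across the controller's
volumes.  A hypothetical sketch (the device names are placeholders for
whatever the controller exposes); the trade-off is that with the
individual disks hidden behind the controller, ZFS can still detect
corruption via checksums but can no longer repair it from a redundant
copy:

```sh
# Hypothetical sketch: a pool striped across two hardware RAID10
# volumes.  mfid0/mfid1 are placeholder device names.
zpool create tank /dev/mfid0 /dev/mfid1
zpool status tank
```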

> Now Dell offers a vanilla HBA on the "same" server as an
> option. *phew*

That's cool.

More information about the freebsd-questions mailing list