zfs performance degradation

Paul Kraus paul at kraus-haus.org
Thu Sep 24 13:17:08 UTC 2015


On Sep 24, 2015, at 0:47, Dmitrijs <war at dim.lv> wrote:

> 2015.09.23. 23:08, Paul Kraus wrote:
>> On Sep 22, 2015, at 13:38, Dmitrijs <war at dim.lv> wrote:
>> 
>>>  I've encountered strange ZFS behavior - serious performance degradation over a few days.
>>> 
>>> Could it happen because of the pool being 78% full? So I cannot fill the pool completely?
>>> Can anyone please advise how I could fix the situation - or is it normal?
>> 
>> So the short answer (way too late for that) is that you cannot, in fact, use all of the capacity of a zpool unless the data is written once, never modified, and you do not have any snapshots, clones, or the like.
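If you want to see how close you are to that threshold on your own pool, something like the following will show allocation, fragmentation, and percent full, plus any space being held by snapshots. "tank" is just a placeholder pool name, substitute your own:

    # pool-wide allocation, fragmentation and percent full
    zpool list -o name,size,allocated,free,fragmentation,capacity tank
    # per-dataset space breakdown, including space held by snapshots
    zfs list -r -o space tank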

> Thank you very much for the explanation. Am I getting it right - it will not work faster even if I add 4 GB of RAM for 8 GB in total? I am not using deduplication or compression, nor am I planning to use them.

If you are seeing the performance degrade due to the zpool being over some capacity threshold, then adding RAM will make little difference. If you are seeing general performance issues, then adding RAM (increasing ARC) _may_ improve the performance.
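If you do want to look at the ARC before buying RAM, FreeBSD exposes its current size and ceiling through sysctl. This is just a sketch of where to look, not a tuning recommendation:

    # current ARC size and its current target, in bytes
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c
    # administrative ceiling on the ARC (a loader tunable, set in /boot/loader.conf)
    sysctl vfs.zfs.arc_max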

> So if I plan to work with data a lot, get decent performance, and still be sure I'm on the safe side with a mirror (RAID1), should I choose another filesystem? Especially if I do not really need snapshots, clones, etc.

What is your definition of “decent” performance? What does your _real_ workload look like?

Did you have performance issues doing real work which led you to look for the cause -or- were you benchmarking before trying to use the system for real work?

> Or is it not possible at all, and should I use something like RAID0 for work and tolerate a slow backup to RAID1 at night?

There are many places in ZFS where you can run into performance bottlenecks. Remember, ZFS was designed for data integrity (end-to-end checksums), data reliability (lots of ways to get redundancy), and scalability. Performance was secondary from the very beginning. There are lots of other filesystems with much better performance, but few (if any) with more protection for your data. Do not get me wrong, the performance of ZFS _can_ be very good, but you need to understand your workload and lay out the zpool to accommodate that workload.

For example, one of my critical workloads is NFS with sync writes. My zpool layout is many vdevs of 3-way mirrors with a separate ZIL device (SLOG). I have not been able to go to production with this server yet because I am waiting on backordered SSDs for the SLOG. The original SSDs I used simply did not have the small-block write performance I needed.
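For reference, attaching a SLOG to an existing pool is a one-liner; the pool name and device names below are hypothetical, and a mirrored pair is used so a dying log device cannot take recent sync writes with it:

    # add a mirrored log (SLOG) vdev to an existing pool
    zpool add tank log mirror ada1 ada2
    # verify the new log vdev appears in the pool layout
    zpool status tank

Keep in mind a SLOG only helps synchronous writes (NFS, databases, etc.); it does nothing for ordinary async workloads.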

Another example is one of my _backup_ servers, which has a 6-drive RAIDz2 zpool layout. In this case I am not terribly concerned about performance, as I am limited by the 1 Gbps network connection.

Also note that in general, the _best_ performance you can expect of any zpool layout is equivalent to _1_ drive's worth of I/O per _vdev_. So my 6-drive RAIDz2 has performance equivalent to _one_ of the drives that make up that vdev, which is fine for _my_ workload. The rule of thumb for performance that I received over on the OpenZFS mailing list a while back was to assume you can get 100 MB/sec and 100 random IOPS from a consumer SATA hard disk drive. I have seen nothing, even using “enterprise” grade HDDs, to convince me that is a bad rule of thumb. If your workload is strictly sequential you _may_ get more.
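If you want to check that rule of thumb against your own hardware, zpool iostat can break the numbers out per vdev while your real workload is running (again, "tank" is a placeholder):

    # per-vdev operations and bandwidth, sampled every 5 seconds
    zpool iostat -v tank 5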

So a zpool made up of one single vdev, no matter how many drives, will average the performance of one of those drives. It does not really matter if it is a 2-way mirror vdev, a 3-way mirror vdev, a RAIDz2 vdev, a RAIDz3 vdev, etc. This is more true for write operations than reads (mirrors can achieve higher read performance by reading from multiple copies at once).
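To make that concrete, here are two alternative ways to lay out the same six hypothetical disks (da0 through da5). The first gives roughly one drive of random I/O with about four drives of usable space; the second gives roughly three drives of random I/O with about three drives of usable space:

    # one RAIDz2 vdev: capacity-oriented layout
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    # three 2-way mirror vdevs: performance-oriented layout
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5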

--
Paul Kraus
paul at kraus-haus.org


