ZFS performance bottlenecks: CPU or RAM or anything else?
Alex Tutubalin
lexa at lexa.ru
Wed May 18 08:28:11 UTC 2016
On 5/18/2016 11:02 AM, Steven Hartland wrote:
> My comment was targeted under the assumption of random IOPs workload,
> which is typically the case, where each RAIDZ group (vdev) will give
> approximately a single drive performance. For a pretty definitive
> guide / answer see:
> http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/
Thank you for the link.
In my workload (a single write stream) the IOPS count is very low and disk
write locality is good (each file is likely to fit in a single metaslab),
so bandwidth is not limited to single-drive bandwidth.
My current box (6x 7200rpm HDDs in raidz1) provides about 430 MB/s write
bandwidth over an SMB link and about 500 MB/s for local writes. That is ~100
MB/s per spindle, close enough to what I expected.
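The per-spindle figure above follows from simple arithmetic: raidz1 spends one
drive's worth of each stripe on parity, so 6 drives leave 5 carrying data. A
minimal sketch, using the 500 MB/s local-write number from above:

```python
# Back-of-the-envelope per-spindle throughput for a 6-disk raidz1 pool.
# raidz1 devotes one disk's worth of each stripe to parity, so user
# bandwidth is spread over (drives - 1) data drives.
drives = 6
parity = 1              # raidz1
pool_write_mbps = 500   # measured local write bandwidth, MB/s

data_drives = drives - parity
per_spindle = pool_write_mbps / data_drives
print(per_spindle)      # 100.0 MB/s, a plausible 7200rpm sequential rate
```

By the same arithmetic, a 12-drive raidz setup would have roughly twice the
data spindles, hence the hope for ~2x bandwidth below.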
I hope I'll see 2x the bandwidth with 2x the spindle count, provided I do not
hit another performance limiter. So my initial question was: is there any
known raidz performance limiter, such as CPU or RAM speed/latency?
>
> There's also some useful practical test results here:
> https://calomel.org/zfs_raid_speed_capacity.html
I've already posted this link in my thread-starting message :)
And yes, there is a very strange similarity in both read and write speeds
between the 6x and 10x SSD/raidz2 cases.
Unfortunately, this benchmark does not reflect my real use case, because of:
"Since the disk cache can artificially inflate the results we choose to
disable drive caches completely using Bonnie++ in synchronous test mode
only."
Synchronous mode results in double writes (ZIL first, then data); without a
separate ZIL device, the ZIL is written to the main pool.
We do not know what would happen with real-life async writes on the same
hardware.
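The double-write effect can be sketched with a simplified model (an assumption
for illustration: every sync write is logged in full to the in-pool ZIL before
being committed to its final location, the worst case for large streaming
writes; real ZFS behavior depends on logbias, record size, and write size):

```python
# Simplified model of sync-write amplification when the ZIL lives in
# the main pool: each byte is written twice (ZIL, then data), so usable
# bandwidth is roughly halved compared to async writes on the same pool.
def effective_sync_bandwidth(raw_pool_mbps, zil_on_pool=True):
    """Rough usable write bandwidth under fully synchronous writes."""
    write_amplification = 2.0 if zil_on_pool else 1.0
    return raw_pool_mbps / write_amplification

print(effective_sync_bandwidth(500))         # 250.0 MB/s, in-pool ZIL
print(effective_sync_bandwidth(500, False))  # 500.0 MB/s, separate SLOG
```

Under this model, a sync-only benchmark like the Bonnie++ runs cited above
would understate async streaming throughput by up to a factor of two.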
Alex Tutubalin
More information about the freebsd-fs mailing list