ZFS performance bottlenecks: CPU or RAM or anything else?

Jan Bramkamp crest at rlwinm.de
Wed May 18 09:28:53 UTC 2016



On 17/05/16 14:00, Alex Tutubalin wrote:
> Hi,
>
> I'm new to the list, sorry if the subject has been discussed before
> (many times); just point me to the archives...
>
> I'm building a new storage server for 'linear read/linear write'
> performance with a limited number of parallel data streams (a load like
> reading/writing multi-gigabyte Photoshop files, or reading many large
> raw photo files).
> The target is to saturate a 10G link using SMB or iSCSI.
>
> Several years ago I tested a small zpool (5x 3TB 7200rpm drives in
> RAIDZ) with different CPU/memory combos and got these results for
> linear write speed with big chunks:
>
>  440 MB/s with a Core i3-2120 / DDR3-1600 (2-channel RAM)
>  360 MB/s with a Core i7-920 / DDR3-1333 (3-channel RAM)
>  280 MB/s with a Core 2 Quad Q9300 / DDR2-800 (2-channel RAM)
>
> Mixed thoughts: the i7-920 is the fastest of the three, and its linear
> RAM access is also the fastest, yet it is beaten by the i3-2120 with its
> lower-latency memory.
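
For reference, results like these come from nothing fancier than a large
sequential write in big chunks. A rough Python sketch of such a test; the
path, file size and chunk size below are placeholders, not necessarily
what you used:

    #!/usr/bin/env python
    # Rough sequential-write benchmark: write one large file in big
    # chunks and report the resulting throughput in MB/s.
    import os, time

    PATH = "/tank/bench/testfile"  # placeholder: file on the pool under test
    CHUNK = 1024 * 1024            # 1 MiB per write ("big chunks")
    TOTAL = 8 * 1024 ** 3          # 8 GiB total; keep it well above ARC size

    buf = os.urandom(CHUNK)        # incompressible data, in case compression is on
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.time()
    written = 0
    while written < TOTAL:
        written += os.write(fd, buf)
    os.fsync(fd)                   # make sure the data actually reached the pool
    os.close(fd)
    elapsed = time.time() - start
    print("%.1f MB/s" % (written / elapsed / 1e6))
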
>
> Also, I found this link:
> https://calomel.org/zfs_raid_speed_capacity.html
> For 6x SSD and 10x SSD in RAIDZ2, the read speeds are very similar
> (1.7 GB/s) and the write speeds are very close (721/806 MB/s for 6/10
> drives).
>
> Assuming the HBA/PCIe performance is the same for read and write
> operations, the write speed is not limited by the HBA/bus... so what is
> it limited by? CPU or RAM or ...?
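
That reasoning can be pushed one step further: in RAIDZ2 every drive also
writes its share of parity, so the per-drive rate is higher than the user
data rate suggests, yet still (if I read the calomel numbers right) well
below what a single SATA SSD sustains for sequential writes. A quick
back-of-the-envelope in Python:

    # Back-of-the-envelope: per-drive write rate in RAIDZ2, i.e. user
    # data plus parity spread over all drives, using the calomel numbers.
    def per_drive_write(user_mb_s, n_drives, parity=2):
        total_written = user_mb_s * n_drives / float(n_drives - parity)
        return total_written / n_drives

    print("%.0f MB/s per drive (6x SSD)" % per_drive_write(721, 6))    # ~180
    print("%.0f MB/s per drive (10x SSD)" % per_drive_write(806, 10))  # ~101

Around 100-180 MB/s per SSD points away from the drives and the HBA and
towards the rest of the pipeline.
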
>
> So my question is: what CPU/memory is optimal for ZFS performance?
>
> In particular:
>   - DDR3 or DDR4 (twice the bandwidth)?
>   - a limited number of cores at a high clock rate (e.g. i3-6xxxx), or
> many cores at a slower clock?
>
> No plans to use compression or deduplication, only RAIDZ2 with 8-10 HDD
> spindles and 3-5 SSDs for L2ARC.

Don't forget that you're not just benchmarking CPUs. You're measuring
whole systems with different disk controllers, memory controllers,
interrupt routing, etc. For example, the Core 2 CPU is limited by its old
design, which puts the memory controller in the northbridge.

Maybe you can reduce some of the differences by using the same PCI-e SAS 
HBA in each system.
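
You can also take the disks out of the picture entirely and compare just
the memory subsystems of the three boxes. STREAM is the proper tool for
that; the sketch below is only a few lines of Python and only good enough
for a rough ranking:

    # Very rough memory-bandwidth check: time one large in-memory copy.
    # Good enough to rank machines against each other; use STREAM for
    # real numbers.
    import time

    SIZE = 1 * 1024 ** 3           # 1 GiB working set
    src = bytearray(SIZE)
    start = time.time()
    dst = bytes(src)               # one full pass through the memory controller
    elapsed = time.time() - start
    assert len(dst) == SIZE        # keep a reference to the copy
    print("%.1f GB/s copied" % (SIZE / elapsed / 1e9))
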

