FreeBSD 11.1 Beta 2 ZFS performance degradation on SSDs

Karl Denninger karl at denninger.net
Tue Jun 20 19:23:29 UTC 2017


On 6/20/2017 13:50, Caza, Aaron wrote:
>
> I've observed this performance degradation on 6 different hardware systems using 4 different SSD models (2x Intel 510 120GB, 2x Intel 520 120GB, 2x Intel 540 120GB, 2x Samsung 850 Pro) on FreeBSD 10.3-RELEASE, 10.3-RELEASE-p6, 10.3-RELEASE-p19, 10-STABLE, 11.0-RELEASE, 11-STABLE and now 11.1-BETA2.  In this latest testing I'm not doing much in the way of writing - only logging the output of the 'dd' command, along with 'zfs-stats -a' and 'uptime', once an hour.  It ran for ~20 hours before the performance drop kicked in, though why it happens is inexplicable, as this server isn't doing anything other than running this test hourly.
>
> I have a FreeBSD 9.0 system using 2x Intel 520 120GB SSDs that doesn't exhibit this performance degradation, maintaining ~400MB/s speeds even after many days of uptime.  This is using the GEOM ELI layer to provide 4k sector emulation for the mirrored zpool, as I previously described.
>
> Interestingly, using the GEOM ELI layering, I was seeing the following:
> - FreeBSD 10.3-RELEASE : ~750MB/s when dd'ing a 16GB file
> - FreeBSD 10-STABLE    : ~850MB/s when dd'ing a 16GB file
> - FreeBSD 11-STABLE    : ~950MB/s when dd'ing a 16GB file
>
> During the above testing, which was all done after reboot, gstat would show %busy of 90-95%.  When performance degradation hits, %busy drops to ~15%.
>
> Switching to FreeBSD 11.1-BETA2 with the installer's Auto (ZFS) ashift-based 4k emulation for the ZFS mirrored pool:
> - FreeBSD 11.1-BETA2   : ~450MB/s when dd'ing a 16GB file, with gstat %busy of ~60%.  When the performance degradation hits, %busy drops to ~15%.
>
> Now, I expected that removing the GEOM ELI layer and just using vfs.zfs.min_auto_ashift=12 to do the 4k sector emulation would provide even better performance.  It seems strange to me that it doesn't.
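
For reference, the ashift-based setup and the hourly read test Aaron describes might look roughly like the sketch below; the pool name, device names, file path, and log location are placeholders rather than his actual configuration:

# Set before pool creation so ZFS selects ashift=12 (4k sectors)
# without a GEOM ELI shim underneath:
sysctl vfs.zfs.min_auto_ashift=12
zpool create tank mirror ada0 ada1

# Hourly: log uptime, dd read throughput (dd reports to stderr),
# and ARC statistics.
while true; do
    { date; uptime; } >> /var/log/zfs-perf.log
    dd if=/tank/test16g of=/dev/null bs=1m 2>> /var/log/zfs-perf.log
    zfs-stats -a >> /var/log/zfs-perf.log
    sleep 3600
done

# In another terminal, gstat shows the %busy figures quoted above.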

Here's a data point from one of my production systems (albeit one in
"hot spare" mode) with ~20 days of uptime; that's plenty of time to
saturate whatever is degrading, and this system DOES have my patch in it:

[\u@NewFS /dbms]# ls -al
total 65580101
drwxr-xr-x   4 root   wheel            5 Jun 20 14:06 .
drwxr-xr-x  45 root   wheel           55 Jun  1 10:58 ..
-rw-r-----   1 root   wheel  33554432000 Jun 20 14:13 test
drwxr-xr-x   2 root   wheel            2 Feb  4  2016 ticker-9.5
drwx------  19 pgsql  wheel           29 Apr 29 16:51 ticker-9.6
[\u@NewFS /dbms]# dd if=test of=/dev/null bs=1m
32000+0 records in
32000+0 records out
33554432000 bytes transferred in 43.023505 secs (779909306 bytes/sec)
[\u@NewFS /dbms]# uname -v
FreeBSD 11.0-STABLE #15 r312669M: Mon Jan 23 14:01:03 CST 2017    
karl@NewFS.denninger.net:/usr/obj/usr/src/sys/KSD-SMP

~780 MB/s, more or less; the test file is ~3x the size of RAM and was
created with dd from /dev/random, so it should not benefit from
compression (which IS on).
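
For anyone wanting to recreate that kind of test file, something along these lines works; the path is illustrative, and the 32000 x 1MB count matches the 33554432000-byte file in the listing above:

# Random data defeats lz4, and a file ~3x the size of RAM keeps the
# subsequent read from being served out of ARC.
dd if=/dev/random of=/dbms/test bs=1m count=32000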

This is with a 128KB recordsize and lz4 compression on that particular
ZFS dataset.  The physical pool is a mirrored pair of Intel 730s, and
while this system is reasonably quiet right now, it's not quiescent.
ARC target and fill at present are 8.43GB (out of ~12GB physical RAM).
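
The dataset and ARC figures above can be confirmed with something like the following; the dataset name is a placeholder:

# Record size and compression on the dataset:
zfs get recordsize,compression pool/dbms

# ARC target size (c) and current size, in bytes:
sysctl kstat.zfs.misc.arcstats.c kstat.zfs.misc.arcstats.size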

-- 
Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/