FreeBSD 11.1 Beta 2 ZFS performance degradation on SSDs

Freddie Cash fjwcash at gmail.com
Tue Jun 20 19:06:31 UTC 2017


On Tue, Jun 20, 2017 at 11:50 AM, Caza, Aaron <Aaron.Caza at ca.weatherford.com> wrote:

> I've observed this performance degradation on 6 different hardware systems
> using 4 different SSD models (2x Intel 510 120GB, 2x Intel 520 120GB, 2x
> Intel 540 120GB, 2x Samsung 850 Pro) on FreeBSD 10.3-RELEASE, FreeBSD
> 10.3-RELEASE-p6, FreeBSD 10.3-RELEASE-p19, FreeBSD 10-STABLE, FreeBSD
> 11.0-RELEASE, FreeBSD 11-STABLE, and now FreeBSD 11.1 Beta 2.  In this
> latest test I'm not doing much in the way of writing - only logging the
> output of the 'dd' command, along with 'zfs-stats -a' and 'uptime', once
> an hour.  It ran for ~20 hours before the performance drop kicked in,
> though why it happens is inexplicable, as this server isn't doing anything
> other than running this test hourly.
>
> I have a FreeBSD 9.0 system using 2x Intel 520 120GB SSDs that doesn't
> exhibit this performance degradation, maintaining ~400MB/s speeds even
> after many days of uptime.  That system uses the GEOM ELI layer to provide
> 4k sector emulation for the mirrored zpool, as I previously described.
>

I don't remember if this has been mentioned yet in either of your threads
on this, but what is the output of this command on all of your poorly
performing systems:

sysctl vfs.zfs.trim.enabled

If it's set to 1 (the default), set it to 0 and re-run your tests.
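
For example, something like this (a rough sketch; on the 10.x/11.x systems
I've looked at, vfs.zfs.trim.enabled is a boot-time tunable, so if sysctl
refuses to change it at runtime, set it in /boot/loader.conf and reboot
before re-running the dd test):

# check the current value
sysctl vfs.zfs.trim.enabled

# try to disable it on the running system
sysctl vfs.zfs.trim.enabled=0

# if the sysctl is read-only on your version, add this line to
# /boot/loader.conf instead and reboot
vfs.zfs.trim.enabled=0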

ZFS TRIM support for SSDs was added in FreeBSD 10.0, so any system running
FreeBSD 10+ can show a performance drop after a while when the TRIM
machinery kicks in to clear out deleted/unused blocks, especially on an SSD
that can't process TRIM commands in parallel.

You can look at the various ZFS trim-related stats to see what it's doing:

sysctl vfs.zfs | grep trim
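
On the systems I have handy that mostly shows the trim tunables and queue
settings; the cumulative TRIM counters live under the kstat tree, roughly
like this (names from memory, so treat them as a guide rather than gospel):

# bytes trimmed, successful and failed requests, and requests the device
# reported as unsupported, all since boot
sysctl kstat.zfs.misc.zio_trim

# watch it while the hourly dd test runs, to see whether TRIM activity
# lines up with the slowdown
while true; do sysctl kstat.zfs.misc.zio_trim.bytes; sleep 60; done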

-- 
Freddie Cash
fjwcash at gmail.com

