Samsung 840 Pro SSD and quirks

Borja Marcos borjam at sarenet.es
Mon Sep 1 16:11:54 UTC 2014


On Sep 1, 2014, at 5:44 PM, Steven Hartland wrote:

> We saw a noticeable performance increase on 4k on our 8TB 840
> array but I too couldn't find any concrete information either.
> 
> If anyone has this info and can confirm either way that would
> be great.

I don't have actual numbers, just a recollection that I tried it and didn't find significant differences using bonnie++ on a ZFS pool. I also
recall that, according to the kstat.zfs sysctl variables, TRIM was indeed working.

Just in case, I am repeating the tests right now. I still have the pre-quirks kernel around, and I have a pool defined with the default 512-byte blocks.
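(The exact bonnie++ invocation isn't shown here; for anyone wanting to reproduce, a typical run matching the 96G size and 16-file count visible in the reports below would look something like this. The mount point is hypothetical, flags per the bonnie++ manual.)

```shell
# -d: scratch directory on the pool under test (hypothetical path)
# -s: file size for the sequential I/O phase (96g, matching the reports)
# -n: number of files for the create/delete phase (16, matching the reports)
# -u: user to run as when started from root
bonnie++ -d /pool/bench -s 96g -n 16 -u root
```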

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
elibm           96G   123  99 670496  97 310330  63   303  99 818483  56  6281 165
Latency             93190us   20227us     448ms   41198us     454ms   26375us
Version  1.97       ------Sequential Create------ --------Random Create--------
elibm               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 25723  98 +++++ +++ 24559  98 12694  99 31135 100  4810  99
Latency             15192us      97us     130us   23708us     355us    1199us
1.97,1.97,elibm,1,1409588162,96G,,123,99,670496,97,310330,63,303,99,818483,56,6281,165,16,,,,,25723,98,+++++,+++,24559,98,12694,99,31135,100,4810,99,93190us,20227us,448ms,41198us,454ms,26375us,15192us,97us,130us,23708us,355us,1199us

After a reboot, destroying and recreating the pool:

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
elibm           96G   128  99 675094  98 323692  67   303  99 862380  58  9530 189
Latency             64726us   48676us     389ms   36398us     505ms   15594us
Version  1.97       ------Sequential Create------ --------Random Create--------
elibm               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24857  97 +++++ +++ 20422  98 21836  98 +++++ +++ 17786  97
Latency             15422us     102us     785us   24590us     125us     170us
1.97,1.97,elibm,1,1409588443,96G,,128,99,675094,98,323692,67,303,99,862380,58,9530,189,16,,,,,24857,97,+++++,+++,20422,98,21836,98,+++++,+++,17786,97,64726us,48676us,389ms,36398us,505ms,15594us,15422us,102us,785us,24590us,125us,170us
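For what it's worth, the trailing comma-separated line in each report is bonnie++'s machine-readable output, so the two runs can be diffed field by field rather than eyeballed. A minimal sketch (field positions per bonnie++ 1.97's CSV format, where field 10 is the sequential block-write throughput in K/sec):

```shell
# Extract field 10 (sequential block-write K/sec) from a bonnie++ CSV line.
blkwrite() { printf '%s\n' "$1" | awk -F, '{print $10}'; }

# The two CSV result lines, copied verbatim from the reports above.
run512='1.97,1.97,elibm,1,1409588162,96G,,123,99,670496,97,310330,63,303,99,818483,56,6281,165,16,,,,,25723,98,+++++,+++,24559,98,12694,99,31135,100,4810,99,93190us,20227us,448ms,41198us,454ms,26375us,15192us,97us,130us,23708us,355us,1199us'
run4k='1.97,1.97,elibm,1,1409588443,96G,,128,99,675094,98,323692,67,303,99,862380,58,9530,189,16,,,,,24857,97,+++++,+++,20422,98,21836,98,+++++,+++,17786,97,64726us,48676us,389ms,36398us,505ms,15594us,15422us,102us,785us,24590us,125us,170us'

echo "512B pool block write: $(blkwrite "$run512") K/sec"   # prints 670496
echo "4K pool block write:   $(blkwrite "$run4k") K/sec"    # prints 675094
```

The block-write delta between the two runs is well under 1%, consistent with "more or less similar" below.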




The results seem to be more or less similar. I have checked the kstat.zfs counters, and in both cases TRIM was working: the count of unsupported TRIMs was 0, while the success and bytes counters grew as they should.
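(For reference, these counters live under the kstat.zfs sysctl tree on FreeBSD; the OID names below are what I believe they are on recent versions, but verify against your own system with `sysctl -a | grep trim`.)

```shell
# Assumed OIDs; check your FreeBSD version before relying on these.
sysctl kstat.zfs.misc.zio_trim.success        # TRIM requests completed
sysctl kstat.zfs.misc.zio_trim.unsupported    # TRIMs the device rejected
sysctl kstat.zfs.misc.zio_trim.bytes          # total bytes trimmed
```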

What am I missing? Note that I am not against preemptive 4K quirk strikes :) I am now repeating the comparison with multiple concurrent bonnie++ instances, just in case. Or, what did you use to run the test?

Thanks!




Borja.



More information about the freebsd-scsi mailing list