Poor performance on Intel P3600 NVME driver

Mihai Vintila unixro at gmail.com
Tue Jan 20 10:36:16 UTC 2015


I've recreated the tests on FreeBSD 9.3 and 10.1 after re-installing 
everything:
Things to note for Jim:
- a firmware upgrade for the SSDs fixed the errors I was seeing
- disabling ASPM in the BIOS stopped the nvme controller from disappearing
  (a quick check is shown below)
- forcing x16 in the BIOS fixed the nvme lockup
- the driver is stable now; I've posted the requested information below and
  would appreciate it if you could take a look and confirm there isn't a
  hardware configuration issue.
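
In case it helps, a quick way to double-check that ASPM really is off on both
controllers (plain pciconf plus grep; device names as on this box):

pciconf -lc nvme0 | grep ASPM     # should report "ASPM disabled"
pciconf -lc nvme1 | grep ASPM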




Now both 9.3 and 10.1 seem stable with the default nvme settings. But I've 
found where the 50x read penalty between FreeBSD 9.3 ZFS / ZFS on Linux and 
FreeBSD 10.1 shows up:

Basically:
Pool created with compression=lz4, atime=off, recordsize=4k, and TRIM disabled.
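
For completeness, the pool setup amounts to roughly the following (a sketch 
only -- the pool name "tank" and device nvd1 are placeholders, and the gnop 
step reflects the 4k alignment discussed further down the thread, not 
necessarily the exact commands I ran):

gnop create -S 4096 /dev/nvd1        # force 4k sector reporting for alignment
zpool create tank nvd1.nop
zfs set compression=lz4 tank
zfs set atime=off tank
zfs set recordsize=4k tank
# TRIM is disabled via /boot/loader.conf:
#   vfs.zfs.trim.enabled=0
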
On FreeBSD 9.3-p5:
Command line used: iozone -Rb /root/output.wks -O -i 0 -i 1 -i 2 -e -+n -r4K -r 8K -s 1G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.

                                                     random   random
        KB  reclen    write  rewrite     read   reread     read    write
   1048576       4    61142        0   209444        0   180129    41660
   1048576       8    30643        0   118759        0   102781    24244
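
For anyone reading along, the iozone flags used above break down as follows
(standard iozone option semantics; this also explains the zero rewrite/reread
columns):

# -i 0 -i 1 -i 2   write/rewrite, read/reread and random read/write tests
# -e               include flush (fsync/fflush) in the timing
# -+n              no retests, so the rewrite and reread columns report 0
# -O               results in operations per second instead of KB/s
# -r4K -r 8K       4 KiB and 8 KiB record sizes
# -s 1G            1 GiB test file
# -Rb <file>       write an Excel-compatible report to <file>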

Same pool imported on FreeBSD 10.1:
Command line used: iozone -Rb /root/output.wks -O -i 0 -i 1 -i 2 -e -+n -r4K -r 8K -s 1G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.

                                                     random   random
        KB  reclen    write  rewrite     read   reread     read    write
   1048576       4    66777        0   184234        0   154268    48701
   1048576       8    34338        0   143999        0   124259    26672

Same pool imported, but this time upgraded (embedded_data added), on FreeBSD 10.1:
Command line used: iozone -Rb /root/output.wks -O -i 0 -i 1 -i 2 -e -+n -r4K -r 8K -s 1G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.

                                                     random   random
        KB  reclen    write  rewrite     read   reread     read    write
   1048576       4    72048        0   103035        0    92246    45885
   1048576       8    39076        0    63655        0    59997    24857
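
The upgrade itself was just a plain zpool upgrade; something along these lines
confirms the feature state afterwards (sketch only, pool name "tank" assumed):

zpool upgrade tank                       # enable all supported feature flags
zpool get feature@embedded_data tank     # should now show "enabled" or "active"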


For Jim, the data requested:

Perftest on FreeBSD 9.3-p5:
nvmecontrol perftest -n 32 -o read -s 4096 -t30 nvme1ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:  220880 MB/s:  862
nvmecontrol perftest -n 32 -o write -s 4096 -t30 nvme1ns1
Threads: 32 Size:   4096 WRITE Time:  30 IO/s:  193949 MB/s:  757

Perftest on FreeBSD 10.1:
  nvmecontrol perftest -n 32 -o read -s 4096 -t30 nvme1ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:  218798 MB/s:  854
root at nvme:~ # nvmecontrol perftest -n 32 -o write -s 4096 -t30 nvme1ns1
Threads: 32 Size:   4096 WRITE Time:  30 IO/s:  212680 MB/s:  830
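
As a quick sanity check, the reported MB/s lines up with IO/s times the
4096-byte transfer size (assuming MB here means MiB):

# read:  220880 IO/s * 4096 B ~= 862 MiB/s
# write: 212680 IO/s * 4096 B ~= 830 MiB/s
echo $((220880 * 4096 / 1048576))    # 862
echo $((212680 * 4096 / 1048576))    # 830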


I don't think the driver is the issue, since I get the same performance with 
ZFS on Linux, but please let me know if you can spot any hardware 
configuration issues in the output below.

root at nvme:~ # pciconf -lc nvme0
nvme0 at pci0:4:0:0:       class=0x010802 card=0x370a8086 chip=0x09538086 
rev=0x01 hdr=0x00
     cap 01[40] = powerspec 3  supports D0 D3  current D0
     cap 11[50] = MSI-X supports 32 messages, enabled
                  Table in map 0x10[0x2000], PBA in map 0x10[0x3000]
     cap 10[60] = PCI-Express 2 endpoint max data 256(256) FLR link x4(x4)
                  speed 8.0(8.0) ASPM disabled(L0s/L1)
     ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
     ecap 0002[150] = VC 1 max VC0
     ecap 0004[180] = Power Budgeting 1
     ecap 000e[190] = ARI 1
     ecap 0003[270] = Serial 1 55cd2e404bce7951
     ecap 0019[2a0] = PCIe Sec 1 lane errors 0
root at nvme:~ # pciconf -lc nvme1
nvme1 at pci0:5:0:0:       class=0x010802 card=0x370a8086 chip=0x09538086 
rev=0x01 hdr=0x00
     cap 01[40] = powerspec 3  supports D0 D3  current D0
     cap 11[50] = MSI-X supports 32 messages, enabled
                  Table in map 0x10[0x2000], PBA in map 0x10[0x3000]
     cap 10[60] = PCI-Express 2 endpoint max data 256(256) FLR link x4(x4)
                  speed 8.0(8.0) ASPM disabled(L0s/L1)
     ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
     ecap 0002[150] = VC 1 max VC0
     ecap 0004[180] = Power Budgeting 1
     ecap 000e[190] = ARI 1
     ecap 0003[270] = Serial 1 55cd2e404bce8781
     ecap 0019[2a0] = PCIe Sec 1 lane errors 0
root at nvme:~ # nvmecontrol identify nvme0
Controller Capabilities/Features
================================
Vendor ID:                  8086
Subsystem Vendor ID:        8086
Serial Number:              CVMD427400562P0JGN
Model Number:               INTEL SSDPE2ME020T4
Firmware Version:           8DV10110
Recommended Arb Burst:      0
IEEE OUI Identifier:        e4 d2 5c
Multi-Interface Cap:        00
Max Data Transfer Size:     131072

Admin Command Set Attributes
============================
Security Send/Receive:       Not Supported
Format NVM:                  Supported
Firmware Activate/Download:  Supported
Abort Command Limit:         4
Async Event Request Limit:   4
Number of Firmware Slots:    1
Firmware Slot 1 Read-Only:   No
Per-Namespace SMART Log:     No
Error Log Page Entries:      64
Number of Power States:      1

NVM Command Set Attributes
==========================
Submission Queue Entry Size
   Max:                       64
   Min:                       64
Completion Queue Entry Size
   Max:                       16
   Min:                       16
Number of Namespaces:        1
Compare Command:             Not Supported
Write Uncorrectable Command: Supported
Dataset Management Command:  Supported
Volatile Write Cache:        Not Present
root at nvme:~ # nvmecontrol identify nvme0ns1
Size (in LBAs):              3907029168 (3726M)
Capacity (in LBAs):          3907029168 (3726M)
Utilization (in LBAs):       3907029168 (3726M)
Thin Provisioning:           Not Supported
Number of LBA Formats:       7
Current LBA Format:          LBA Format #00
LBA Format #00: Data Size:   512  Metadata Size:     0
LBA Format #01: Data Size:   512  Metadata Size:     8
LBA Format #02: Data Size:   512  Metadata Size:    16
LBA Format #03: Data Size:  4096  Metadata Size:     0
LBA Format #04: Data Size:  4096  Metadata Size:     8
LBA Format #05: Data Size:  4096  Metadata Size:    64
LBA Format #06: Data Size:  4096  Metadata Size:   128
root at nvme:~ # nvmecontrol identify nvme1
Controller Capabilities/Features
================================
Vendor ID:                  8086
Subsystem Vendor ID:        8086
Serial Number:              CVMD427400AY2P0JGN
Model Number:               INTEL SSDPE2ME020T4
Firmware Version:           8DV10110
Recommended Arb Burst:      0
IEEE OUI Identifier:        e4 d2 5c
Multi-Interface Cap:        00
Max Data Transfer Size:     131072

Admin Command Set Attributes
============================
Security Send/Receive:       Not Supported
Format NVM:                  Supported
Firmware Activate/Download:  Supported
Abort Command Limit:         4
Async Event Request Limit:   4
Number of Firmware Slots:    1
Firmware Slot 1 Read-Only:   No
Per-Namespace SMART Log:     No
Error Log Page Entries:      64
Number of Power States:      1

NVM Command Set Attributes
==========================
Submission Queue Entry Size
   Max:                       64
   Min:                       64
Completion Queue Entry Size
   Max:                       16
   Min:                       16
Number of Namespaces:        1
Compare Command:             Not Supported
Write Uncorrectable Command: Supported
Dataset Management Command:  Supported
Volatile Write Cache:        Not Present
root at nvme:~ # nvmecontrol identify nvme1ns1
Size (in LBAs):              3907029168 (3726M)
Capacity (in LBAs):          3907029168 (3726M)
Utilization (in LBAs):       3907029168 (3726M)
Thin Provisioning:           Not Supported
Number of LBA Formats:       7
Current LBA Format:          LBA Format #00
LBA Format #00: Data Size:   512  Metadata Size:     0
LBA Format #01: Data Size:   512  Metadata Size:     8
LBA Format #02: Data Size:   512  Metadata Size:    16
LBA Format #03: Data Size:  4096  Metadata Size:     0
LBA Format #04: Data Size:  4096  Metadata Size:     8
LBA Format #05: Data Size:  4096  Metadata Size:    64
LBA Format #06: Data Size:  4096  Metadata Size:   128
root at nvme:~ # nvmecontrol logpage -p 1 nvme0
Error Information Log
=====================
Entry 01
=========
  Error count:          165
  Submission queue ID:  2
  Command ID:           125
  Status:
   Phase tag:           0
   Status code:         128
   Status code type:    2
   More:                1
   DNR:                 0
  Error location:       0
  LBA:                  1953514256
  Namespace ID:         1
  Vendor specific info: 0
root at nvme:~ # nvmecontrol logpage -p 1 nvme1
Error Information Log
=====================
Entry 01
=========
  Error count:          50
  Submission queue ID:  9
  Command ID:           127
  Status:
   Phase tag:           0
   Status code:         194
   Status code type:    0
   More:                1
   DNR:                 0
  Error location:       16384
  LBA:                  0
  Namespace ID:         1
  Vendor specific info: 0
root at nvme:~ # nvmecontrol logpage -p 2 nvme0
SMART/Health Information Log
============================
Critical Warning State:         0x00
  Available spare:               0
  Temperature:                   0
  Device reliability:            0
  Read only:                     0
  Volatile memory backup:        0
Temperature:                    296 K, 22.85 C, 73.13 F
Available spare:                100
Available spare threshold:      10
Percentage used:                0
Data units (512 byte) read:     0x0000000000000000000000000cfcb16b
Data units (512 byte) written:  0x0000000000000000000000004d0a48e9
Host read commands:             0x00000000000000000000000005c2bf55
Host write commands:            0x000000000000000000000000057e69a8
Controller busy time (minutes): 0x00000000000000000000000000000004
Power cycles:                   0x00000000000000000000000000000139
Power on hours:                 0x0000000000000000000000000000001d
Unsafe shutdowns:               0x0000000000000000000000000000018a
Media errors:                   0x00000000000000000000000000000000
No. error info log entries:     0x00000000000000000000000000000024
root at nvme:~ # nvmecontrol logpage -p 2 nvme1
SMART/Health Information Log
============================
Critical Warning State:         0x00
  Available spare:               0
  Temperature:                   0
  Device reliability:            0
  Read only:                     0
  Volatile memory backup:        0
Temperature:                    295 K, 21.85 C, 71.33 F
Available spare:                100
Available spare threshold:      10
Percentage used:                0
Data units (512 byte) read:     0x0000000000000000000000000968863e
Data units (512 byte) written:  0x00000000000000000000000074092691
Host read commands:             0x00000000000000000000000004d40e0d
Host write commands:            0x000000000000000000000000057c45f5
Controller busy time (minutes): 0x00000000000000000000000000000004
Power cycles:                   0x000000000000000000000000000000a5
Power on hours:                 0x0000000000000000000000000000000c
Unsafe shutdowns:               0x00000000000000000000000000000101
Media errors:                   0x00000000000000000000000000000000
No. error info log entries:     0x0000000000000000000000000000002f

Best regards,
Vintila Mihai Alexandru

On 1/19/2015 6:22 PM, Jim Harris wrote:
>
>
> On Sat, Jan 17, 2015 at 6:29 AM, Oliver Pinter 
> <oliver.pinter at hardenedbsd.org <mailto:oliver.pinter at hardenedbsd.org>> 
> wrote:
>
>     Added Jim to thread, as he is the nvme driver's author.
>
>
> Thanks Oliver.
>
> Hi Mihai-Alexandru,
>
> Can you start by sending me the following?
>
> pciconf -lc nvme0
> pciconf -lc nvme1
> nvmecontrol identify nvme0
> nvmecontrol identify nvme0ns1
> nvmecontrol identify nvme1
> nvmecontrol identify nvme1ns1
> nvmecontrol logpage -p 1 nvme0
> nvmecontrol logpage -p 1 nvme1
> nvmecontrol logpage -p 2 nvme0
> nvmecontrol logpage -p 2 nvme1
>
> I see mention of a FW update, but it wasn't clear if you have run 
> nvmecontrol perftest after the FW update?  If not, could you run those 
> same nvmecontrol perftest runs again?
>
> Thanks,
>
> -Jim
>
>
>
>
>     On Sat, Jan 17, 2015 at 10:26 AM, Mihai-Alexandru Vintila
>     <unixro at gmail.com <mailto:unixro at gmail.com>> wrote:
>     > Trim is already disabled as you can see in previous mail
>     >
>     > Best regards,
>     > Mihai Vintila
>     >
>     >> On 17 ian. 2015, at 01:24, Steven Hartland
>     <killing at multiplay.co.uk <mailto:killing at multiplay.co.uk>> wrote:
>     >>
>     >> Any difference if you disable trim?
>     >>
>     >>> On 16/01/2015 23:07, Mihai Vintila wrote:
>     >>> I've redone the test with atime=off. The drive has 512b physical
>     >>> sectors, but I've created the pool with a 4k gnop anyway. Results
>     >>> are similar to those with atime on:
>     >>>        Processor cache line size set to 32 bytes.
>     >>>        File stride size set to 17 * record size.
>     >>>                                              random  random
>     >>>              KB  reclen   write rewrite    read  reread    read   write
>     >>>         1048576       4   74427       0  101744       0   93529   47925
>     >>>         1048576       8   39072       0   64693       0   61104   25452
>     >>>
>     >>> I've also tried to increase vfs.zfs.vdev.aggregation_limit and
>     ended up with a crash (screenshot attached)
>     >>>
>     >>> I'm attaching zfs tunables:
>     >>> sysctl -a|grep vfs.zfs
>     >>> vfs.zfs.arc_max: 34359738368
>     >>> vfs.zfs.arc_min: 4294967296
>     >>> vfs.zfs.arc_average_blocksize: 8192
>     >>> vfs.zfs.arc_meta_used: 5732232
>     >>> vfs.zfs.arc_meta_limit: 8589934592
>     >>> vfs.zfs.l2arc_write_max: 8388608
>     >>> vfs.zfs.l2arc_write_boost: 8388608
>     >>> vfs.zfs.l2arc_headroom: 2
>     >>> vfs.zfs.l2arc_feed_secs: 1
>     >>> vfs.zfs.l2arc_feed_min_ms: 200
>     >>> vfs.zfs.l2arc_noprefetch: 1
>     >>> vfs.zfs.l2arc_feed_again: 1
>     >>> vfs.zfs.l2arc_norw: 1
>     >>> vfs.zfs.anon_size: 32768
>     >>> vfs.zfs.anon_metadata_lsize: 0
>     >>> vfs.zfs.anon_data_lsize: 0
>     >>> vfs.zfs.mru_size: 17841664
>     >>> vfs.zfs.mru_metadata_lsize: 858624
>     >>> vfs.zfs.mru_data_lsize: 13968384
>     >>> vfs.zfs.mru_ghost_size: 0
>     >>> vfs.zfs.mru_ghost_metadata_lsize: 0
>     >>> vfs.zfs.mru_ghost_data_lsize: 0
>     >>> vfs.zfs.mfu_size: 4574208
>     >>> vfs.zfs.mfu_metadata_lsize: 465408
>     >>> vfs.zfs.mfu_data_lsize: 4051456
>     >>> vfs.zfs.mfu_ghost_size: 0
>     >>> vfs.zfs.mfu_ghost_metadata_lsize: 0
>     >>> vfs.zfs.mfu_ghost_data_lsize: 0
>     >>> vfs.zfs.l2c_only_size: 0
>     >>> vfs.zfs.dedup.prefetch: 1
>     >>> vfs.zfs.nopwrite_enabled: 1
>     >>> vfs.zfs.mdcomp_disable: 0
>     >>> vfs.zfs.dirty_data_max: 4294967296
>     >>> vfs.zfs.dirty_data_max_max: 4294967296
>     >>> vfs.zfs.dirty_data_max_percent: 10
>     >>> vfs.zfs.dirty_data_sync: 67108864
>     >>> vfs.zfs.delay_min_dirty_percent: 60
>     >>> vfs.zfs.delay_scale: 500000
>     >>> vfs.zfs.prefetch_disable: 1
>     >>> vfs.zfs.zfetch.max_streams: 8
>     >>> vfs.zfs.zfetch.min_sec_reap: 2
>     >>> vfs.zfs.zfetch.block_cap: 256
>     >>> vfs.zfs.zfetch.array_rd_sz: 1048576
>     >>> vfs.zfs.top_maxinflight: 32
>     >>> vfs.zfs.resilver_delay: 2
>     >>> vfs.zfs.scrub_delay: 4
>     >>> vfs.zfs.scan_idle: 50
>     >>> vfs.zfs.scan_min_time_ms: 1000
>     >>> vfs.zfs.free_min_time_ms: 1000
>     >>> vfs.zfs.resilver_min_time_ms: 3000
>     >>> vfs.zfs.no_scrub_io: 0
>     >>> vfs.zfs.no_scrub_prefetch: 0
>     >>> vfs.zfs.metaslab.gang_bang: 131073
>     >>> vfs.zfs.metaslab.fragmentation_threshold: 70
>     >>> vfs.zfs.metaslab.debug_load: 0
>     >>> vfs.zfs.metaslab.debug_unload: 0
>     >>> vfs.zfs.metaslab.df_alloc_threshold: 131072
>     >>> vfs.zfs.metaslab.df_free_pct: 4
>     >>> vfs.zfs.metaslab.min_alloc_size: 10485760
>     >>> vfs.zfs.metaslab.load_pct: 50
>     >>> vfs.zfs.metaslab.unload_delay: 8
>     >>> vfs.zfs.metaslab.preload_limit: 3
>     >>> vfs.zfs.metaslab.preload_enabled: 1
>     >>> vfs.zfs.metaslab.fragmentation_factor_enabled: 1
>     >>> vfs.zfs.metaslab.lba_weighting_enabled: 1
>     >>> vfs.zfs.metaslab.bias_enabled: 1
>     >>> vfs.zfs.condense_pct: 200
>     >>> vfs.zfs.mg_noalloc_threshold: 0
>     >>> vfs.zfs.mg_fragmentation_threshold: 85
>     >>> vfs.zfs.check_hostid: 1
>     >>> vfs.zfs.spa_load_verify_maxinflight: 10000
>     >>> vfs.zfs.spa_load_verify_metadata: 1
>     >>> vfs.zfs.spa_load_verify_data: 1
>     >>> vfs.zfs.recover: 0
>     >>> vfs.zfs.deadman_synctime_ms: 1000000
>     >>> vfs.zfs.deadman_checktime_ms: 5000
>     >>> vfs.zfs.deadman_enabled: 1
>     >>> vfs.zfs.spa_asize_inflation: 24
>     >>> vfs.zfs.txg.timeout: 5
>     >>> vfs.zfs.vdev.cache.max: 16384
>     >>> vfs.zfs.vdev.cache.size: 0
>     >>> vfs.zfs.vdev.cache.bshift: 16
>     >>> vfs.zfs.vdev.trim_on_init: 0
>     >>> vfs.zfs.vdev.mirror.rotating_inc: 0
>     >>> vfs.zfs.vdev.mirror.rotating_seek_inc: 5
>     >>> vfs.zfs.vdev.mirror.rotating_seek_offset: 1048576
>     >>> vfs.zfs.vdev.mirror.non_rotating_inc: 0
>     >>> vfs.zfs.vdev.mirror.non_rotating_seek_inc: 1
>     >>> vfs.zfs.vdev.max_active: 1000
>     >>> vfs.zfs.vdev.sync_read_min_active: 32
>     >>> vfs.zfs.vdev.sync_read_max_active: 32
>     >>> vfs.zfs.vdev.sync_write_min_active: 32
>     >>> vfs.zfs.vdev.sync_write_max_active: 32
>     >>> vfs.zfs.vdev.async_read_min_active: 32
>     >>> vfs.zfs.vdev.async_read_max_active: 32
>     >>> vfs.zfs.vdev.async_write_min_active: 32
>     >>> vfs.zfs.vdev.async_write_max_active: 32
>     >>> vfs.zfs.vdev.scrub_min_active: 1
>     >>> vfs.zfs.vdev.scrub_max_active: 2
>     >>> vfs.zfs.vdev.trim_min_active: 1
>     >>> vfs.zfs.vdev.trim_max_active: 64
>     >>> vfs.zfs.vdev.aggregation_limit: 131072
>     >>> vfs.zfs.vdev.read_gap_limit: 32768
>     >>> vfs.zfs.vdev.write_gap_limit: 4096
>     >>> vfs.zfs.vdev.bio_flush_disable: 0
>     >>> vfs.zfs.vdev.bio_delete_disable: 0
>     >>> vfs.zfs.vdev.trim_max_bytes: 2147483648
>     >>> vfs.zfs.vdev.trim_max_pending: 64
>     >>> vfs.zfs.max_auto_ashift: 13
>     >>> vfs.zfs.min_auto_ashift: 9
>     >>> vfs.zfs.zil_replay_disable: 0
>     >>> vfs.zfs.cache_flush_disable: 0
>     >>> vfs.zfs.zio.use_uma: 1
>     >>> vfs.zfs.zio.exclude_metadata: 0
>     >>> vfs.zfs.sync_pass_deferred_free: 2
>     >>> vfs.zfs.sync_pass_dont_compress: 5
>     >>> vfs.zfs.sync_pass_rewrite: 2
>     >>> vfs.zfs.snapshot_list_prefetch: 0
>     >>> vfs.zfs.super_owner: 0
>     >>> vfs.zfs.debug: 0
>     >>> vfs.zfs.version.ioctl: 4
>     >>> vfs.zfs.version.acl: 1
>     >>> vfs.zfs.version.spa: 5000
>     >>> vfs.zfs.version.zpl: 5
>     >>> vfs.zfs.vol.mode: 1
>     >>> vfs.zfs.trim.enabled: 0
>     >>> vfs.zfs.trim.txg_delay: 32
>     >>> vfs.zfs.trim.timeout: 30
>     >>> vfs.zfs.trim.max_interval: 1
>     >>>
>     >>> And nvm:
>     >>> dev.nvme.%parent:
>     >>> dev.nvme.0.%desc: Generic NVMe Device
>     >>> dev.nvme.0.%driver: nvme
>     >>> dev.nvme.0.%location: slot=0 function=0
>     handle=\_SB_.PCI0.BR3A.D08A
>     >>> dev.nvme.0.%pnpinfo: vendor=0x8086 device=0x0953
>     subvendor=0x8086 subdevice=0x370a class=0x010802
>     >>> dev.nvme.0.%parent: pci4
>     >>> dev.nvme.0.int_coal_time: 0
>     >>> dev.nvme.0.int_coal_threshold: 0
>     >>> dev.nvme.0.timeout_period: 30
>     >>> dev.nvme.0.num_cmds: 811857
>     >>> dev.nvme.0.num_intr_handler_calls: 485242
>     >>> dev.nvme.0.reset_stats: 0
>     >>> dev.nvme.0.adminq.num_entries: 128
>     >>> dev.nvme.0.adminq.num_trackers: 16
>     >>> dev.nvme.0.adminq.sq_head: 12
>     >>> dev.nvme.0.adminq.sq_tail: 12
>     >>> dev.nvme.0.adminq.cq_head: 8
>     >>> dev.nvme.0.adminq.num_cmds: 12
>     >>> dev.nvme.0.adminq.num_intr_handler_calls: 7
>     >>> dev.nvme.0.adminq.dump_debug: 0
>     >>> dev.nvme.0.ioq0.num_entries: 256
>     >>> dev.nvme.0.ioq0.num_trackers: 128
>     >>> dev.nvme.0.ioq0.sq_head: 69
>     >>> dev.nvme.0.ioq0.sq_tail: 69
>     >>> dev.nvme.0.ioq0.cq_head: 69
>     >>> dev.nvme.0.ioq0.num_cmds: 811845
>     >>> dev.nvme.0.ioq0.num_intr_handler_calls: 485235
>     >>> dev.nvme.0.ioq0.dump_debug: 0
>     >>> dev.nvme.1.%desc: Generic NVMe Device
>     >>> dev.nvme.1.%driver: nvme
>     >>> dev.nvme.1.%location: slot=0 function=0
>     handle=\_SB_.PCI0.BR3B.H000
>     >>> dev.nvme.1.%pnpinfo: vendor=0x8086 device=0x0953
>     subvendor=0x8086 subdevice=0x370a class=0x010802
>     >>> dev.nvme.1.%parent: pci5
>     >>> dev.nvme.1.int_coal_time: 0
>     >>> dev.nvme.1.int_coal_threshold: 0
>     >>> dev.nvme.1.timeout_period: 30
>     >>> dev.nvme.1.num_cmds: 167
>     >>> dev.nvme.1.num_intr_handler_calls: 163
>     >>> dev.nvme.1.reset_stats: 0
>     >>> dev.nvme.1.adminq.num_entries: 128
>     >>> dev.nvme.1.adminq.num_trackers: 16
>     >>> dev.nvme.1.adminq.sq_head: 12
>     >>> dev.nvme.1.adminq.sq_tail: 12
>     >>> dev.nvme.1.adminq.cq_head: 8
>     >>> dev.nvme.1.adminq.num_cmds: 12
>     >>> dev.nvme.1.adminq.num_intr_handler_calls: 8
>     >>> dev.nvme.1.adminq.dump_debug: 0
>     >>> dev.nvme.1.ioq0.num_entries: 256
>     >>> dev.nvme.1.ioq0.num_trackers: 128
>     >>> dev.nvme.1.ioq0.sq_head: 155
>     >>> dev.nvme.1.ioq0.sq_tail: 155
>     >>> dev.nvme.1.ioq0.cq_head: 155
>     >>> dev.nvme.1.ioq0.num_cmds: 155
>     >>> dev.nvme.1.ioq0.num_intr_handler_calls: 155
>     >>> dev.nvme.1.ioq0.dump_debug: 0
>     >>>
>     >>> Best regards,
>     >>> Vintila Mihai Alexandru
>     >>>
>     >>>> On 1/17/2015 12:13 AM, Barney Wolff wrote:
>     >>>> I suspect Linux defaults to noatime - at least it does on my
>     rpi.  I
>     >>>> believe the FreeBSD default is the other way.  That may
>     explain some
>     >>>> of the difference.
>     >>>>
>     >>>> Also, did you use gnop to force the zpool to start on a 4k
>     boundary?
>     >>>> If not, and the zpool happens to be offset, that's another
>     big hit.
>     >>>> Same for ufs, especially if the disk has logical sectors of
>     512 but
>     >>>> physical of 4096.  One can complain that FreeBSD should
>     prevent, or
>     >>>> at least warn about, this sort of foot-shooting.
>     >>>>
>     >>>>> On Fri, Jan 16, 2015 at 10:21:07PM +0200, Mihai-Alexandru
>     Vintila wrote:
>     >>>>> @Barney Wolff it's a new pool; the only changes are recordsize=4k
>     >>>>> and compression=lz4. On Linux the test is on ext4 with default
>     >>>>> values. The penalty is pretty high. There is also a read penalty
>     >>>>> between UFS and ZFS. Even in nvmecontrol perftest you can see the
>     >>>>> read penalty; it's not normal to get the same result for both
>     >>>>> write and read.
>     >>>
>     >>> _______________________________________________
>     >>> freebsd-stable at freebsd.org <mailto:freebsd-stable at freebsd.org>
>     mailing list
>     >>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>     >>> To unsubscribe, send any mail to
>     "freebsd-stable-unsubscribe at freebsd.org
>     <mailto:freebsd-stable-unsubscribe at freebsd.org>"
>     >>
>     >> _______________________________________________
>     >> freebsd-stable at freebsd.org <mailto:freebsd-stable at freebsd.org>
>     mailing list
>     >> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>     >> To unsubscribe, send any mail to
>     "freebsd-stable-unsubscribe at freebsd.org
>     <mailto:freebsd-stable-unsubscribe at freebsd.org>"
>     > _______________________________________________
>     > freebsd-stable at freebsd.org <mailto:freebsd-stable at freebsd.org>
>     mailing list
>     > http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>     > To unsubscribe, send any mail to
>     "freebsd-stable-unsubscribe at freebsd.org
>     <mailto:freebsd-stable-unsubscribe at freebsd.org>"
>
>


