Question about bottleneck in storage

John Fleming john at spikefishsolutions.com
Sun Sep 29 15:35:57 UTC 2019


On Tue, Sep 24, 2019 at 1:09 PM Warner Losh <imp at bsdimp.com> wrote:
>
>
>
> On Tue, Sep 24, 2019 at 5:46 PM John Fleming <john at spikefishsolutions.com> wrote:
>>
>> Is there any way to see how busy a SAS/SATA controller is vs the disks? I
>> have an R720 with 14 Samsung 860 EVOs in it (it's a lab server) in RAID
>> 10 ZFS.
>>
>> When firing off a dd (bs=1G count=10) it seems like the disks never go
>> above 50% busy. I'm trying to figure out if I'm maxing out SATA 3 bandwidth
>> or if it's something else (like terrible dd options).
>
>
> Two points to consider here. First, NVMe has lots of queues and needs lots of concurrent transactions to saturate, so the 50% busy means you are nowhere close to saturating the drives. Schedule more I/O to fix that. It's better to do lots and lots of concurrent dd runs to different parts of the drive, or to use fio with the aio kernel option and the posixaio ioengine.
>
> I use the following script, but often need to increase the number of threads / jobs to saturate.
>
> ; SSD testing: 128k I/O 64 jobs 32 deep queue
>
> [global]
> direct=1
> rw=randread
> refill_buffers
> norandommap
> randrepeat=0
> bs=128k
> ioengine=posixaio
> iodepth=32
> numjobs=64
> runtime=60
> group_reporting
> thread
>
> [ssd128k]
>
I didn't catch what utility that config was for. I started poking around
with iozone and bonnie++.
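
(Looks like that's an fio job file. Assuming it gets saved as
ssd128k.fio -- the filename is my guess -- I'd expect the run to be
just:

    fio ssd128k.fio

with the aio module loaded if it isn't already built into the kernel,
per the aio kernel option Warner mentioned.)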

BTW, these are SATA, not NVMe.
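
If I go the concurrent-dd route instead, I'm assuming something like
this does what Warner described (device name, offsets and counts are
made up for illustration; da1 is one of the array members):

    # 8 parallel readers, each starting 100000 blocks further into the disk
    # (skip and count are in bs-sized 128k blocks)
    for i in $(seq 0 7); do
        dd if=/dev/da1 of=/dev/null bs=128k skip=$((i * 100000)) count=100000 &
    done
    wait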

> Second, the system's %busy statistics are misleading. They are the % of the time that a command is outstanding on the drive. 100% busy can be a tiny percentage of the total bandwidth you can get from the drive.
>
>>
>> My setup is a Dell R720 with 2 x LSI 9361 cards. Each card goes to a
>> dedicated 8-drive board inside the front of the R720. Basically I'm
>> just saying it's not a single SAS cable to 14 drives.
>>
>> Don't have the CPU info at hand... Xeon something. DDR3-1600 (128 GB).
>>
>> Both controllers are in x8 slots running PCIe Gen 3.
>>
>> BTW, I'm sure this has been asked a million times, but what would be
>> some decent benchmark tests while I'm at it?
>
>
> See above... :)
>
> Warner

So my UPS got angry and shut everything down. I figured this was a
good chance to look at iostat again.

This is while the array is being scrubbed.

I'm very happy with these numbers!
BTW, da0 and da8 are the OS drives, not RAID 10 members.
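
(The columns below are FreeBSD's extended iostat output; I believe it
was captured with something along the lines of "iostat -x -w 1" while
the scrub was running. gstat shows a similar per-device %busy.)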

extended device statistics
device       r/s     w/s     kr/s     kw/s  ms/r  ms/w  ms/o  ms/t qlen  %b
da0            0       0      0.0      0.0     0     0     0     0    0   0
da1         4003       7 505202.5    207.6     0     0     1     0    2 100
da2         3980      10 508980.2    265.5     0     0     0     0    2 100
da3         3904       8 499675.8    183.1     0     0     0     0    2  99
da4         3850       8 488870.5    263.9     0     0     0     0    2 100
da5         4013      11 513640.6    178.8     0     0     1     0    2 100
da6         3851      13 489035.8    286.4     0     0     1     0    2 100
da7         3931      12 503197.6    271.6     0     0     0     0    2 100
da8            0       0      0.0      0.0     0     0     0     0    0   0
da9         4002       8 505164.1    207.6     0     0     1     0    2 100
da10        3981      10 509133.8    265.5     0     0     0     0    2 100
da11        3905       8 499791.0    183.1     0     0     0     0    2 100
da12        3851       9 488985.6    263.9     0     0     0     0    2 100
da13        4012      11 513576.6    178.8     0     0     1     0    2 100
da14        3850      14 488971.8    286.4     0     0     0     0    2 100
da15        3930      12 503108.0    271.6     0     0     0     0    2 100
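
Back of the envelope, assuming roughly 550 MB/s of usable bandwidth per
SATA 3 link after encoding overhead: each member is reading about
500 MB/s during the scrub, so the individual drives are close to the
per-link limit, and 14 drives x ~500 MB/s is roughly 7 GB/s aggregate,
which still fits within the two PCIe 3.0 x8 slots (roughly 7.9 GB/s
each) the controllers sit in.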

