performance with LSI SAS 1064

Scott Long scottl at samsco.org
Thu Aug 30 07:51:32 PDT 2007


Eric Anderson wrote:
> Scott Long wrote:
>> Lutieri G. wrote:
>>> 2007/8/30, Eric Anderson <anderson at freebsd.org>:
>>>> I'm confused - you said in your first post you were getting 3MB/s,
>>>> whereas above you show something like 55MB/s.
>>> Sorry! Using blogbench I got 3MB/s and 100% busy. Since it was 100%
>>> busy, I thought that 3MB/s was the maximum speed. But I was wrong...
>>
>> %busy is a completely useless number for anything but untagged,
>> uncached disk subsystems.  It's only an indirect measure of latency, and
>> there are better tools for measuring latency (gstat).
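>>
>> For instance (da0 here is just a placeholder; substitute your actual
>> device):
>>
>>     gstat -f 'da0'
>>
>> The ms/r and ms/w columns show real per-request read/write latency,
>> which tells you far more than %busy does.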
>>
>>>> You didn't say what kind of disks, or how many, the configuration, 
>>>> etc -
>>>> so it's hard to answer much.  The 55MB/s seems pretty decent for many
>>>> hard drives in a sequential use state (which is what dd tests really).
>>>>
>>> SAS disks. Seagate; I don't know the exact model.
>>>
>>> OK. If 55MB/s is a decent speed, I'm happy. I'm having problems with
>>> a squid cache, and they may be disk-related. But... I'm still
>>> investigating and ruling things out.
>>>
>>>
>>>> Your earlier errors were probably caused by your queue depth being set
>>>> to 255 (or 256?) when the adapter can't handle that many.  You should
>>>> use camcontrol to reduce it, to maybe 32.  See the camcontrol man page
>>>> for the right usage.  It needs to be set on every boot, so a startup
>>>> file is probably a good place for it.
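>>>>
>>>> For example, something along these lines (da0 is a placeholder device
>>>> name; verify the exact syntax in camcontrol(8) for your release):
>>>>
>>>>     # show the current tag/queue settings for the drive
>>>>     camcontrol tags da0 -v
>>>>     # force the queue depth down to 32 tags
>>>>     camcontrol tags da0 -N 32
>>>>
>>>> Running the second command from /etc/rc.local (or an rc.d script)
>>>> would make it take effect on every boot.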
>>>>
>>> Is there any way to find the right number to reduce it to?!
>>>
>>
>> If you're seeing erratic performance in production _AND_ you're seeing
>> lots of accompanying messages on the console about tag depth jumping
>> around, you can use camcontrol to force the depth to a lower number of
>> your choosing.  This kind of problem is pretty rare, though.
> 
> Scott, you are far more of a SCSI guru than I, so please correct me if 
> this is incorrect.  Can't you get a good estimate by knowing the queue 
> depth of the target(s) and dividing it by the number of initiators?  So 
> in his case, he has one initiator, and (let's say) one target.  If the 
> queue depth of the target (being the Seagate SAS drive) is 128 (see 
> Seagate's paper here: 
> http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/savvio/Savvio%2015K.1/SAS/100407739b.pdf 
> ), then he should have to reduce it from 25[56] to 128, correct?
> 
> With QLogic cards connected to a fabric, I saw queue depth issues under 
> heavy load.
> 

I understand what you're saying, but you're a bit confused on 
terminology =-)

There are two factors in the calculation.  One is how many transactions
the controller (the initiator) can have in progress at once.  This is
really independent of what the disks are capable of or how many disks 
are on the bus.  This is normally known to the driver in some 
chip-specific way.  Second is how many tagged transactions a disk can
handle.  This actually isn't something that can be discovered in a
generic way, so the SCSI layer in FreeBSD guesses, and then revises that
guess over time based on feedback from the drive.
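
If you're curious, you can see what the SCSI layer has settled on for a
given drive (da0 here is just an example device):

    camcontrol tags da0 -v

The verbose output shows, among other things, how many tagged openings
the device is currently being allowed.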

Manually setting the queue depth is not something that he "should have 
to [do]".  It's perfectly normal to get console messages on occasion
about the OS re-adjusting the depth.  Where it becomes a problem is in high
latency topologies (like FC fabrics) and with buggy drive firmware, where
the algorithm winds up thrashing a bit.  For direct-attached SAS disks, I
highly doubt that it is needed.  Playing a guessing game with this will
almost certainly result in lower performance.

Scott
