practical maximum number of drives

Graham Allan allan at physics.umn.edu
Wed Feb 5 14:43:00 UTC 2014



On 2/4/2014 11:36 PM, aurfalien wrote:
> Hi Graham,
>
> When you say behaved better with 1 HBA, what were the issues that
> made you go that route?

It worked fine in general with 3 HBAs for a while, but OTOH 2 of the 
drive chassis were very lightly used (and note I was being quite 
conservative, keeping each chassis as an independent zfs pool).
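
To be concrete, by "independent pool" I just mean something along these 
lines - pool and device names here are made up for illustration, not 
our actual layout:

   # one pool per JBOD chassis, no vdev spanning enclosures
   zpool create chassis1 raidz2 da0 da1 da2 da3 da4 da5
   zpool create chassis2 raidz2 da6 da7 da8 da9 da10 da11

The idea being that losing one chassis could only ever take down that 
one pool.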

Actual problems occurred once while I was away, but our notes show we 
hit some kind of repeated i/o deadlock. As well as all drive i/o 
stopping, we also couldn't use the sg_ses utilities to query the 
enclosures. This recurred several times after restarts throughout the 
day, and eventually "we" (again, I wasn't here) removed the extra HBAs 
and daisy-chained all the chassis together. An inspired hunch, I guess. 
No issues since then.
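
(For reference, querying the enclosures normally works fine with 
something like the following - the ses device name will vary by system:

   sg_ses /dev/ses0            # no options: list supported diagnostic pages
   sg_ses --page=2 /dev/ses0   # enclosure status page (0x02)

During the hangs even those queries didn't work.)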

Coincidentally a few days later I saw a message on this list from Xin Li 
"Re: kern/177536: [zfs] zfs livelock (deadlock) with high write-to-disk 
load":

  One problem we found in field that is not easy to reproduce is that
  there is a lost interrupt issue in FreeBSD core.  This was fixed in
  r253184 (post-9.1-RELEASE and before 9.2, the fix will be part of the
  upcoming FreeBSD 9.2-RELEASE):

  http://svnweb.freebsd.org/base/stable/9/sys/kern/kern_intr.c?r1=249402&r2=253184&view=patch

  The symptom of this issue is that you basically see a lot of processes
  blocking on zio->zio_cv, while there is no disk activity.  However,
  the information you have provided can neither prove nor deny my guess.
  I post the information here so people are aware of this issue if they
  search these terms.

Something else suggested to me that multiple mps adapters would make 
this worse, but I'm not quite sure what it was. That issue shouldn't 
exist after 9.1 anyway.
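
(For anyone who wants to check for that symptom on their own boxes, I 
believe something like this shows the wait channels and kernel stacks - 
exact output will of course vary:

   ps -axl | grep zio    # long listing incl. wait channel (MWCHAN); look for zio entries
   procstat -kk -a       # kernel stack traces for every process

If lots of processes are parked on a zio-related wait channel with no 
disk activity, that's the pattern described above.)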

> Also, curious that you have that many drives on 1 PCI card, is it PCI
> 3 etc… and is saturation an issue?

Pretty sure it's PCIe 2.x, but we haven't seen any saturation issues. 
That was of course the motivation for using separate HBAs in the initial 
design, but it was more of a hypothetical concern than a real one - at 
least given our use pattern at present. This is more backing storage; 
the more intensive i/o usually goes to a hadoop filesystem.
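
(Rough numbers for the curious, assuming an x8 card: PCIe 2.x does 
about 500 MB/s per lane each way after 8b/10b encoding, so ~4 GB/s for 
the slot, while one 4-lane 6Gb/s SAS link tops out around 2.4 GB/s. A 
few dozen drives streaming sequentially could in principle exceed that, 
but our workload on this storage never comes close.)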

Graham

