24 TB UFS2 reality check?

Jeff Mohler speedtoys.racing at gmail.com
Thu Jul 10 01:02:19 UTC 2008


Let's see.. a peak of maybe 25-30 random IOPS per drive, at 15ms MINIMUM
latency per IO (likely more like 35-40ms under load).. gonna be ugly.

Complicated by normal load IOPS.. you could expect it all to simply
"disappear" for a day while it reconstructs.
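The latency/IOPS relationship behind those figures is just arithmetic; a quick sketch of it (the helper name and the exact latencies plugged in are mine, not from the thread):

```python
def iops_per_drive(latency_ms):
    """Approximate random IOPS one spinning disk sustains:
    one IO completes every latency_ms milliseconds."""
    return 1000.0 / latency_ms

# 15 ms is the best case; the 25-30 IOPS figure corresponds to the
# 35-40 ms effective per-IO latency once seeks and queueing pile up.
print(round(iops_per_drive(15)))   # 67
print(round(iops_per_drive(38)))   # 26
```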

On Wed, Jul 9, 2008 at 5:59 PM, Alexandre Biancalana
<biancalana at gmail.com> wrote:
> On 7/9/08, Juri Mianovich <juri_mian at yahoo.com> wrote:
>>
>>  Hello Jeff,
>>
>>
>>  --- On Tue, 7/8/08, Jeff Mohler <speedtoys.racing at gmail.com> wrote:
>>
>>
>>
>> > One drive has a what..maybe a 1 per 1.0 E15 bits transferred
>>  > uBER, and
>>  > you have 24x that of one drive, as each drive is its own
>>  > statistical crap
>>  > shoot.   Each drive may NEVER hit uBER for you, but one may
>>  > do it
>>  > tomorrow.
>>  >
>>  > Plus, you have commodity firmware levels on those drives
>>  > and commodity
>>  > BER mechanisms, so you COULD argue you have another 2x
>>  > liability WRT
>>  > losing it all without HEFTY raid, at least 5+1.
>>
>>
>>
>> Thank you - I understand.  You are worried because of the lack of redundancy.
>>
>>  I didn't want to make my questions any more complicated than they were, but since we are on the topic, I will tell you that _in reality_ I will not make a 24 TB array, I will in fact use the raid-6 functionality (two parity drives) of my card and make a ~22 TB array.
>>
>>  Does that address the concerns you were raising ?  Does 22 data and 2 parity (raid 6) still make you very nervous, or does that completely change the scenario you were worried about ?
>
>
> I would be less nervous if you did 2 arrays of 11 disks... what's the
> time it will take to rebuild a failed drive under your
> normal load??
>
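To put rough numbers on both the rebuild-time question and the uBER exposure raised earlier, here is a back-of-the-envelope sketch. The drive capacity and rebuild rate are my assumptions for illustration; only the 1-per-1.0E15-bits uBER figure comes from the thread:

```python
import math

DRIVE_TB = 1.0        # assumed: 24 x 1 TB drives
REBUILD_MB_S = 20.0   # assumed sustained rebuild rate under normal load
UBER = 1e-15          # 1 unrecoverable error per 1e15 bits (from the thread)

drive_bytes = DRIVE_TB * 1e12
hours = drive_bytes / (REBUILD_MB_S * 1e6) / 3600
print(f"rebuild of one drive: ~{hours:.1f} h")   # ~13.9 h

# Rebuilding one failed drive in a 24-drive RAID-6 means reading the
# other 23 drives end to end; chance of hitting at least one
# unrecoverable read error somewhere in that stream:
bits_read = 23 * drive_bytes * 8
p_ure = -math.expm1(bits_read * math.log1p(-UBER))
print(f"P(>=1 URE during rebuild): ~{p_ure:.0%}")   # ~17%
```

RAID-6 can absorb that single URE during a one-drive rebuild, which is much of the argument for two parity drives here; with the whole 24-drive set as one array, though, the rebuild read load lands on every spindle at once.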


More information about the freebsd-fs mailing list