graid3 - requirements or manpage wrong?
Eirik Øverby
ltning at anduin.net
Wed Nov 24 10:34:02 PST 2004
On 24. Nov 2004, at 18:11, Pawel Jakub Dawidek wrote:
> On Wed, Nov 24, 2004 at 10:54:07AM +0100, Eirik Øverby wrote:
> +> to the best of my ability I have been investigating the 'real'
> +> requirements of a raid-3 array, and I cannot see how the following
> +> text from graid3(8) can possibly be correct - and if it is, then the
> +> implementation must be wrong or incomplete (emphasis added):
> +>
> +> label    Create a RAID3 device. The last given component will
> +>          contain parity data, all the rest - regular data.
> +>          ***Number of components has to be equal to 3, 5, 9, 17,
> +>          etc. (2^n + 1).***
> +>
> +> I might be wrong, but I cannot see how a raid-3 array should
> +> require (2^n + 1) drives - I am fairly certain I have seen raid-3
> +> arrays consisting of four drives, for example. This is also what I
> +> had hoped to accomplish.
>
> This requirement exists because we want the sector size to be a power
> of 2 (UFS needs it).
> In RAID3 we want to send every I/O request to all components at once,
> which is why the sector size has to be N*512, where N is a power of 2,
> AND because graid3 uses one parity component we need N+1 providers.
OK, I see - that makes sense. So it's not really a raid3 issue, but an
implementation issue.
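
Just to spell the arithmetic out for myself (assuming 512-byte sectors
on the underlying drives):

  3 components -> 2 data + 1 parity -> sector size 2*512 = 1024 (power of 2, OK)
  4 components -> 3 data + 1 parity -> sector size 3*512 = 1536 (not a power of 2)
  5 components -> 4 data + 1 parity -> sector size 4*512 = 2048 (power of 2, OK)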
The only problem then is: with gvinum in a completely unusable state
(for raid5, anyway), what are my alternatives? I have four 160GB IDE
drives, and I want capacity plus redundancy. Performance is a
non-issue, really. What do I do - in software?
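
For reference, my (untested) understanding of a supported graid3 setup
using three of the drives would be roughly the following - ad0 through
ad2 are just placeholder device names for my disks:

  # load the RAID3 GEOM class if it is not compiled into the kernel
  kldload geom_raid3

  # three components: ad0 and ad1 carry data, ad2 (the last one) carries
  # parity, which satisfies the 2^n + 1 rule with n = 1
  graid3 label data ad0 ad1 ad2

  # the array then shows up as /dev/raid3/data and can be newfs'ed
  newfs /dev/raid3/data

That obviously only uses three of the four drives, though.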
/Eirik
>
>
> --
> Pawel Jakub Dawidek http://www.FreeBSD.org
> pjd at FreeBSD.org http://garage.freebsd.pl
> FreeBSD committer Am I Evil? Yes, I Am!