ZFS across controllers
Dennis Glatting
freebsd at penx.com
Fri Oct 28 17:00:51 UTC 2011
On Fri, 28 Oct 2011, Bob Friesenhahn wrote:
> On Fri, 28 Oct 2011, Dennis Glatting wrote:
>
>> This seems like a stupid question but is there any significant performance
>> problems with a ZFS RAIDz volume served across multiple controllers?
>
> No. Available IOPS will be dominated/limited by whichever controller
> and disks are slowest to respond.
>
Right. That should have been obvious. Thanks.
I have three other systems with ZFS RAIDz arrays using the motherboard
SATA controller and get good, steady performance. However, those are
arrays of four 2TB disks. All of those systems are AMD systems across
different motherboards (e.g., ASUS CROSSHAIR V FORMULA, Gigabyte
GA-990FXA-UD7, and the third is some PoS I had lying around) whereas the
system I am having trouble with is Intel.
I have a fifth system that is also Intel but with a Gigabyte GA-X58A-UD9
motherboard. (That system is a bit wild.) While I am curious whether that
system also has performance problems, I am not in a position to touch it
for some time.
>> I have seen two cases that confuse me. First, performance degrades over
>> time. I have read there was previously a prefetch bug but it has been
>> fixed. I also recently turned off prefetching to test (no opinion yet).
>
> In what way is performance degrading? What means are you using to evaluate
> performance?
>
At the moment it is a feeling. I had planned Bonnie runs across the
volumes, but when I moved the cables around and saw the scrub unwilling
to calculate a duration, I thought I was on to something. Before I moved
the cables, the scrub estimated somewhere between one and two hundred
hours (160, IIRC).
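For anyone following along, the scrub estimate I was reading comes from
"zpool status"; a minimal check (assuming the pool is named disk-1, which
matches the df output below) looks like:

```shell
# Start a scrub and poll its progress; the pool name is an assumption.
zpool scrub disk-1
# Depending on the ZFS version the progress line is labelled
# "scrub:" or "scan:"; it reports percent done and, once ZFS has
# enough samples, an estimated time to completion.
zpool status disk-1 | grep -E 'scrub|scan'
```

When the estimate stays at "no estimated completion time" for a long
while, the pool is not making steady scrub progress.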
I am working with 1-5TB ASCII files, sometimes larger. The files live on
a compressed volume, but the operations I am doing decompress them. In
one case I am processing a file, adding data, piping it into bzip2, and
writing the output onto a different ZFS volume. As time progresses, an
"ls -lh" takes ~20 seconds to respond. When I look at the drive idiot
lights or iostat, the drives are not busy. Top indicates the CPU isn't
very busy either.
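The shape of that pipeline, with illustrative stand-ins for the real
files and transformation (the actual inputs are the multi-TB word lists
mentioned above):

```shell
# Toy version of the job: read ASCII data, add a field, compress
# with bzip2, and write the result to a path that would live on the
# second ZFS volume. File names here are placeholders.
printf 'alpha\nbeta\n' > /tmp/words.in
awk '{ print $0, "tagged" }' /tmp/words.in | bzip2 -c > /tmp/words.out.bz2
bzip2 -dc /tmp/words.out.bz2
```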
iirc# df -h
Filesystem                  Size    Used   Avail  Capacity  Mounted on
/dev/da0p3                  871G     13G    787G      2%    /
devfs                       1.0k    1.0k      0B    100%    /dev
disk-1                       14T    121G     14T      1%    /disk-1
disk-1/word.lists            15T    1.8T     14T     11%    /disk-1/word.lists
disk-2                      3.6T    3.8G    3.6T      0%    /disk-2
btw:/disk-1/Homes           1.6T     15G    1.6T      1%    /Homes
btw:/disk-2/NFS             2.0T     22M    2.0T      0%    /NFS
btw:/disk-2/NFS/Word.Lists  4.1T    2.1T    2.0T     51%    /Word.Lists
(disk-1/word.lists is compressed using gzip. disk-2 is where the bzip2
output was going.)
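The compression settings on those datasets can be confirmed directly
(dataset names taken from the df output above):

```shell
# Show the compression property for both datasets involved.
zfs get -o name,value compression disk-1/word.lists disk-2
```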
In the second case I am running "sort -u" on one of the compressed files
with the TMP directory on the same collection of disks (i.e., TMP =>
/disk-1/tmp, source /disk-1/word.lists (compressed)). An "ls -lh" of the
temporary directory is fast, but the disks again appear not to be busy.
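For reference, pointing sort's spill files at a particular directory can
be done with -T (equivalent to setting TMPDIR); a toy version of that
run, with placeholder paths standing in for /disk-1/tmp and the real
word list:

```shell
# Spill files go to the directory given by -T; /tmp/sort-spill is
# a stand-in for the ZFS-backed /disk-1/tmp.
mkdir -p /tmp/sort-spill
printf 'b\na\nb\nc\na\n' > /tmp/words.txt
sort -u -T /tmp/sort-spill /tmp/words.txt
```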
> Bob
> --
> Bob Friesenhahn
> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>