ZFS HBAs + LSI chip sets (Was: ZFS hang (system #2))
Freddie Cash
fjwcash at gmail.com
Mon Oct 22 02:42:13 UTC 2012
On Oct 21, 2012 8:54 AM, "Dennis Glatting" <freebsd at penx.com> wrote:
>
> On Sat, 2012-10-20 at 23:52 -0700, Freddie Cash wrote:
> > On Oct 20, 2012 5:11 PM, "Dennis Glatting" <freebsd at pki2.com> wrote:
> > >
> > >
> > > I chose the LSI2008 chip set because the code was donated by LSI,
> > > which demonstrated their interest in supporting their products under
> > > FreeBSD, and because that chip set is found in a lot of places,
> > > notably Supermicro boards. Additionally, there were stories of
> > > success on the lists for several boards. That said, I have received
> > > private email from others expressing frustration with ZFS and the
> > > "hang" problems, which I believe also involve the LSI chips.
> > >
> > > I have two questions for the broader list:
> > >
> > > 1) What HBAs are you using for ZFS and what is your level
> > > of success/stability? Also, what is your load?
> >
> > SuperMicro AOC-USAS-8i using the mpt(4) driver on FreeBSD 9-STABLE in
> > one server (alpha).
> >
> > SuperMicro AOC-USAS2-8i using the mps(4) driver on FreeBSD 9-STABLE in 2
> > servers (beta and omega).
> >
> > I think they were updated on Oct 10ish.
> >
> > The alpha box runs 12 parallel rsync processes to back up 50-odd
> > Linux servers across multiple data centres.
> >
> > The beta box runs 12 parallel rsync processes to back up 100-odd
> > Linux and FreeBSD servers across 50-odd buildings.
> >
> > Both boxes use zfs send to replicate the data to omega (each box
> > saturates a 1 Gbps link during the zfs send).
> >
> > Alpha and omega have 24 SATA 3 Gbps harddrives, configured as 3x
> > 8-drive raidz2 vdevs, with a 32 GB SSD split between OS, log vdev,
> > and cache vdev.
> >
> > Beta has 16 SATA 6 Gbps harddrives, configured into 3x 5-drive raidz2
> > vdevs, with a cold spare, and a 32 GB SSD split between OS, log vdev,
> > and cache vdev.
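> >
> > (Creating the alpha/omega pool layout, for instance, would look
> > something along these lines; the pool name, device names, and gpt
> > labels are only examples:)
> >
> > # zpool create storage \
> >     raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
> >     raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
> >     raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
> >     log gpt/log0 cache gpt/cache0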
> >
> > All three have been patched to support feature flags. All three have
> > dedupe enabled, compression enabled, and HPN SSH patches with the NONE
> > cipher enabled.
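> >
> > (The zfs send replication over the NONE cipher amounts to something
> > like the following; the pool and snapshot names are examples, and
> > NoneEnabled/NoneSwitch are the HPN-SSH options for the NONE cipher:)
> >
> > # zfs snapshot -r storage@2012-10-22
> > # zfs send -R -i storage@2012-10-21 storage@2012-10-22 | \
> >     ssh -o NoneEnabled=yes -o NoneSwitch=yes omega zfs recv -d storage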
> >
> > All three run without any serious issues. The only issues we've had
> > are 3, maybe 4, situations where I've tried to destroy multi-TB
> > filesystems without enough RAM in the machine. We're now running a
> > minimum of 32 GB of RAM, with 64 GB in one box.
> >
> > > 2) How well are the LSI chip sets supported under FreeBSD?
> >
> > I have no complaints. And we're ordering a bunch of LSI 9200-series
> > controllers for new servers (PCI brackets instead of UIO).
>
>
> Perhaps I am doing something fundamentally wrong with my SSDs.
> Currently I simply add them to a pool after ashift-aligning them via
> gnop (e.g., -S 4096, depending on page size).
>
> I remember reading somewhere about offsets to ensure data is page
> aligned but, IIRC, this was strictly a performance issue. Are you doing
> something different?
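(For reference, the gnop workflow described above is typically something
like this; the device and pool names are just illustrations:)

# gnop create -S 4096 da0
# zpool create tank raidz2 da0.nop da1 da2 da3
# zpool export tank
# gnop destroy da0.nop
# zpool import tank

A single .nop provider per vdev is enough to force ashift=12, since ZFS
uses the largest sector size reported among the vdev's members.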
All my harddisks are partitioned the same way:
# gpart create -s gpt daX
# gpart add -b 2048 -t freebsd-zfs -l some-label daX
For the SSDs, the above is followed by multiple partitions aligned to MB
boundaries.
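For a 32 GB SSD, that works out to something like this (the sizes,
labels, and partition types are only examples):

# gpart create -s gpt da24
# gpart add -a 1m -s 16g -t freebsd-zfs -l os0 da24
# gpart add -a 1m -s 8g -t freebsd-zfs -l log0 da24
# gpart add -a 1m -t freebsd-zfs -l cache0 da24
# zpool add storage log gpt/log0 cache gpt/cache0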