SSD recommendations for ZFS cache/log
Mike McLaughlin
obrith at gmail.com
Fri Nov 16 20:15:59 UTC 2012
>
> On Thu, Nov 15, 2012 at 1:18 AM, John <jwd at freebsd.org> wrote:
>
> > ----- Julian Elischer's Original Message -----
> > > On 11/13/12 1:19 PM, Jason Keltz wrote:
> > > >On 11/13/2012 12:41 PM, Bob Friesenhahn wrote:
> > > >>On Mon, 12 Nov 2012, kpneal at pobox.com wrote:
> > > >>>
> > > >>>With your setup of 11 mirrors you have a good mixture of read
> > > >>>and write
> > > >>>performance, but you've compromised on the safety. The reason
> > > >>>that RAID 6
> >
> > ...
> >
> > > >By the way - on another note - what do you or other list members
> > > >think of the new Intel SSD DC S3700 as ZIL? Sounds very promising
> > > >when it's finally available. I spent a lot of time researching
> > > >ZILs today, and one thing I can say is that I have a major
> > > >headache now because of it!!
> > >
> > > ZIL is best served by battery backed up RAM or something.. it's tiny
> > > and not a really good fit for an SSD (maybe just a partition). L2ARC on
> > > the other hand is a really good use for SSD.
> >
> > Well, since you brought the subject up :-)
> >
> > Do you have any recommendations for an NVRAM unit usable with Freebsd?
> >
>
> I've always had my eyes on something like this for ZIL but never had the
> need to explore it yet: http://www.ddrdrive.com/
> Most recommendations I've seen have also been around mirrored 15krpm disks
> of some sort or even a cheaper battery-backed raid controller in front of
> decent disks. For ZIL it would just need a tiny bit of RAM anyway.
>
>
First, I wholeheartedly agree with some of the other posts calling for more
documentation and FAQs on ZFS; it's sorely lacking, and there is a whole lot
of FUD and outdated information out there.
I've tested several SSDs, I have a few DDRdrives, and I have a ZeusRAM (in
a TrueNAS appliance - and another on order that I can test with Solaris).

The DDRdrive is OK at best. The latency is quite good, but it doesn't have
much throughput, mostly because it's PCIe x1, I believe. It can do lots of
very small writes but tops out at about 130MB/sec no matter the blocksize.
If you're using GbE, you're set. If you're using LAGG or 10GbE, it's not
great for the price.

I also had a wicked evening a few days ago when my building lost power for a
few hours at night and the UPSs failed. The UPS that the DDRdrive was
attached to died at the same time as the one backing the server, and it
broke my zpool quite severely - none of the typical recovery commands worked
at all (this was an OpenIndiana box), and the DDRdrive lost 100% of its
configuration. The system thought it was a brand new drive that didn't
belong in the pool (it lost its partition table, label, etc.). It was a
disappointing display by the DDRdrive. I know the power situation is my own
fault, but the device is not a good idea unless you're 100% certain its
battery will outlast the system UPS/shutdown.
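(By "typical recovery commands" I mean things along the lines of the
sequence below - "tank" and the device name are just placeholders. None of
it helped, because the drive had lost its identity entirely.)

  # clear errors / try to bring the log device back online
  zpool clear tank
  zpool online tank <log-device>

  # re-import the pool while ignoring a missing log device
  # (anything still sitting only in the ZIL is lost)
  zpool import -m tank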
The SSD with a supercap that I've had far and away the best luck with is the
Intel 320. I've got a couple of systems with 300GB Intel 320s, partitioned
to use 15GB for ZIL (and the rest left empty).
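For anyone wanting to replicate that on FreeBSD, it's roughly the following -
the device (ada2), the GPT label (slog0), and the pool name (tank) are
placeholders for whatever your system uses:

  # carve a 15GB, 4k-aligned log partition out of the SSD; the rest stays empty
  gpart create -s gpt ada2
  gpart add -t freebsd-zfs -a 4k -l slog0 -s 15G ada2

  # attach it to the pool as a dedicated log (SLOG) device
  zpool add tank log gpt/slog0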
I've been using them for about a year now and have been monitoring the wear.
They will not exceed their expected write lifetime until they've written
about 1.2PB or more - several years at a fairly heavy workload for me.
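If you want to watch the wear yourself, smartctl from sysutils/smartmontools
will show it; something along these lines (ada2 is a placeholder, and the
attribute names vary by vendor, but on Intel drives the interesting ones are
typically Media_Wearout_Indicator and Host_Writes_32MiB):

  smartctl -a /dev/ada2 | egrep -i 'wearout|host_writes'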
They can also do 100-175MB/sec and ~10-20k IOPS depending on the workload,
often outpacing the DDRdrives. I'm going to get my hands on the new Intel
drives with supercaps as soon as they're available - they look quite
promising.
As for the ZeusRAM, it's exceedingly fast at the system level. I haven't
been able to test it thoroughly in my setup, though - it seems FreeBSD has a
pretty severe performance issue with sync writes over NFS going to the ZIL,
at least when backing VMware. I have a very high-end system from IX that
just can't do more than ~125MB/sec of writes (just above what 1GbE can do).
It just flat-lines. The ZeusRAM is certainly not the bottleneck: doing
O_SYNC dd writes over NFS from other *nix sources, I can write nearly
500MB/sec at a 4k block size (roughly the dd invocation sketched below). My
Solaris-based systems do not hit the ~125MB/sec barrier that FreeBSD seems
to have with VMware. I'm using a 10GbE network for my VMware storage.
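For reference, the dd test I mentioned is along these lines, run from a
Linux client (GNU dd) against the NFS mount - the path and count are
placeholders:

  # ~1GiB of 4k synchronous writes to the NFS export
  dd if=/dev/zero of=/mnt/nfs/ddtest bs=4k count=262144 oflag=sync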