ZFS l2arc and HAST? newbie question

Bernd Walter ticso at cicely7.cicely.de
Tue Jun 15 18:13:13 UTC 2010


On Tue, Jun 15, 2010 at 05:53:48PM +0200, Pawel Jakub Dawidek wrote:
> On Tue, Jun 15, 2010 at 03:21:45PM +0200, Thomas Steen Rasmussen wrote:
> > Hello list,
> > 
> > I am playing with HAST in order to build some redundant storage
> > for a mail server, using ZFS as the filesystem.
> > I have the following zpool layout before starting the HAST experiments:
> > 
> >         NAME              STATE     READ WRITE CKSUM
> >         tank              ONLINE       0     0     0
> >           raidz2          ONLINE       0     0     0
> >             label/hd4     ONLINE       0     0     0
> >             label/hd5     ONLINE       0     0     0
> >             label/hd6     ONLINE       0     0     0
> >             label/hd7     ONLINE       0     0     0
> >         logs              ONLINE       0     0     0
> >           mirror          ONLINE       0     0     0
> >             label/ssd0s1  ONLINE       0     0     0
> >             label/ssd1s1  ONLINE       0     0     0
> >         cache
> >           label/ssd0s2    ONLINE       0     0     0
> >           label/ssd1s2    ONLINE       0     0     0
> > 
> > As I understand it, to accomplish this with HAST I will need to make a
> > HAST resource for each physical disk, like so:
> > 
> >         NAME              STATE     READ WRITE CKSUM
> >         tank              ONLINE       0     0     0
> >           raidz2          ONLINE       0     0     0
> >             hast/hahd4    ONLINE       0     0     0
> >             hast/hahd5    ONLINE       0     0     0
> >             hast/hahd6    ONLINE       0     0     0
> >             hast/hahd7    ONLINE       0     0     0
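
For reference, creating the pool on top of those HAST providers would be
something along these lines (only a sketch - it assumes the resources
hahd4..hahd7 are defined in hast.conf and this node is currently primary):

        # the hast providers have to exist before zpool create can see them
        hastctl role primary all
        zpool create tank raidz2 hast/hahd4 hast/hahd5 hast/hahd6 hast/hahd7
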
> > 
> > But what about slog and cache devices, currently on SSD disks for
> > performance reasons? It doesn't really make sense to synchronize
> > a cache disk over the network, does it?
> 
> No, it doesn't. The cache is forgotten on import anyway, so don't bother.
> You have to be careful, though, because you probably need to remove the old
> cache devices from the pool after import on the secondary and add local disks.
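
To spell that out for the archives - after importing the pool on the other
node, something like this should do it (the device names are only
placeholders for whatever local SSDs that box actually has):

        # the cache vdevs from the old primary won't be found; drop them
        zpool remove tank label/ssd0s2 label/ssd1s2
        # add whatever local SSD partitions the secondary has as cache
        zpool add tank cache label/myssd0s2 label/myssd1s2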

Unless conditions have changed, a missing cache device seems to cause
no problems at all.
I have been running with USB sticks for L2ARC and have also tested with
missing USB devices at boot time (without reimporting).
Even unplugging a device during access wasn't a problem - at least not
for ZFS.
ZFS didn't even complain when I had a blocked USB device and every access
to that specific cache device just timed out - in fact I didn't even
notice any slowness.

Pawel, do you know if there is any chance that ZFS can boot with a warm
L2ARC?
It is mentioned in several articles, but my system starts with an empty
cache.
My system is already quite old, so probably it is already in.

> > Could I build the zpool with the SSD disks directly (without
> > HAST) and would ZFS survive an export/import on the other host,
> > when the cache disks are suddenly different? I am thinking cache
> > only here, not slog.
> 
> It simply won't find the cache disks; you will need to do what I described
> above.
> 
> > Do SSD l2arc / slog even make any sense when I am "deliberately"
> > slowing down the filesystem with network redundancy anyway?
> 
> Forget about HAST for L2ARC. In case of SLOG it can still be faster over
> the network than a pool of local SATA disks without a SLOG. As usual, the
> best way to verify this is to test it with your workload. :)
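
For the testing part, a quick way to stress the SLOG path (assuming a ZFS
version that already has the per-dataset sync property) is to force all
writes through the ZIL on a scratch dataset and time a simple write load,
once with the HAST-backed log device and once without it:

        # throwaway dataset just for the test
        zfs create tank/slogtest
        zfs set sync=always tank/slogtest
        dd if=/dev/zero of=/tank/slogtest/junk bs=8k count=100000
        zfs destroy tank/slogtest

The real mail server load is of course the benchmark that counts.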
> 
> > Oh, and is there any problems using labels for HAST devices ? My
> > controller likes to give new device names to disks now and then,
> > and it has been a blessing to use labels instead of device names,
> > so I'd like to continue doing that when using HAST.
> 
> Use labeled providers in hast.conf, but there is no need to label HAST
> providers (/dev/hast/<name>).
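
In case it helps, a minimal hast.conf along those lines could look roughly
like this (host names and addresses are of course made up):

        resource hahd4 {
                on nodea {
                        local /dev/label/hd4
                        remote 10.0.0.2
                }
                on nodeb {
                        local /dev/label/hd4
                        remote 10.0.0.1
                }
        }

with the same pattern repeated for hahd5..hahd7. ZFS then only ever sees
the stable /dev/hast/hahd* names, no matter how the controller renumbers
the disks.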

I'm very interested to hear about your results with the HAST/ZFS combo,
because I have a possible use case myself.

-- 
B.Walter <bernd at bwct.de> http://www.bwct.de
Modbus/TCP Ethernet I/O modules, ARM-based FreeBSD computers, and much more.

