ZFS: How to enable cache and logs.

James L. Lauser james at jlauser.net
Wed May 11 13:59:00 UTC 2011


On Wed, May 11, 2011 at 6:37 AM, Daniel Kalchev <daniel at digsys.bg> wrote:

>
>
> On 11.05.11 13:06, Jeremy Chadwick wrote:
>
>> On Wed, May 11, 2011 at 07:25:52PM +1000, Danny Carroll wrote:
>>
>>> When I move to v28 I will probably wish to enable an L2ARC and also
>>> perhaps dedicated log devices.
>>>
>> In the case of ZFS intent logs, you definitely want a mirror.  If you
>> have a single log device, loss of that device can/will result in full
>> data loss of the pool that makes use of the log device.
>>
>
> This is true for v15 pools, but not for v28 pools. In ZFS v28 you can
> remove log devices, and in the case of a sudden loss of the log device (or
> whatever) roll the pool back to a 'good' state. Therefore, for most
> installations a single log device might be sufficient. If you value your
> data, you will of course use mirrored log devices, possibly in a hot-swap
> configuration and .. have a backup :)
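
For reference, the zpool operations involved look roughly like this on a
v28 pool (the pool and device names below are only examples):

    zpool add tank log ada1              # attach a single, unmirrored SLOG
    zpool remove tank ada1               # v28 and later: the log device can be removed again
    zpool add tank log mirror ada1 ada2  # mirrored SLOG, if you value your data
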
>
> By the way, the SLOG (separate LOG) does not have to be an SSD at all.
> Separate rotating disk(s) will also suffice -- it all depends on the type of
> workload. SSDs are better at the higher end because of their low latency
> (but not all SSDs are low latency when writing!).
>
> The idea of the SLOG is to separate the ZIL records from the main data
> pool. ZIL records are small, even smaller in v28, but will cause unnecessary
> head movements if kept in the main pool. The SLOG is "write once, read on
> failure" media and is written sequentially. Almost all current HDDs offer
> reasonable sequential write performance for small to medium pools.
>
> The L2ARC needs to be a fast-reading SSD. It is populated slowly, at a few
> MB/sec, so there is no point in having a fast, high-bandwidth, write-optimized
> SSD. The benefit of the L2ARC is its low latency. Think of it as a sort of
> slower RAM.
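
Adding (or later removing) a cache device is a one-liner, and unlike a SLOG
it holds no pool-critical state, so nothing is lost if it dies (names again
are only examples):

    zpool add tank cache ada3
    zpool remove tank ada3
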
>
> It is a bad idea to use the same SSD for both SLOG and L2ARC, because most
> SSDs behave poorly if you present them with high read and high write loads
> at the same time. More expensive units might behave better, but then... if
> you pay a few k$ for an SSD, you know what you need :)
>
> Daniel
>


I recently learned the hard way that you need to be very careful about what
you choose as your ZIL device.  On my personal file server, my pool consists
of 4x 500 GB disks in a RAID-Z and 2x 1.5 TB disks in a mirror.  I also had a
1 GB Compact Flash card plugged into an IDE adapter, serving as the ZIL.  For
the longest time, my write performance was capped at about 5 MB/sec.  In an
attempt to figure out why, I ran gstat and saw that the CF device was pegged
at 100% busy.
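
For anyone who wants to run the same kind of check, these two commands are
enough to spot a saturated log device (the pool name is only an example):

    gstat                   # live %busy per GEOM device; the CF card sat at 100%
    zpool iostat -v tank 1  # per-vdev throughput, with the log vdev listed separately
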

Having recently upgraded to ZFS v28, I decided to try removing the log
device.  Write performance instantly jumped to 45 MB/sec.  Lesson learned...
If you're going to have a dedicated ZIL, make sure its write performance
exceeds the write performance of the pool itself.

On the other hand, also after upgrading to v28, I attempted to use
deduplication on my pool.  Write performance dropped to an abysmal 1 MB/sec.
Why?  Because, as I found out, my system doesn't have enough memory to keep
the dedup table in RAM, nor can it be upgraded enough to hold it.  But with a
sufficiently large cache device added, performance goes right back up to
where it's supposed to be.
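
If you want to know in advance whether the dedup table will fit in RAM, zdb
can estimate it before you ever enable dedup (the pool name is an example;
each DDT entry costs on the order of a few hundred bytes of memory):

    zdb -S tank    # simulate dedup on the existing data and print a DDT histogram
    zdb -D tank    # summarize the DDT of a pool that already uses dedup
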

--  James L. Lauser
    james at jlauser.net
    http://jlauser.net/

