ZFS: How to enable cache and logs.
Danny Carroll
fbsd at dannysplace.net
Thu May 12 02:26:43 UTC 2011
On 11/05/2011 7:25 PM, Danny Carroll wrote:
> Hello all.
>
> I've been using ZFS for some time now and have never had an issue
> (except perhaps the issue of speed...)
> When v28 is taken into -STABLE I will most likely upgrade at that
> point. Currently I am running v15 with v4 on disk.
>
> When I move to v28 I will probably wish to enable a L2Arc and also
> perhaps dedicated log devices.
>
> I'm curious about a few things however.
>
> 1. Can I remove either the L2 ARC or the log devices if things don't go
> as planned or if I need to free up some resources?
> 2. What are the best practices for setting these up? Would a geom
> mirror for the log device be the way to go, or can you just let ZFS
> mirror the log itself?
> 3. What happens when one or both of the log devices fail? Does ZFS
> come to a crashing halt and kill all the data? Or does it simply
> complain that the ZIL is no longer active and continue on its merry way?
>
> In short, what is the best way to set up these two features?
>
Replying to myself in order to summarise the recommendations (when using
v28):
- Don't use an SSD for the log device. Write speed tends to be a problem.
- An SSD is OK for the cache if the sizing is right, but without TRIM,
don't expect to take full advantage of the SSD.
- Do use two devices for the log and mirror them with ZFS. Bad things
*can* happen if *all* the log devices die.
- Don't colocate L2ARC and log devices.
- Log devices can be small; the ZFS Best Practices Guide specifies about
50% of RAM as the maximum. The minimum should be throughput * 10 (1 GB
for 100 MB/sec of writes).
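Putting those recommendations together, the v28 setup (and the removal
asked about in question 1) might look something like this. The GPT labels
and the vdev name in the remove command are hypothetical; check the actual
name with "zpool status":

```shell
# Add a log mirrored by ZFS itself (two devices, per the advice above):
zpool add tank log mirror gpt/log0 gpt/log1

# Add an L2ARC device; cache devices are not mirrored by ZFS:
zpool add tank cache gpt/cache0

# Under v28 both can be removed again if things don't go as planned.
# The log mirror is removed by its vdev name as shown in "zpool status"
# (e.g. "mirror-1" -- hypothetical here):
zpool remove tank mirror-1
zpool remove tank gpt/cache0
```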
Let me know if I got anything wrong or missed something important.

Remaining questions:
- Is there any advantage to using a spare partition on a SCSI or SATA
drive as L2ARC, assuming the drive was already in the machine but doing
nothing?
- If I have 2 pools like this:
# zpool status
  pool: tank
 state: ONLINE
 scrub: scrub completed after 11h7m with 0 errors on Sun May 8 14:17:07 2011
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            gpt/data0   ONLINE       0     0     0
            gpt/data1   ONLINE       0     0     0
            gpt/data2   ONLINE       0     0     0
            gpt/data3   ONLINE       0     0     0
            gpt/data4   ONLINE       0     0     0
            gpt/data5   ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            gpt/data6   ONLINE       0     0     0
            gpt/data7   ONLINE       0     0     0
            gpt/data8   ONLINE       0     0     0
            gpt/data9   ONLINE       0     0     0
            gpt/data10  ONLINE       0     0     0
            gpt/data11  ONLINE       0     0     0

errors: No known data errors

  pool: system
 state: ONLINE
 scrub: scrub completed after 1h1m with 0 errors on Sun May 8 15:18:23 2011
config:

        NAME            STATE     READ WRITE CKSUM
        system          ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            gpt/system0 ONLINE       0     0     0
            gpt/system1 ONLINE       0     0     0
...and I have free space on the "system" disks, could I give ZFS two
new partitions on the system disks as log devices for the "tank" pool?
If I were worried about the performance of my "system" pool, I could also
use spare partitions on (a couple of) the "tank" disks in a similar way.
But it would be silly to use the same disk for both the ZIL and pool data.
In that case, why would I bother to alter the default?
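For the partition question above, a sketch of carving log partitions out
of the free space on the system disks and handing them to "tank". The
device names (ada0/ada1), labels, and the 4G size are all hypothetical
and would need to match the actual layout:

```shell
# Add a small ZFS partition in the free space on each system disk:
gpart add -t freebsd-zfs -l tanklog0 -s 4G ada0
gpart add -t freebsd-zfs -l tanklog1 -s 4G ada1

# Give the pair to "tank" as a log, mirrored by ZFS itself:
zpool add tank log mirror gpt/tanklog0 gpt/tanklog1
```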
Thanks for the info!
-D