Strange slowdown when cache devices enabled in ZFS
    Freddie Cash 
    fjwcash at gmail.com
       
    Thu Apr 25 15:51:45 UTC 2013
    
    
  
I haven't had a chance to run any of the DTrace scripts on any of my ZFS
systems, but I have narrowed down the issue a bit.
If I set primarycache=all and secondarycache=all, then adding an L2ARC
device to the pool will lead to zfskern{l2arc_feed_thread} taking up 100%
of one CPU core and stalling I/O to the pool.
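For anyone wanting to check for the same symptom, the spinning kernel thread is visible with FreeBSD's top in system/thread mode (a sketch; flags are standard top(1) options, no output shown since it varies per system):

```shell
# Show system processes (-S) and individual threads (-H),
# sorted by CPU; look for zfskern{l2arc_feed_thread} pinned near 100%.
top -S -H -o cpu
```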
If I set primarycache=all and secondarycache=metadata, then adding an L2ARC
device to the pool speeds things up (zfs send/recv saturates a 1 Gbps link,
and the nightly rsync backup runs finish 4 hours earlier).
I haven't tested the other two combinations (metadata/metadata;
metadata/all) as yet.
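For reference, the two combinations I compared are set with standard zfs(8) properties; the pool name "tank" below is just a placeholder for your own pool or dataset:

```shell
# Combination that stalls I/O here: everything cached in both ARC and L2ARC
zfs set primarycache=all tank
zfs set secondarycache=all tank

# Combination that works well here: full ARC, only metadata fed to L2ARC
zfs set primarycache=all tank
zfs set secondarycache=metadata tank

# Verify what is currently in effect
zfs get primarycache,secondarycache tank
```

Note the properties are inherited by child datasets unless overridden, so setting them at the pool root covers everything beneath it.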
This is consistent across two ZFS systems so far:
  - 8-core Opteron 6100-series CPU with 48 GB of RAM; 44 GB ARC, 40 GB
metadata limit; 3x raidz2
  - 2x 8-core Opteron 6100-series CPU with 128 GB of RAM; 64 GB ARC, 60 GB
metadata limit; 5x raidz2
Still reading up on dtrace/hwpmc as time permits.  Just wanted to pass
along the above to show I haven't forgotten about this yet.  :)  $JOB/$LIFE
slows things down sometimes.  :)
-- 
Freddie Cash
fjwcash at gmail.com
    
    