ZFS + dovecot

Mike Tancsa mike at sentex.net
Thu Feb 18 20:53:27 UTC 2016

On 2/18/2016 3:18 PM, Paul Macdonald wrote:
> I'm starting to see reduced performance on a 2 disk SATA server with a
> mirrored 2TB pool running dovecot.

More memory helps with ZFS, of course.  How fragmented is your spool?
Performance generally suffers once a pool gets above 80% capacity.  Is
your free space greater than 20%?
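
A quick way to check is zpool list (the pool name below is just a
placeholder and the numbers are made up; column layout varies a bit
between versions):

    # zpool list tank
    NAME   SIZE  ALLOC   FREE   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    tank  1.81T  1.45T   372G    41%    80%  1.00x  ONLINE  -

If CAP is pushing past 80%, that alone can explain a slowdown.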

Also, if you run

zpool get fragmentation <pool name>

how high is the fragmentation?
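
The output is one line per pool, something like this (value made up):

    NAME  PROPERTY       VALUE  SOURCE
    tank  fragmentation  41%    -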

How up to date is your OS? With certain workloads I find that capping
the ARC improves performance on low-memory servers. I had a box (12G of
RAM) running squid that needed vfs.zfs.arc_max set in /boot/loader.conf
so the ARC would not take too much memory, but other boxes have never
needed that. I'm not sure when it becomes an issue and needs manual
intervention, but it might be something to look at.
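
A minimal sketch, assuming you want a 16G cap like the box below
(adjust to your RAM; it takes effect at the next boot):

    # /boot/loader.conf
    vfs.zfs.arc_max="16G"

You can confirm the current cap with sysctl vfs.zfs.arc_max.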

On our imap/pop3 server (32G RAM, 16G ARC limit), the ARC works well.
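
The reports below are in the zfs-stats format (sysutils/zfs-stats in
ports); if I recall its flags correctly, these print the same sections:

    # zfs-stats -E      # ARC efficiency
    # zfs-stats -L      # L2ARC summary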

ZFS Subsystem Report                            Thu Feb 18 15:50:30 2016

ARC Efficiency:                                 243.15m
        Cache Hit Ratio:                90.12%  219.11m
        Cache Miss Ratio:               9.88%   24.03m
        Actual Hit Ratio:               88.46%  215.08m

        Data Demand Efficiency:         95.49%  85.08m
        Data Prefetch Efficiency:       5.52%   4.81m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.03%   59.64k
          Most Recently Used:           8.51%   18.64m
          Most Frequently Used:         89.66%  196.44m
          Most Recently Used Ghost:     0.47%   1.03m
          Most Frequently Used Ghost:   1.34%   2.94m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  37.08%  81.25m
          Prefetch Data:                0.12%   265.88k
          Demand Metadata:              60.96%  133.57m
          Prefetch Metadata:            1.84%   4.03m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  15.96%  3.83m
          Prefetch Data:                18.92%  4.55m
          Demand Metadata:              58.73%  14.12m
          Prefetch Metadata:            6.39%   1.54m


The L2 ARC SSD is hardly touched:

ZFS Subsystem Report                            Thu Feb 18 15:50:57 2016

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        1.94m
        Tried Lock Failures:                    84.50k
        IO In Progress:                         109
        Low Memory Aborts:                      1
        Free on Write:                          1.29k
        Writes While Full:                      611
        R/W Clashes:                            0
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           566.26m

L2 ARC Size: (Adaptive)                         16.23   GiB
        Header Size:                    0.04%   6.77    MiB

L2 ARC Breakdown:                               24.03m
        Hit Ratio:                      4.71%   1.13m
        Miss Ratio:                     95.29%  22.90m
        Feeds:                                  592.45k

L2 ARC Buffer:
        Bytes Scanned:                          41.86   TiB
        Buffer Iterations:                      592.45k
        List Iterations:                        2.37m
        NULL List Iterations:                   85.88k

L2 ARC Writes:
        Writes Sent: (FAULTED)                          128.62k
          Done Ratio:                   100.00% 128.62k
          Error Ratio:                  0.00%   0
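
Given the ~5% hit ratio above, the cache device isn't doing much for
this workload. If you want to watch what it's actually doing, per-vdev
I/O (cache device included) shows up with (pool name is a placeholder):

    # zpool iostat -v tank 5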



Mike Tancsa, tel +1 519 651 3400
Sentex Communications, mike at sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada   http://www.tancsa.com/
