ZFS: Almost a minute of dirty buffers?

Zaphod Beeblebrox zbeeble at gmail.com
Tue Mar 19 20:45:42 UTC 2013


During a recent protracted power outage at home, it came time to shut down
my ZFS fileserver.  This doesn't happen often --- it's a reliable performer.

The kicker is that _after_ the buffers have been sync'd for UFS
(root/var/usr are on UFS), ZFS spends some time shutting down --- or at
least that's what I believe, since the disk-activity lights on the ZFS
drives are going crazy.

... and ZFS takes nearly a minute of very active disk to shut down?!!?

Are these dirty buffers?  What is it doing?  This period of disk blinking
seems to be related to uptime (i.e. the longer the uptime, the longer the
blinking on shutdown).
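
For anyone who wants to watch the same thing, this is roughly how I'd
observe the flush the next time it happens (the pool name is mine, and the
txg sysctl name is from memory on a 9.x box, so it may differ on other
releases):

  # Per-vdev read/write traffic, refreshed every second (Ctrl-C to stop).
  zpool iostat -v vr2 1

  # Per-disk activity for the labelled ZFS members.
  gstat -f 'label/vr2'

  # How often ZFS normally syncs out a transaction group, in seconds.
  sysctl vfs.zfs.txg.timeout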

For the curious, my ZFS config is:

[1:1:301]root@virtual:~> zpool status
  pool: vr2
 state: ONLINE
  scan: resilvered 30.8M in 0h2m with 0 errors on Tue Feb 26 20:41:45 2013
config:

        NAME               STATE     READ WRITE CKSUM
        vr2                ONLINE       0     0     0
          raidz1-0         ONLINE       0     0     0
            label/vr2-d0   ONLINE       0     0     0
            label/vr2-d1   ONLINE       0     0     0
            label/vr2-d2a  ONLINE       0     0     0
            label/vr2-d3a  ONLINE       0     0     0
            label/vr2-d4   ONLINE       0     0     0
            label/vr2-d5   ONLINE       0     0     0
            label/vr2-d6   ONLINE       0     0     0
            label/vr2-d7c  ONLINE       0     0     0
            label/vr2-d8   ONLINE       0     0     0
          raidz1-1         ONLINE       0     0     0
            gpt/vr2-e0     ONLINE       0     0     0
            gpt/vr2-e1     ONLINE       0     0     0
            gpt/vr2-e2     ONLINE       0     0     0
            gpt/vr2-e3     ONLINE       0     0     0
            gpt/vr2-e4     ONLINE       0     0     0
            gpt/vr2-e5     ONLINE       0     0     0
            gpt/vr2-e6     ONLINE       0     0     0
            gpt/vr2-e7     ONLINE       0     0     0

errors: No known data errors

I know that the two vdevs are not the same size (9 disks and 8 disks), but
I noticed this behavior when there was only one vdev in this array, too.

Most of the ZFS usage is via NFS, SMB or iSCSI.
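
In case it helps with diagnosing this: since the writes arrive over NFS,
SMB and iSCSI, I'd expect whatever is dirty at shutdown to be sitting in
the ARC as anonymous buffers.  A crude way to check just before shutting
down (sysctl names as I remember them on 9.x; adjust if yours differ):

  # Anonymous (not-yet-synced) ARC buffers: a rough proxy for dirty data.
  sysctl kstat.zfs.misc.arcstats.anon_size

  # The old write-throttle cap on how much dirty data a txg may accumulate.
  sysctl vfs.zfs.write_limit_max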

