Delayed atime updates ("lazytime")

Kevin Oberman rkoberman at
Wed Nov 26 19:45:50 UTC 2014

On Wed, Nov 26, 2014 at 10:06 AM, Marcus Reid <marcus at> wrote:

> Hi,
> Looks like Linux is about to grow yet another approach to handling atime
> updates:
> In short, it will only write out atime changes periodically (daily), or
> if there is another reason to write out the inode, or if the inode is
> about to be pushed out of cache.  This seems like a pretty good
> compromise.
> Currently, the ZFS configuration that results from using bsdinstall
> disables atime on all but /var/mail, which is the only example of
> disabling atime by default that I'm aware of outside of Gentoo Linux.
> I can't seem to find any information that talks about the rationale
> behind that, though a couple things come to mind:
>   - some additional IO generated (but that's always been the case)
>   - additional wear on SSD devices (enough to compel the change?)
>   - zfs snapshot growth (but the snapshot stops growing after one
>     full set of inode updates)
>   - wake up otherwise idle spinning media on a laptop (the actual reason
>     that was cited as motivation for the change)
> Something like lazytime would address most of those concerns, and people
> who are even more OCD than that could disable atime completely on their
> machine.
> Marcus
About time. VMS started doing this over a quarter century ago. Worked very
well. Of course, the VMS file system (ODS-2) has little in common with
either ZFS or UFS, but it had an interesting twist.

There was a per-disk update "window" that could be overridden on a per-file
basis, so you could request "update atime on every access" if you really
needed it; normally, though, atime would be updated at most once every so many
seconds. I don't remember the system default any more. This kept almost
everyone happy. VMS previously had no equivalent to atime and had received many
requests for one, but the developers did not want to impact performance as
drastically as updating the access time on every access would have done.

I don't know whether or how such a scheme could be implemented in the FreeBSD
file systems, but it was a very nice way of handling the issue.
R. Kevin Oberman, Network Engineer, Retired
E-mail: rkoberman at

More information about the freebsd-fs mailing list