UFS2 snapshots on large filesystems
Eric Anderson
anderson at centtech.com
Mon Nov 14 10:49:05 PST 2005
Oliver Fromme wrote:
> Eric Anderson <anderson at centtech.com> wrote:
> > Oliver Fromme wrote:
> > > I just accidentally pulled the wrong power cord ...
> > > So now I can give you first-hand numbers. :-}
> > >
> > > This is a 250 Gbyte data disk that has been newfs'ed
> > > with -i 65536, so I get about 4 million inodes:
> > >
> > > Filesystem iused ifree %iused
> > > /dev/ad0s1f 179,049 3,576,789 5%
> > >
> > > So I still have 95% of free inodes, even though the
> > > filesystem is fairly well filled:
> > >
> > > Filesystem 1K-blocks Used Avail Capacity
> > > /dev/ad0s1f 237,652,238 188,173,074 30,466,986 86%
> > >
> > > fsck(8) took about 2 minutes, which is acceptable, I
> > > think. Note that I always disable background fsck
> > > (for me personally, it has more disadvantages than
> > > advantages).
> > >
> > > This is what fsck(8) reported when the machine came
> > > back up:
> > >
> > > /dev/ad0s1f: 179049 files, 94086537 used, 24739582 free
> > > (26782 frags, 3089100 blocks, 0.0% fragmentation)
> >
> > 180k inodes seems like a pretty small amount to me.
>
> It's my multimedia disk. It contains mainly multimedia
> files, such as images, audio and video files.
>
> > Here's some info from some of my filesystems:
> >
> > # df -i
> > Filesystem 1K-blocks Used Avail Capacity iused ifree %iused Mounted on
> > /dev/amrd0s1d 13065232 1109204 10910810 9% 663 1695079 0% /var
> > /dev/label/vol1 1891668564 1494254268 246080812 86% 68883207 175586551 28% /vol1
> > /dev/label/vol2 1891959846 924337788 816265272 53% 59129223 185364087 24% /vol2
> > /dev/label/vol3 1892634994 1275336668 465887528 73% 31080812 213506706 13% /vol3
> >
> > Even /var has over 1 million.
>
> No. Your /var has just 663 inodes in use, and it has about
> 1.7 million unused inodes which is just a waste.
Oops! Thanks for the correction - I misread it in my pasting frenzy. :)
It may be a waste, but perhaps the right answer would be a patch to
make sysinstall create /var partitions with different settings, if you
feel strongly about it. Personally, in this case I don't care about
the space I lose here, since to me it is negligible.
> Your other file systems use many more inodes, but they're
> also much bigger (2 Tbyte) than mine, and they seem to
> contain a different kind of data.
Right, this is typical of the data I store, which often averages
8-16k per file. I believe that is close to the default expectation for
UFS2 filesystems, so I'm generalizing that a majority of users also
have a ~16k average file size.
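As a rough sketch of why the average file size matters here (the 16k
average and the -i values below are illustrative assumptions, not
measured data - check newfs(8) on your system for the real defaults):

```python
# Sanity check: how many files could a filesystem hold if it filled up
# with average-size files, versus how many inodes a given newfs -i
# (bytes-per-inode) density provides?

KB = 1024
GB = 2 ** 30

def files_at_capacity(fs_bytes, avg_file_size):
    """Upper bound on file count if the disk fills with average-size files."""
    return fs_bytes // avg_file_size

def inodes_provided(fs_bytes, bytes_per_inode):
    """Inodes newfs allocates for a given -i (bytes per inode) setting."""
    return fs_bytes // bytes_per_inode

fs = 250 * GB  # roughly the size of the disk discussed above

print(files_at_capacity(fs, 16 * KB))   # worst case with ~16k files: ~16.4M
print(inodes_provided(fs, 16 * KB))     # -i 16384 matches that exactly
print(inodes_provided(fs, 65536))       # -i 65536 provides only ~4M inodes
```

The last number lines up with the "about 4 million inodes" figure
quoted above for a 250 Gbyte disk newfs'ed with -i 65536.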
> > I think your tests are interesting,
> > however not very telling of many real-world scenarios.
>
> As mentioned above, my "test" was done on my multimedia
> file system with an average file size of roughly 1 Mbyte.
> Such file systems are quite real-world. :-)
>
> On a file system containing exclusively video files, innd
> cycle buffers or similarly large files, the inode density
> can be reduced even further. If you have a 2 Tbyte file
> system that contains only a few thousand files, then you're
> wasting 60 Gbytes for unused inode data.
True - agreed. However, I'm assuming most users of FreeBSD's UFS2
filesystem are in the 16k average file size range. If the average
user's file size is larger, then the default newfs parameters should
be changed; I just don't have any data or research to support that,
so I'm not certain.
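For what it's worth, the "60 Gbytes wasted" figure above is easy to
reproduce as back-of-the-envelope arithmetic. This sketch assumes a
256-byte UFS2 on-disk inode and a default density of roughly one inode
per 8 KiB of data space - both are my assumptions here, so verify
against newfs(8) and fs(5):

```python
# Estimate the space the inode table consumes for a given -i density.

INODE_SIZE = 256   # assumed bytes per UFS2 on-disk inode

def inode_overhead(fs_bytes, bytes_per_inode):
    """Return (inode count, bytes spent on inodes) for a -i density."""
    n_inodes = fs_bytes // bytes_per_inode
    return n_inodes, n_inodes * INODE_SIZE

TB = 2 ** 40
GB = 2 ** 30

# 2 Tbyte filesystem at the assumed default density (-i 8192):
n_default, waste_default = inode_overhead(2 * TB, 8192)
print(n_default, waste_default // GB)   # ~268M inodes, ~64 GB of metadata

# The same filesystem newfs'ed sparsely with -i 65536:
n_sparse, waste_sparse = inode_overhead(2 * TB, 65536)
print(n_sparse, waste_sparse // GB)     # ~33.5M inodes, ~8 GB
```

So on a 2 Tbyte volume holding only a few thousand huge files, tens of
gigabytes can indeed sit idle in an inode table that will never fill.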
> Of course, if you design a file system for different
> purposes, your requirements might be completely different.
> A maildir server or squid proxy server definitely requires
> a much higher inode density, for example.
If a filesystem were designed from scratch, having the inode density
be variable, or grow automatically to meet demand, would probably be
the most efficient approach.
Eric
--
------------------------------------------------------------------------
Eric Anderson Sr. Systems Administrator Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------
More information about the freebsd-fs mailing list