quotas safe on >2 TB filesystems in 6.1-RELEASE?
Eric Anderson
anderson at centtech.com
Wed Dec 20 07:43:36 PST 2006
On 12/20/06 09:15, Arone Silimantia wrote:
> Eric,
>
> Thanks for your comments and help - your posts on this
> list are much appreciated. Comments in line below:
>
>
> --- Eric Anderson <anderson at centtech.com> wrote:
>
>> With 9 TB without any journaling, you might run into problems if you
>> crash and need to fsck - the number of files you could have on the
>> file system could well require more memory/time than you have
>> available.
>
>
> Hmmm... the time required is more dependent on the number of inodes
> than on the size of the data / size of the files, right?
>
> My 9 TB dataset uses about 36 million inodes.
>
> Any comments on that number? Large? Pedestrian? Typical?
Sounds like you have a lot of larger files (roughly 250 KB per file on
average: 9 TB / 36 million inodes), which helps the fsck times. 36
million inodes should be fsck'able with enough memory (maybe ~3 GB-ish?
That's a wild guess). I have two 10 TB file systems; one has 180
million inodes. I don't attempt to fsck it, because it would take a
very long time and I might run out of memory (I have 8 GB). I use
gjournal on these (rough setup sketch below the df output) and am very
happy with it (thanks Pawel!).
df snippet:

Filesystem                     1K-blocks       Used       Avail Capacity     iused      ifree %iused  Mounted on
/dev/label/vol10-data.journal 9925732858 8780187028   351487202      96% 180847683 1102147515    14%  /vol10
/dev/label/vol11-data.journal 9925732858 2598987846  6532686384      28%  49422705 1233572493     4%  /vol11
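
For reference, the gjournal setup on those is roughly this (a sketch
from gjournal(8) - the code is still new, so check the current docs;
the provider names here just match my glabel'd devices):

  gjournal load                  # or geom_journal_load="YES" in loader.conf
  gjournal label /dev/label/vol10-data
  newfs -J /dev/label/vol10-data.journal
  mount -o async /dev/label/vol10-data.journal /vol10

The async mount is safe there because the journal itself guarantees
consistency after a crash.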
I also have several 2 TB partitions (set up prior to gjournal being
available) that are *FULL*, each with about 25-45 million inodes on
them. Those fsck in about 4-7 hours each, using between 1 GB and 3 GB
of memory to do so.
>
> I am hoping that that could be fsck'd (modern Hitachi SATA drives,
> RAID-6, 3ware) in 48 hours ... or am I way off?
Depends on the drives really, and maybe on caching. The geom_cache
module (still beta, probably, and not currently in the src tree, I
believe) is said to improve fsck times. 48 hours is a long time, and I
*think* it should complete within that time frame, but you'd really
have to test it to be sure.
>>> But I do absolutely need to run quotas (both user and group) on
>>> this 9 TB array. I also need to successfully use all quota admin
>>> tools (repquota, edquota, quota, etc.)
>>>
>>> Can I get an assurance that this is totally safe, sane, and fit to
>>> run in a mission critical, data critical environment? Anyone doing
>>> it currently? Any comments or warnings of _any kind_ much
>>> appreciated.
>>
>> I don't think anyone will say 'I promise it will work' of course,
>> but I would start by using the latest 6-STABLE source since there
>> have been quite a number of updates to file system related code
>> since 6.1.
>
>
> Ok, but the CLI tools (edquota, repquota, quota, quotacheck, quotaon)
> are all known-good for "bigdisk"?
>
> And there are no known "quotas just don't work with bigdisk"
> problems?
>
> I was hoping someone out there was running quotas with 6.1-RELEASE on
> a >2TB filesystem and could report favorably...
I'm not certain. There were some bugs in quotas that were recently
fixed (Kris Kennaway, I think, reported them and saw the fixes into
the tree). Prior to that I hit them consistently, so I stopped using
quotas. I haven't tried since the fixes went in, and the fixes (if I
recall correctly) had to do with background fsck (softupdates, maybe)
and not with the size of the disk.
You could try this in a mock-up environment: create a sparse file and
use it with mdconfig, then newfs it, enable quotas, mount it, and use
a script to create a massive number of files (36 million-ish) of about
200 KB each, owned by random users, in a similar fashion to your real
data, and see if all goes well. You can try your fsck that way too.
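
Roughly something like this (an untested sketch - the sizes, mount
point, and user names are placeholders; scale the loop counts up to
approach your real 36 million files):

  # sparse 10 TB backing file - takes almost no real disk space up front
  truncate -s 10T /storage/quota-test.img
  mdconfig -a -t vnode -f /storage/quota-test.img -u 1
  newfs /dev/md1
  mkdir -p /mnt/qtest
  # add an fstab entry so quotacheck/quotaon pick up the quota options:
  #   /dev/md1  /mnt/qtest  ufs  rw,userquota,groupquota  0  0
  mount /mnt/qtest
  quotacheck /mnt/qtest
  quotaon /mnt/qtest

  # crude file generator: 1000 dirs x 200 files of ~200 KB each,
  # rotating through a few test users (user0..user4 assumed to exist)
  jot 1000 | while read i; do
    mkdir -p /mnt/qtest/dir$i
    jot 200 | while read j; do
      dd if=/dev/zero of=/mnt/qtest/dir$i/f$j bs=200k count=1 2>/dev/null
      chown user$((j % 5)) /mnt/qtest/dir$i/f$j
    done
  done

  # exercise the quota tools, then unmount and time a full fsck
  repquota /mnt/qtest
  umount /mnt/qtest
  time fsck -y /dev/md1

Keep in mind the fsck timing on a vnode-backed md device won't match
your 3ware array exactly, but it should tell you whether the memory
use and the quota tool behavior are sane at that scale.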
Eric
--
------------------------------------------------------------------------
Eric Anderson Sr. Systems Administrator Centaur Technology
An undefined problem has an infinite number of solutions.
------------------------------------------------------------------------