comments on newfs raw disk ? Safe ? (7 terabyte array)

Arone Silimantia aronesimi at yahoo.com
Tue Feb 13 05:58:10 UTC 2007


Oliver,


On Mon, 12 Feb 2007, Oliver Fromme wrote:

 >  > > You'll probably want to reduce the inode density (i.e.
 >  > > increase the bytes-per-inode ratio).  With the default
 >  > > value, an fsck will be a royal pain, no matter whether you
 >  > > use background fsck (with snapshots) or not.  It might even
 >  > > not work at all if you don't have a huge amount of RAM.
 >  >
 >  > Well, I have 4 GB of physical RAM, and 4 GB of swap - so does that
 >  > total of 8 GB satisfy the "1 GB per TB" requirement, or do I really
 >  > need >5.5 GB of actual swap space (in addition to the physical) ?
 >
 > That "1 GB per TB" requirement is just a rule of thumb.
 > I don't know how accurate it is.  Also note that it is
 > desirable to avoid having fsck use swap, because it will
 > be even slower then.  A lot slower.


OK, understood.  But performance aside, fsck can use BOTH physical
memory and swap, so as far as fsck is concerned, I effectively have
8 GB to work with ?
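
For what it's worth, the 4+4 figure is just what I read off the box
with the stock tools, i.e. nothing fancier than:

  # sysctl hw.physmem
  # swapinfo -k

so if there is a better way to judge how much of that fsck can really
get at, I am all ears.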


 > # newfs -i 65536
 >
 > That will leave room for about 15 million inodes per TB,
 > which is plenty for your needs.
 >
 > By the way, reducing the inode density like that will also
 > give you more space for actual file data.  In UFS2, every
 > inode takes 256 bytes.  Increasing the bytes-per-inode ratio
 > from 4 KB to 64 KB will give you an additional ~60 GB of
 > space per terabyte.
 > _And_ it will reduce the memory and time requirements of
 > fsck.


Thank you - this is great advice.
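
Just to check that I follow the arithmetic (using a 5 TB filesystem
and the 256 bytes per UFS2 inode you mention - please correct me if
this is off):

  at -i 4096:   5 TB / 4 KB  ~= 1.2 billion inodes x 256 bytes ~= 312 GB
  at -i 65536:  5 TB / 64 KB ~=  76 million inodes x 256 bytes ~=  20 GB

so the difference is close to 290 GB on this filesystem, or roughly
60 GB per terabyte.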



 >  > Well, I don't mind a 24 hour fsck, and I would like to remove
 >  > complexity and not be so on the bleeding edge with things.  Since I
 >  > am only using 5 million inodes per TB anyway, that ends up being
 >  > 25-30 million inodes on the 5 TB drive, which I think could fsck
 >  > in a day or so.
 >
 > I suggest you test it before putting it into production,
 > i.e. populate the file system with the expected number of
 > files, then run fsck.


Well, here is what I am assuming, and I would like to get some
confirmation on these two points:

- The time it takes to fsck is not a function of how many inodes were
initialized by newfs, but of how many you are _actually using_.

- But the amount of memory fsck takes is a function of how many inodes
newfs created, regardless of how many you are actually using.

Are these two interpretations correct ?
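
Either way, I will take your advice and test before putting it into
production.  My rough plan - just a sketch, and the device name and
mount point are placeholders for my actual array - is along these
lines:

  # newfs -i 65536 /dev/da0
  # mount /dev/da0 /array
    (populate /array with roughly the number of files I expect to carry)
  # df -i /array              <- how many inodes are actually in use
  # umount /array
  # time fsck_ffs -f /dev/da0

while keeping an eye on fsck's size in top(1) to get a feel for the
memory side of it.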


 >  > I just need to know if my 4+4 GB of memory is enough, and if this
 >  > option in loader.conf:
 >  >
 >  > kern.maxdsiz="2048000000"
 >
 > That will limit the process size to 2 GB.  You might need
 > to set it higher if fsck needs more than that.  (I assume
 > you're running FreeBSD/amd64, or otherwise you'll run into
 > process size limitations anyway.)


Well ... no, I am using normal 32-bit x86 FreeBSD on an Intel-based system.  I
have 4 GB of physical ram, and 4 GB of swap.  So I am tempted to just make
that number 4096000000 and be done with it ... if fsck doesn't need
that much memory, there is no harm to the system in simply having an
inflated limit like that, is there ?

I guess if I want to be safe and guard against a rogue, runaway,
memory-eating process, I could ratchet it up to (physical_ram - 256 megs).
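
Concretely, I suppose that means something like this in
/boot/loader.conf (the number is just my own "physical RAM minus
256 megs" guess; given what you said about process size limitations
on anything other than amd64, I expect the real ceiling on i386 is
lower than this anyway):

  kern.maxdsiz="3840000000"

i.e. roughly 3.8 GB instead of the 2048000000 I asked about above.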

Which brings me to my last question:

I understand why it's not useful to try to compute fsck _times_ - there
are so many factors from disk speed to array speed to stripe size to
population, etc. - who knows how long it will take.

BUT, why isn't it possible to compute fsck _memory needs_ ?  If I have a
filesystem of size A, with X inodes initialized and Y inodes in use, shouldn't I
be able to compute how much memory fsck will need ?
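
To make the question concrete, my naive guess at a formula - I have
not read the fsck_ffs source, so this is exactly the kind of thing I
am hoping someone can confirm or correct - would be something like:

  memory  ~=  (fragments in the filesystem) x (about 1 bit each)
            + (inodes created by newfs)     x (a few bytes each)
            + (directories in use)          x (a few tens of bytes each)

If something along those lines holds, sizing RAM and kern.maxdsiz
ahead of time would beat guessing.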

Thanks again.


 

