comments on newfs raw disk ? Safe ? (7 terabyte array)

Arone Silimantia aronesimi at yahoo.com
Fri Feb 9 19:06:53 UTC 2007


Oliver,

Thank you for your detailed response - my own response is inline below:


Oliver Fromme <olli at lurza.secnetix.de> wrote:

Arone Silimantia wrote:
 > Big 3ware sata raid with 16 disks.  First two disks are a mirror to 
 > boot off of.  I installed the system with sysinstall and created all the 
 > partitions on the boot mirror, etc., and just didn't even touch the 
 > 14-disk array that was also created.
 > [...]
 > newfs -m 0 /dev/da1

You didn't mention the size of the FS, but I guess it's at
least 4 TB, probably more.


Well, in the subject line I mentioned 7 TB, but I have since rearranged some things and it will be 5.5 TB.



You'll probably want to reduce the inode density (i.e.
increase the bytes-per-inode ratio).  With the default
value, an fsck will be a royal pain, no matter whether you
use background fsck (with snapshots) or not.  It might not
even work at all if you don't have a huge amount of RAM.



Well, I have 4 GB of physical RAM and 4 GB of swap - so does that total of 8 GB satisfy the "1 GB per TB" requirement, or do I really need >5.5 GB of actual swap space (in addition to the physical RAM)?
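
(To double-check the swap side of that, I believe running swapinfo, e.g.

swapinfo -k

will show exactly how much swap is actually configured, so I can confirm the 4 GB is really there.)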


If you increase the ratio to 64 K, it will lower the fsck
time and RAM requirement by an order of magnitude, while
there are still about 15 million inodes available per TB.
If possible, increase the ratio (-i option) further.  It
depends on the expected average file size and the maximum
number of files that you intend to store on the FS, of
course.


OK, I will look into this.  My data population uses a little less than 5 million inodes per TB, so tuning this should be workable.  I see the default is '4' - so could I run newfs with:

newfs -i 8

to do what you are suggesting?
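
Or, if -i is actually the number of bytes per inode rather than a multiplier, I assume the 64 K density you describe would look more like this (using /dev/da1 as before):

newfs -i 65536 /dev/da1

Which of the two is the right reading?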


Depending on your application, it might also make sense to
_carefully_ (!) adjust the fragment and block sizes of the
FS (-f and -b options to newfs).  However, note that non-
standard values are not widely used and might expose bugs,
especially on large file systems.  If you change them, you
should at least perform some extensive stress testing.


I think I'll make things simple by steering clear of this...



Another thing that should be mentioned is the fact that
"-m 0" will result in two things:  First, it will make the
FS slower, and once it starts getting full it will be
_much_ slower.  Second, it increases fragmentation.

I recommend you don't use the -m option and leave it at the
default.  Yes, that means that a whole lot of GB will not
be available to users (non-root), but for that price you'll
get a fast file system.  Also note that you can change
that option at a later date with tunefs(8), so if you
decide that you _really_ need that extra space, and speed
is not an issue at all, then you can change the -m value
any time.


OK, that's good advice - I will leave it at the default.
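
(And if I ever decide I really do need that reserved space back, my understanding is that something along the lines of

tunefs -m 4 /dev/da1

run against the unmounted file system would lower the reserve after the fact.)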


Oh by the way, I also agree with Eric that you should have
a look at gjournal.  It practically removes the fsck issues.
At the moment it's only in -current, but I think Pawel
provided a port for 6.x.


Well, I don't mind a 24-hour fsck, and I would like to reduce complexity and not be on the bleeding edge with things.  Since I am only using 5 million inodes per TB anyway, that ends up being 25-30 million inodes on the 5.5 TB file system, which I think could fsck in a day or so.

I just need to know whether my 4 GB of RAM plus 4 GB of swap is enough, and whether this option in loader.conf:

kern.maxdsiz="2048000000"

is sufficient...
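
(Once the file system has some data on it, I suppose I could get a feel for the fsck memory footprint ahead of time with a read-only check, something like

fsck -t ufs -n /dev/da1

and watch how large the process grows, before I ever need to run it for real.)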

Again, many thanks.

 

