Writing contiguously to UFS2?

Bruce Evans brde at optusnet.com.au
Wed Sep 26 13:06:58 PDT 2007


On Wed, 26 Sep 2007, Rick C. Petty wrote:

> On Wed, Sep 26, 2007 at 05:59:24PM +1000, Bruce Evans wrote:
>> On Tue, 25 Sep 2007, Rick C. Petty wrote:
>>
>> That's insignificantly more.  Even doubling the size wouldn't make much
>> difference.  I see differences of at most 25% going the other way and
>
> Some would say that 25% difference is significant.  Obviously you disagree.

No, 25% is significant, but getting a difference that large takes
intentional mistuning, combined with no attempt to optimize the mistuned
case, plus bugs in the general case that hurt the mistuned case more
than usual.

>>     4K blocks,  512-frags, -e 512  (broken default):     40 MB/s
>>     4K blocks,  512-frags, -e 1024 (fixed default):      44 MB/s
>>     4K blocks,  512-frags, -e 2048 (best), kernel fixes: 47 MB/s
>>     4K blocks,  512-frags, -e 8192 (try too hard), kernel fixes
>>        (kernel fixes are not complete enough to handle this case;
>>        defaults and -e values which are < the cg size work best, except
>>        possibly when the fixes are complete):             45 MB/s
>>     16K blocks, 2K-frags,  -e 2K   (broken default):      50 MB/s
>>     16K blocks, 2K-frags,  -e 4K   (fixed default):       50.5 MB/s
>>     16K blocks, 2K-frags,  -e 8K   (best):                51.5 MB/s
>>     16K blocks, 2K-frags,  -e 64K  (try too hard):        < 51 MB/s again
       64K blocks, 8K-frags,  -e barely matters:             close to max, 52 MB/s
         (I was able to create a perfectly contiguous file of size 1GB
         (modulo indirect blocks, which were allocated as contiguously as
         possible) on a fs with a cg size of almost 2GB.  A second file
         would not have been allocated so well, since it would be started
         in the same cg as the directory inode = the same cg as the first
         file.)
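(For anyone who hasn't met -e: it sets fs_maxbpg, the cap on how many
blocks of one file the allocator places in a cylinder group before
moving on.  The following is only a sketch of that policy, loosely
after ffs_blkpref_ufs2() in sys/ufs/ffs/ffs_alloc.c; the types and the
pick_new_cg() helper are simplified stand-ins, not the kernel's.)

	/*
	 * Sketch of the allocation policy that -e (fs_maxbpg) controls.
	 * Not the kernel code: the types and pick_new_cg() are stand-ins.
	 */
	#include <stdint.h>

	typedef int64_t daddr64_t;	/* stand-in for ufs2_daddr_t */

	struct fs_sketch {
		int	fs_maxbpg;	/* -e: max blocks per cg per file */
		int	fs_frag;	/* fragments per block */
	};

	/* Hypothetical helper: pick a cg with above-average free space. */
	static daddr64_t
	pick_new_cg(const struct fs_sketch *fs)
	{
		(void)fs;
		return (0);		/* placeholder */
	}

	static daddr64_t
	blkpref_sketch(const struct fs_sketch *fs, int indx,
	    const daddr64_t *bap)
	{
		/*
		 * Every fs_maxbpg blocks, abandon the current run and move
		 * to a cg with above-average free space, so one file cannot
		 * monopolize a cg.  Too small an -e chops a large file into
		 * many runs; too large an -e overfills the cg and penalizes
		 * later allocations, as in the numbers above.
		 */
		if (indx % fs->fs_maxbpg == 0 || bap[indx - 1] == 0)
			return (pick_new_cg(fs));
		/* Otherwise prefer the block just after the previous one. */
		return (bap[indx - 1] + fs->fs_frag);
	}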
>
> Are you talking about throughputs now?  I was just talking about space.
> Time and space are usually mutually-exclusive optimizations.

These are all throughputs, starting with a new file system.  Since it's
a new file system with defaults for most parameters, it has the usual
space/time tuning (-m 8 -o time), but normal space/time tuning doesn't
apply to huge files anyway, since they contain no normal fragments.

>> ...
>>> size.  You should be able to create 2-4 CGs to span each of your 1TB
>>> drives without increasing the block size and thus minimum allocation unit.
>>
>> In theory it won't work.  From fs.h:
>> ...
>> Only offsets to the inode blocks, etc. are stored in the superblock.
>
> Yes, the offset to the cylinder group block and the offset to the inode
> block are in the superblock (struct fs).  It wouldn't be too difficult to
> tweak the ffs code to read in CGs larger than one block, by checking the
> difference between fs_iblkno and fs_cblkno.  I'm saying it's theoretically
> possible, although it will require tweaks in ffs code.  Again, I think it's
> worth investigating, especially if you believe there are performance
> penalties for having block sizes greater than the kernel buffer size.

But then it won't be binary compatible.
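(For concreteness, the check described above amounts to something like
the sketch below.  Only the fs_* field names match <ufs/ffs/fs.h>; the
struct is a cut-down stand-in so the example is self-contained, and a
real change would of course touch much more of the ffs code.)

	#include <stdint.h>

	struct fs_offsets {		/* stand-in for part of struct fs */
		int32_t	fs_cblkno;	/* offset of cg block, in frags */
		int32_t	fs_iblkno;	/* offset of inode blocks, in frags */
		int32_t	fs_frag;	/* fragments per block */
	};

	/*
	 * Number of full blocks spanned by the cylinder group block.
	 * Today this is always 1; larger cylinder groups would make it
	 * bigger, and every reader that assumes one block would break,
	 * hence the binary compatibility problem.
	 */
	static int
	cgblkcnt(const struct fs_offsets *fs)
	{
		return ((fs->fs_iblkno - fs->fs_cblkno) / fs->fs_frag);
	}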

The performance penalties are easier to fix (they should never have
existed on 64-bit platforms in the first place).

My main point here is that small cylinder groups alone are not a problem
for large files, provided they are not too small: they cost a few percent
in the best cases, and in the worst cases that loss is in the noise.

Bruce

