Unexpected SU+J inconsistency AGAIN -- please, don't shift topic to ZFS!

Lev Serebryakov lev at FreeBSD.org
Wed Mar 6 12:53:44 UTC 2013


Hello, Don.
You wrote on 6 March 2013 at 14:01:08:

>> DL> With NCQ or TCQ, the drive can have a sizeable number of writes
>> DL> internally queued and it is free to reorder them as it pleases even with
>> DL> write caching disabled, but if write caching is disabled it has to delay
>> DL> the notification of their completion until the data is on the platters
>> DL> so that UFS+SU can enforce the proper dependency ordering.
>>   But, again, performance would be terrible :( I've checked it. On
>>  very sparse multi-threaded patterns (multiple torrents downloading on
>>  a fast channel in my simple home case, and I think things could be
>>  worse for a big file server in an organization) and "simple" SATA
>>  drives, it is significantly worse in my experience :(
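
  (For anyone who wants to repeat this: if I remember correctly, the
  write cache of an ada(4) disk can be inspected and toggled through a
  sysctl; the exact sysctl name used in this sketch is an assumption
  from memory, so please verify it with `sysctl -a | grep write_cache`
  first. A minimal C sketch that only reads the current value:)

#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

int
main(void)
{
	int wc;
	size_t len = sizeof(wc);

	/*
	 * ASSUMPTION: the per-device knob for the first CAM ATA disk is
	 * kern.cam.ada.0.write_cache (-1 = driver default, 0 = disabled,
	 * 1 = enabled).  Check the name on your system before relying on it.
	 */
	if (sysctlbyname("kern.cam.ada.0.write_cache", &wc, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("ada0 write_cache = %d\n", wc);
	return (0);
}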

DL> I'm surprised that a typical drive would have enough onboard cache for
DL> write caching to help significantly in that situation.  Is the torrent
   It is 5x64MiB in my case; well, effectively 4x64MiB :)
   Really, I could repeat the experiment with some predictable and
  repeatable benchmark. What in our ports could be used as a
  massively-parallel (16+ files) random-write (with blocks like 64KiB and
  file sizes like 2+GiB) but "repeatable" benchmark?
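
  (To make the pattern concrete, something along these lines is what I
  mean; this is only a hand-written sketch, not a benchmark from ports,
  and the file names, fixed seed and "random with replacement" block
  choice are my own assumptions:)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define	NFILES		16			/* 16+ files */
#define	BLKSIZE		(64 * 1024)		/* 64KiB blocks */
#define	FILESIZE	(2ULL << 30)		/* 2GiB per file */
#define	NBLOCKS		(FILESIZE / BLKSIZE)

int
main(void)
{
	static char buf[BLKSIZE];
	char name[32];
	int fds[NFILES];
	uint64_t i;
	int f;

	srandom(12345);			/* fixed seed -> repeatable run */
	memset(buf, 0xA5, sizeof(buf));

	for (f = 0; f < NFILES; f++) {
		snprintf(name, sizeof(name), "bench.%02d", f);
		fds[f] = open(name, O_CREAT | O_WRONLY, 0644);
		if (fds[f] < 0) {
			perror("open");
			return (1);
		}
	}

	/*
	 * One pass over the total volume: pick a random file and a random
	 * 64KiB-aligned offset for every write, no fsync() anywhere, just
	 * like the torrent client.  (Blocks may repeat; a fancier version
	 * would write a shuffled permutation of all blocks.)
	 */
	for (i = 0; i < NFILES * NBLOCKS; i++) {
		off_t off = (off_t)(random() % NBLOCKS) * BLKSIZE;

		f = random() % NFILES;
		if (pwrite(fds[f], buf, BLKSIZE, off) != BLKSIZE) {
			perror("pwrite");
			return (1);
		}
	}

	for (f = 0; f < NFILES; f++)
		close(fds[f]);
	return (0);
}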

DL> software doing a lot of fsync() calls?  Those would essentially turn
  Nope. It tries to avoid fsync(), of course.

DL> Creating a file by writing it in random order is fairly expensive.  Each
DL> time a new block is written by the application, UFS+SU has to first find
DL> a free block by searching the block bitmaps, mark that block as
DL> allocated, wait for that write of the bitmap block to complete, write
DL> the data to that block, wait for that to complete, and then write the
DL> block pointer to the inode or an indirect block.  Because of the random
DL> write ordering, there is probably not enough locality to coalesce
DL> multiple updates to the bitmap and indirect blocks into one write before
DL> the syncer interval expires.  These operations all happen in the
DL> background after the write() call, but once you hit the I/O per second
DL> limit of the drive, eventually enough backlog builds to stall the
DL> application.  Also, if another update needs to be done to a block that
DL> the syncer has queued for writing, that may also cause a stall until the
DL> write completes.  If you hack the torrent software to create and
DL> pre-zero each file before it starts downloading it, then each bitmap and
DL> indirect block will probably only get written once during that operation
DL> and won't get written again during the actual download, and zeroing the
DL> data blocks will be sequential and fast. During the download, the only
DL> writes will be to the data blocks, so you might see something like a 3x
DL> performance improvement.
   My client (transmission, from ports) is configured to do "real
  preallocation" (not a sparse one), but it doesn't help much. It is surely
  limited by disk I/O :(
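
   (For reference, the pre-zeroing Don describes would look roughly
  like the sketch below: zero-fill each file sequentially before any
  random writes happen, so the bitmap and indirect blocks get written
  once up front. This is only an illustration of the idea, not
  transmission's actual preallocation code, and the file name and size
  in main() are made up:)

#include <sys/types.h>

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sequentially fill "path" with "size" bytes of zeros in 64KiB chunks. */
static int
prezero(const char *path, off_t size)
{
	static char zeros[64 * 1024];
	off_t done;
	ssize_t n;
	size_t chunk;
	int fd;

	memset(zeros, 0, sizeof(zeros));
	fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0)
		return (-1);
	for (done = 0; done < size; done += n) {
		chunk = (size - done) < (off_t)sizeof(zeros) ?
		    (size_t)(size - done) : sizeof(zeros);
		n = write(fd, zeros, chunk);
		if (n <= 0) {
			close(fd);
			return (-1);
		}
	}
	return (close(fd));
}

int
main(void)
{
	/* Hypothetical 2GiB download target. */
	if (prezero("download.bin", (off_t)2 << 30) != 0) {
		perror("prezero");
		return (1);
	}
	return (0);
}
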
    But anyway, a torrent client is a bad benchmark if we start to speak
  about real experiments to decide what could be improved in the
  FFS/GEOM stack, as it is not very repeatable.


-- 
// Black Lion AKA Lev Serebryakov <lev at FreeBSD.org>


