on st_blksize value

Andriy Gapon avg at freebsd.org
Wed Mar 24 06:10:11 UTC 2010


on 24/03/2010 01:54 Andrew Snow said the following:
> Andriy Gapon wrote:
> 
>> One practical benefit can be with ZFS: if a filesystem has recordsize >
>> PAGE_SIZE (e.g. default 128K) and it has checksums or compression enabled,
>> then (over-)writing in blocks smaller than recordsize would require reading
>> of a whole record first.
> 
> Not strictly true: in ZFS the recordsize setting is for the maximum size
> of a record, it can still write smaller than this.  If you overwrite 1K
> in the middle of a 128K record then it should just be writing a 1K
> block.  Each block has its own checksum attached to it so there's no
> need to recalculate checksums for data that isn't changing.

I must admit that I know almost zero about ZFS internals, but I see a logical
problem in your explanation: if the original data was written as a single 128K
block, and changing a 1K range within it results in a new 1K block, then the
original data is still affected, because the record must now account for the
fact that that range is stored in a different block.

Perhaps I am just misunderstanding what you said.

But perhaps you were referring to the case of (over)writing a small _file_,
as opposed to overwriting a small range within a large file?
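Either way, the practical upshot of st_blksize is the same: an application
that wants to avoid sub-record read-modify-write cycles should issue I/O in
multiples of the size the filesystem reports. Here is a minimal sketch of
that idea; the function names (preferred_io_size, copy_aligned) are my own
for illustration, and the claim that ZFS reports something related to
recordsize here is an assumption, not verified against the ZFS source:

```python
import os

def preferred_io_size(path):
    # st_blksize is the filesystem's preferred I/O granularity for this
    # file; on ZFS it is assumed to reflect the dataset's record size.
    return os.stat(path).st_blksize

def copy_aligned(src, dst):
    # Copy src to dst using the source file's preferred block size, so
    # that (ideally) each write covers whole filesystem records.
    blk = preferred_io_size(src)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(blk)
            if not chunk:
                break
            fout.write(chunk)
```

Whether this actually avoids the read-of-a-whole-record cost depends on the
points debated above, i.e. on how ZFS handles sub-record overwrites.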

-- 
Andriy Gapon
