fs/udf: vm pages "overlap" while reading large dir [patch]

Andriy Gapon avg at icyb.net.ua
Wed Feb 6 23:33:28 UTC 2008

on 06/02/2008 18:34 Andriy Gapon said the following:
> Actually the patch is not entirely correct. max_size returned from
> udf_bmap_internal should be used to calculate number of continuous
> sectors for read-ahead (as opposed to file size in the patch).

Attached is an updated patch.

The most prominent changes from the previous version:
1. udf_read can handle files with data embedded into the fentry
(sysutils/udfclient can produce such files for small amounts of data).
2. The above-mentioned files can now be mmap-ed, so you can not only
cat them but also cp them; btw, I believe that cp(1) should have logic
to fall back to read(2) if mmap() fails - remember kern/92040? :-)
3. udf_bmap uses max_size[*] returned by udf_bmap_internal to calculate
the number of contiguous blocks for read-ahead; [*] this is the number
of contiguous bytes available from a given offset within a file.
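To make change 3 concrete, here is a minimal user-space sketch of the
idea (the function name ra_run and the exact cap are my invention, not
the patch's actual code): convert max_size into a cd9660-style run
length, i.e. how many further contiguous blocks follow the one being
mapped.

```c
#include <stdint.h>

/* FreeBSD's maximum buffer cache block size (sys/param.h). */
#define MAXBSIZE	65536

/*
 * Hypothetical sketch: max_size is the number of contiguous bytes
 * available from the current offset, as reported by udf_bmap_internal.
 * Return the number of additional contiguous blocks after the block
 * being mapped, capped so the whole run fits in one MAXBSIZE buffer.
 */
static int
ra_run(uint64_t max_size, uint32_t bsize)
{
	uint64_t nblk;

	nblk = max_size / bsize;	/* whole blocks reachable */
	if (nblk == 0)
		return (0);
	nblk--;				/* exclude the mapped block itself */
	if (nblk > MAXBSIZE / bsize - 1)
		nblk = MAXBSIZE / bsize - 1;
	return ((int)nblk);
}
```

With 2K sectors this caps the run at 31 extra blocks (32 * 2K = 64K),
which matches the KB/t range seen in the testing below.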

Things that stay the same:
1. I still use bread() via the directory vnode; I think it's better
even if reading via the device vnode would work correctly.
2. I still try to read as much data as possible in the directory
bread(); I think of this as a sort of ad-hoc read-ahead without all the
heuristics and logic that cluster reading does. This should be OK
because directory reads are never random, always sequential - either
to read in the whole directory or to search linearly through it until
the wanted entry is found.

Detailed description of the patch.
Hunk1 - a "just in case" change: reject files with an unsupported
strategy right away instead of deferring the failure to
udf_bmap_internal.
Hunk2 - fix a typo in an existing macro; add a new macro that checks
whether a file has its data embedded into the fentry (an "unusual
file").
Hunk3 - for an unusual file, just uiomove the data from the fentry.
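A hypothetical user-space sketch of the idea behind Hunk3 (all names
are invented, and memcpy() stands in for the kernel's uiomove()): when
a file's data lives inside the fentry, a read is just a bounds-checked
copy out of the in-core fentry, with no bread()/bmap involved.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>	/* ssize_t */

/*
 * Sketch only, not the patch's code: copy up to resid bytes of the
 * embedded file data starting at offset off into buf, honoring EOF.
 */
static ssize_t
embedded_read(const uint8_t *fdata, size_t fsize, uint64_t off,
    uint8_t *buf, size_t resid)
{
	size_t n;

	if (off >= fsize)
		return (0);		/* at or past EOF: nothing to copy */
	n = fsize - off;		/* embedded bytes remaining */
	if (n > resid)
		n = resid;		/* caller asked for less */
	memcpy(buf, fdata + off, n);	/* kernel code would uiomove() */
	return ((ssize_t)n);
}
```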
Hunk4 - cosmetic changes plus a fix for udf_bmap_internal errors not
actually being honored; also add a printf, because udf_strategy should
never really be called for unusual files.
Hunk5 - return EOPNOTSUPP for unusual files. This is correct because we
cannot meaningfully bmap them, and it also enables correct handling of
these files in the VM code (vnode_pager). The read-ahead calculation
code is borrowed from cd9660.
Hunk6 - explain the purpose of the udf_bmap_internal call at this
place.
Hunk7 - some cosmetics; prevent the size passed to bread() from
exceeding MAXBSIZE; do the bread() via the directory vnode rather than
the device vnode (udf_readlblks was a macro wrapper around bread).
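The Hunk7 size clamp can be sketched like this (invented names; the
rounding up to a whole sector is my assumption about the alignment a
bread() of a device-block-addressed extent wants, not something taken
from the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* From FreeBSD's sys/param.h. */
#define MAXBSIZE	65536
#define roundup(x, y)	((((x) + ((y) - 1)) / (y)) * (y))

/*
 * Sketch only: read as much of the directory as remains in one
 * bread(), but never more than MAXBSIZE, in whole sectors.
 */
static size_t
dir_bread_size(uint64_t dir_remaining, uint32_t secsize)
{
	size_t size;

	size = dir_remaining > MAXBSIZE ? MAXBSIZE
	    : (size_t)dir_remaining;
	return (roundup(size, secsize));
}
```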

Hunk1 - borrowed from cd9660; apparently this data is needed for
correct cluster reading.

A couple of words about testing.
udf in 7.0-RC1 plus this patch correctly handled everything I could
throw at it: huge directories, unusual files, reading, mmap-ing.
Using udfclientfs I wrote a directory to a DVD-RAM UDF disk that
contained about 2G of files from ports distfiles. The files varied in
size from about a hundred bytes to hundreds of megabytes. I watched
systat -vmstat while copying this directory. With the unpatched code
some files would not be copied (the small "unusual files"), the KB/t
value stayed at 2K (meaning reads were always sector-sized), and MB/s
was about 1. With the patch all files were copied successfully and
correctly (md5-verified), KB/t varied from 2K to 64K (apparently
depending on the size of the file currently being copied), and MB/s
varied from 1 to 4, which is not bad for this DVD-RAM disk and drive.
If somebody asks, I can produce a UDF image containing various test
cases: a large directory, unusual files, etc. Or you can give
udfclient/udfclientfs from the sysutils/udfclient port a try.

I hope this patch will be useful.

Andriy Gapon
-------------- next part --------------
A non-text attachment was scrubbed...
Name: udf_latest2.diff
Type: text/x-patch
Size: 5893 bytes
Desc: not available
Url : http://lists.freebsd.org/pipermail/freebsd-hackers/attachments/20080206/8b19b743/udf_latest2.bin
