Capturing I/O traces

Fluffles etc at fluffles.net
Wed Jan 10 01:21:22 PST 2007


Hello list,

Thanks for all your input. After some thought about I/O tracing, I guess
ktrace won't work properly: it does not show the offset/length of the
requests, and it does include the actual data contents, so it's not what
I need.

I think the geom_nop approach might be the best or simplest one. Arne
Woerner provided me with a modified gnop which logs the I/O requests at
debug level 0. But I would need some form of binary logging to a file,
so that a specially modified gnop would dump all I/O into a file like
/mnt/memorydisk/ad4.nop.log. I think I will drop the time between
requests, so I will just need the serial order of the requests and the
following information (a struct sketch follows the list):

- read or write
- offset
- length
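
As a rough sketch in C, one record could be a packed struct like this
(the struct and field names are my own, just to make the layout
concrete):

#include <stdint.h>

/* One logged I/O request: 9 bytes, host byte order. */
struct trace_rec {
        uint8_t  op;      /* 'R' for read, 'W' for write */
        uint32_t offset;  /* byte offset into the device */
        uint32_t length;  /* request size in bytes */
} __attribute__((__packed__));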

It should be binary, like:
R<32-bit offset><32-bit length>W<32-bit offset><32-bit length>....

The R means read (so it takes 1 byte) and W of course means write. This
way it takes 9 bytes per logged I/O request. I could log to memory (an
md malloc-backed disk) to minimize the performance penalty. The logging
should be switchable on and off via a sysctl or similar (a sketch of
such a toggle follows the example commands below), so I can prepare the
benchmark setup and turn on tracing just before I launch the application
I wish to trace. When this modified gnop module is done, I need a small
C program that can read the log and perform the I/O on a given device,
called like this:

./tracereproduce <tracefile> <device>
./tracereproduce /mnt/memorydisk/ad4.nop.log /dev/raid5/test
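
For the sysctl toggle, roughly something like this could go into the
modified g_nop.c; the knob name trace_enable is my own invention, it
would just sit next to gnop's existing debug sysctl under kern.geom.nop:

static int g_nop_trace_enable = 0;
SYSCTL_INT(_kern_geom_nop, OID_AUTO, trace_enable, CTLFLAG_RW,
    &g_nop_trace_enable, 0, "Log I/O requests when set to 1");

/* ...and in g_nop_start(), before the bio is passed down: */
if (g_nop_trace_enable &&
    (bp->bio_cmd == BIO_READ || bp->bio_cmd == BIO_WRITE)) {
        struct trace_rec r;

        r.op = (bp->bio_cmd == BIO_READ) ? 'R' : 'W';
        r.offset = (uint32_t)bp->bio_offset;
        r.length = (uint32_t)bp->bio_length;
        /* append r to the in-memory log here */
}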

tracereproduce would then execute all I/O requests logged in the .log
file serially, at the fastest speed possible. It should record the start
and end time and calculate, with microsecond precision, the time it took
to finish all requests. Say this is 14.9 seconds for 20,000 I/O
requests; then it would report 1342 I/Os per second as the end result. I
can then reproduce the test on, say, a gmirror device, get another
score, and compare between the classes.
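
A minimal sketch of what tracereproduce could look like, assuming the
9-byte record format above (the scratch buffer size and the lack of
per-request error checking are simplifications):

#include <sys/time.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Same 9-byte record as in the logging sketch above. */
struct trace_rec {
        uint8_t  op;      /* 'R' or 'W' */
        uint32_t offset;
        uint32_t length;
} __attribute__((__packed__));

int
main(int argc, char *argv[])
{
        struct trace_rec r;
        struct timeval t0, t1;
        double secs;
        long n = 0;
        char *buf;
        FILE *log;
        int dev;

        if (argc != 3) {
                fprintf(stderr, "usage: tracereproduce <tracefile> <device>\n");
                return (1);
        }
        if ((log = fopen(argv[1], "rb")) == NULL)
                err(1, "%s", argv[1]);
        if ((dev = open(argv[2], O_RDWR)) == -1)
                err(1, "%s", argv[2]);
        /* Scratch buffer; assumes no single request exceeds 1 MB.
         * Replayed writes clobber the target, so only use a test
         * device. */
        if ((buf = malloc(1024 * 1024)) == NULL)
                err(1, "malloc");

        gettimeofday(&t0, NULL);
        while (fread(&r, sizeof(r), 1, log) == 1) {
                if (r.op == 'R')
                        pread(dev, buf, r.length, (off_t)r.offset);
                else
                        pwrite(dev, buf, r.length, (off_t)r.offset);
                n++;
        }
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%ld requests in %.3f seconds = %.0f I/O per second\n",
            n, secs, n / secs);
        return (0);
}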

What do you guys think? Any flaws in my idea? I know this traces at the
GEOM level and not at the VFS level, but as Arne Woerner pointed out to
me, caching/buffering that happens at the VFS level never reaches the
GEOM layer anyway, so I do not need it. All benchmarking will be done on
one machine, so the only thing that differs is the GEOM class; for
example, a 4-disk gstripe versus a 4-disk graid5, etc. There might be
better ways to do tracing, but since I'm a simple web developer I do not
have the skills to write a sophisticated application. The GEOM-tracing
approach outlined above might be my best option and might come very
close to measuring realistic performance gains. Feedback is
appreciated. :)

- Veronica

