Apparent strange disk behaviour in 6.0
Brian Candler
B.Candler at pobox.com
Sat Jul 30 17:14:13 GMT 2005
On Sat, Jul 30, 2005 at 03:29:27AM -0700, Julian Elischer wrote:
> >Please use gstat and look at the service times instead of the
> >busy percentage.
> >
> >
>
> The snapshot below is typical when doing tar from one drive to another...
> (tar c -C /disk1 -f - . | tar x -C /disk2 -f -)
>
> dT: 1.052  flag_I 1000000us  sizeof 240  i -1
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d   %busy Name
>     0    405    405   1057    0.2      0      0    0.0      0      0    0.0     9.8| ad0
>     0    405    405   1057    0.3      0      0    0.0      0      0    0.0    11.0| ad0s2
>     0    866      3     46    0.4    863   8459    0.7      0      0    0.0    63.8| da0
>    25    866      3     46    0.5    863   8459    0.8      0      0    0.0    66.1| da0s1
>     0    405    405   1057    0.3      0      0    0.0      0      0    0.0    12.1| ad0s2f
>   195    866      3     46    0.5    863   8459    0.8      0      0    0.0    68.1| da0s1d
>
> even though the process should be disk limited, neither of the disks is
> anywhere near 100%.
Are ad0 and da0 both arrays?
One IDE disk doing 405 reads per second (~2.5ms per operation) is pretty
good. A 7200rpm drive has a theoretical average rotational latency of
1/(7200/60)/2 = 4.2ms, which caps it at 7200/60*2 = 240 random ops per
second. It can do better than that with read-ahead caching.
But if it really is only 12.1% busy (which the 0.3 ms/r implies), that
means it would be capable of ~3350 operations per second... that's either
a seriously good drive array with tons of cache, or the stats are borked :-)
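(To spell out the arithmetic: 405 r/s x 0.3 ms/r is ~121ms of busy time
per second, i.e. ~12.1% busy; and at a sustained 0.3ms per operation,
100% busy would mean 1000/0.3 = ~3330 ops per second, which squares with
the ~3350 above.)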
With a single bog-standard IDE drive tarring up a directory containing some
large .iso images, and piping the output to /dev/null, I get:
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1    389    388  49318    2.4      1     24    1.4    90.6| ad0s3d
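(For the record, the test was nothing fancier than a read-only tar along
these lines, with gstat running in another terminal; the directory name
here is just an example:

    tar cf - -C /data/isos . > /dev/null

and the same again over /usr/src for the figures below.)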
And tarring up /usr/src (again piping to /dev/null) I get:
    1    564    564   5034    1.7      0      0    0.0    95.7| ad0s2e
This is with 5-STABLE as of 2005-05-13 (i.e. a bit after 5.4-RELEASE), and
an AMD 2500+ processor. Interestingly, I get a much higher kBps than your
ad0 - although I'm not actually writing the data out again.
Maybe it would be interesting to pipe the output of your tar to /dev/null
and see how the read performance from ad0 compares with your measured <read
from ad0 plus write to da0> performance? Then try just reading from da0?
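Concretely, something along these lines (the paths are placeholders, and
the dd is a harmless read-only pass over the raw device):

    # read-only tar over the same tree on ad0
    tar cf - -C /disk1 . > /dev/null

    # raw sequential read from da0
    dd if=/dev/da0 of=/dev/null bs=64k count=10000

with gstat running alongside each.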
Regards,
Brian.