processes not getting fair share of available disk I/O

Dieter freebsd at sopwith.solgatos.com
Wed Dec 13 22:43:32 PST 2006


> Mutex profiling would show if there is a mutex somehow getting in the
> way of your I/O (e.g. if Giant is somehow being forced).  I don't think
> it would show anything, though.  You can try to study interrupt issues
> (e.g. look for an interrupt storm during I/O) with vmstat -i.  Other
> than that you'd probably have to get your hands dirtier in the code.

   max        total       count   avg     cnt_hold     cnt_lock name
1158725      1185330        1596   742            0            0 /usr/src/sys/amd64/amd64/pmap.c:1563 (pmap)
1158721      1166593        1596   730            1           17 /usr/src/sys/amd64/amd64/pmap.c:1562 (vm page queue mutex)
 90598       578551      199304     2            3            4 /usr/src/sys/kern/kern_sx.c:157 (lockbuilder mtxpool)
 83234       967612      124000     7            0            0 /usr/src/sys/vm/vm_fault.c:906 (vm object)
 83102      2515439      450378     5            0            0 /usr/src/sys/kern/subr_sleepqueue.c:369 (process lock)
 82878      2049540        3215   637          196            2 /usr/src/sys/kern/kern_synch.c:236 (Giant)
 82632       947545      124000     7            0            4 /usr/src/sys/vm/vm_fault.c:295 (vm object)
 82550       285981      124000     2            4            0 /usr/src/sys/vm/vm_fault.c:929 (process lock)
 46745        46789          11  4253            0            0 /usr/src/sys/kern/vfs_subr.c:1041 (vm object)
 46741        52927         646    81            1            0 /usr/src/sys/vm/vm_object.c:1775 (vm page queue mutex)
 30068       105046        1230    85            2            0 /usr/src/sys/vm/vm_map.c:1380 (vm object)
 24083       300793      136380     2            1            1 /usr/src/sys/vm/vm_object.c:454 (vm object)
 24076        32222        2960    10            2            0 /usr/src/sys/vm/vm_object.c:625 (vm page queue mutex)
 19419        70113        7295     9            0            0 /usr/src/sys/vm/vm_fault.c:787 (vm object)
 16024        65388        5494    11            1            2 /usr/src/sys/vm/vnode_pager.c:1181 (vnode interlock)
 16018        51608        8791     5            7            9 /usr/src/sys/vm/vnode_pager.c:1169 (vm object)
 14398      1084811       25198    43          108            3 /usr/src/sys/kern/kern_sysctl.c:1280 (Giant)
 11940       274443       37582     7            0            1 /usr/src/sys/kern/vfs_bio.c:3082 (vm object)
 11567       625811      312742     2            0            2 /usr/src/sys/kern/kern_lock.c:168 (lockbuilder mtxpool)
 11096        45666        5241     8            1            4 /usr/src/sys/vm/vm_map.c:2404 (vm object)


If I'm reading the man page correctly, this says pmap held a lock for over a second?
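Whatever the time unit turns out to be, the table itself can be cross-checked: avg should equal total/count, and a nonzero cnt_lock means other threads actually blocked trying to take the mutex. A rough sketch with awk, fed two lines copied from the capture above (on a live system the input would come from the mutex-profiling sysctl that goes with the MUTEX_PROFILING kernel option, e.g. `sysctl debug.mutex.prof.stats` -- the exact sysctl name is an assumption, check your kernel version):

```shell
# Recompute avg (total/count) and flag contended locks (cnt_lock > 0).
# Field layout assumed from the table above:
#   max  total  count  avg  cnt_hold  cnt_lock  name
out=$(awk '{ avg = int($2 / $3);
             flag = ($6 > 0) ? "  CONTENDED" : "";
             printf "max=%-8d avg=%-5d %s%s\n", $1, avg, $7, flag }' <<'EOF'
1158725      1185330        1596   742            0            0 /usr/src/sys/amd64/amd64/pmap.c:1563 (pmap)
 82878      2049540        3215   637          196            2 /usr/src/sys/kern/kern_synch.c:236 (Giant)
EOF
)
printf '%s\n' "$out"
```

The recomputed averages match the avg column, and the Giant entry shows up as contended, which fits its nonzero cnt_lock.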

Here's a capture taken after resetting the stats, with the Ethernet->disk process
already up and running.

   max        total       count   avg     cnt_hold     cnt_lock name
  9649       277992       20357    13            0            0 /usr/src/sys/kern/vfs_bio.c:3364 (vm object)
  5939       106822        1158    92          135            0 /usr/src/sys/kern/kern_sysctl.c:1280 (Giant)
  5620        17658          29   608            2            0 /usr/src/sys/amd64/amd64/pmap.c:2481 (vm page queue mutex)
  5616        17502          29   603            0            0 /usr/src/sys/amd64/amd64/pmap.c:2482 (pmap)
  5021       446103       79641     5            1            0 /usr/src/sys/kern/subr_sleepqueue.c:369 (process lock)
  3270        16597        6032     2            0            0 /usr/src/sys/amd64/amd64/pmap.c:2602 (pmap)
  2861        15595        7158     2            0            0 /usr/src/sys/kern/kern_sx.c:245 (lockbuilder mtxpool)
  2811         5180         767     6            0            0 /usr/src/sys/vm/vm_map.c:2193 (vm object)
  2542        22712         243    93           29            0 /usr/src/sys/kern/kern_synch.c:236 (Giant)
  2152        14813         658    22            2            0 /usr/src/sys/vm/vm_fault.c:683 (vm object)
  2054         4334         991     4            1            1 /usr/src/sys/kern/vfs_subr.c:1983 (vnode interlock)
  1999        11592        2575     4            0            0 /usr/src/sys/vm/vm_object.c:454 (vm object)
  1497         4845         431    11            0            0 /usr/src/sys/kern/vfs_subr.c:2185 (vnode interlock)
  1157        10887        1288     8            0            0 /usr/src/sys/kern/vfs_subr.c:2073 (vnode interlock)
  1106         3920          33   118            0            0 /usr/src/sys/ufs/ffs/ffs_vfsops.c:1086 (struct mount mtx)
  1082         5893          40   147            0            0 /usr/src/sys/kern/vfs_subr.c:2805 (struct mount mtx)
   905         1812          12   151            0            0 /usr/src/sys/ufs/ffs/ffs_vfsops.c:1117 (struct mount mtx)
   880        15969        1525    10            0            0 /usr/src/sys/vm/vm_fault.c:295 (vm object)
   873         6137         494    12            0            0 /usr/src/sys/kern/vfs_cache.c:457 (vnode interlock)
   868        10460        1595     6            0            0 /usr/src/sys/amd64/amd64/pmap.c:1851 (pmap)
   867        82404       42253     1            0            0 /usr/src/sys/kern/kern_lock.c:168 (lockbuilder mtxpool)
   813         9197         107    85            0            0 /usr/src/sys/vm/vm_map.c:1380 (vm object)
   809         8591          95    90            1            0 /usr/src/sys/vm/vm_map.c:1431 (vm page queue mutex)
   708         1017          27    37            0            0 /usr/src/sys/kern/kern_exec.c:822 (vm object)


The following is from a kernel built with profiling support, but with profiling disabled:

    5 users    Load  0.00  0.00  0.01                  Dec 13 22:39

Mem:KB    REAL            VIRTUAL                     VN PAGER  SWAP PAGER
        Tot   Share      Tot    Share    Free         in  out     in  out
Act  446604    5696   461972     6952  749096 count
All 1262444    5696 1620985k     6952         pages
                                                          zfod   Interrupts
Proc:r  p  d  s  w    Csw  Trp  Sys  Int  Sof  Flt        cow    3254 total
             41      8844    1 1099 8706   11      563476 wire   1000 0: clk
                                                    16720 act         1: atkb
 1.2%Sys   3.6%Intr  0.0%User  0.0%Nice 95.2%Idl   685800 inact     5 3: sio1
|    |    |    |    |    |    |    |    |    |          4 cache       4: sio0
=+                                                 749092 free        7: ppc0
                                                          daefr       stray 7
Namei         Name-cache    Dir-cache                     prcfr   128 8: rtc
    Calls     hits    %     hits    %                     react       10: ohc
                                                          pdwak  2121 11: fwo
                                                          pdpgs       14: ata
Disks   ad2   ad4   ad6   ad8  ad10   da0   cd0           intrn       15: ata
KB/t   0.00  0.00  0.00  0.00   128  0.00  0.00    218640 buf
tps       0     0     0     0    10     0     0        10 dirtybuf
MB/s   0.00  0.00  0.00  0.00  1.30  0.00  0.00    100000 desiredvnodes
% busy    0     0     0     0    21     0     0       815 numvnodes
                                                      137 freevnodes
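The ad10 disk numbers above are roughly self-consistent: throughput is transfer size times transfer rate, i.e. MB/s ~= KB/t * tps / 1024. A quick check with the displayed values (systat's figures are rounded interval averages, so the product won't match the shown 1.30 MB/s exactly):

```shell
# Values copied from the systat screen above for ad10.
kbt=128   # KB per transfer
tps=10    # transfers per second
mbs=$(awk -v k="$kbt" -v t="$tps" 'BEGIN { printf "%.2f", k * t / 1024 }')
echo "ad10: ~${mbs} MB/s"
```

So the drive is doing about 1.25-1.30 MB/s in nice large 128 KB transfers at only ~21% busy, which is consistent with something other than the disk itself being the bottleneck.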

interrupt                          total       rate
irq0: clk                        5401694        999
irq1: atkbd0                           2          0
irq3: sio1                         44965          8
irq4: sio0                           407          0
irq7: ppc0                            49          0
stray irq7                             1          0
irq8: rtc                         691295        127
irq10: ohci0 ohci1+                  253          0
irq11: fwohci0 bge*              2750114        509
irq14: ata0                           57          0
irq15: ata1                          431          0
Total                            8889268       1645
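Following the interrupt-storm suggestion quoted at the top: a storm would show a device line with a rate in the tens of thousands per second. A sketch of that check, scanning `vmstat -i`-style lines for rates above a cutoff (5000/s here is an arbitrary threshold I picked for illustration; the sample lines are copied from the capture above, so nothing trips it):

```shell
# Flag any irq line whose last field (rate) exceeds the limit.
storms=$(awk -v limit=5000 '$1 ~ /^irq/ && $NF + 0 > limit { print $1 }' <<'EOF'
irq0: clk                        5401694        999
irq8: rtc                         691295        127
irq11: fwohci0 bge*              2750114        509
EOF
)
if [ -z "$storms" ]; then
    echo "no interrupt storm"
else
    echo "possible storm on: $storms"
fi
```

By that measure the 509/s on irq11 (shared by fwohci0 and bge) is busy but nowhere near storm territory.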




More information about the freebsd-questions mailing list