a Q on measuring system performance.
Dan Nelson
dnelson at allantgroup.com
Fri Mar 25 11:25:53 PST 2005
In the last episode (Mar 24), Yan Yu said:
> I added some code in various places related to file operations inside
> the kernel, e.g. fdalloc(), fdused(), fdunused(), fdfree(), etc. I am
> trying to measure the overhead added by this instrumentation code.
> My plan is: in my user-space program, I have something like the
> following:
> --------------------------------------------
> gettimeofday(&prev_time, NULL);
> for (i = 0; i < 1000; i++)
> {
>     FILE *fp = fopen("tmp", "r");
>     if (fp == NULL)
>     {
>         break;
>     }
>     fclose(fp);  /* close each stream, or the loop exhausts the fd limit */
>     cnt++;
> }
>
> gettimeofday(&cur_time, NULL);
> t_lapse = misc_tv_offset(&cur_time, &prev_time);
>
> ----------------------------------------------------
> I would run this on both the unmodified kernel and the instrumented
> kernel and compare t_lapse. My concern is that t_lapse includes
> context-switch time whenever the user process is taken off the run
> queue.
Try using getrusage(), and total up ru_utime+ru_stime.
> I also run "gprof" on the program, some related data is:
>   %   cumulative    self              self     total
> time    seconds    seconds    calls  ms/call  ms/call  name
> 80.0       0.01       0.01     1000     0.01     0.01  __sys_open [3]
> 20.0       0.01       0.00     1000     0.00     0.00  __sfp [4]
>  0.0       0.01       0.00     1987     0.00     0.00  memcpy [6]
>  0.0       0.01       0.00     1000     0.00     0.00  __sflags [283]
>  0.0       0.01       0.00     1000     0.00     0.01  fopen [1]
>
> I am wondering whether I should trust gprof instead, i.e. take the
> 0.01 ms/call as the cost of the related file operations. Or is there
> some better way to achieve this?
Gprof is better suited for programs that run for minutes to hours.
--
Dan Nelson
dnelson at allantgroup.com
More information about the freebsd-hackers mailing list