raw filesystem counters

E.S. Rosenberg esr+freebsd-fs at mail.hebrew.edu
Wed Jun 27 02:22:58 UTC 2018


//I hope it's not considered a problem that I'm reviving this old thread.

That is a really cool patch, thanks!
Will see if I can get the ZFS admins to allow me to use it...

A small follow-up question:
Is there any easily parsable way to find what disks are part of a pool?
zpool status poolname is a nightmare to parse.

Your patched output would be slightly better to parse but still not ideal,
because depending on whether disks are inside a raidz (or mirror) vdev or
not, they are nested at different depths...
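
For anyone else who finds this thread: below is a rough, untested sketch of
how one might parse the raw output quoted further down (where nesting is
encoded as leading "::" pairs instead of whitespace), which would also answer
my question about listing the disks in a pool. The field layout is assumed
from Eric's sample output, not taken from the patch itself:

import os
import subprocess

def raw_iostat_tree():
    """Map each pool to its member vdevs/disks, using the patched
    ZPOOL_RAW_STATS CSV output of `zpool iostat -v`."""
    env = dict(os.environ, ZPOOL_RAW_STATS="1")
    lines = subprocess.run(
        ["zpool", "iostat", "-v"],
        env=env, stdout=subprocess.PIPE, universal_newlines=True, check=True,
    ).stdout.splitlines()

    pools = {}
    current = None
    for line in lines:
        if not line or line.startswith("pool/dev,"):
            continue                      # skip blanks and the CSV header
        name = line.split(",")[0]
        depth = (len(name) - len(name.lstrip(":"))) // 2
        name = name.lstrip(":")
        if depth == 0:                    # top-level row: the pool itself
            current = name
            pools[current] = []
        elif current is not None:         # depth 1 = vdev, depth 2+ = disk
            pools[current].append(name)
    return pools

if __name__ == "__main__":
    for pool, members in raw_iostat_tree().items():
        print("%s: %s" % (pool, ", ".join(members)))

Counting the "::" depth this way puts raidz/mirror members and bare disks in
the same flat list per pool, which sidesteps the indentation problem above.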


On Tue, Feb 20, 2018 at 10:24 PM, Eric A. Borisch <eborisch at gmail.com>
wrote:

> On Tue, Feb 13, 2018 at 1:56 PM, E.S. Rosenberg
> <esr+freebsd-fs at mail.hebrew.edu> wrote:
> > Wow Eric that is exactly what I was looking for!
> > Thanks!
> > Nothing similar exists for ZFS, correct?
> > Thanks again,
> > Eli
>
> Here's a quick patch to zpool that adds a "raw" mode when
> ZPOOL_RAW_STATS is set (to anything) in the environment. Outputs are
> not averaged by time, so the first output has absolute counters from
> boot, and subsequent outputs (if given an interval, e.g. zpool iostat 5)
> *are not* averaged over the period. You could certainly average the
> interval outputs but not the initial one; I just chose to remove all
> averaging.
>
> https://gist.github.com/eborisch/c610c55cd974b9d4070c2811cc04cd8f
>
> Could also be implemented as a -p (parsable) flag to zpool iostat, but
> this was less intrusive to code up.
>
> On my system (with the above patch):
>
> $  zpool iostat
>
>                capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> system      67.2G  13.8G      9     51   159K   879K
> tome        2.94T   697G     19     34   949K   645K
> ----------  -----  -----  -----  -----  -----  -----
>
> $ ZPOOL_RAW_STATS=1 zpool iostat
>
> pool/dev,alloc,free,rops,wops,rbytes,wbytes
> system,72199012352,14774075392,42138123,228011166,717996265472,3978917265408
> tome,3237433278464,748296372224,87257557,150639839,4293053411328,2918490030080
>
> $ ZPOOL_RAW_STATS=1 zpool iostat -v
>
> pool/dev,alloc,free,rops,wops,rbytes,wbytes
> system,72200007680,14773080064,42138142,228019481,717997350912,3979089575936
> ::gpt/system,72200007680,14773080064,42138142,228019481,717997350912,3979089575936
> tome,3237679747072,748049903616,87257714,150656638,4293054717952,2918798745600
> ::mirror,3237679747072,748049903616,87257682,146824479,4293052755968,2461179686912
> ::::diskid/DISK-NNNNNNNp1,-,-,49889874,46124191,3718731919360,2468656459776
> ::::diskid/DISK-NNNNNNNp1,-,-,50357481,45933366,3683843850240,2468656459776
> ::gpt/log,1875968,2128830464,32,3832159,1961984,457619058688
>
> With an uptime of ~51 days.
>
> Enjoy!
>   - Eric
>
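
One more note, following up on Eric's point above that the raw counters are
cumulative since boot and never averaged: per-second rates are just the
delta between two snapshots divided by the sleep interval. Another rough,
untested sketch, again assuming the CSV format from the quoted output:

import os
import subprocess
import time

FIELDS = ("alloc", "free", "rops", "wops", "rbytes", "wbytes")

def snapshot():
    """One raw reading per pool: {pool: {field: int}}."""
    env = dict(os.environ, ZPOOL_RAW_STATS="1")
    lines = subprocess.run(
        ["zpool", "iostat"],
        env=env, stdout=subprocess.PIPE, universal_newlines=True, check=True,
    ).stdout.splitlines()
    stats = {}
    for line in lines:
        if not line or line.startswith("pool/dev,"):
            continue                      # skip blanks and the CSV header
        name, *values = line.split(",")
        stats[name] = dict(zip(FIELDS, (int(v) for v in values)))
    return stats

if __name__ == "__main__":
    INTERVAL = 5.0
    before = snapshot()
    time.sleep(INTERVAL)
    after = snapshot()
    for pool in after:
        rbps = (after[pool]["rbytes"] - before[pool]["rbytes"]) / INTERVAL
        wbps = (after[pool]["wbytes"] - before[pool]["wbytes"]) / INTERVAL
        print("%s: ~%.0f B/s read, ~%.0f B/s write" % (pool, rbps, wbps))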

