High fragmentation on zpool log

krad kraduk at gmail.com
Mon Nov 30 18:25:43 UTC 2015


That's true, log devices only hold a few seconds' worth of data, but how
much data that is will vary with the throughput to the array. It's also
only sync data that gets written to the log, so a simple rsync wouldn't,
under normal circumstances, generate sync writes, just async ones. That
is assuming you aren't running it over an NFS mount.
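
One way to check is to watch the log vdevs while the rsync runs.
Something like the below (the pool is called rpool in the output further
down) should show whether the log devices are seeing any writes at all:

# zpool iostat -v rpool 1

If the gpt/log-* lines stay at zero while the copy runs, the rsync is
purely async and the log isn't involved.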

On 30 November 2015 at 16:09, <kpneal at pobox.com> wrote:

> On Mon, Nov 30, 2015 at 09:17:33AM +0000, krad wrote:
> > Fragmentation isn't really a big issue on SSDs, as there are no heads
> > to move around like on magnetic drives. Also, due to wear levelling,
> > you have no idea where a block actually sits in flash: the drive only
> > presents a logical view of the block layout, not the true physical
> > mapping.
>
> Well, all of this is true, but I'm not convinced that was the real
> question.
> My interpretation was that the OP was asking how an 8GB log device can
> get to be 85% fragmented.
>
> My guess was that 85% fragmentation may be a sign of a log device that
> is too small. But I thought log devices only held a few seconds of
> activity, so I'm a little confused about how one can get to be 85%
> fragmented. Is this pool really moving a gigabyte a second or faster?
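>
> (Back of the envelope, assuming the default vfs.zfs.txg.timeout of 5
> seconds: log blocks are freed once their transaction group commits, so
> filling an 8GB log before that happens would take roughly 8GB / 5s,
> or about 1.6GB/s of sustained sync writes.)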
>
> > On 27 November 2015 at 15:24, Kai Gallasch <k at free.de> wrote:
> >
> > >
> > > Hi.
> > >
> > > Today I had a look at the zpool of a server (FreeBSD 10.2, GENERIC
> > > kernel, 100d uptime, 96GB RAM) I recently installed.
> > >
> > > The pool has eight SAS drives in a RAID 10 setup (striped mirror
> > > pairs) and uses a cache and a mirrored log.
> > >
> > > The log and cache both are on a pair of Intel SSDs.
> > >
> > > # gpart show -l da9
> > > =>       34  195371501  da9  GPT  (93G)
> > >          34          6       - free -  (3.0K)
> > >          40   16777216    1  log-BTTV5234003K100FGN  (8.0G)
> > >    16777256  178594272    2  cache-BTTV5234003K100FGN  (85G)
> > >   195371528          7       - free -  (3.5K)
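> > >
> > > (For reference, a layout like this can be created with commands
> > > along the following lines, using the sizes and labels above and the
> > > pool name rpool, after doing the same gpart steps on the second SSD:
> > >
> > > # gpart create -s gpt da9
> > > # gpart add -t freebsd-zfs -s 8G -l log-BTTV5234003K100FGN da9
> > > # gpart add -t freebsd-zfs -l cache-BTTV5234003K100FGN da9
> > > # zpool add rpool log mirror gpt/log-BTTV523401U4100FGN gpt/log-BTTV5234003K100FGN
> > > # zpool add rpool cache gpt/cache-BTTV523401U4100FGN gpt/cache-BTTV5234003K100FGN )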
> > >
> > >
> > > Is 85% fragmentation of the log device something to worry about?
> > >
> > > Why does zpool list show such unrealistic values for FREE and CAP?
> > > Is this normal?
> > >
> > > Attached: some output of zpool list.
> > >
> > > Regards,
> > > Kai.
> > >
> > >
> > > (zpool list -v output; omitted columns: EXPANDSZ, DEDUP,
> > > HEALTH, ALTROOT)
> > >
> > > NAME                             SIZE  ALLOC   FREE  FRAG  CAP
> > > rpool                           7.25T   440G  6.82T    4%   5%
> > >   mirror                        1.81T   110G  1.71T    4%   5%
> > >    gpt/rpool-WMC160D0SVZE          -      -      -      -    -
> > >    gpt/rpool-WMC160D8MJPD          -      -      -      -    -
> > >   mirror                        1.81T   110G  1.70T    4%   5%
> > >    gpt/rpool-WMC160D9DLL2          -      -      -      -    -
> > >    gpt/rpool-WMC160D23CWA          -      -      -      -    -
> > >   mirror                        1.81T   110G  1.71T    4%   5%
> > >    gpt/rpool-WMC160D94930          -      -      -      -    -
> > >    gpt/rpool-WMC160D9V5LW          -      -      -      -    -
> > >   mirror                        1.81T   110G  1.71T    4%   5%
> > >    gpt/rpool-WMC160D9ZV0S          -      -      -      -    -
> > >    gpt/rpool-WMC160D5HFT6          -      -      -      -    -
> > >   mirror                        7.94G  43.2M  7.90G   85%   0%
> > >    gpt/log-BTTV523401U4100FGN      -      -      -      -    -
> > >    gpt/log-BTTV5234003K100FGN      -      -      -      -    -
> > > cache                              -      -      -      -    -
> > >   gpt/cache-BTTV5234003K100FGN  85.2G   142G  16.0E    0%   166%
> > >   gpt/cache-BTTV523401U4100FGN  85.2G   172G  16.0E    0%   202%
> --
> Kevin P. Neal                                http://www.pobox.com/~kpn/
>
> Seen on bottom of IBM part number 1887724:
> DO NOT EXPOSE MOUSE PAD TO DIRECT SUNLIGHT FOR EXTENDED PERIODS OF TIME.
>

