Re: measuring swap partition speed

From: Warner Losh <imp_at_bsdimp.com>
Date: Sat, 23 Dec 2023 21:19:28 UTC
On Thu, Dec 21, 2023 at 5:20 PM void <void@f-m.fm> wrote:

> Hi Mark & Warner,
>
> I tried using fio with Warner's suggestions as a template, and
> pasted the results in the latter part of this message.
>
> On Thu, 21 Dec 2023, at 21:03, Mark Millard wrote:
>
> > # sysctl vm.stats.vm.v_page_size
> > vm.stats.vm.v_page_size: 4096
>
> Here, this setting is:
> # sysctl vm.stats.vm.v_page_size
> vm.stats.vm.v_page_size: 4096
>
> > # sysctl vm.phys_pager_cluster
> > vm.phys_pager_cluster: 1024
>
> It is 1024 here, too.
>
> Only the one usb3 port is occupied.
>
> fio output follows. I don't know yet whether the settings used are
> suitable for this context and would welcome suggestions. I think
> --filename can be a device? The output suggests it used hw.ncpu
> instead of --numjobs=8.
>
> ~~~~~~~
> # fio --name=randread --ioengine=posixaio --rw=randread --direct=1 --bs=8k --refill_buffers --norandommap --randrepeat=0 --iodepth=32 --numjobs=8 --runtime=60 --group_reporting --thread --size=2048M
> randread: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=posixaio, iodepth=32
> ..
> fio-3.36
> Starting 8 threads
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> Jobs: 4 (f=4): [_(1),r(2),_(2),r(1),_(1),r(1)][21.2%][r=288KiB/s][r=36 IOPS][eta 04m:05s]
> randread: (groupid=0, jobs=8): err= 0: pid=135125: Thu Dec 21 16:43:00 2023
>   read: IOPS=43, BW=351KiB/s (359kB/s)(22.6MiB/65986msec)
>     slat (nsec): min=889, max=1948.6k, avg=4172.45, stdev=36488.07
>     clat (msec): min=1108, max=11660, avg=5644.23, stdev=1282.12
>      lat (msec): min=1108, max=11660, avg=5644.24, stdev=1282.12
>     clat percentiles (msec):
>      |  1.00th=[ 1183],  5.00th=[ 3171], 10.00th=[ 4933], 20.00th=[ 5269],
>      | 30.00th=[ 5470], 40.00th=[ 5604], 50.00th=[ 5738], 60.00th=[ 5873],
>      | 70.00th=[ 5940], 80.00th=[ 6074], 90.00th=[ 6342], 95.00th=[ 6812],
>      | 99.00th=[10671], 99.50th=[10939], 99.90th=[11610], 99.95th=[11610],
>      | 99.99th=[11610]
>    bw (  KiB/s): min=  208, max= 3760, per=100.00%, avg=1535.05, stdev=128.23, samples=245
>    iops        : min=   26, max=  470, avg=191.88, stdev=16.03, samples=245
>   lat (msec)   : 2000=3.25%, >=2000=96.75%
>   cpu          : usr=0.00%, sys=0.12%, ctx=22712, majf=0, minf=0
>   IO depths    : 1=0.3%, 2=0.6%, 4=1.1%, 8=4.9%, 16=69.6%, 32=23.6%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=94.7%, 8=2.9%, 16=1.5%, 32=0.9%, 64=0.0%, >=64=0.0%
>      issued rwts: total=2895,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>      latency   : target=0, window=0, percentile=100.00%, depth=32
>
> Run status group 0 (all jobs):
>    READ: bw=351KiB/s (359kB/s), 351KiB/s-351KiB/s (359kB/s-359kB/s), io=22.6MiB (23.7MB), run=65986-65986msec
> #
>

5s+ of latency on the average, max latency of 12s!  Woof. No wonder you
hate life.
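On the --filename question upthread: yes, fio can target a raw device
node instead of laying out a file, which takes the filesystem out of the
picture. A hedged sketch (the device name /dev/da0p2 is a placeholder,
not the actual device from this thread; on FreeBSD, check `swapinfo` for
the real swap partition, and note a read-only workload like randread
won't write to it):

```shell
# Sketch only: random-read benchmark against a raw device.
# /dev/da0p2 is a placeholder -- substitute your actual swap partition
# as reported by `swapinfo`. With a device as --filename, fio skips the
# "Laying out IO file" step and sizes the job from the device unless
# --size is given; --refill_buffers is dropped since it only matters
# for writes.
fio --name=randread-dev \
    --filename=/dev/da0p2 \
    --ioengine=posixaio --rw=randread --direct=1 --bs=8k \
    --norandommap --randrepeat=0 --iodepth=32 --numjobs=8 \
    --runtime=60 --group_reporting --thread
```

With --numjobs=8 and a single shared device, all eight threads hit the
same block device queue, which is closer to how the pager actually
stresses a swap partition than eight separate 2 GiB files.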

Warner