Re: measuring swap partition speed
Date: Fri, 22 Dec 2023 04:19:45 UTC
void <void@f-m.fm> wrote on Fri, 22 Dec 2023 00:19:39 UTC:
> Hi Mark & Warner,
>
> I tried using fio with Warner's suggestions as a template, and
> pasted the results in the latter part of this message.
>
> On Thu, 21 Dec 2023, at 21:03, Mark Millard wrote:
>
> . . .
>
> fio output follows. No idea yet if the settings used are suitable for this context
> and would welcome suggestions.
I later supply the output from running the same command,
be it suitable or not. Again: the context is my old USB3
SSD stick, not spinning rust. Also my particular usual
arm clocking configuration for the RPi4B. (I forgot
to mention earlier that it has a case with a fan and
heatsinks.)
Also, this was not a swap space test command. So it
is relevant that the file system involved was UFS,
not ZFS.
# swapinfo
Device 1K-blocks Used Avail Capacity
/dev/label/growfs_swap 8388604 0 8388604 0%
I'll note that monitoring with top showed that the
--numjobs=8 led to 8 threads (twice the number of
hardware threads).
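As an aside, if one wanted the job count to track the hardware
thread count instead of hard-coding 8, something like the sketch
below could compute it. (hw.ncpu is FreeBSD's sysctl name; the
getconf fallback is just for portability, and the fio invocation
is only echoed here, not run.)

```shell
# Sketch (assumptions noted above): derive a job count equal to the
# number of hardware threads rather than hard-coding --numjobs=8.
NJOBS=$(sysctl -n hw.ncpu 2>/dev/null || getconf _NPROCESSORS_ONLN)
echo "would run: fio --name=randread --numjobs=${NJOBS} ..."
```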
I'll note that the run left behind:
# ls -C1 randread.*.0
randread.0.0
randread.1.0
randread.2.0
randread.3.0
randread.4.0
randread.5.0
randread.6.0
randread.7.0
that I later deleted.
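Cleaning up those leftover per-job data files is just a matter of
matching fio's <jobname>.<jobnum>.<filenum> naming, e.g.:

```shell
# Sketch: remove fio's leftover per-job data files (randread.N.0).
# The touch lines are stand-ins so the sketch is self-contained.
touch randread.0.0 randread.7.0
rm -f randread.*.0
```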
> I think --filename can be a device? Output suggests
> it used hw.ncpu instead of --numjobs=8
>
> ~~~~~~~
> # fio --name=randread --ioengine=posixaio --rw=randread --direct=1 --bs=8k --refill_buffers --norandommap --randrepeat=0 --iodepth=32 --numjobs=8 --runtime=60 --group_reporting --thread --size=2048M
> randread: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=posixaio, iodepth=32
> ...
> fio-3.36
> Starting 8 threads
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> randread: Laying out IO file (1 file / 2048MiB)
> Jobs: 4 (f=4): [_(1),r(2),_(2),r(1),_(1),r(1)][21.2%][r=288KiB/s][r=36 IOPS][eta 04m:05s]
> randread: (groupid=0, jobs=8): err= 0: pid=135125: Thu Dec 21 16:43:00 2023
> read: IOPS=43, BW=351KiB/s (359kB/s)(22.6MiB/65986msec)
> slat (nsec): min=889, max=1948.6k, avg=4172.45, stdev=36488.07
> clat (msec): min=1108, max=11660, avg=5644.23, stdev=1282.12
> lat (msec): min=1108, max=11660, avg=5644.24, stdev=1282.12
> clat percentiles (msec):
> | 1.00th=[ 1183], 5.00th=[ 3171], 10.00th=[ 4933], 20.00th=[ 5269],
> | 30.00th=[ 5470], 40.00th=[ 5604], 50.00th=[ 5738], 60.00th=[ 5873],
> | 70.00th=[ 5940], 80.00th=[ 6074], 90.00th=[ 6342], 95.00th=[ 6812],
> | 99.00th=[10671], 99.50th=[10939], 99.90th=[11610], 99.95th=[11610],
> | 99.99th=[11610]
> bw ( KiB/s): min= 208, max= 3760, per=100.00%, avg=1535.05, stdev=128.23, samples=245
> iops : min= 26, max= 470, avg=191.88, stdev=16.03, samples=245
> lat (msec) : 2000=3.25%, >=2000=96.75%
> cpu : usr=0.00%, sys=0.12%, ctx=22712, majf=0, minf=0
> IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=4.9%, 16=69.6%, 32=23.6%, >=64=0.0%
> submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> complete : 0=0.0%, 4=94.7%, 8=2.9%, 16=1.5%, 32=0.9%, 64=0.0%, >=64=0.0%
> issued rwts: total=2895,0,0,0 short=0,0,0,0 dropped=0,0,0,0
> latency : target=0, window=0, percentile=100.00%, depth=32
>
> Run status group 0 (all jobs):
> READ: bw=351KiB/s (359kB/s), 351KiB/s-351KiB/s (359kB/s-359kB/s), io=22.6MiB (23.7MB), run=65986-65986msec
# fio --name=randread --ioengine=posixaio --rw=randread --direct=1 --bs=8k --refill_buffers --norandommap --randrepeat=0 --iodepth=32 --numjobs=8 --runtime=60 --group_reporting --thread --size=2048M
randread: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=posixaio, iodepth=32
...
fio-3.36
Starting 8 threads
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
Jobs: 8 (f=8): [r(8)][100.0%][r=17.4MiB/s][r=2222 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=8): err= 0: pid=100241: Sun Dec 17 12:42:53 2023
read: IOPS=2486, BW=19.4MiB/s (20.4MB/s)(1168MiB/60125msec)
slat (nsec): min=888, max=1494.1k, avg=3446.83, stdev=8015.07
clat (msec): min=58, max=334, avg=102.70, stdev=22.68
lat (msec): min=58, max=334, avg=102.70, stdev=22.68
clat percentiles (msec):
| 1.00th=[ 70], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 84],
| 30.00th=[ 89], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 105],
| 70.00th=[ 111], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 150],
| 99.00th=[ 171], 99.50th=[ 180], 99.90th=[ 211], 99.95th=[ 228],
| 99.99th=[ 268]
bw ( KiB/s): min= 8246, max=24640, per=100.00%, avg=19903.38, stdev=281.80, samples=960
iops : min= 1028, max= 3080, avg=2487.90, stdev=35.24, samples=960
lat (msec) : 100=52.92%, 250=47.07%, 500=0.01%
cpu : usr=0.17%, sys=5.01%, ctx=1016480, majf=0, minf=0
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.7%, 16=75.3%, 32=24.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=93.2%, 8=4.7%, 16=2.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=149497,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=19.4MiB/s (20.4MB/s), 19.4MiB/s-19.4MiB/s (20.4MB/s-20.4MB/s), io=1168MiB (1225MB), run=60125-60125msec
After swapoff instead:
# fio --name=randread --ioengine=posixaio --rw=randread --direct=1 --bs=8k --refill_buffers --norandommap --randrepeat=0 --iodepth=32 --numjobs=8 --runtime=60 --group_reporting --thread --size=2048M
randread: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=posixaio, iodepth=32
...
fio-3.36
Starting 8 threads
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
randread: Laying out IO file (1 file / 2048MiB)
Jobs: 8 (f=8): [r(8)][100.0%][r=17.6MiB/s][r=2259 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=8): err= 0: pid=100211: Sun Dec 17 12:07:16 2023
read: IOPS=2471, BW=19.3MiB/s (20.2MB/s)(1161MiB/60119msec)
slat (nsec): min=889, max=1912.6k, avg=3415.21, stdev=8521.17
clat (msec): min=54, max=762, avg=103.32, stdev=28.95
lat (msec): min=54, max=762, avg=103.32, stdev=28.95
clat percentiles (msec):
| 1.00th=[ 69], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 84],
| 30.00th=[ 89], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 105],
| 70.00th=[ 110], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 150],
| 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 518], 99.95th=[ 558],
| 99.99th=[ 617]
bw ( KiB/s): min=12336, max=24912, per=100.00%, avg=19913.12, stdev=267.11, samples=954
iops : min= 1542, max= 3114, avg=2489.14, stdev=33.39, samples=954
lat (msec) : 100=52.77%, 250=47.01%, 500=0.11%, 750=0.11%, 1000=0.01%
cpu : usr=0.18%, sys=4.93%, ctx=1005861, majf=0, minf=0
IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=1.3%, 16=74.8%, 32=23.6%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=93.4%, 8=4.5%, 16=1.9%, 32=0.2%, 64=0.0%, >=64=0.0%
issued rwts: total=148600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=19.3MiB/s (20.2MB/s), 19.3MiB/s-19.3MiB/s (20.2MB/s-20.2MB/s), io=1161MiB (1217MB), run=60119-60119msec
The reason for the difference:
Jobs: 4 (f=4) (yours)
vs.
Jobs: 8 (f=8) (mine)
is not obvious, given that the commands were identical.
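For reference, an actual swap-device test along the lines of the
earlier "--filename can be a device" question might be shaped like
the sketch below. It only prints the commands rather than running
them, since swapoff and fio against the raw device need root; the
device name is the one from the swapinfo output above, and
--readonly is fio's guard against accidental writes. This is a
hedged sketch, not a vetted procedure.

```shell
# Hedged sketch: benchmark the swap device itself rather than a UFS
# file. Only echoes the command sequence; running it requires root
# and the device must be swapped off first.
SWAPDEV=/dev/label/growfs_swap
CMD="fio --name=devread --filename=${SWAPDEV} --readonly \
--ioengine=posixaio --rw=randread --direct=1 --bs=8k --iodepth=32 \
--numjobs=8 --runtime=60 --group_reporting --thread"
echo "swapoff ${SWAPDEV} && ${CMD}; swapon ${SWAPDEV}"
```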
===
Mark Millard
marklmi at yahoo.com