raid3 is slow

Szabó Péter matyee at mail.alba.hu
Thu Mar 29 12:33:52 UTC 2007


I have learned a lot from the posts, thank you all. With this new knowledge
I did a few tests, with very interesting results. Now the array state is
COMPLETE.

----------------- Read test from the raid device (no encryption) -----------------

# dd if=/dev/raid3/nmivol of=/dev/null bs=1m count=30000
30000+0 records in
30000+0 records out
31457280000 bytes transferred in 335.859291 secs (93662081 bytes/sec)

# top -S
last pid: 19340;  load averages:  0.46,  1.01,  1.97    up 0+18:31:47  13:29:56
104 processes: 2 running, 89 sleeping, 13 waiting
CPU states:  0.0% user,  0.0% nice, 33.2% system,  4.3% interrupt, 62.5% idle
Mem: 65M Active, 568M Inact, 316M Wired, 39M Cache, 111M Buf, 10M Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
   11 root        1 171   52     0K     8K RUN    944:41 59.77% idle
   40 root        1  -8    0     0K     8K r3:w1   34:06 19.63% g_raid3 nmivol
    3 root        1  -8    0     0K     8K -       19:31  5.91% g_up
    4 root        1  -8    0     0K     8K -       33:29  4.79% g_down
19271 root        1  -8    0  3400K  1772K physrd   0:07  1.56% dd

************************************
Everything seems OK: "g_raid3 nmivol" gets ~20% WCPU from the beginning of
the test to the end, and the load stays around 0.5 for the whole test.
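
For comparison, it might be worth checking what a single component disk does
on its own, and watching per-provider load while dd runs. A rough sketch
(the ad4 device name and the gstat filter regex are only examples, not the
actual disks):

# dd if=/dev/ad4 of=/dev/null bs=1m count=4000
# gstat -f 'ad[0-9]'

gstat is in the base system and shows %busy and throughput per GEOM
provider, so it should make visible whether a single disk is the bottleneck.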

------------- Read test from the encrypted partition of the raid --------------

# dd if=/dev/raid3/nmivole.bde of=/dev/null bs=1m count=30000
30000+0 records in
30000+0 records out
31457280000 bytes transferred in 1281.651229 secs (24544337 bytes/sec)

# top -S
last pid: 19588;  load averages:  1.24,  0.89,  1.08    up 0+18:47:03  13:45:12
110 processes: 4 running, 95 sleeping, 11 waiting
CPU states:  0.0% user,  0.0% nice, 88.3% system,  1.6% interrupt, 10.2% idle
Mem: 74M Active, 570M Inact, 314M Wired, 37M Cache, 111M Buf, 1656K Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
15400 root        1  -8    0     0K     8K -       65:23 67.43% g_bde raid3/nmivole
   11 root        1 171   52     0K     8K RUN    954:01 11.96% idle
   40 root        1  -8    0     0K     8K r3:w1   34:36  4.98% g_raid3 nmivol
    3 root        1  -8    0     0K     8K -       19:55  4.69% g_up
19588 root        1  96    0  9628K  9096K select   0:00  2.60% perl
    4 root        1  -8    0     0K     8K -       33:41  1.51% g_down

************************************
Still OK, but it seems the hardware is too weak for raid3 and gbde
together: "g_bde raid3/nmivole" gets ~70% WCPU and the load is ~1.0, and
that holds for the whole test.
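
To see how much of this is raw gbde CPU cost, one could attach gbde to a
memory-backed md(4) device and read from that, taking the disks and raid3
out of the picture entirely. A sketch with hypothetical names (md0, the
lock file path and the size are made up):

# mdconfig -a -t swap -s 1g
# gbde init /dev/md0 -L /etc/md0.lock
# gbde attach /dev/md0 -l /etc/md0.lock
# dd if=/dev/md0.bde of=/dev/null bs=1m

If that also tops out around 25 MB/s, then the CPU, not raid3, is the limit
for the encrypted reads.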

Now the write test, and the interesting part. I can only write to a file on
the mounted filesystem; a raw write to the device would blast the filesystem
away.

------------- Write test to the encrypted partition of the raid --------------

# dd if=/dev/zero of=/mnt/vol/x.x bs=1m count=30000
13252+0 records in
13251+0 records out
13894680576 bytes transferred in 1595.827345 secs (8706882 bytes/sec)

>> At the beginning
# top -S
last pid: 19972;  load averages:  1.04,  0.50,  0.66    up 0+19:10:26  14:08:35
104 processes: 3 running, 89 sleeping, 12 waiting
CPU states:  0.0% user,  0.0% nice, 70.8% system,  0.8% interrupt, 28.4% idle
Mem: 65M Active, 679M Inact, 196M Wired, 22M Cache, 111M Buf, 35M Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
   11 root        1 171   52     0K     8K RUN    963:45 52.15% idle
15400 root        1  -8    0     0K     8K -       75:58 24.27% g_bde raid3/nmivole
    3 root        1  -8    0     0K     8K -       20:49  4.88% g_up
19970 root        1 -16    0  3400K  1772K wdrain   0:04  3.29% dd
    4 root        1  -8    0     0K     8K -       34:07  2.29% g_down
   40 root        1  -8    0     0K     8K r3:w1   35:33  2.05% g_raid3 nmivol

It seems fine, but the load of 1.0 is, I think, a little bit high. gbde gets
only ~30% WCPU and g_down gets ~3%. I don't know what the task of g_down is.
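
(As far as I understand, g_up and g_down are the two GEOM kernel threads
that carry all I/O through the stack: g_down pushes requests from consumers
down toward the disks, and g_up returns the completions. The layering the
requests pass through can be dumped with

# sysctl kern.geom.conftxt

if anybody wants to check the exact stacking here.)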

>> After a few minutes
# top -S
last pid: 20351;  load averages:  3.23,  2.84,  2.24    up 0+19:34:37  14:32:46
104 processes: 4 running, 88 sleeping, 12 waiting
CPU states:  0.0% user,  0.0% nice, 83.6% system,  1.6% interrupt, 14.8% idle
Mem: 65M Active, 611M Inact, 266M Wired, 53M Cache, 111M Buf, 1080K Free
Swap:

  PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
    4 root        1  -8    0     0K     8K -       42:47 53.27% g_down
15400 root        1  -8    0     0K     8K -       82:18 19.38% g_bde raid3/nmivole
   11 root        1 171   52     0K     8K RUN    968:44 12.60% idle
    3 root        1  -8    0     0K     8K -       22:12  3.03% g_up
19970 root        1 -16    0  3400K  1772K wdrain   1:12  2.93% dd
   40 root        1  -8    0     0K     8K r3:w1   36:18  1.07% g_raid3 nmivol


************************************
At this point I stopped the test, before the computer became unreachable.

It looks terrible! g_down gets more and more WCPU, and the load goes through
the roof. I don't know what to think. Now I can see that in the end I will
change the raid3 to a stripe (a sketch is below) ;) But I don't want to :(

If somebody has an idea, please post it!

Matyee

----- Original Message ----- 
From: "Dag-Erling "Smørgrav"" <des at des.no>


Szabó Péter <matyee at mail.alba.hu> writes:
> Array problem solved. But my problem is not the low read/write
> performance, my problem is the high load. I start a single bittorrent
> download to the encoded raid3 partition with 2.5MB/s speed, and the
> load is 2.5.

Even after the array has finished rebuilding the reconnected consumer?

DES
-- 
Dag-Erling Smørgrav - des at des.no



