gjournal performance issues

Fluffles etc at fluffles.net
Mon Jun 25 22:37:10 UTC 2007


Hello list,

I'm testing the gjournal present in -CURRENT as of 13 June 2007. So far I'm 
not really impressed with its performance, so I'm writing to the list for 
suggestions and information regarding gjournal.

First, my setup:
8 disks in RAID5 using geom_raid5, with gjournal on top; both the 
journal (1GB) and the data are stored on the same consumer. Since 
gjournal journals both metadata and file data, every block is written 
twice, so theoretically the sequential write throughput should be 
halved. Unfortunately, actual throughput turns out to be far lower.
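
For reference, the stack was assembled roughly like this (a sketch from 
memory; device names are examples and the graid5 syntax may differ in 
detail):

    kldload geom_raid5                              # third-party RAID5 class
    graid5 label data da0 da1 da2 da3 da4 da5 da6 da7
    kldload geom_journal
    gjournal label -s 1073741824 /dev/raid5/data    # 1GB journal, same consumer
    newfs -J /dev/raid5/data.journal                # UFS2 with the journal flag
    mount -o async /dev/raid5/data.journal /usr     # gjournal wants async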

My problems:
- first, throughput is only about 8% of the throughput when not using 
gjournal at all, whereas it should be close to 50% (see the measurement 
sketch after this list).
- second, during the 'switch' (writing the journal to its final location 
and starting a new journal) it appears that no read operations can be 
serviced from the .journal device. If the .journal device holds /usr, 
that means the whole system basically freezes for 3 to 5 seconds. Not 
really sexy. Why would it block read requests?
- when using one consumer for both journal and data, it appears the 
journal is placed at the end of the device. Why? Normally the beginning 
of a disk is the fastest and therefore the preferable location for the 
journal.
- when analysing graid5 sysctl statistics, it appears gjournal is 
causing non-contiguous I/O, which results in a lot of 2-phase I/Os 
(both a read and a write for a single write request); the performance 
problems are most probably related to this. Why doesn't gjournal read a 
chunk of the journal (say 50MB) and then write it out in one go? And 
why doesn't it write contiguously?
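
To quantify the first point, this is roughly how I measure sequential 
write throughput (paths and sizes are just examples):

    # write 2GB sequentially to the journaled filesystem
    dd if=/dev/zero of=/usr/testfile bs=1m count=2048
    # every block is written twice (journal, then final location), so the
    # theoretical ceiling is ~50% of the bare graid5 throughput; what I
    # actually measure is closer to 8% of it.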

I've tried:
- playing with graid5 tunables, including disabling write-back buffer
- playing with gjournal tunables, including disabling optimization 
(request combining), reducing parallel operations to 1, reducing the 
journal switch time, and more (see the sysctl sketch after this list)
- kmem is 500MB; gjournal can use 250MB of kernel memory for its cache 
(more than the default)
- standard UFS2 mounted with the async option and without soft updates; 
newfs was run with the -J parameter
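
For completeness, the gjournal knobs I played with live under 
kern.geom.journal; from memory (so treat the exact names and values as 
approximate):

    sysctl kern.geom.journal.optimize=0          # disable request combining
    sysctl kern.geom.journal.parallel_copies=1   # one copy operation at a time
    sysctl kern.geom.journal.parallel_flushes=1  # one flush at a time
    sysctl kern.geom.journal.switch_time=5       # switch the journal more often
    sysctl kern.geom.journal.cache.limit=262144000   # ~250MB gjournal cache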

Does anyone have any input? I was hoping for at least 40MB/s throughput 
and no blocking of read requests during journal switches.

Regards,

- Veronica


