Increasing GELI performance

Fluffles etc at fluffles.net
Mon Jul 30 20:35:41 UTC 2007


Pawel Jakub Dawidek wrote:
> On Fri, Jul 27, 2007 at 10:00:35PM +0100, Dominic Bishop wrote:
>   
>> I just tried your suggestion of geli on the raw device and it is no better
>> at all:
>>
>> dd if=/dev/da0.eli of=/dev/null bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 29.739186 secs (35259069 bytes/sec)
>>
>> dd if=/dev/zero of=/dev/da0.eli bs=1m count=1000
>> 1000+0 records in
>> 1000+0 records out
>> 1048576000 bytes transferred in 23.501061 secs (44618241 bytes/sec)
>>
>> Using top -S with a 1s refresh to list the geli processes while doing this,
>> it seems only one of them is doing anything at any given time; the others
>> are sitting in a state of "geli:w". I assume that is a truncation of
>> something, maybe geli:wait at a guess.
>>     
>
> It doesn't matter how many cores/CPUs you have if you run a
> single-threaded application. What you are doing, exactly, is:
> 1. Send a read of 128kB.
> 2. One of the geli threads picks it up, decrypts it and sends it back.
> 3. Send the next read of 128kB.
> 4. One of the geli threads picks it up, decrypts it and sends it back.
> ...
>
> All threads will only be used when there are multiple threads accessing
> the provider.
>   
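
(To illustrate the point above: one way to keep more than one geli thread
busy is to issue several reads in parallel. This is only a sketch; it reuses
the /dev/da0.eli provider from the quoted output, but the count and skip
values are arbitrary placeholders. Each dd reads a different 250 MB region
of the device, and top -S should then show several geli threads active.)

# run four readers against non-overlapping regions of the provider
dd if=/dev/da0.eli of=/dev/null bs=1m count=250 &
dd if=/dev/da0.eli of=/dev/null bs=1m count=250 skip=250 &
dd if=/dev/da0.eli of=/dev/null bs=1m count=250 skip=500 &
dd if=/dev/da0.eli of=/dev/null bs=1m count=250 skip=750 &
wait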

But isn't it true that the UFS filesystem uses read-ahead, and with that a
queue depth of multiple I/Os (somewhere between 7 and 9 queued I/Os), even
when using something like dd to sequentially read a file on a mounted
filesystem? Then this read-ahead would cause multiple I/O requests to come
in at once, and geom_eli could use multiple threads to maximize I/O
throughput. Maybe Dominic can try playing with the "vfs.read_max" sysctl
variable; a rough sketch of such a test follows below.
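
(A minimal sketch of that experiment, assuming the .eli provider carries a
UFS filesystem mounted somewhere like /mnt; the mount point, the file name
and the value 32 below are placeholders, not taken from the original mails.
The read has to go through the mounted filesystem rather than the raw
/dev/da0.eli device for read-ahead to apply.)

# check the current read-ahead setting, then raise it
sysctl vfs.read_max
sysctl vfs.read_max=32

# re-run the sequential read through the filesystem and watch top -S
dd if=/mnt/testfile of=/dev/null bs=1m count=1000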

- Veronica

