ZFS on top of GELI

Dan Naumov dan.naumov at gmail.com
Tue Jan 12 20:49:54 UTC 2010


2010/1/12 Rafał Jackiewicz <freebsd at o2.pl>:
>>Thanks, could you do the same, but using 2 .eli vdevs mirrorred
>>together in a zfs mirror?
>>
>>- Sincerely,
>>Dan Naumov
>
> Hi,
>
> Proc: Intel Atom 330 (2x1.6 GHz) - 1 package(s) x 2 core(s) x 2 HTT threads
> Chipset: Intel 82945G
> Sys: 8.0-RELEASE FreeBSD 8.0-RELEASE #0
> empty file: /boot/loader.conf
> Hdd:
>   ad4: 953869MB <Seagate ST31000533CS SC15> at ata2-master SATA150
>   ad6: 953869MB <Seagate ST31000533CS SC15> at ata3-master SATA150
> Geli:
>   geli init -s 4096 -K /etc/keys/ad4s2.key /dev/ad4s2
>   geli init -s 4096 -K /etc/keys/ad6s2.key /dev/ad6s2
>
>
> Results:
> ****************************************************
>
> *** single drive               write MB/s      read MB/s
> eli.journal.ufs2               23              14
> eli.zfs                        19              36
>
>
> *** mirror                     write MB/s      read MB/s
> mirror.eli.journal.ufs2        23              16
> eli.zfs                        31              40
> zfs                            83              79
>
>
> *** degraded mirror            write MB/s      read MB/s
> mirror.eli.journal.ufs2        16              9
> eli.zfs                        56              40
> zfs                            86              71
>
> ****************************************************

Thanks a lot for your numbers, the relevant part for me was this:

*** mirror                      write MB/s      read MB/s
eli.zfs                         31              40
zfs                             83              79

*** degraded mirror             write MB/s      read MB/s
eli.zfs                         56              40
zfs                             86              71

31 MB/s writes and 40 MB/s reads is something I guess I could
potentially live with. I suspect the main problem with stacking ZFS
on top of GELI like this is that writing to a mirror costs double the
CPU time: all written data has to be encrypted twice (once for each
disk), instead of being encrypted once and then written out to both
disks, which is what would happen if the crypto layer sat on top of
ZFS rather than ZFS sitting on top of the crypto.
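For reference, the layering discussed above would be assembled roughly
like this on FreeBSD (the pool name "tank" is a placeholder; device and
keyfile names follow Rafał's setup from the quoted message):

```shell
# Attach the two encrypted providers, one per disk
# (geli init used -K for the keyfile; attach uses -k):
geli attach -k /etc/keys/ad4s2.key /dev/ad4s2
geli attach -k /etc/keys/ad6s2.key /dev/ad6s2

# Build the ZFS mirror on top of the .eli devices. Every block
# written to the pool goes to BOTH providers, so it passes through
# the crypto layer twice -- once for each disk:
zpool create tank mirror ad4s2.eli ad6s2.eli
```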

I now have to reevaluate my planned use of an SSD, though. I was
planning to use a 40 GB partition on an Intel 80 GB X25-M G2 as a
dedicated L2ARC device for a ZFS mirror of 2 x 2 TB disks. However,
these numbers make it quite obvious that I would already be
CPU-starved at 40-50 MB/s throughput on the encrypted ZFS mirror, so
adding an L2ARC SSD, while improving latency, would do essentially
nothing for actual disk read speeds, considering the L2ARC itself
would also have to sit on top of a GELI device.
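For completeness, attaching an encrypted L2ARC would look something
like this (the SSD partition name ad8p1, keyfile path, and pool name
"tank" are all hypothetical):

```shell
# Encrypt the SSD partition the same way as the data disks:
geli init -s 4096 -K /etc/keys/ad8p1.key /dev/ad8p1
geli attach -k /etc/keys/ad8p1.key /dev/ad8p1

# Add the .eli provider as an L2ARC cache device, so cache reads
# also pass through GELI and consume CPU on decryption:
zpool add tank cache ad8p1.eli
```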

- Sincerely,
Dan Naumov


More information about the freebsd-questions mailing list