Encrypting raid5 volume with geli

Ulf Lilleengen ulf.lilleengen at gmail.com
Sat Dec 13 06:30:22 PST 2008


On Fri, Dec 12, 2008 at 5:00 PM, Michael Jung <mikej at paymentallianceintl.com> wrote:

> FreeBSD charon.confluentasp.com 7.1-PRERELEASE FreeBSD 7.1-PRERELEASE
> #2: Thu Sep  4 12:06:08 EDT 2008
>
> In the interest of this thread I tried to duplicate the problem. I
> created:
>
> 10 drives:
> D d9                    State: up       /dev/da9        A: 0/17366 MB (0%)
> D d8                    State: up       /dev/da8        A: 0/17366 MB (0%)
> D d7                    State: up       /dev/da7        A: 0/17366 MB (0%)
> D d6                    State: up       /dev/da6        A: 0/17366 MB (0%)
> D d5                    State: up       /dev/da5        A: 0/17366 MB (0%)
> D d4                    State: up       /dev/da4        A: 0/17366 MB (0%)
> D d3                    State: up       /dev/da3        A: 0/17366 MB (0%)
> D d2                    State: up       /dev/da2        A: 0/17366 MB (0%)
> D d1                    State: up       /dev/da1        A: 0/17366 MB (0%)
> D d0                    State: up       /dev/da0        A: 0/17366 MB (0%)
>
> 1 volume:
> V test                  State: up       Plexes:       1 Size:        152 GB
>
> 1 plex:
> P test.p0            R5 State: up       Subdisks:    10 Size:        152 GB
>
> 10 subdisks:
> S test.p0.s9            State: up       D: d9           Size:         16 GB
> S test.p0.s8            State: up       D: d8           Size:         16 GB
> S test.p0.s7            State: up       D: d7           Size:         16 GB
> S test.p0.s6            State: up       D: d6           Size:         16 GB
> S test.p0.s5            State: up       D: d5           Size:         16 GB
> S test.p0.s4            State: up       D: d4           Size:         16 GB
> S test.p0.s3            State: up       D: d3           Size:         16 GB
> S test.p0.s2            State: up       D: d2           Size:         16 GB
> S test.p0.s1            State: up       D: d1           Size:         16 GB
> S test.p0.s0            State: up       D: d0           Size:         16 GB
>
> Which I can newfs and mount:
>
> (root at charon) /etc# mount /dev/gvinum/test /mnt
> (root at charon) /etc# df -h
> Filesystem                 Size    Used   Avail Capacity  Mounted on
> /dev/ad4s1a                357G    119G    209G    36%    /
> devfs                      1.0K    1.0K      0B   100%    /dev
> 172.0.255.28:/data/unix    1.3T    643G    559G    54%    /nas1
> /dev/gvinum/test           148G    4.0K    136G     0%    /mnt
>
> But with /dev/gvinum/test unmounted if I try:
>
> (root at charon) /etc# geli init -P -K /root/test.key /dev/gvinum/test
> geli: Cannot store metadata on /dev/gvinum/test: Operation not
> permitted.
> (root at charon) /etc#
>
> My random key file was created like this:
>
> dd if=/dev/random of=/root/test.key bs=64 count=1
>
> I use GELI at home with no trouble, although not with a gvinum volume.
>

Hello,

When I tried this myself, I also got the EPERM error in return. I thought
this was very strange. I went through the gvinum code today and put debugging
prints everywhere, but everything looked fine, and it was only raid5 volumes
that failed. Then I saw that the EPERM error came from the underlying
providers of geom (more specifically, from the read requests to the parity
stripes etc.), so I started to suspect that it was not a gvinum error. But
still, I was able to write/read from the disks from outside of gvinum!

Then I discovered that the geom userland code opens the disk where
metadata should be written in write-only mode, and that is the reason:
gvinum tries to write to the stripe in question, but has to read back the
parity data from one of the other stripes. Since the providers were opened
O_WRONLY, that read request fails. I tried opening the device with O_RDWR
instead, and everything works fine.

Phew :) You can bet I was frustrated.

I hope to commit the attached change in the near future.

-- 
Ulf Lilleengen
-------------- next part --------------
A non-text attachment was scrubbed...
Name: geomfix.diff
Type: application/octet-stream
Size: 316 bytes
Desc: not available
URL: http://lists.freebsd.org/pipermail/freebsd-geom/attachments/20081213/15cbd4da/geomfix.obj

