NFSv4 Kerberos mount from Linux

Rick Macklem rmacklem at uoguelph.ca
Fri Oct 12 00:37:58 UTC 2018


Peter Eriksson wrote:
>Just a few comments (random brain-dump) in case someone else is having problems with NFS & Kerberos.
>
>
>We’ve been using NFSv4 with Kerberos from Linux clients here for many years (with Solaris-based NFS servers and MIT Kerberos) and lately using FreeBSD as the NFS server OS (in a Microsoft AD Kerberos environment).
I agree with the other post that this is a very useful document and it would be
nice to keep it somewhere where it is easier to find than in the mailing list
archive.
I don't know where that could be, so hopefully someone else in FreeBSD-land
can suggest a good place?

The one area you don't discuss (and maybe isn't really a problem?) is what
ticket encryption type(s) you use.
Kerberized NFS still uses DES (someday this may change, but I think that requires
implementation of RPCSEC_GSS V3), so it needs an 8-byte session key.
(I have never seen a documented way to convert a session key of more than
 8 bytes into an 8-byte session key for RPCSEC_GSS to use. As such, I have no idea
 what happens if you choose a ticket encryption type that results in a key longer
 than 8 bytes.)
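
For what it's worth, you can at least check which session-key enctype you ended up
with and, with MIT krb5, restrict what the client requests. A hedged sketch (the
enctype list is only an illustration; single-DES also has to be allowed on the KDC
side, which in AD is a per-account setting):

  # show the enctypes of the tickets in the current credential cache:
  klist -e

  # /etc/krb5.conf fragment (MIT) to restrict the enctypes requested:
  [libdefaults]
          allow_weak_crypto = true
          default_tgs_enctypes = des-cbc-crc des-cbc-md5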

Here are a couple of minor comments:
[lots of good stuff snipped]
>4. rc.conf (we have a lot of users in our AD so we have to use a large number for usermax, replace “liu.se” with your NFSv4 “domain” for nfsuserd_flags)
>
>gssd_enable="YES"
>nfs_server_enable="YES"
>nfsv4_server_enable="YES"
>nfscbd_enable="YES"
Since the nfscbd is only used by the client (and only when delegations or pNFS are
enabled on the server), I believe this is harmless, but unnecessary for the server.
(A client-side rc.conf sketch follows the quoted block below.)
>mountd_enable="YES"
>nfsuserd_enable="YES"
>nfsuserd_flags="-manage-gids -domain liu.se -usertimeout 10 -usermax 100000 16"
>
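For comparison, a minimal sketch of the client side of this in a FreeBSD client's
rc.conf (the knob names are standard; adjust to taste):

gssd_enable="YES"         # the client also needs gssd for Kerberized mounts
nfs_client_enable="YES"
nfsuserd_enable="YES"     # NFSv4 name<->id mapping; use the same -domain as the server
nfscbd_enable="YES"       # callbacks; only useful if the server grants delegations or does pNFS
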
Btw, if you have many clients doing NFSv4.0 mounts, tuning of your DRC (duplicate request cache) is advised.
(The defaults are for extremely small NFS servers.) NFSv4.1 mounts don't use the
DRC, so if that is what your clients are doing, this isn't necessary.
It's a little off topic, but for NFSv4.0 (and/or NFSv3) mounts and a reasonably
sized NFS server, reasonable settings in /etc/sysctl.conf might be:
vfs.nfsd.tcpcachetimeo=600
vfs.nfsd.tcphighwater=100000
vfs.nfsd.v4statelimit=10000000
vfs.nfsd.clienthashsize=10000
vfs.nfsd.statehashsize=10000
- If most/all of your mounts are NFSv4.1, then instead of the above, you might want:
vfs.nfsd.sessionhashsize=10000
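
As an aside, a quick way to see whether the DRC actually needs tuning on a running
server (the extended nfsstat output includes the server cache counters):

  # extended server-side NFS statistics, including the DRC size and peak:
  nfsstat -e -s
  # and the current setting of any of the above, e.g.:
  sysctl vfs.nfsd.tcphighwater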

>5. Make sure you use NTPD so the clock is correct.
>
>
>* All clients (Solaris 10, OmniOS, MacOS 10.12-10.14, FreeBSD 11.0-11.2, CentOS 7, Debian 9, Ubuntu 17-18 tested):
>
>1. Make sure FQDN is in /etc/hosts
>
>2. Make sure you use NTPD so the clock is correct.
>
>3. Have a “host/FQDN@REALM” Kerberos host principal in /etc/krb5.keytab (nfs or root is not needed for NFS-mounting to work)
I'm guessing that you use the "gssname=host" mount option for your FreeBSD
clients? (The name after "gssname=" is whatever name you have before
"/FQDN at REALM" for this principal name.)
This is what I referred to as the host-based initiator credential.
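
For anyone reading along, a hedged example of what such a FreeBSD client mount might
look like (server name and export path are placeholders):

  # uses the host/client-fqdn@REALM entry from /etc/krb5.keytab as the
  # host-based initiator credential:
  mount -t nfs -o nfsv4,sec=krb5p,gssname=host nfs-server.example.com:/export /mnt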

Thanks for posting this, rick

4. We use a fairly default /etc/krb5.conf, sort of like:

[libdefaults]
        default_realm = REALM
        dns_lookup_realm = true

        ticket_lifetime = 24h
        renew_lifetime = 7d
        forwardable = true

        default_ccache_name = KEYRING:persistent:%{uid}

KEYRING probably only works on Linux, and there are some problems with KEYRING on Debian & Ubuntu since not everything there supports it (smbclient, for example, because they build against Heimdal instead of MIT), but it mostly works. It works fine on CentOS 7 though; in general CentOS 7 feels more “enterprise”-ready than Debian & Ubuntu. The old classic FILE ccaches should work fine too.
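
If KEYRING gives you trouble on Debian/Ubuntu, a minimal sketch of falling back to
FILE ccaches (the path is just the conventional one):

[libdefaults]
        default_ccache_name = FILE:/tmp/krb5cc_%{uid}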

For mounting we use the automounter and an “executable map” (perl script) that looks up records in DNS (Hesiod-style), since the built-in Hesiod support in most automounters is a bit… lacking. It works quite well. You can find the scripts we use here:

http://www.grebo.net/~peter/nfs

(The dns-update scripts use data from an SQL database, so they probably aren’t directly usable by anybody else. We use the same SQL database to populate a locally developed BerkeleyDB-based NSS database on each FreeBSD server in order to speed things up, since AD/LDAP lookups with ~90k users and silly amounts of AD groups take forever, even with caching.)
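
For anyone who hasn’t seen one, an autofs executable map is just a program that is
handed the lookup key as its first argument and prints a map entry on stdout. A
minimal sketch (not our perl scripts; the server name and path are placeholders):

  #!/bin/sh
  # hypothetical executable automounter map
  key="$1"                            # the directory name being looked up
  server="nfs-server.example.com"     # we look this up in DNS instead
  echo "-fstype=nfs4,sec=krb5p ${server}:/export/home/${key}"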

Some Linux-specific stuff:

Packages needed:

  CentOS:
  - nfs-utils
  - libnfsidmap
  - nfs4-acl-tools
  - autofs

  Debian:
  - keyutils
  - nfs-kernel-server # rpc.idmapd needs this due to a bug in Debian

  Ubuntu:
  - keyutils

  Other nice-to have packages:
  - hesiod
  - autofs-hesiod

Some settings to check for:

  /etc/default/nfs-common:
    NEED_IDMAPD=yes
    NEED_GSSD=yes

  /etc/idmapd.conf (replace “liu.se” with your NFSv4 “domain”):
    Domain=liu.se

  /etc/request-key.d/id_resolver.conf (should be there already if using a modern Linux and you’ve added the packages above):
    create id_resolver * * /usr/sbin/nfsidmap %k %d
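
With the above in place, a quick manual test before involving the automounter might
look like this (hostnames and paths are placeholders):

  kinit someuser@REALM     # user ticket; the mount itself uses the host keytab via rpc.gssd
  mount -t nfs4 -o sec=krb5p nfs-server.example.com:/export /mnt
  ls -l /mnt               # uids/gids should show mapped names, not "nobody"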


MacOS:

Basically requires the latest - 10.14 (Mojave) - for things to work smoothly. In theory 10.12 & 10.13 should work but there is some bug in them that causes the OS to panic when you try to use NFS & Kerberos. 10.11 and earlier don’t support good enough encryption for us…  But with 10.14 you just need to get a Kerberos ticket and then you can mount things just fine.

/etc/nfs.conf should contain (replace “liu.se” with your NFSv4 “domain”):
nfs.client.default_nfs4domain=liu.se
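
With that in place, a hedged example of a manual mount on 10.14 (server and path are
placeholders):

  kinit someuser@REALM
  sudo mount -t nfs -o vers=4,sec=krb5p nfs-server.example.com:/export /mnt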



(There are a lot of problems you can run into with Microsoft’s AD implementation of Kerberos too that we’ve had to fight with, but that’s a whole other topic.)

- Peter


On 10 Oct 2018, at 23:47, Rick Macklem <rmacklem at uoguelph.ca> wrote:

Felix Winterhalter wrote:
On 10/4/18 5:21 PM, Rick Macklem wrote:
[stuff snipped]
I am now trying to mount this directory as root first without having to
deal with user keytabs or tickets.

This works fine with -sec=sys and nfsv4.1 and nfsv3 and -sec=krb5p.
This does not however work with nfsv4 and krb5p or any other krb5 flavor.
Sorry, I'm not sure what you are saying here. Is it
1 - no version of NFS works for krb5p or
2 - NFSv4.1 works for krb5p, but NFSv4.0 does not or
3 - only nfsv3 works for krb5p
[snipped lots of text]

#3 is indeed what was happening. I could mount with krb5p for nfsv3
(which I was not aware was even doable); however, nfsv4 would stubbornly
refuse to do any mounting.
Yes, RPCSEC_GSS was done by Sun for NFSv3 and it was a good fit, since NFSv3
does not have any server state to maintain. As such, all RPCs are atomic operations
done by users (who, for Kerberized mounts, must have a TGT in a credential cache).

NFSv4 wasn't really a good fit for the model, because the NFSv4 server maintains
lock state (NFSv4 Opens are a form of lock used by Windows at file open time).
There are "state maintenance" operations that must be done by the user doing
the mount (usually root), where they typically don't have a TGT in a credential
cache.
--> The ugly solution for this is typically a host-based client credential in a keytab
     on the client. (Usually a principal like "root/client-host.domain@REALM" or
     "host/client-host.domain@REALM" or "nfs/client-host.domain@REALM"
      in the default keytab on the client.)
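
For what it's worth, a quick way to check which of those principals a client actually
has (MIT klist on most Linux clients, Heimdal ktutil on the FreeBSD base system):

  # MIT:
  klist -k /etc/krb5.keytab
  # Heimdal:
  ktutil -k /etc/krb5.keytab list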

I have now, after a lot of trial and error, figured out what I need to do in
order to make it work.

To start with, I have Kerberos credentials with both host/ and nfs/ on
both client and server. Mounting nfsv4 shares with krb5p from a Linux
server has also worked in this context.
Yes, I'm assuming that satisfied the host-based client credential as I described
above.

I leave you to judge whether what I found out is intended behaviour or
if something weird is going on.
Yes, sounds like intended behaviour, since the client must have a Kerberos
credential to use for the "state maintenance" operations that are not done on
behalf of a user.

My exports file originally looked something like this:

/nfsTests/ /nfsTests/testexport /nfsTests/otherexport -maproot=root
-sec=krb5p clients

V4: /nfsTests -sec=krb5p clients

Which allowed me to do nfsv3 krb5p mounts but not nfsv4 krb5p mounts.

Changing the exports file to this:

/nfsTests/ /nfsTests/testexport /nfsTests/otherexport -maproot=root
-sec=krb5p clients

V4: /nfsTests -sec=krb5p,krb5i clients
This suggests that there is a bug in the client, where it uses krb5i instead of krb5p
at some point in the mounting process. (I have also seen cases where the client
erroneously falls back on using sys at some point in the mounting process.)
(You did mention before you were using the Linux client. If you are using a FreeBSD
client, I would be interested in looking at this.)

Allows nfsv4 krb5p mounts to work for some reason I do not understand.
Not setting the -sec option on the V4 line apparently defaults to
-sec=sys and doesn't allow any krb5 mounts. I'm not sure that this is a
good default as I wasn't even aware that the -sec option needed to be
set on this line.
In FreeBSD, defaults are meant to maintain backwards compatibility. This means that
AUTH_SYS should work by default. Also, AUTH_SYS is what 99.9999% of FreeBSD
NFS mounts still use, from what I've seen.

I've got packet traces of the nfsv3 krb5 and krb5i mounts and I'll make
traces of the two nfsv4 mount attempts and send them to you if you're
interested. I'm still not sure what exactly is happening here.
The successful one for NFSv4 might be interesting. If you look at it in
Wireshark, I suspect you'd find that, somewhere during the mount, it
did RPCs which were not krb5p, and that would show why the addition
of krb5i made it work.
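
(If you want to capture such a trace, something like the following on either end
should do; the interface and client name are placeholders:

  tcpdump -i em0 -s 0 -w nfs-mount.pcap host client.example.com and port 2049

and then open nfs-mount.pcap in Wireshark and look at the credential flavour used by
each RPC during the mount.)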

I did suggest you start with -sec=sys:krb5:krb5i:krb5p and, once that works,
remove the security flavours one at a time until the mount doesn't work.
(Then you capture packets for the minimal case that does work and look at
what security flavours the client is using for all RPCs done during the mount.)
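
In other words, something like this in /etc/exports as the starting point (reusing the
paths and the "clients" placeholder from above), pruning flavours one at a time:

/nfsTests /nfsTests/testexport /nfsTests/otherexport -maproot=root -sec=sys:krb5:krb5i:krb5p clients

V4: /nfsTests -sec=sys:krb5:krb5i:krb5p clients

Remember to have mountd re-read the file after each change (e.g. "service mountd reload").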

You now know why almost no one uses Kerberized NFSv4 mounts.
Unfortunately, the NFSv4 working group has never gotten around to
a better solution. Discussion of a host-based encryption technique using
something like SSL has happened, but no one has gone beyond that.

rick