Bug 1061180 - NFS client failures with server using gssproxy instead of rpc.svcgssd
Summary: NFS client failures with server using gssproxy instead of rpc.svcgssd
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: gssproxy
Version: 7.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
: ---
Assignee: Simo Sorce
QA Contact: Yin.JianHong
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-04 14:05 UTC by Guenther Deschner
Modified: 2018-11-09 09:43 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-03-11 15:56:59 UTC
Target Upstream Version:



Description Guenther Deschner 2014-02-04 14:05:04 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/gss-proxy/ticket/98

On a Koji hub, /mnt/koji needs to be exported to the builders.  In a kerberized environment using rpc.svcgssd on the NFSv4.1 server, this could be accomplished by the following in '''/etc/exports''', where /export/koji is mode 0755:
{{{
/export 10.10.10.0/24(fsid=0,crossmnt,sec=krb5p:krb5i)
/export/koji 10.10.10.0/24(ro,all_squash,sec=krb5p:krb5i)
}}}
The '''all_squash''' option is in place to allow the '''kojibuilder''' user who runs mock on the koji-builder client to access /mnt/koji without the need for '''kojibuilder''' credentials.  The client has /mnt/koji created from '''/etc/fstab''' as:
{{{
kojihub.example.com:/koji /mnt/koji nfs ro,minorversion=1,sec=krb5i,x-systemd.automount 0 0
}}}
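
For reference, a quick way to exercise the squashed-access path from a client is to read the export as the build user, which holds no Kerberos credentials of its own. A sketch, assuming a local '''kojibuilder''' account; the file name is only an example:
{{{
# run as the credential-less build user; with rpc.svcgssd the reads succeed
# via the client's host credentials, squashed to nobody on the server
sudo -u kojibuilder ls -l /mnt/koji
sudo -u kojibuilder cat /mnt/koji/README   # any 0644 file on the export
}}}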

The issue appears after switching the NFSv4.1 server from rpc.svcgssd to gssproxy:

The koji-builder clients are no longer able to access /mnt/koji and rpc.gssd fails with the following errors:
{{{
ERROR: GSS-API: error in gss_acquire_cred(): GSS_S_FAILURE (Unspecified GSS failure.  Minor code may provide more information) - Can't find client principal kojibuilder@...
Error doing scandir on directory '/run/user/<uidnum_of_kojibuilder_user>': No such file or directory
}}}
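
These messages come from rpc.gssd on the client; if needed, they can be captured by stopping the managed gssd service and running the daemon in the foreground with extra verbosity. A sketch; the service name varies by nfs-utils version (nfs-secure.service or rpc-gssd.service):
{{{
# stop the managed instance first, then run rpc.gssd in the foreground
rpc.gssd -f -vvv
}}}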

'''NOTE:''' The only change is switching from rpc.svcgssd to gssproxy on the NFSv4.1 server.
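
For context, "switching to gssproxy" on the server means handling the kernel's GSS upcalls through gssproxy instead of rpc.svcgssd. The stanza below is a sketch along the lines of the packaged defaults (current packages ship it as /etc/gssproxy/24-nfs-server.conf; older gssproxy versions keep it in gssproxy.conf, and option names may differ):
{{{
[service/nfs-server]
  mechs = krb5
  socket = /run/gssproxy.sock
  cred_store = keytab:/etc/krb5.keytab
  trusted = yes
  kernel_nfsd = yes
  euid = 0
}}}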

However, creating and exporting a keytab for the kojibuilder user and enabling gssproxy on the koji-builder client allows access again, though not via the '''nobody''' user, as expected.
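
The client-side workaround boils down to letting gssproxy initiate from a keytab on behalf of the otherwise credential-less user. A sketch following the stock /etc/gssproxy/99-nfs-client.conf shipped with current packages; the per-user ccache and keytab locations are the packaged defaults and may differ:
{{{
[service/nfs-client]
  mechs = krb5
  cred_store = keytab:/etc/krb5.keytab
  cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
  cred_store = client_keytab:/var/lib/gssproxy/clients/%U.keytab
  cred_usage = initiate
  allow_any_uid = yes
  trusted = yes
  euid = 0
}}}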

I'm not an expert on the implications of using '''all_squash''' on a sec=krb5 mount, and I actually think rpc.svcgssd might be "wrong" in allowing access to an export for a user without any credentials (even if that user is supposed to be mapped to nobody via all_squash).

Comment 1 Anthony Messina 2014-02-04 14:17:57 UTC
Short summary:

There seems to be only one strange issue I've come across with gss-proxy vs. rpc.svcgssd: https://fedorahosted.org/gss-proxy/ticket/98.

This is with regard to how access for the "nfsnobody" user is handled.  The ticket attempts to show that with rpc.svcgssd, a host with host credentials and a user without credentials can still access NFS shares with 0755 directories and 0644 files (via the host credentials and mapped to the nfsnobody user).

With gss-proxy, I had to create user credentials for kojibuilder@REALM because the access wasn't allowed via the nfsnobody path.  I'm not sure if this is resolved, or by design, etc.  But it is the only issue I've seen with gss-proxy vs. rpc.svcgssd.
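
Roughly, "create user credentials" here meant adding a principal and exporting a keytab that gssproxy can pick up, along these lines (realm, keytab location, and uid will differ; these are just examples):

  kadmin -q "addprinc -randkey kojibuilder"
  kadmin -q "ktadd -k /var/lib/gssproxy/clients/<uid>.keytab kojibuilder"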

Comment 2 Anthony Messina 2014-02-04 14:21:04 UTC
Oh, it should be noted that my initial (and continuing) report is based on F19, although this ticket is against RHEL7.

Comment 4 Simo Sorce 2014-02-20 19:07:58 UTC
(In reply to Anthony Messina from comment #2)
> Oh, it should be noted that my initial (and continuing) report is based on
> F19, although this ticket is against RHEL7.

Anthony, can you tell me what kernel you are using? I recall a kernel bug that may account for this oddity.

Comment 5 Anthony Messina 2014-02-20 22:17:24 UTC
(In reply to Simo Sorce from comment #4)
> (In reply to Anthony Messina from comment #2)
> > Oh, it should be noted that my initial (and continuing) report is based on
> > F19, although this ticket is against RHEL7.
> 
> Anthony, can you tell me what kernel you are using? I recall a kernel bug
> that may account for this oddity.

Actually, I have found the time to upgrade the server to F20, and am now running 3.13.3-201.fc20.x86_64.  I'll see if I can figure out how to reproduce this issue.

Comment 6 Simo Sorce 2014-03-11 14:12:24 UTC
(In reply to Anthony Messina from comment #5)
> (In reply to Simo Sorce from comment #4)
> > (In reply to Anthony Messina from comment #2)
> > > Oh, it should be noted that my initial (and continuing) report is based on
> > > F19, although this ticket is against RHEL7.
> > 
> > Anthony, can you tell me what kernel you are using? I recall a kernel bug
> > that may account for this oddity.
> 
> Actually, I have found the time to upgrade the server to F20, and am now
> running 3.13.3-201.fc20.x86_64.  I'll see if I can figure out how to
> reproduce this issue.

Any luck reproducing this?

Comment 7 Dmitri Pal 2014-03-11 14:17:11 UTC
Since we do not have confirmation that this issue is still present, we are moving it out to a later release.

Comment 8 Anthony Messina 2014-03-11 14:32:50 UTC
I'm in the middle of working on it right now.  I should have confirmation soon.  So far, it looks like this might no longer be an issue.

Comment 9 Anthony Messina 2014-03-11 14:42:30 UTC
Ok, I can no longer reproduce this issue using gssproxy-0.3.1-0.fc20.x86_64 and kernel-3.13.6-200.fc20.x86_64 on the server and client.

Sorry for the noise.

Comment 10 Guenther Deschner 2014-03-11 15:56:59 UTC
We were also not able to reproduce it here.

Thanks for verifying, Anthony!

Closing as "closed currentrelease" then.

