Bug 1057292 - option rpc-auth-allow-insecure should default to "on"
Summary: option rpc-auth-allow-insecure should default to "on"
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: access-control
Version: 3.4.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
 
Reported: 2014-01-23 18:58 UTC by Richard W.M. Jones
Modified: 2015-12-01 16:45 UTC
CC: 12 users

Doc Type: Bug Fix
Last Closed: 2015-10-07 13:49:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---



Description Richard W.M. Jones 2014-01-23 18:58:10 UTC
Description of problem:

If you connect to gluster from a non-root client, it fails with:

$ qemu-img create gluster://server-gluster2/vmdisks/test1 1G
Formatting 'gluster://server-gluster2/vmdisks/test1', fmt=raw size=1073741824 
qemu-img: Gluster connection failed for server=server-gluster2 port=0 volume=vmdisks image=test1 transport=tcp
qemu-img: gluster://server-gluster2/vmdisks/test1: error while creating raw: Transport endpoint is not connected

The real error is completely hidden, but if you happen to find
the right file on the right server, you can see it's:

[2014-01-23 18:44:33.788604] E [rpcsvc.c:521:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
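For reference, finding that message means searching the glusterd log on each server; a sketch, assuming the default log location used on Fedora/RHEL-style installs (the path may differ on other distributions):

```shell
# Search the glusterd management log for rejected non-privileged-port requests.
# Log path is an assumption; adjust for your distribution.
grep "non-privileged port" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```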

To avoid this you have to edit /etc/glusterfs/glusterd.vol and
add (on all bricks AFAICT):

    option rpc-auth-allow-insecure on

and restart glusterd.
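The workaround above can be sketched as follows; this is a minimal sketch assuming a systemd-based distribution and GNU sed, and it must be run on every server:

```shell
# Insert the option inside the "volume management" block of glusterd.vol.
# (GNU sed "a" appends a line after the matched one.)
sudo sed -i '/type mgmt\/glusterd/a\    option rpc-auth-allow-insecure on' \
    /etc/glusterfs/glusterd.vol

# Restart the management daemon so the new option takes effect.
sudo systemctl restart glusterd
```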

Seriously, no one uses port numbers to guarantee security.
It's not 1980.  This setting should default to ON.

Version-Release number of selected component (if applicable):

3.4.2.

How reproducible:

100%

Steps to reproduce:

Try to use glusterfs.

Comment 1 Joe Julian 2014-01-29 21:17:14 UTC
You can actually change the volume setting through the cli by setting server.allow-insecure
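That per-volume setting would look something like this (sketch; the volume name is an example):

```shell
# Allow connections from non-privileged ports for this volume's bricks.
gluster volume set myvol server.allow-insecure on
```

Note this covers the brick side only; the glusterd.vol option is separate.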

This change in the default behavior should be conditional on ssl being enabled.

It's also definitely not reproducible with your steps. Most users "Try to use glusterfs" without encountering your issue. Running mount as a non-root user results in "mount: only root can do that" precluding your issue without additional steps.

I would consider the more apt bug to be that the client doesn't report the error if it cannot acquire a "secure" port and that deficiency results in the connection being refused.

Comment 3 Richard W.M. Jones 2014-04-01 14:31:19 UTC
(In reply to Joe Julian from comment #1)
> It's also definitely not reproducible with your steps. Most users "Try to
> use glusterfs" without encountering your issue. Running mount as a non-root
> user results in "mount: only root can do that" precluding your issue without
> additional steps.

There are now lots of ways to access gluster without using mount, or root:

 - qemu, qemu-img
 - libvirt session storage
 - libguestfs (multiple tools)

and it breaks in the way I described on every one of those.

Comment 4 Ben England 2014-05-30 10:56:07 UTC
Joe, I thought that "gluster set vol your-volume allow-insecure on" only affected glusterfsd behavior, not glusterd.  To make glusterd function, you still need to edit /etc/glusterfs/glusterd.vol by hand.  This is pretty outrageous for a supposedly scalable product to require config-file-editing on each node.  

Security is a concern, but that's not the user's problem, it's the developer's job.  Ceph seems to handle this up front by PKI, couldn't Gluster do something like this to authenticate peers using openssl library when they first communicate?

This will impact any large-scale deployment of Gluster that relies on libgfapi being used by an application.

Comment 5 Ben England 2015-03-03 13:12:45 UTC
Aren't SSL sockets upstream as of Gluster 3.6?  Couldn't they be used for talking to glusterd and reading initial volfile?  That would get rid of the biggest problem, having to edit /etc/glusterfs/glusterd.vol by hand, right?

Comment 6 Niels de Vos 2015-05-17 21:59:25 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 7 Ben England 2015-05-18 16:00:22 UTC
I see that defaults for glusterfs-3.7 are "rpc-auth-allow-insecure" in /etc/glusterfs/glusterd.vol and "server.allow-insecure" is default for gluster volume parameters!  What a pleasant surprise.  So you can close this as fixed in RHS 3.1.

Comment 8 Richard W.M. Jones 2015-05-18 16:06:56 UTC
Closing upstream based on comment 7.

Comment 9 Deepak C Shetty 2015-07-14 09:57:13 UTC
(In reply to Ben England from comment #7)
> I see that defaults for glusterfs-3.7 are "rpc-auth-allow-insecure" in
> /etc/glusterfs/glusterd.vol and "server.allow-insecure" is default for
> gluster volume parameters!  What a pleasant surprise.  So you can close this
> as fixed in RHS 3.1.

I don't think this is right.

I just installed latest glusterfs 3.7.2 and I don't see either of them as enabled.

[root@f21-docker yum.repos.d]# rpm -qa| grep glusterfs
glusterfs-client-xlators-3.7.2-3.fc21.x86_64
glusterfs-api-3.7.2-3.fc21.x86_64
glusterfs-fuse-3.7.2-3.fc21.x86_64
glusterfs-server-3.7.2-3.fc21.x86_64
glusterfs-libs-3.7.2-3.fc21.x86_64
glusterfs-3.7.2-3.fc21.x86_64
glusterfs-cli-3.7.2-3.fc21.x86_64

[root@f21-docker yum.repos.d]# cat /etc/glusterfs/glusterd.vol 
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 30
#   option base-port 49152
end-volume

[root@f21-docker yum.repos.d]# glusterfs --version
glusterfs 3.7.2 built on Jun 23 2015 12:05:44
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


[root@f21-docker yum.repos.d]# gluster vol create vol1 f21-docker:/brick1 force
volume create: vol1: success: please start the volume to access data
[root@f21-docker yum.repos.d]# gluster v start vol1
volume start: vol1: success
[root@f21-docker yum.repos.d]# gluster v info
 
Volume Name: vol1
Type: Distribute
Volume ID: 19d7bbdd-bbff-42ba-ac75-b197b508af00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: f21-docker:/brick1
Options Reconfigured:
performance.readdir-ahead: on

Comment 10 Deepak C Shetty 2015-07-14 10:10:18 UTC
FWIW, I also applied the virt group setting, and that too didn't bring in the allow-insecure option (looking at the virt group file, it's not part of it, so I guess the user/admin needs to set it manually).

[root@f21-docker yum.repos.d]# gluster v set vol1 group virt
volume set: success

[root@f21-docker yum.repos.d]# gluster v info
 
Volume Name: vol1
Type: Distribute
Volume ID: 19d7bbdd-bbff-42ba-ac75-b197b508af00
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: f21-docker:/brick1
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
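Since the virt group omits it, enabling the option by hand would look something like this (a sketch using the volume name from the transcript above; the glusterd.vol file still needs its separate rpc-auth-allow-insecure edit):

```shell
# Enable the brick-side setting on the test volume.
gluster volume set vol1 server.allow-insecure on

# Confirm it now shows up under "Options Reconfigured".
gluster volume info vol1
```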

Comment 12 Kaleb KEITHLEY 2015-10-07 13:49:43 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

Comment 13 Kaleb KEITHLEY 2015-10-07 13:50:53 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

