Bug 1511339 - In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1507933 1511768 1511782
 
Reported: 2017-11-09 07:50 UTC by Sanju
Modified: 2019-03-25 16:30 UTC
CC List: 7 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1507933
Clones: 1511768 1511782
Environment:
Last Closed: 2019-03-25 16:30:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links
System: Gluster.org Gerrit | ID: 21675 | Priority: None | Status: Merged | Summary: glusterd: volume status should not show NFS daemon | Last Updated: 2018-11-25 13:18:13 UTC

Comment 1 Worker Ant 2017-11-09 07:51:58 UTC
REVIEW: https://review.gluster.org/18703 (glusterd: gluster volume status displaying NFS server instead of self heal daemon) posted (#1) for review on master by Sanju Rakonde

Comment 2 Atin Mukherjee 2017-11-09 12:08:37 UTC
Description of problem:
Instead of the self-heal daemon, the NFS server is coming up after glusterd restart.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
2/2

Steps to Reproduce:
1. Create a 2x2 replica volume on a 3-node cluster and start it.
2. Stop glusterd on the other two nodes.
3. Check the volume status on the node where glusterd is still running, then restart glusterd.
4. Check the gluster volume status again (a command sketch follows below).
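
A minimal command sketch of the steps above (not taken verbatim from the report): the peer hostnames node2/node3 and their brick paths are illustrative, only the 10.70.37.52 bricks appear in the status output below, and the quorum setting follows the bug title and commit message (cluster.server-quorum-type set to server).

# on 10.70.37.52 (peer hostnames and their brick paths are assumed)
gluster volume create replica_vol replica 2 \
    10.70.37.52:/bricks/brick0/replica_vol node2:/bricks/brick0/replica_vol \
    10.70.37.52:/bricks/brick1/replica_vol node3:/bricks/brick1/replica_vol
gluster volume set replica_vol cluster.server-quorum-type server
gluster volume start replica_vol

# stop glusterd on the other two nodes (run on node2 and node3)
systemctl stop glusterd

# back on 10.70.37.52: check status, restart glusterd, check status again
gluster volume status replica_vol
systemctl restart glusterd
gluster volume status replica_vol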

Actual results:
Before glusterd restart

Status of volume: replica_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/replica_vol  N/A       N/A        N       N/A
Brick 10.70.37.52:/bricks/brick1/replica_vol  N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       4062 
 
Task Status of Volume replica_vol
------------------------------------------------------------------------------
There are no active volume tasks

After glusterd restart

Status of volume: replica_vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/replica_vol  N/A       N/A        N       N/A
Brick 10.70.37.52:/bricks/brick1/replica_vol  N/A       N/A        N       N/A
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume replica_vol
------------------------------------------------------------------------------
There are no active volume tasks


Expected results:
The self-heal daemon should come up, not the NFS server daemon.

Comment 3 Worker Ant 2017-11-09 19:00:28 UTC
COMMIT: https://review.gluster.org/18703 committed in master by  

------------- glusterd: display gluster volume status, when quorum type is server

Problem: when server-quorum-type is server, after restarting glusterd
in the node which is up, gluster volume status is giving incorrect
information.

Fix: check whether server is blank, before adding other keys into the
dictionary.

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511339
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
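
The merged change itself is linked above; what follows is only a minimal, self-contained C sketch of the idea stated in the fix ("check whether server is blank, before adding other keys into the dictionary"), not the actual glusterd patch. The struct and function names here are hypothetical, not glusterd symbols.

#include <stdio.h>
#include <stddef.h>

/* One row of "gluster volume status" output, reduced to the two fields
 * that matter for this sketch. */
struct status_entry {
        const char *name;    /* e.g. "Self-heal Daemon" or "NFS Server" */
        const char *server;  /* blank when the daemon is not meant to be listed */
};

/* Add a daemon's keys to the status output only when its server field is
 * non-blank; skipping blank entries is the behaviour the commit message
 * describes. */
static int
add_daemon_keys (const struct status_entry *e)
{
        if (e->server == NULL || e->server[0] == '\0')
                return 0;               /* blank: add nothing */

        printf ("%s on %s\n", e->name, e->server);
        return 1;
}

int
main (void)
{
        struct status_entry entries[] = {
                { "Self-heal Daemon", "localhost" },
                { "NFS Server",       "" },        /* blank -> skipped */
        };
        size_t n = sizeof (entries) / sizeof (entries[0]);

        for (size_t i = 0; i < n; i++)
                add_daemon_keys (&entries[i]);

        return 0;
}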

Comment 4 Shyamsundar 2018-03-15 11:20:54 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 5 Worker Ant 2018-11-19 10:21:11 UTC
REVIEW: https://review.gluster.org/21675 (glusterd: volume status should not show NFS daemon) posted (#1) for review on master by Sanju Rakonde

Comment 6 Worker Ant 2018-11-25 13:18:09 UTC
REVIEW: https://review.gluster.org/21675 (glusterd: volume status should not show NFS daemon) posted (#3) for review on master by Atin Mukherjee

Comment 7 Shyamsundar 2019-03-25 16:30:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

