
Bug 1511768

Summary: In a 2x2 replica volume with quorum set, the NFS server comes up instead of the self-heal daemon after glusterd restart
Product: [Community] GlusterFS
Reporter: Sanju <srakonde>
Component: glusterd
Assignee: Sanju <srakonde>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: low
Docs Contact:
Priority: low
Version: 3.13
CC: akrai, amukherj, bmekala, bugs, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1511339
Environment:
Last Closed: 2017-12-08 17:45:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1511339, 1511782
Bug Blocks: 1507933

Comment 1 Worker Ant 2017-11-10 05:31:38 UTC
REVIEW: https://review.gluster.org/18711 (glusterd: display gluster volume status, when quorum type is server) posted (#1) for review on release-3.13 by Sanju Rakonde

Comment 2 Worker Ant 2017-11-14 15:33:35 UTC
COMMIT: https://review.gluster.org/18711 committed in release-3.13 by "Sanju Rakonde" <srakonde@redhat.com> with a commit message: glusterd: display gluster volume status, when quorum type is server

Problem: when server-quorum-type is set to server, restarting glusterd
on the node that is still up causes gluster volume status to report
incorrect information.

Fix: check whether the server field is blank before adding the other
keys to the dictionary.

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511768
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
(cherry picked from commit 046c7e3199fca715592762e271e6061ac99b0c4b)
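
The guard described in the fix is easiest to see in isolation. Below is a minimal, self-contained C sketch of the pattern; the toy_dict type and add_brick_status() are hypothetical stand-ins for glusterd's dict_t plumbing (the real patch works against libglusterfs's dict API in the volume-status code path), so treat this as an illustration of the guard, not the actual change:

#include <stdio.h>

/* Toy stand-ins for glusterd's dict_t and dict_set_str(); purely
 * illustrative, not the GlusterFS API. */
struct kv {
    char key[32];
    char val[64];
};

struct toy_dict {
    struct kv entries[16];
    int count;
};

static void dict_set(struct toy_dict *d, const char *key, const char *val)
{
    if (d->count >= 16)
        return;
    snprintf(d->entries[d->count].key, sizeof(d->entries[d->count].key), "%s", key);
    snprintf(d->entries[d->count].val, sizeof(d->entries[d->count].val), "%s", val);
    d->count++;
}

/* Populate one node's entry for a volume-status-style reply. The guard
 * mirrors the fix: if the server (hostname) field is blank, add nothing
 * rather than emitting a half-filled entry that the CLI would then
 * render as bogus status output. */
static void add_brick_status(struct toy_dict *d, const char *hostname,
                             const char *path, const char *port)
{
    if (hostname == NULL || *hostname == '\0')
        return; /* blank server: skip the remaining keys entirely */
    dict_set(d, "hostname", hostname);
    dict_set(d, "path", path);
    dict_set(d, "port", port);
}

int main(void)
{
    struct toy_dict d = {0};
    add_brick_status(&d, "", "/bricks/b1", "49152");      /* skipped */
    add_brick_status(&d, "node2", "/bricks/b2", "49153"); /* added */
    for (int i = 0; i < d.count; i++)
        printf("%s = %s\n", d.entries[i].key, d.entries[i].val);
    return 0;
}

With the guard in place, an entry whose server field is blank (as the report suggests can happen after a glusterd restart while server quorum is enforced) contributes no partial row, so gluster volume status no longer shows misleading information.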

Comment 3 Shyamsundar 2017-12-08 17:45:38 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem persists with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/