Bug 1511768 - In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
Summary: In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.13
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On: 1511339 1511782
Blocks: 1507933
 
Reported: 2017-11-10 05:29 UTC by Sanju
Modified: 2017-12-08 17:45 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1511339
Environment:
Last Closed: 2017-12-08 17:45:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Comment 1 Worker Ant 2017-11-10 05:31:38 UTC
REVIEW: https://review.gluster.org/18711 (glusterd: display gluster volume status, when quorum type is server) posted (#1) for review on release-3.13 by Sanju Rakonde

Comment 2 Worker Ant 2017-11-14 15:33:35 UTC
COMMIT: https://review.gluster.org/18711 committed in release-3.13 by Sanju Rakonde <srakonde@redhat.com> with a commit message: glusterd: display gluster volume status, when quorum type is server

Problem: when server-quorum-type is set to server, after restarting glusterd
on the node that is still up, gluster volume status reports incorrect
information.

Fix: check whether the server field is blank before adding the other keys
into the dictionary (a sketch of this guard follows the commit footer below).

Change-Id: I926ebdffab330ccef844f23f6d6556e137914047
BUG: 1511768
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
(cherry picked from commit 046c7e3199fca715592762e271e6061ac99b0c4b)
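
The fix described above amounts to a guard: skip populating per-node status keys when the server (hostname) field is blank. Below is a minimal, self-contained C sketch of that idea; the names here (node_entry, set_key, add_node_to_dict) are hypothetical stand-ins for glusterd's dict_t handling, not the actual patched code.

    /* Hedged illustration: skip adding keys when the server field is blank. */
    #include <stdio.h>

    typedef struct {
        const char *hostname;   /* blank when no valid server was recorded */
        const char *svc_name;   /* e.g. "Self-heal Daemon" or "NFS Server" */
        int         online;
    } node_entry;

    /* Stand-in for a dict_set_str()-style call: record one key/value pair. */
    static void set_key(const char *key, const char *value) {
        printf("%s = %s\n", key, value);
    }

    /* Add a node's status keys only when its server field is non-blank;
     * otherwise add nothing, so a stale entry cannot mislabel a daemon. */
    static int add_node_to_dict(const node_entry *node) {
        if (node->hostname == NULL || node->hostname[0] == '\0')
            return -1;   /* blank server: skip this node entirely */

        set_key("hostname", node->hostname);
        set_key("service", node->svc_name);
        set_key("status", node->online ? "Online" : "Offline");
        return 0;
    }

    int main(void) {
        node_entry shd = { "server1", "Self-heal Daemon", 1 };
        node_entry stale = { "", "NFS Server", 1 };

        add_node_to_dict(&shd);    /* keys added for the valid node */
        add_node_to_dict(&stale);  /* skipped: blank server field */
        return 0;
    }

With a check like this in place, the status dictionary only carries entries for nodes with a valid server, which matches the behaviour the commit message describes.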

Comment 3 Shyamsundar 2017-12-08 17:45:38 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/

