Bug 1514424 - gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
Summary: gluster volume splitbrain info needs to display output of each brick in a stream fashion instead of buffering and dumping at the end
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Karthik U S
QA Contact:
URL:
Whiteboard:
Depends On: 1506104
Blocks:
 
Reported: 2017-11-17 11:24 UTC by Karthik U S
Modified: 2017-12-08 16:46 UTC (History)
CC: 5 users

Fixed In Version: glusterfs-3.10.8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1506104
Environment:
Last Closed: 2017-12-08 16:46:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Comment 1 Worker Ant 2017-11-17 11:31:28 UTC
REVIEW: https://review.gluster.org/18799 (cluster/afr: Print heal info split-brain output in stream fashion) posted (#1) for review on release-3.10 by Karthik U S

Comment 2 Worker Ant 2017-11-27 13:50:33 UTC
COMMIT: https://review.gluster.org/18799 committed in release-3.10 by "Karthik U S" <ksubrahm@redhat.com> with a commit message- cluster/afr: Print heal info split-brain output in stream fashion

Problem:
When we trigger the heal info split-brain command, the output is not
streamed as it is received, but dumped at the end for all the bricks
together. This gives the perception that the command is hung.

Fix:
When we get a split-brain entry while crawling through the pending
heal entries, flush it immediately so that the output is printed
in a stream fashion and the CLI doesn't look like it is hung.

Change-Id: I7547e86b83202d66616749b8b31d4d0dff0abf07
BUG: 1514424
Signed-off-by: karthik-us <ksubrahm@redhat.com>
(cherry picked from commit 05f9c13f4d69e4113f5a851f4097ef35ba3f33b2)

Comment 3 Shyamsundar 2017-12-08 16:46:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.8, please open a new bug report.

glusterfs-3.10.8 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000086.html
[2] https://www.gluster.org/pipermail/gluster-users/

