Bug 1366222 - "heal info --xml" not showing the brick name of offline bricks.
Summary: "heal info --xml" not showing the brick name of offline bricks.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1366128
Blocks: 1366489
Reported: 2016-08-11 10:14 UTC by Ravishankar N
Modified: 2017-03-27 18:19 UTC (History)

Fixed In Version: glusterfs-3.9.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1366128
: 1366489 (view as bug list)
Environment:
Last Closed: 2017-03-27 18:19:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Ravishankar N 2016-08-11 10:14:47 UTC
+++ This bug was initially created as a clone of Bug #1366128 +++

Description of problem:
======================
When bricks are offline and we request the heal info XML output for all bricks, the offline brick names are not available; instead, the 'name' tag carries the message 'information not available'. However, the plain "heal info" command output displays the brick names even for offline bricks. It would be good to have the same information in the XML output as well.


Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.9-10.el7rhgs.x86_64

How reproducible:
==================
1/1

Steps to Reproduce:
=====================
1. Create a replicated volume, start it, mount it, and create files from the mount.

2. Bring down one brick, then modify the files.

3. Execute "gluster volume heal <volname> info --xml"

Actual results:
================
Offline brick names are not shown in the XML output.

Expected results:
==================
Brick names should be shown in the XML output as well.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-08-11 02:41:28 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from  on 2016-08-11 02:42:00 EDT ---

Output of 'heal info --xml' command:
====================================
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="1af55777-086f-4dd7-bd2f-54981eeab596">
        <name>rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0</name>
        <file gfid="2fc97d4b-05c5-417d-84e7-392455938af6">/file1</file>
        <file gfid="00000000-0000-0000-0000-000000000001">/</file>
        <file gfid="e4c3c22d-4c21-402a-8101-bf4a0f30021a">/file2</file>
        <file gfid="d893cdd2-c37f-4b11-93aa-cff1c35d724e">/file3</file>
        <file gfid="a507c02f-3e68-4568-bf49-accc4ca57d36">/file4</file>
        <file gfid="c8fd2784-d20b-45d1-9a79-bfd0c2ea7d6a">/file5</file>
        <file gfid="232449a1-b55b-4cd7-8964-7294b8f058dc">/file6</file>
        <file gfid="146732dd-aed2-45ea-9da1-3046ae131046">/file7</file>
        <file gfid="4f54483a-9f8e-41ee-b9dc-71f4bb54ff2c">/file8</file>
        <file gfid="a35e17c6-899a-4dee-8c69-2d8db47994c9">/file9</file>
        <file gfid="ce8b6800-b491-4d90-963a-fd0e2e0d740d">/file10</file>
        <status>Connected</status>
        <numberOfEntries>11</numberOfEntries>
      </brick>
      <brick hostUuid="-">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>
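The problem is easy to see by parsing the XML above. A minimal sketch using only the Python standard library and an excerpt of the output quoted in this comment (file entries trimmed for brevity):

```python
import xml.etree.ElementTree as ET

# Trimmed excerpt of the 'heal info --xml' output quoted above.
xml_output = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="1af55777-086f-4dd7-bd2f-54981eeab596">
        <name>rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0</name>
        <status>Connected</status>
        <numberOfEntries>11</numberOfEntries>
      </brick>
      <brick hostUuid="-">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
</cliOutput>"""

root = ET.fromstring(xml_output)
for brick in root.iter("brick"):
    # The offline brick carries a placeholder instead of host:/path.
    print(f"{brick.findtext('name')}: {brick.findtext('status')}")
```

Running this prints "information not available" for the second brick, so a consumer of the XML cannot tell which brick is down.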

Output of 'heal info' command:
================================
Brick rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0
/file1 
/ 
/file2 
/file3 
/file4 
/file5 
/file6 
/file7 
/file8 
/file9 
/file10 
Status: Connected
Number of entries: 11

Brick rhsauto031.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick1
Status: Transport endpoint is not connected
Number of entries: -

Comment 1 Vijay Bellur 2016-08-11 10:15:37 UTC
REVIEW: http://review.gluster.org/15146 (glfsheal: print brick name and path even when brick is down) posted (#1) for review on master by Ravishankar N (ravishankar@redhat.com)

Comment 2 Vijay Bellur 2016-08-12 06:25:33 UTC
COMMIT: http://review.gluster.org/15146 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 5ef32c57f327e1dd4e9d227b9c8fd4b6f6fb4970
Author: Ravishankar N <ravishankar@redhat.com>
Date:   Thu Aug 11 10:10:25 2016 +0000

    glfsheal: print brick name and path even when brick is down
    
    The xml variant of heal info command does not display brick name when
    the brick is down due to a failure to fetch the hostUUID. But the non
    xml variant does. So fixed the xml variant to print the remote_host
    and remote_subvol even when the brick is down.
    
    Change-Id: I16347eb4455b9bcc7a9b0127f8783140b6016578
    BUG: 1366222
    Signed-off-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-on: http://review.gluster.org/15146
    Reviewed-by: Anuradha Talur <atalur@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
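The gist of the fix can be illustrated outside the actual C code: emit the <name> field from remote_host and remote_subvol unconditionally, rather than only when the host UUID lookup succeeds. A hedged Python sketch of that behavior (hypothetical helper, not the glfsheal source):

```python
import xml.etree.ElementTree as ET

def brick_element(remote_host, remote_subvol, host_uuid=None):
    """Build a <brick> element for the heal-info XML output.

    Illustrative only: the name is composed from remote_host and
    remote_subvol even when the brick is down and no host UUID could
    be fetched, mirroring the behavior after the fix."""
    brick = ET.Element("brick", hostUuid=host_uuid or "-")
    ET.SubElement(brick, "name").text = f"{remote_host}:{remote_subvol}"
    status = "Connected" if host_uuid else "Transport endpoint is not connected"
    ET.SubElement(brick, "status").text = status
    return brick

# A down brick now still reports its host and brick path.
down = brick_element("rhsauto031.lab.eng.blr.redhat.com",
                     "/bricks/brick0/hosdu_brick1")
print(ET.tostring(down, encoding="unicode"))
```

With this shape, the offline brick in the earlier output would carry its host:/path in <name> instead of the placeholder, matching the non-XML variant.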

Comment 3 Shyamsundar 2017-03-27 18:19:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/

