Bug 1509102 - In distribute volume after glusterd restart, brick goes offline
Summary: In distribute volume after glusterd restart, brick goes offline
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Atin Mukherjee
QA Contact: Rajesh Madaka
URL:
Whiteboard:
Depends On: 1509845 1511293 1511301
Blocks: 1503134
 
Reported: 2017-11-03 04:50 UTC by akarsha
Modified: 2018-09-04 06:39 UTC (History)
CC List: 8 users

Fixed In Version: glusterfs-3.12.2-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1509845 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:38:02 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 06:39:45 UTC

Description akarsha 2017-11-03 04:50:37 UTC
Description of problem:
After restarting glusterd on the same node, the brick goes offline.

Version-Release number of selected component (if applicable):
3.8.4-50

How reproducible:
3/3

Steps to Reproduce:
1. Create a distribute volume with one brick on each of the 3 nodes and start it.
2. Stop glusterd on the other two nodes and check the volume status on the node where glusterd is still running.
3. Restart glusterd on that node and check the volume status again.
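The steps above can be sketched as a shell session. Only 10.70.37.52 appears in this report; the other node hostnames and brick paths are hypothetical placeholders, and a 3-node trusted storage pool is assumed:

```shell
# On node1 (10.70.37.52): create and start a distribute volume with
# one brick per node (node2/node3 are hypothetical hostnames).
gluster volume create testvol \
    10.70.37.52:/bricks/brick0/testvol \
    node2:/bricks/brick0/testvol \
    node3:/bricks/brick0/testvol
gluster volume start testvol

# On node2 and node3: stop glusterd.
systemctl stop glusterd

# Back on node1, the only node still running glusterd:
gluster volume status testvol      # local brick shows Online = Y

# Restart glusterd on node1 and re-check.
systemctl restart glusterd
gluster volume status testvol      # bug: local brick now shows Online = N
```

These commands need a live Gluster cluster, so they are a reproduction sketch rather than a runnable script.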

Actual results:
Before restarting glusterd:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    49160     0          Y       17734
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

After restarting glusterd:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    N/A       N/A        N       N/A  
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks


Expected results:
The brick must be online after glusterd is restarted.

Additional info:
glusterd remains stopped on the other two nodes.

Comment 4 Atin Mukherjee 2017-11-06 08:03:29 UTC
upstream patch : https://review.gluster.org/18669

Comment 7 Atin Mukherjee 2018-01-03 13:34:11 UTC
There's an issue with this patch: it causes a regression in the brick-multiplexing node-reboot scenario. One more patch, https://review.gluster.org/19134, is required to fix this completely.

Comment 9 Rajesh Madaka 2018-02-16 10:29:28 UTC
Verified this bug for a distribute volume and a replica 3 volume on a 6-node cluster.

Verified scenario:

-> Created a distribute volume with one brick from each node in the 6-node cluster.
-> Stopped the glusterd service on 5 of the nodes.
-> Verified gluster volume status from the node where glusterd is still running.
-> The volume status is correct and the brick on the node running glusterd is online.
-> Restarted the glusterd service and verified gluster volume status from the same node.
-> The gluster volume status is correct and the brick is online.

The same steps were followed and verified for a replica 3 volume.
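On the fixed build, the verification flow above amounts to the following sketch. The node hostnames and brick paths are hypothetical, and a 6-node pool is assumed:

```shell
# On node1 of a 6-node cluster: one brick per node (names are placeholders).
gluster volume create testvol \
    node1:/bricks/brick0/testvol node2:/bricks/brick0/testvol \
    node3:/bricks/brick0/testvol node4:/bricks/brick0/testvol \
    node5:/bricks/brick0/testvol node6:/bricks/brick0/testvol
gluster volume start testvol

# Stop glusterd on node2..node6, then check from node1:
gluster volume status testvol      # local brick Online = Y

# Restart glusterd on node1 and re-check.
systemctl restart glusterd
gluster volume status testvol      # expected after the fix: Online = Y

# Replica 3 variant, same flow:
gluster volume create repvol replica 3 \
    node1:/bricks/brick1/repvol node2:/bricks/brick1/repvol \
    node3:/bricks/brick1/repvol
```

As with the reproduction steps, these commands require a live cluster and are illustrative only.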

Moving this bug to the verified state.

Verified version: glusterfs-3.12.2-4

Comment 11 errata-xmlrpc 2018-09-04 06:38:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

