Bug 1367270 - [HC]: After bringing down and up of the bricks VM's are getting paused
Summary: [HC]: After bringing down and up of the bricks VM's are getting paused
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.14
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1333406 1363721 1367272
Blocks:
 
Reported: 2016-08-16 06:09 UTC by Krutika Dhananjay
Modified: 2016-09-01 09:33 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.15
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1363721
Environment:
Last Closed: 2016-09-01 09:21:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Krutika Dhananjay 2016-08-16 06:09:47 UTC
+++ This bug was initially created as a clone of Bug #1363721 +++

+++ This bug was initially created as a clone of Bug #1333406 +++

Description of problem:
=====================
After bricks are brought down and back up, VMs are getting paused

Version-Release number of selected component (if applicable):
=============
glusterfs-server-3.7.9-2.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
=====================
1. Create a 1x3 replicate volume and host a few VMs on the gluster volumes.
2. Log in to the VMs and run a script to populate data (using dd).
3. While IO is in progress, bring down one of the bricks; after some time, bring that brick back up and bring down another brick.
4. After some time, bring up the downed brick and bring down yet another brick. During this down/up cycling, a few VMs were observed to pause.
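The cyclic down/up pattern in the steps above can be sketched as a small simulation (illustrative only: the brick names are hypothetical examples and no gluster commands are executed):

```python
# Models which brick is offline in each phase of the cyclic down/up test
# from the reproduction steps. Purely illustrative; no gluster involved.

def cyclic_outage(bricks, cycles=1):
    """Yield (phase, offline_brick, online_bricks) as one brick at a time
    is taken down and brought back up, in cyclic order."""
    phase = 0
    for _ in range(cycles):
        for down in bricks:
            online = [b for b in bricks if b != down]
            yield phase, down, online
            phase += 1

if __name__ == "__main__":
    bricks = ["server1:/rhgs/data/data-brick1",
              "server2:/rhgs/data/data-brick2",
              "server3:/rhgs/data/data-brick3"]
    for phase, down, online in cyclic_outage(bricks):
        print(f"phase {phase}: offline={down}")
```

At every phase two of the three replicas stay online, so client quorum (cluster.quorum-type: auto) is still met; the split-brain risk comes from the brick holding the only good copy going down mid-write, as the later comments discuss.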

Actual results:
==================
Virtual machines are getting paused 


Expected results:
=================
VMs should not pause.

Additional info:
===================
[root@zod ~]# gluster vol info
 
Volume Name: data
Type: Replicate
Volume ID: 5021c1f8-0b2f-4b34-92ea-a087afe84ce3
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/rhgs/data/data-brick1
Brick2: server2:/rhgs/data/data-brick2
Brick3: server3:/rhgs/data/data-brick3
Options Reconfigured:
diagnostics.client-log-level: INFO
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
nfs.disable: on
cluster.shd-max-threads: 16
 
Volume Name: engine
Type: Replicate
Volume ID: 5e14889a-0ffc-415f-8fbd-259451972c46
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/rhgs/engine/engine-brick1
Brick2: server2:/rhgs/engine/engine-brick2
Brick3: server3:/rhgs/engine/engine-brick3
Options Reconfigured:
cluster.shd-max-threads: 16
nfs.disable: on
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
 
Volume Name: vmstore
Type: Replicate
Volume ID: edd3e117-138e-437b-9e65-319084fecc4b
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/rhgs/vmstore/vmstore-brick1
Brick2: server2:/rhgs/vmstore/vmstore-brick2
Brick3: server3:/rhgs/vmstore/vmstore-brick3
Options Reconfigured:
cluster.shd-max-threads: 16
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
nfs.disable: on
[root@zod ~]#


--- Additional comment from Sahina Bose on 2016-05-19 05:42:59 EDT ---

This bug relates to a cyclic network-outage test that causes a file to end up in split-brain, and is not a very likely scenario.


--- Additional comment from Krutika Dhananjay on 2016-07-18 01:39:27 EDT ---

(In reply to RajeshReddy from comment #0)
> Expected results:
> =================
> VM's should not be paused


Just wondering whether it is possible at all to keep the VM from pausing in this scenario. The best we can do is prevent the shard/VM image from going into split-brain when bricks are brought offline and back online in cyclic order, which means the VM(s) will _still_ pause (with EROFS?) at some point. The difference is that, once the particular file/shard is healed, IO can be resumed from inside the VM without manual intervention to fix the split-brain.

@Pranith: Are the above statements correct? Or is there a way to actually keep the VM from pausing?

-Krutika

--- Additional comment from Pranith Kumar K on 2016-07-18 06:14:41 EDT ---

You are correct; we can't prevent the VMs from pausing. We only need to make sure that split-brains won't happen. Please note that this case may still leave the VM image in a very bad state; all we can guarantee is that the file does not go into split-brain.

--- Additional comment from Vijay Bellur on 2016-08-03 09:06:18 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought on and off in cyclic order) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-03 09:07:12 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-04 07:46:41 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#3) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-04 22:33:30 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#4) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-09 03:30:40 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#5) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-09 04:22:24 EDT ---

REVIEW: http://review.gluster.org/15080 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#6) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-11 05:42:06 EDT ---

REVIEW: http://review.gluster.org/15145 (cluster/afr: Bug fixes in txn codepath) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-11 21:23:15 EDT ---

REVIEW: http://review.gluster.org/15145 (cluster/afr: Bug fixes in txn codepath) posted (#2) for review on master by Krutika Dhananjay (kdhananj@redhat.com)

--- Additional comment from Vijay Bellur on 2016-08-15 06:40:58 EDT ---

COMMIT: http://review.gluster.org/15145 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 79b9ad3dfa146ef29ac99bf87d1c31f5a6fe1fef
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Fri Aug 5 12:18:05 2016 +0530

    cluster/afr: Bug fixes in txn codepath
    
    AFR sets transaction.pre_op[] array even before actually doing the
    pre-op on-disk. Therefore, AFR must not only consider the pre_op[] array
    but also the failed_subvols[] information before setting the pre_op_done[]
    flag. This patch fixes that.
    
    Change-Id: I78ccd39106bd4959441821355a82572659e3affb
    BUG: 1363721
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15145
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Reviewed-by: Anuradha Talur <atalur@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
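The condition described in the commit message can be sketched in Python (the actual fix is C code in AFR's transaction path; the helper below is hypothetical and only models the boolean logic):

```python
# Sketch of the pre_op_done[] condition from the commit message above:
# a subvolume's pre-op counts as "done" only if the pre-op was issued on
# it AND the subvolume has not already failed. The array names follow the
# commit message; this function itself is a hypothetical illustration.

def pre_op_done(pre_op, failed_subvols):
    """Per-subvolume pre_op_done flags, honouring failed_subvols."""
    return [p and not f for p, f in zip(pre_op, failed_subvols)]

# Before the fix, subvolume 1 (pre-op set, but already failed) would be
# wrongly treated as having completed the on-disk pre-op:
flags = pre_op_done(pre_op=[True, True, False],
                    failed_subvols=[False, True, False])
print(flags)  # [True, False, False]
```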

Comment 1 Vijay Bellur 2016-08-16 06:11:04 UTC
REVIEW: http://review.gluster.org/15162 (cluster/afr: Bug fixes in txn codepath) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj@redhat.com)

Comment 2 Vijay Bellur 2016-08-17 10:22:57 UTC
COMMIT: http://review.gluster.org/15162 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit e65e066c4f993aac626112e718ee66d35d15c6a8
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Fri Aug 5 12:18:05 2016 +0530

    cluster/afr: Bug fixes in txn codepath
    
            Backport of: http://review.gluster.org/15145
    
    AFR sets transaction.pre_op[] array even before actually doing the
    pre-op on-disk. Therefore, AFR must not only consider the pre_op[] array
    but also the failed_subvols[] information before setting the pre_op_done[]
    flag. This patch fixes that.
    
    Change-Id: I8163256a6de254be43a7a526c6d2f9dc30e0e1df
    BUG: 1367270
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15162
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Anuradha Talur <atalur@redhat.com>

Comment 3 Vijay Bellur 2016-08-20 09:31:35 UTC
REVIEW: http://review.gluster.org/15222 (cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj@redhat.com)

Comment 4 Worker Ant 2016-08-22 10:05:11 UTC
COMMIT: http://review.gluster.org/15222 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit febaa1e46d3a91a29c4786a17abf29cfc7178254
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Thu Jul 28 21:29:59 2016 +0530

    cluster/afr: Prevent split-brain when bricks are brought off and on in cyclic order
    
            Backport of: http://review.gluster.org/15080
    
    When the bricks are brought offline and then online in cyclic
    order while writes are in progress on a file, thanks to inode
    refresh in write txns, AFR will mostly fail the write attempt
    when the only good copy is offline. However, there is still a
    remote possibility that the file will run into split-brain if
    the brick that has the lone good copy goes offline *after* the
    inode refresh but *before* the write txn completes (I call it
    in-flight split-brain in the patch for ease of reference),
    requiring intervention from admin to resolve the split-brain
    before the IO can resume normally on the file. To get around this,
    the patch does the following things:
    i) retains the dirty xattrs on the file
    ii) avoids marking the last of the good copies as bad (or accused)
        in case it is the one to go down during the course of a write.
    iii) fails that particular write with the appropriate errno.
    
    This way, we still have one good copy left despite the split-brain situation
    which when it is back online, will be chosen as source to do the heal.
    
    Change-Id: I7c13c6ddd5b8fe88b0f2684e8ce5f4a9c3a24a08
    BUG: 1367270
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/15222
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Oleksandr Natalenko <oleksandr@natalenko.name>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
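Point ii) above, never accusing the last remaining good copy, can be illustrated with a toy model (names are hypothetical; the real logic lives in AFR's C write-transaction code):

```python
# Toy model of invariant ii) from the commit message: never mark the last
# good copy as bad during an in-flight write. Illustrative only.
import errno

def apply_write(good, write_succeeded):
    """good[i]: copy i is currently a good copy.
    write_succeeded[i]: the in-flight write reached copy i.
    Returns (new_good, write_errno)."""
    survivors = [g and ok for g, ok in zip(good, write_succeeded)]
    if any(survivors):
        # At least one good copy took the write: accuse the good copies
        # that missed it, as usual.
        return survivors, 0
    # The write missed every good copy (e.g. the lone good brick went
    # down mid-transaction). Do NOT mark it bad; fail the write instead.
    return list(good), errno.EIO

# Cyclic outage left only copy 2 good, and it went down mid-write:
good, err = apply_write(good=[False, False, True],
                        write_succeeded=[True, True, False])
print(good, err)  # copy 2 stays good; the write fails with EIO
```

Because the last good copy is never accused, a heal source always survives: when that brick comes back online, it is chosen as the source for self-heal and IO can resume without admin intervention.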

Comment 5 Kaushal 2016-09-01 09:21:42 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.15, please open a new bug report.

glusterfs-3.7.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-September/050714.html
[2] https://www.gluster.org/pipermail/gluster-users/


