Bug 1055747 - CLI shows another transaction in progress when one node in the cluster is abruptly shut down
Summary: CLI shows another transaction in progress when one node in the cluster is abruptly shut down
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-20 20:47 UTC by Paul Cuzner
Modified: 2016-06-17 15:57 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-17 15:57:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
glusterd log file from one of the nodes showing the locking issue (deleted)
2014-01-20 20:47 UTC, Paul Cuzner

Description Paul Cuzner 2014-01-20 20:47:45 UTC
Created attachment 852843
glusterd log file from one of the nodes showing the locking issue.

Description of problem:
Using glusterfs-3.5.0-0.3.beta1.el6 on RHEL 6.5, I have a 4-way cluster (VMs) and a distributed volume with one brick from each node.
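
A distributed volume of that shape can be created with something along these lines (a sketch only; the volume name "distvol", the brick path /bricks/b1, and the node names other than glfs35-1 are hypothetical stand-ins for this setup):

[root@glfs35-1 ~]# gluster volume create distvol glfs35-1:/bricks/b1 glfs35-2:/bricks/b1 glfs35-3:/bricks/b1 glfs35-4:/bricks/b1
[root@glfs35-1 ~]# gluster volume start distvol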

Test: simulate the abrupt loss of a node by forcing one node in the cluster to power off.

Result: the CLI is either unresponsive or inaccurate; i.e., if you're in the gluster console at the time, you have to break out of it.

Attempting a vol status returns:

"Another transaction is in progress. Please try again after sometime."

At this point peer status and pool list still show the node that was powered off as part of the cluster.

After 5 minutes, vol status is still not working and the peer information remains out of date.
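
For reference, the checks above correspond to CLI invocations along these lines (a minimal sketch; the prompt is taken from the version output below, and the exact output wording may vary between builds):

[root@glfs35-1 ~]# gluster volume status
Another transaction is in progress. Please try again after sometime.
[root@glfs35-1 ~]# gluster peer status     # still lists the powered-off node as a cluster member
[root@glfs35-1 ~]# gluster pool list       # likewise still shows it in the pool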


Version-Release number of selected component (if applicable):
[root@glfs35-1 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.5.0-0.3.beta1.el6.x86_64
glusterfs-devel-3.5.0-0.3.beta1.el6.x86_64
glusterfs-api-3.5.0-0.3.beta1.el6.x86_64
glusterfs-3.5.0-0.3.beta1.el6.x86_64
glusterfs-cli-3.5.0-0.3.beta1.el6.x86_64
glusterfs-server-3.5.0-0.3.beta1.el6.x86_64
glusterfs-libs-3.5.0-0.3.beta1.el6.x86_64


How reproducible:
The test was performed 4 times; the issue occurred in 3 of the 4 runs.

Steps to Reproduce:
1. Power off a node abruptly (not an orderly shutdown)
2. Observe the CLI behaviour (see the command sketch below)
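
A sketch of the reproduction, assuming the nodes are libvirt/KVM guests (which matches the VM-based setup above; the guest name glfs35-2 is hypothetical, and any mechanism that cuts power without a clean guest shutdown would do):

# on the hypervisor: hard power-off, no orderly shutdown inside the guest
virsh destroy glfs35-2

# on a surviving cluster node, observe the CLI
gluster volume status
gluster peer status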


Actual results:


Expected results:
I've done the same test on RHS 2.1u1, and this does not happen there.

Additional info:
Errors in the attached log indicate lock acquisition problems.
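
For anyone inspecting the attached log, the lock-related entries can be pulled out with something like the following (a sketch; the path assumes the default glusterd log location on this build and may differ elsewhere):

[root@glfs35-1 ~]# grep -i lock /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20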

Comment 2 Niels de Vos 2016-06-17 15:57:04 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bug fixes.

