Bug 1058929 - Invalid error message "E [afr-transaction.c:876:afr_changelog_pre_op_cbk] 0-vol-replicate-1: xattrop failed on child vol-client-5: Success "
Summary: Invalid error message "E [afr-transaction.c:876:afr_changelog_pre_op_cbk] 0-v...
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-28 18:45 UTC by spandura
Modified: 2016-09-17 12:19 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:21:13 UTC


Attachments:

Description spandura 2014-01-28 18:45:42 UTC
Description of problem:
===========================
When bricks go offline in a distribute-replicate volume, the following ERROR message is observed in the FUSE mount logs:

[2014-01-28 13:56:22.314396] E [afr-transaction.c:876:afr_changelog_pre_op_cbk] 0-vol-replicate-1: xattrop failed on child vol-client-5: Success

The message reports that "xattrop" failed, yet it ends with "Success".

Reporting "Success" when the operation failed is not a valid message.

Version-Release number of selected component (if applicable):
==================================================================
glusterfs 3.4.0.58rhs built on Jan 25 2014 07:04:08

How reproducible:


Steps to Reproduce:
=====================
1. Create a distribute-replicate volume (2 x 3).

2. Create a FUSE mount.

3. From the FUSE mount, execute "self_heal_all_file_types_script1.sh".

4. Once the shell script has completed successfully, bring down the glusterfsd process on NODE2.

5. From the FUSE mount, execute "self_heal_all_file_types_script2.sh".

6. While the shell script is still running, use the "godown" program from xfstests on "brick_mount_point" on NODE3 to bring down the bricks.

Actual results:
=====================
An invalid ERROR message is reported.

Additional info:
====================
root@rhs-client13 [Jan-28-2014-18:23:38] >gluster v info
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 5626c545-0691-4ee1-967a-e6ff47e3611e
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client11:/rhs/bricks/b1
Brick2: rhs-client12:/rhs/bricks/b1-rep1
Brick3: rhs-client13:/rhs/bricks/b1-rep2
Brick4: rhs-client11:/rhs/bricks/b2
Brick5: rhs-client12:/rhs/bricks/b2-rep1
Brick6: rhs-client13:/rhs/bricks/b2-rep2

Comment 5 Vivek Agarwal 2015-12-03 17:21:13 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

