Bug 1366482 - SAMBA-DHT: Crash seen during rename operations on a cifs mount and Windows access of the share mount
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On: 1345732 1345748 1366483
Blocks:
 
Reported: 2016-08-12 06:07 UTC by Raghavendra G
Modified: 2016-08-24 10:21 UTC
CC: 5 users

Fixed In Version: glusterfs-3.8.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1345748
Environment:
Last Closed: 2016-08-24 10:21:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Raghavendra G 2016-08-12 06:07:53 UTC
+++ This bug was initially created as a clone of Bug #1345748 +++

+++ This bug was initially created as a clone of Bug #1345732 +++

Description of problem:
DHT crashed after the following sequence: a directory was renamed from "xyz" to ".xyz" on a cifs mount; in Windows, the same directory was then marked hidden (right-click, Properties, check the Hidden checkbox); back on the cifs mount, ".xyz" was renamed to "xyz" again; finally, double-clicking "xyz" in Windows failed, and the share became inaccessible.

Version-Release number of selected component (if applicable):
glusterfs-3.7.9-9.el7rhgs.x86_64
samba-client-4.4.3-7.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. Start with an existing Distributed-Replicate volume with a samba-ctdb setup and VSS plugins.
2. Do a cifs mount, and also mount the share in Windows 8.
3. Create files and directories in the share.
4. Play a video file (not necessarily from the directory that will be renamed in the steps below).
5. On the cifs mount, rename a directory containing files, say "xyz", to ".xyz".
6. In the Windows share, right-click "xyz", open Properties, and check the Hidden checkbox.
7. On the cifs mount, rename ".xyz" back to "xyz".
8. Double-click "xyz" in Windows.

Actual results:
The share was not accessible, and the video that was playing crashed midway.

Expected results:
The rename and hidden-attribute operations should succeed, and the share should remain accessible.

Additional info:

-------------------------------BT---------------------------------
(gdb) bt
#0  0x00007f61e3e335f7 in raise () from /lib64/libc.so.6
#1  0x00007f61e3e34ce8 in abort () from /lib64/libc.so.6
#2  0x00007f61e5794beb in dump_core () at ../source3/lib/dumpcore.c:322
#3  0x00007f61e5787fe7 in smb_panic_s3 (why=<optimized out>) at ../source3/lib/util.c:814
#4  0x00007f61e7c7957f in smb_panic (why=why@entry=0x7f61e7cc054a "internal error") at ../lib/util/fault.c:166
#5  0x00007f61e7c79796 in fault_report (sig=<optimized out>) at ../lib/util/fault.c:83
#6  sig_fault (sig=<optimized out>) at ../lib/util/fault.c:94
#7  <signal handler called>
#8  0x0000000000000000 in ?? ()
#9  0x00007f61c547e136 in dht_selfheal_dir_finish (frame=frame@entry=0x7f61c7f8987c, this=this@entry=0x7f61b800dc10, ret=ret@entry=0, invoke_cbk=invoke_cbk@entry=1)
    at dht-selfheal.c:121
#10 0x00007f61c5482d2f in dht_selfheal_directory (frame=frame@entry=0x7f61c7f8987c, dir_cbk=dir_cbk@entry=0x7f61c54938c0 <dht_lookup_selfheal_cbk>, 
    loc=loc@entry=0x7f61c40641e0, layout=layout@entry=0x7f61a8000990) at dht-selfheal.c:2125
#11 0x00007f61c5499563 in dht_lookup_dir_cbk (frame=0x7f61c7f8987c, cookie=<optimized out>, this=0x7f61b800dc10, op_ret=<optimized out>, op_errno=0, inode=0x7f61bd56c51c, 
    stbuf=0x7f61a8002510, xattr=0x7f61c79a4484, postparent=0x7f61a8002580) at dht-common.c:737
#12 0x00007f61c57390d3 in afr_lookup_done (frame=frame@entry=0x7f61c7f8a494, this=this@entry=0x7f61b800bae0) at afr-common.c:1825
#13 0x00007f61c5739734 in afr_lookup_sh_metadata_wrap (opaque=0x7f61c7f8a494) at afr-common.c:1989
#14 0x00007f61cc5f7262 in synctask_wrap (old_task=<optimized out>) at syncop.c:380
#15 0x00007f61e3e45110 in ?? () from /lib64/libc.so.6
#16 0x0000000000000000 in ?? ()
(gdb) f 9
#9  0x00007f61c547e136 in dht_selfheal_dir_finish (frame=frame@entry=0x7f61c7f8987c, this=this@entry=0x7f61b800dc10, ret=ret@entry=0, invoke_cbk=invoke_cbk@entry=1)
    at dht-selfheal.c:121
121	                local->selfheal.dir_cbk (frame, NULL, frame->this, ret,

------------------Client log--------------------------------

[2016-06-13 04:35:35.819883] W [MSGID: 101182] [inode.c:174:__foreach_ancestor_dentry] 0-DOG-dht: per dentry fn returned 1
[2016-06-13 04:35:35.819907] C [MSGID: 101184] [inode.c:228:__is_dentry_cyclic] 0-meta-autoload/inode: detected cyclic loop formation during inode linkage. inode (408e7032-cc0f-479f-a450-b17302802adf) linking under itself as .samba2
[2016-06-13 04:35:35.820427] W [MSGID: 109005] [dht-selfheal.c:2064:dht_selfheal_directory] 0-DOG-dht: linking inode failed (408e7032-cc0f-479f-a450-b17302802adf/.samba2) => 408e7032-cc0f-479f-a450-b17302802adf

--- Additional comment from Vijay Bellur on 2016-06-13 02:58:20 EDT ---

REVIEW: http://review.gluster.org/14707 (cluster/dht: initial cbk before attempting inode-link) posted (#1) for review on master by Raghavendra G (rgowdapp@redhat.com)

--- Additional comment from Vijay Bellur on 2016-06-13 02:59:01 EDT ---

REVIEW: http://review.gluster.org/14707 (cluster/dht: initialize cbk before attempting inode-link) posted (#2) for review on master by Raghavendra G (rgowdapp@redhat.com)

--- Additional comment from Raghavendra G on 2016-06-13 03:12:36 EDT ---

(In reply to Vijay Bellur from comment #2)
> REVIEW: http://review.gluster.org/14707 (cluster/dht: initialize cbk before
> attempting inode-link) posted (#2) for review on master by Raghavendra G
> (rgowdapp@redhat.com)

Though this patch fixes the crash, the bigger question is how loc->parent came to contain ".samba2". The rename was done in the root directory, so for a lookup on ".samba2", loc->parent should have been root.

1. We need more information on the smb traffic during the test.
2. There is probably a bug in a higher layer (smb-server, gfapi, smb-client, etc.) that fills in the wrong loc->parent (not ruling out that dht itself modifies loc->parent, but that is highly unlikely).

regards,
Raghavendra

--- Additional comment from Vijay Bellur on 2016-06-14 00:03:36 EDT ---

REVIEW: http://review.gluster.org/14707 (cluster/dht: initialize cbk before attempting inode-link) posted (#3) for review on master by Raghavendra G (rgowdapp@redhat.com)

--- Additional comment from Vijay Bellur on 2016-06-17 08:23:54 EDT ---

COMMIT: http://review.gluster.org/14707 committed in master by Jeff Darcy (jdarcy@redhat.com) 
------
commit a4d35ccb8afeefae4d9cdd36ac19b0e97d0d04d0
Author: Raghavendra G <rgowdapp@redhat.com>
Date:   Mon Jun 13 12:26:24 2016 +0530

    cluster/dht: initialize cbk before attempting inode-link
    
    Otherwise inode-link failures in selfheal codepath will result in a
    crash.
    
    Change-Id: I9061629ae9d1eb1ac945af5f448d0d8b397a5022
    BUG: 1345748
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/14707
    Reviewed-by: N Balachandran <nbalacha@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Poornima G <pgurusid@redhat.com>
    Reviewed-by: Susant Palai <spalai@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>

Comment 1 Vijay Bellur 2016-08-12 06:09:21 UTC
REVIEW: http://review.gluster.org/15155 (cluster/dht: initialize cbk before attempting inode-link) posted (#1) for review on release-3.8 by Raghavendra G (rgowdapp@redhat.com)

Comment 2 Vijay Bellur 2016-08-12 06:49:03 UTC
REVIEW: http://review.gluster.org/15157 (cluster/dht: initialize cbk before attempting inode-link) posted (#1) for review on release-3.8 by Raghavendra G (rgowdapp@redhat.com)

Comment 3 Vijay Bellur 2016-08-16 06:20:43 UTC
COMMIT: http://review.gluster.org/15157 committed in release-3.8 by Raghavendra G (rgowdapp@redhat.com) 
------
commit fe871ea1ccbfcfc2ef0b6eb6610a683569e5dca9
Author: Raghavendra G <rgowdapp@redhat.com>
Date:   Mon Jun 13 12:26:24 2016 +0530

    cluster/dht: initialize cbk before attempting inode-link
    
    Otherwise inode-link failures in selfheal codepath will result in a
    crash.
    
    > Change-Id: I9061629ae9d1eb1ac945af5f448d0d8b397a5022
    > BUG: 1345748
    > Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    > Reviewed-on: http://review.gluster.org/14707
    > Reviewed-by: N Balachandran <nbalacha@redhat.com>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Poornima G <pgurusid@redhat.com>
    > Reviewed-by: Susant Palai <spalai@redhat.com>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    (cherry picked from commit a4d35ccb8afeefae4d9cdd36ac19b0e97d0d04d0)
    
    Change-Id: I9061629ae9d1eb1ac945af5f448d00dba97a5022
    BUG: 1366482
    Signed-off-by: Raghavendra G <rgowdapp@redhat.com>
    Reviewed-on: http://review.gluster.org/15157
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>

Comment 4 Niels de Vos 2016-08-24 10:21:04 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.3, please open a new bug report.

glusterfs-3.8.3 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000059.html
[2] https://www.gluster.org/pipermail/gluster-users/

