Bug 1057580 - glusterfs process crashed when arequal-checksum is executed on fuse mount.
Summary: glusterfs process crashed when arequal-checksum is executed on fuse mount.
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-24 12:03 UTC by spandura
Modified: 2015-12-03 17:22 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:22:30 UTC



Description spandura 2014-01-24 12:03:02 UTC
Description of problem:
===========================
A glusterfs client process crash was observed while executing arequal-checksum from a FUSE mount on a distributed-replicate (dis-rep) volume.

Core was generated by `/usr/sbin/glusterfs --volfile-server=king --volfile-id=/vol /mnt/vol'.
Program terminated with signal 11, Segmentation fault.
#0  0x0000003ffa289b53 in memcpy () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glusterfs-3.4.0.57rhs-1.el6rhs.x86_64
(gdb) bt
#0  0x0000003ffa289b53 in memcpy () from /lib64/libc.so.6
#1  0x0000003ffa3181f0 in xdrmem_getbytes () from /lib64/libc.so.6
#2  0x0000003ffa31790a in xdr_opaque_internal () from /lib64/libc.so.6
#3  0x0000003ffc20acf8 in xdr_gfs3_read_rsp () from /usr/lib64/libgfxdr.so.0.0.0
#4  0x0000003ffc2078c3 in xdr_to_generic () from /usr/lib64/libgfxdr.so.0.0.0
#5  0x00007f343e000559 in client3_3_readv_cbk ()
   from /usr/lib64/glusterfs/3.4.0.57rhs/xlator/protocol/client.so
#6  0x0000003ffbe0dfb5 in rpc_clnt_handle_reply ()
   from /usr/lib64/libgfrpc.so.0.0.0
#7  0x0000003ffbe0f577 in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0.0.0
#8  0x0000003ffbe0adf8 in rpc_transport_notify ()
   from /usr/lib64/libgfrpc.so.0.0.0
#9  0x00007f343f479d86 in socket_event_poll_in (this=0xfd6d90) at socket.c:2119
#10 0x00007f343f47b69d in socket_event_handler (fd=<value optimized out>, 
    idx=<value optimized out>, data=0xfd6d90, poll_in=1, poll_out=0, poll_err=0)
    at socket.c:2229
#11 0x0000003ffb662437 in ?? () from /usr/lib64/libglusterfs.so.0.0.0
#12 0x00000000004069d7 in main ()
(gdb) 
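The backtrace above lacks symbols, and gdb itself names the exact debuginfo package to install. A hedged sketch of how one might symbolize the core on the affected host; the core file path is an illustrative assumption, not taken from this report:

```shell
#!/bin/sh
# Package name is the one suggested by gdb in the session above.
PKG="glusterfs-3.4.0.57rhs-1.el6rhs.x86_64"
CORE="/core.12345"   # assumed core file path; substitute the real one

symbolize() {
    # Pull in the matching debuginfo, then re-open the core in batch mode
    # to capture a fully symbolized backtrace.
    debuginfo-install -y "$PKG"
    gdb --batch -ex "bt full" /usr/sbin/glusterfs "$CORE"
}

# Only attempt this on a RHEL/RHS host that has the debuginfo-install helper.
if command -v debuginfo-install >/dev/null 2>&1; then
    symbolize
fi
```

With symbols installed, the `memcpy`/`xdrmem_getbytes` frames would resolve to source lines, which should make it clearer whether the XDR decode in `client3_3_readv_cbk` is copying past the end of the response buffer.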

Version-Release number of selected component (if applicable):
=================================================================
glusterfs 3.4.0.57rhs built on Jan 13 2014 06:59:05

How reproducible:
==================
Often

Steps to Reproduce:
======================
1. Create a distributed-replicate volume (2 x 3) and start it.

2. Create a FUSE/NFS mount.

3. Create files and directories on the mount.

4. From the FUSE/NFS mount, calculate the arequal-checksum.
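The steps above can be sketched as a script. Hostnames and brick paths are copied from the `gluster v info` output later in this report; the mount point, file sizes, and the `arequal-checksum` invocation are illustrative assumptions:

```shell
#!/bin/sh
# Bricks in replica order: each consecutive triple forms one replica set (2 x 3).
BRICKS="king:/rhs/bricks/b1 hicks:/rhs/bricks/b1-rep1 darrel:/rhs/bricks/b1-rep2 \
cutlass:/rhs/bricks/b2 fan:/rhs/bricks/b2-rep1 mia:/rhs/bricks/b2-rep2"

reproduce() {
    # 1. Create the 2 x 3 distributed-replicate volume and start it.
    gluster volume create vol replica 3 $BRICKS
    gluster volume start vol

    # 2. FUSE-mount the volume (mount point assumed).
    mkdir -p /mnt/vol
    mount -t glusterfs king:/vol /mnt/vol

    # 3. Create some files and directories on the mount.
    mkdir -p /mnt/vol/dir1
    dd if=/dev/urandom of=/mnt/vol/dir1/file1 bs=1M count=10

    # 4. Run arequal-checksum against the mount (flag assumed).
    arequal-checksum -p /mnt/vol
}

# Only attempt the repro on a host that actually has the gluster CLI.
if command -v gluster >/dev/null 2>&1; then
    reproduce
fi
```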


Actual results:
=================
The glusterfs client process crashed with a segmentation fault (signal 11).

Additional info:
===========================

root@king [Jan-24-2014-11:20:19] >gluster v info vol
 
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 3394a1cf-a7bb-4ee6-98ca-f0f9f37daf26
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: king:/rhs/bricks/b1
Brick2: hicks:/rhs/bricks/b1-rep1
Brick3: darrel:/rhs/bricks/b1-rep2
Brick4: cutlass:/rhs/bricks/b2
Brick5: fan:/rhs/bricks/b2-rep1
Brick6: mia:/rhs/bricks/b2-rep2
root@king [Jan-24-2014-11:20:23] >
root@king [Jan-24-2014-11:20:24] >gluster v status vol
Status of volume: vol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick king:/rhs/bricks/b1				49153	Y	16267
Brick hicks:/rhs/bricks/b1-rep1				49153	Y	15281
Brick darrel:/rhs/bricks/b1-rep2			49153	Y	15478
Brick cutlass:/rhs/bricks/b2				49153	Y	15396
Brick fan:/rhs/bricks/b2-rep1				49153	Y	9393
Brick mia:/rhs/bricks/b2-rep2				49153	Y	18943
NFS Server on localhost					2049	Y	26476
Self-heal Daemon on localhost				N/A	Y	16282
NFS Server on hicks					2049	Y	25375
Self-heal Daemon on hicks				N/A	Y	15300
NFS Server on fan					2049	Y	19679
Self-heal Daemon on fan					N/A	Y	9412
NFS Server on darrel					2049	Y	25574
Self-heal Daemon on darrel				N/A	Y	15496
NFS Server on cutlass					2049	Y	25681
Self-heal Daemon on cutlass				N/A	Y	15415
NFS Server on mia					2049	Y	29211
Self-heal Daemon on mia					N/A	Y	18962
 
Task Status of Volume vol
------------------------------------------------------------------------------
There are no active volume tasks
 
root@king [Jan-24-2014-11:20:28] >

root@king [Jan-24-2014-11:20:46] >gluster pool list
UUID					Hostname	State
fd6f0d89-7e0e-4f7d-bda8-290406faa4dd	darrel	Connected 
92c6c45b-830f-4d1a-99c2-9fa3a37445cc	cutlass	Connected 
5155163a-25eb-47c3-9713-7f1f74eda260	hicks	Connected 
1a4e09c8-a7a5-4132-8041-19f0aabbab7a	mia	Connected 
364b9192-ff31-4d8d-87d3-b896480231c6	fan	Connected 
41b3d871-c923-469f-8ba9-6d8adf0a3753	localhost	Connected 
root@king [Jan-24-2014-11:21:00] >


root@king [Jan-24-2014-11:21:26] >ip
          inet 10.70.34.119  Bcast:10.70.35.255  Mask:255.255.254.0
root@king [Jan-24-2014-11:21:34] >


root@hicks [Jan-24-2014-11:21:26] >ip
          inet 10.70.34.118  Bcast:10.70.35.255  Mask:255.255.254.0
root@hicks [Jan-24-2014-11:21:34] >

root@darrel [Jan-24-2014-11:21:26] >ip
          inet 10.70.34.115  Bcast:10.70.35.255  Mask:255.255.254.0
root@darrel [Jan-24-2014-11:21:34] >


root@cutlass [Jan-24-2014-10:31:57] >ip
          inet 10.70.34.116  Bcast:10.70.35.255  Mask:255.255.254.0
root@cutlass [Jan-24-2014-11:22:14] >


root@fan [Jan-24-2014-10:31:57] >ip
          inet 10.70.34.91  Bcast:10.70.35.255  Mask:255.255.254.0
root@fan [Jan-24-2014-11:22:14] >


root@mia [Jan-24-2014-10:31:57] >ip
          inet 10.70.34.92  Bcast:10.70.35.255  Mask:255.255.254.0
root@mia [Jan-24-2014-11:22:14] >


Fuse mount log:-
===================

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-01-24 10:44:43
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.0.57rhs
/lib64/libc.so.6[0x3ffa232960]
/lib64/libc.so.6(memcpy+0x2f3)[0x3ffa289b53]
/lib64/libc.so.6[0x3ffa3181f0]
/lib64/libc.so.6(xdr_opaque+0x7a)[0x3ffa31790a]
/usr/lib64/libgfxdr.so.0(xdr_gfs3_read_rsp+0x78)[0x3ffc20acf8]
/usr/lib64/libgfxdr.so.0(xdr_to_generic+0x73)[0x3ffc2078c3]
/usr/lib64/glusterfs/3.4.0.57rhs/xlator/protocol/client.so(client3_3_readv_cbk+0xb9)[0x7f343e000559]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3ffbe0dfb5]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x147)[0x3ffbe0f577]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x3ffbe0adf8]
/usr/lib64/glusterfs/3.4.0.57rhs/rpc-transport/socket.so(+0x8d86)[0x7f343f479d86]
/usr/lib64/glusterfs/3.4.0.57rhs/rpc-transport/socket.so(+0xa69d)[0x7f343f47b69d]
/usr/lib64/libglusterfs.so.0[0x3ffb662437]
/usr/sbin/glusterfs(main+0x6c7)[0x4069d7]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x3ffa21ecdd]
/usr/sbin/glusterfs[0x404619]
---------

Comment 3 Vivek Agarwal 2015-12-03 17:22:30 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

