Bug 1057580 - glusterfs process crashed when arequal-checksum is executed on fuse mount.
Summary: glusterfs process crashed when arequal-checksum is executed on fuse mount.
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact:
Depends On:
Reported: 2014-01-24 12:03 UTC by spandura
Modified: 2015-12-03 17:22 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2015-12-03 17:22:30 UTC
Target Upstream Version:


Description spandura 2014-01-24 12:03:02 UTC
Description of problem:
glusterfs crash observed when executing arequal-checksum from a fuse mount on a distributed-replicate (dis-rep) volume.

Core was generated by `/usr/sbin/glusterfs --volfile-server=king --volfile-id=/vol /mnt/vol'.
Program terminated with signal 11, Segmentation fault.
#0  0x0000003ffa289b53 in memcpy () from /lib64/
Missing separate debuginfos, use: debuginfo-install glusterfs-
(gdb) bt
#0  0x0000003ffa289b53 in memcpy () from /lib64/
#1  0x0000003ffa3181f0 in xdrmem_getbytes () from /lib64/
#2  0x0000003ffa31790a in xdr_opaque_internal () from /lib64/
#3  0x0000003ffc20acf8 in xdr_gfs3_read_rsp () from /usr/lib64/
#4  0x0000003ffc2078c3 in xdr_to_generic () from /usr/lib64/
#5  0x00007f343e000559 in client3_3_readv_cbk ()
   from /usr/lib64/glusterfs/
#6  0x0000003ffbe0dfb5 in rpc_clnt_handle_reply ()
   from /usr/lib64/
#7  0x0000003ffbe0f577 in rpc_clnt_notify () from /usr/lib64/
#8  0x0000003ffbe0adf8 in rpc_transport_notify ()
   from /usr/lib64/
#9  0x00007f343f479d86 in socket_event_poll_in (this=0xfd6d90) at socket.c:2119
#10 0x00007f343f47b69d in socket_event_handler (fd=<value optimized out>, 
    idx=<value optimized out>, data=0xfd6d90, poll_in=1, poll_out=0, poll_err=0)
    at socket.c:2229
#11 0x0000003ffb662437 in ?? () from /usr/lib64/
#12 0x00000000004069d7 in main ()
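The crash happens while memcpy copies the readv payload out of the XDR reply buffer (frames #0 through #3: memcpy → xdrmem_getbytes → xdr_opaque → xdr_gfs3_read_rsp), which is consistent with the decoder being asked for more opaque bytes than the received buffer actually holds. As an illustration only (not glusterfs or libc code), a minimal Python sketch of an XDR-style opaque decoder shows the bounds check whose absence turns a corrupt or truncated reply into an out-of-bounds read:

```python
import struct

def xdr_get_opaque(buf: bytes, pos: int, nbytes: int):
    """Decode `nbytes` of opaque data starting at `pos`.

    Returns (data, new_pos). Raises ValueError instead of reading
    past the end of the buffer -- the check that a raw memcpy-based
    decoder would miss.
    """
    # XDR pads opaque data to a multiple of 4 bytes.
    padded = (nbytes + 3) & ~3
    if pos + padded > len(buf):          # bounds check
        raise ValueError("opaque length exceeds remaining buffer")
    return buf[pos:pos + nbytes], pos + padded

# A well-formed reply: 4-byte big-endian length header, then the payload.
payload = b"hello"
reply = struct.pack(">I", len(payload)) + payload + b"\x00" * 3

(length,) = struct.unpack_from(">I", reply, 0)
data, _ = xdr_get_opaque(reply, 4, length)
assert data == payload

# A corrupt reply claims far more bytes than are present; without the
# check above, this is the overread that ends up crashing in memcpy.
bad = struct.pack(">I", 1 << 20) + b"short"
(length,) = struct.unpack_from(">I", bad, 0)
try:
    xdr_get_opaque(bad, 4, length)
except ValueError:
    print("rejected oversized opaque length")
```

This is a sketch of the failure mode suggested by the backtrace, not a confirmed root cause.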

Version-Release number of selected component (if applicable):
glusterfs built on Jan 13 2014 06:59:05

How reproducible:

Steps to Reproduce:
1. Create a distributed-replicate volume (2 x 3). Start the volume.

2. Create a fuse/nfs mount.

3. Create files/dirs.

4. From the fuse/nfs mount, calculate arequal-checksum.
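For context, arequal-checksum walks the mount and aggregates per-file checksums into a single value that can be compared across mounts or replicas. The following is a minimal illustrative sketch of that idea in Python, not the actual tool (which also covers metadata such as ownership, permissions, and link counts):

```python
import hashlib
import os

def arequal_like_checksum(root: str) -> str:
    """Aggregate a checksum over all regular files under `root`.

    Illustrative sketch only: hashes each file's relative path and
    contents in a deterministic traversal order.
    """
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                  # deterministic traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    digest.update(chunk)
    return digest.hexdigest()
```

Running a tool like this from the mount point generates the sustained read workload (many readv calls against the bricks) during which the crash above was observed.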

Actual results:
The glusterfs client process crashed with a segmentation fault (signal 11).

Additional info:

root@king [Jan-24-2014-11:20:19] >gluster v info vol
Volume Name: vol
Type: Distributed-Replicate
Volume ID: 3394a1cf-a7bb-4ee6-98ca-f0f9f37daf26
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Brick1: king:/rhs/bricks/b1
Brick2: hicks:/rhs/bricks/b1-rep1
Brick3: darrel:/rhs/bricks/b1-rep2
Brick4: cutlass:/rhs/bricks/b2
Brick5: fan:/rhs/bricks/b2-rep1
Brick6: mia:/rhs/bricks/b2-rep2
root@king [Jan-24-2014-11:20:23] >
root@king [Jan-24-2014-11:20:24] >gluster v status vol
Status of volume: vol
Gluster process						Port	Online	Pid
Brick king:/rhs/bricks/b1				49153	Y	16267
Brick hicks:/rhs/bricks/b1-rep1				49153	Y	15281
Brick darrel:/rhs/bricks/b1-rep2			49153	Y	15478
Brick cutlass:/rhs/bricks/b2				49153	Y	15396
Brick fan:/rhs/bricks/b2-rep1				49153	Y	9393
Brick mia:/rhs/bricks/b2-rep2				49153	Y	18943
NFS Server on localhost					2049	Y	26476
Self-heal Daemon on localhost				N/A	Y	16282
NFS Server on hicks					2049	Y	25375
Self-heal Daemon on hicks				N/A	Y	15300
NFS Server on fan					2049	Y	19679
Self-heal Daemon on fan					N/A	Y	9412
NFS Server on darrel					2049	Y	25574
Self-heal Daemon on darrel				N/A	Y	15496
NFS Server on cutlass					2049	Y	25681
Self-heal Daemon on cutlass				N/A	Y	15415
NFS Server on mia					2049	Y	29211
Self-heal Daemon on mia					N/A	Y	18962
Task Status of Volume vol
There are no active volume tasks
root@king [Jan-24-2014-11:20:28] >
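In a 2 x 3 volume like the one above, gluster lists bricks replica-set by replica-set, so consecutive groups of three bricks form the two replica sets. A small sketch of that grouping, using the brick list from the `gluster v info` output:

```python
def replica_sets(bricks, replica_count):
    """Group an ordered brick list into replica sets.

    Gluster orders the brick list replica-set by replica-set, so
    each consecutive run of `replica_count` bricks is one set.
    """
    return [bricks[i:i + replica_count]
            for i in range(0, len(bricks), replica_count)]

bricks = [
    "king:/rhs/bricks/b1",
    "hicks:/rhs/bricks/b1-rep1",
    "darrel:/rhs/bricks/b1-rep2",
    "cutlass:/rhs/bricks/b2",
    "fan:/rhs/bricks/b2-rep1",
    "mia:/rhs/bricks/b2-rep2",
]
for n, rset in enumerate(replica_sets(bricks, 3), start=1):
    print(f"replica set {n}: {rset}")
```

So b1 and its two replicas form one set, and b2 and its two replicas form the other; each file lands on exactly one of the two sets via the distribute layer.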

root@king [Jan-24-2014-11:20:46] >gluster pool list
UUID					Hostname	State
fd6f0d89-7e0e-4f7d-bda8-290406faa4dd	darrel	Connected 
92c6c45b-830f-4d1a-99c2-9fa3a37445cc	cutlass	Connected 
5155163a-25eb-47c3-9713-7f1f74eda260	hicks	Connected 
1a4e09c8-a7a5-4132-8041-19f0aabbab7a	mia	Connected 
364b9192-ff31-4d8d-87d3-b896480231c6	fan	Connected 
41b3d871-c923-469f-8ba9-6d8adf0a3753	localhost	Connected 
root@king [Jan-24-2014-11:21:00] >

root@king [Jan-24-2014-11:21:26] >ip
          inet  Bcast:  Mask:
root@king [Jan-24-2014-11:21:34] >

root@hicks [Jan-24-2014-11:21:26] >ip
          inet  Bcast:  Mask:
root@hicks [Jan-24-2014-11:21:34] >

root@darrel [Jan-24-2014-11:21:26] >ip
          inet  Bcast:  Mask:
root@darrel [Jan-24-2014-11:21:34] >

root@cutlass [Jan-24-2014-10:31:57] >ip
          inet  Bcast:  Mask:
root@cutlass [Jan-24-2014-11:22:14] >

root@fan [Jan-24-2014-10:31:57] >ip
          inet  Bcast:  Mask:
root@fan [Jan-24-2014-11:22:14] >

root@mia [Jan-24-2014-10:31:57] >ip
          inet  Bcast:  Mask:
root@mia [Jan-24-2014-11:22:14] >

Fuse mount log:-

patchset: git://
signal received: 11
time of crash: 2014-01-24 10:44:43
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs

Comment 3 Vivek Agarwal 2015-12-03 17:22:30 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which this issue was reported has now reached End of Life.

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
