Bug 1696136 - gluster fuse mount crashed when deleting 2T image file from oVirt Manager UI
Summary: gluster fuse mount crashed when deleting 2T image file from oVirt Manager UI
Keywords:
Status: POST
Alias: None
Product: GlusterFS
Classification: Community
Component: sharding
Version: mainline
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1694595
Blocks: 1694604
 
Reported: 2019-04-04 08:28 UTC by Krutika Dhananjay
Modified: 2019-04-05 09:06 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1694595
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments


Links
Gluster.org Gerrit 22507 (Open): features/shard: Fix crash during background shard deletion in a specific case (last updated 2019-04-04 16:28:37 UTC)
Gluster.org Gerrit 22517 (Open): features/shard: Fix extra unref when inode object is lru'd out and added back (last updated 2019-04-05 09:06:48 UTC)

Description Krutika Dhananjay 2019-04-04 08:28:38 UTC
+++ This bug was initially created as a clone of Bug #1694595 +++

Description of problem:
------------------------
When deleting a 2TB image file, the gluster fuse mount process crashed.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.12.2-47

How reproducible:
-----------------
1/1

Steps to Reproduce:
-------------------
1. Create an image file of 2T from the oVirt Manager UI
2. Delete the same image file after it has been created successfully
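
A rough CLI-only approximation of these steps, assuming the volume is fuse-mounted at a hypothetical /mnt/data and that a sparse file is an acceptable stand-in for the image created by the oVirt Manager UI:

# mount -t glusterfs rhsqa-grafton11.lab.eng.blr.redhat.com:/data /mnt/data
# truncate -s 2T /mnt/data/test-image.raw
# rm -f /mnt/data/test-image.raw

With features.shard on, deleting the 2T file queues background removal of the file's shards, which is the code path named in the posted fix (features/shard: Fix crash during background shard deletion in a specific case).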

Actual results:
---------------
Fuse mount crashed

Expected results:
-----------------
The image file should be deleted successfully and the fuse mount should not crash

--- Additional comment from SATHEESARAN on 2019-04-01 08:33:14 UTC ---

frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash: 
2019-04-01 07:57:53
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.12.2
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9d)[0x7fc72c186b9d]
/lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fc72c191114]
/lib64/libc.so.6(+0x36280)[0x7fc72a7c2280]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9627)[0x7fc71f8ba627]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9ef1)[0x7fc71f8baef1]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x3ae9c)[0x7fc71fb15e9c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x9e8c)[0x7fc71fd88e8c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xb79b)[0x7fc71fd8a79b]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xc226)[0x7fc71fd8b226]
/usr/lib64/glusterfs/3.12.2/xlator/protocol/client.so(+0x17cbc)[0x7fc72413fcbc]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7fc72bf2ca00]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x26b)[0x7fc72bf2cd6b]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fc72bf28ae3]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x7586)[0x7fc727043586]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x9bca)[0x7fc727045bca]
/lib64/libglusterfs.so.0(+0x8a870)[0x7fc72c1e5870]
/lib64/libpthread.so.0(+0x7dd5)[0x7fc72afc2dd5]
/lib64/libc.so.6(clone+0x6d)[0x7fc72a889ead]
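
The shard.so, distribute.so and replicate.so frames above are raw offsets into the shared objects; with the matching glusterfs debuginfo installed they can be resolved to function names with addr2line (-f prints the function name), or by loading the core file in gdb. The core file path below is a placeholder:

# addr2line -f -e /usr/lib64/glusterfs/3.12.2/xlator/features/shard.so 0x9627 0x9ef1
# addr2line -f -e /usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so 0x9e8c 0xb79b 0xc226
# gdb -ex bt -ex quit /usr/sbin/glusterfs /path/to/core.<pid>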

--- Additional comment from SATHEESARAN on 2019-04-01 08:37:56 UTC ---

1. RHHI-V Information
----------------------
RHV 4.3.3
RHGS 3.4.4

2. Cluster Information
-----------------------
[root@rhsqa-grafton11 ~]# gluster pe s
Number of Peers: 2

Hostname: rhsqa-grafton10.lab.eng.blr.redhat.com
Uuid: 46807597-245c-4596-9be3-f7f127aa4aa2
State: Peer in Cluster (Connected)
Other names:
10.70.45.32

Hostname: rhsqa-grafton12.lab.eng.blr.redhat.com
Uuid: 8a3bc1a5-07c1-4e1c-aa37-75ab15f29877
State: Peer in Cluster (Connected)
Other names:
10.70.45.34

3. Volume information
-----------------------
Affected volume: data
[root@rhsqa-grafton11 ~]# gluster volume info data
 
Volume Name: data
Type: Replicate
Volume ID: 9d5a9d10-f192-49ed-a6f0-c912224869e8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
[root@rhsqa-grafton11 ~]# gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhsqa-grafton10.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data                 49154     0          Y       23403
Brick rhsqa-grafton11.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data                 49154     0          Y       23285
Brick rhsqa-grafton12.lab.eng.blr.redhat.co
m:/gluster_bricks/data/data                 49154     0          Y       23296
Self-heal Daemon on localhost               N/A       N/A        Y       16195
Self-heal Daemon on rhsqa-grafton12.lab.eng
.blr.redhat.com                             N/A       N/A        Y       52917
Self-heal Daemon on rhsqa-grafton10.lab.eng
.blr.redhat.com                             N/A       N/A        Y       43829
 
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Comment 1 Worker Ant 2019-04-04 16:28:38 UTC
REVIEW: https://review.gluster.org/22507 (features/shard: Fix crash during background shard deletion in a specific case) posted (#1) for review on master by Krutika Dhananjay

Comment 2 Worker Ant 2019-04-05 09:06:49 UTC
REVIEW: https://review.gluster.org/22517 (features/shard: Fix extra unref when inode object is lru'd out and added back) posted (#1) for review on master by Krutika Dhananjay
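
Both changes can be fetched directly from Gerrit for testing; the sketch below assumes the standard Gerrit ref layout and the patchset #1 noted above (a newer patchset would change the trailing number):

# git clone git://git.gluster.org/glusterfs.git && cd glusterfs
# git fetch https://review.gluster.org/glusterfs refs/changes/07/22507/1 && git checkout FETCH_HEAD
# git fetch https://review.gluster.org/glusterfs refs/changes/17/22517/1 && git cherry-pick FETCH_HEAD

The cherry-pick assumes the second change applies cleanly on top of the first.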

