Bug 1686255 - glusterd leaking memory when issued gluster vol status all tasks continuously
Summary: glusterd leaking memory when issued gluster vol status all tasks continuously
Keywords:
Status: MODIFIED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Sanju
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On: 1691164 1694610 1694612
Blocks:
 
Reported: 2019-03-07 05:48 UTC by Bala Konda Reddy M
Modified: 2019-04-11 07:59 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The memory leak in glusterd that occurred during the "gluster volume status all" operation has been addressed in Red Hat Gluster Storage 3.5.
Clone Of:
Cloned to: 1691164
Environment:
Last Closed:
Target Upstream Version:


Attachments
Top output of glusterd for all six nodes of the cluster (deleted)
2019-03-07 05:48 UTC, Bala Konda Reddy M

Description Bala Konda Reddy M 2019-03-07 05:48:42 UTC
Created attachment 1541678 [details]
Top output of glusterd for all six nodes of the cluster

Description of problem:
glusterd is leaking memory when "gluster vol status all tasks" is issued continuously for 12 hours. The memory usage increased from 250 MB to 1.1 GB, an increase of roughly 850 MB.


Version-Release number of selected component (if applicable):
glusterfs-3.12.2-45.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. On a six-node cluster with brick multiplexing enabled
2. Created 150 disperse volumes and 250 replica volumes and started them
3. Took a memory footprint of glusterd from all the nodes (see the sampling sketch after this list)
4. Ran "while true; do gluster volume status all tasks; sleep 2; done", which issues the command every 2 seconds

Actual results:
Observed a memory increase of glusterd on node N1 from 260 MB to 1.1 GB

Expected results:
glusterd memory shouldn't leak

Additional info:
Attaching a screenshot of the top output from before and after the command was executed.

The setup is left in the same state for further debugging.

Comment 9 Atin Mukherjee 2019-03-12 09:11:54 UTC
Sanju,

Looks like there's a leak on the remote glusterd, i.e. in the op-sm framework, based on the periodic statedumps I captured while testing this.

The impacted data types are:

gf_common_mt_gf_timer_t
gf_common_mt_asprintf
gf_common_mt_strdup
gf_common_mt_char
gf_common_mt_txn_opinfo_obj_t

Please check whether we're failing to clean up txn_opinfo somewhere in this transaction; fixing that might implicitly fix the other leaks too.
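
A rough sketch of how such a statedump can be captured and the suspected allocation types inspected (assumes the default statedump directory /var/run/gluster; adjust if the statedump path is configured differently):

    # Ask glusterd to write a statedump; by default it lands in
    # /var/run/gluster/glusterdump.<pid>.dump.<timestamp>.
    kill -USR1 "$(pidof glusterd)"
    sleep 2
    latest=$(ls -t /var/run/gluster/glusterdump.* | head -n 1)
    # Print the usage counters for the data types listed above; each type
    # shows up in the dump as a "usage-type ... memusage" section.
    grep -A5 -E 'gf_common_mt_(gf_timer_t|asprintf|strdup|char|txn_opinfo_obj_t)' "$latest"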

