Bug 1365743 - GlusterFS - Memory Leak - High Memory Utilization
Summary: GlusterFS - Memory Leak - High Memory Utilization
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.8.2
Hardware: All
OS: All
Target Milestone: ---
QA Contact:
Depends On:
Reported: 2016-08-10 06:43 UTC by Oleksandr Natalenko
Modified: 2016-08-12 09:48 UTC
CC: 1 user

Fixed In Version: glusterfs-3.8.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2016-08-12 09:48:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description Oleksandr Natalenko 2016-08-10 06:43:36 UTC
Backport of to 3.8.

Comment 1 Vijay Bellur 2016-08-10 06:44:03 UTC
REVIEW: (glusterd: Fix memory leak in glusterd (un)lock RPCs) posted (#1) for review on release-3.8 by Oleksandr Natalenko

Comment 2 Vijay Bellur 2016-08-10 10:32:31 UTC
COMMIT: committed in release-3.8 by Niels de Vos
commit 26471bc310db9ac010935b6fa2716ae555c6f1c7
Author: root <>
Date:   Tue Jul 5 14:33:15 2016 +0530

    glusterd: Fix memory leak in glusterd (un)lock RPCs
    Problem:  At the time of execute "gluster volume profile <vol> info" command
              It does have memory leak in glusterd.
    Solution: Modify the code to prevent memory leak in glusterd.
    Fix    : 1) Unref dict and free dict_val buffer in glusterd_mgmt_v3_lock_peer and
    Test   : To verify the patch, run the loop below to generate I/O traffic:
             for (( i=0 ; i<=1000000 ; i++ )); do
               echo "hi Start Line " > file$i;
               cat file$i >> /dev/null;
             done
             To verify the improvement in glusterd's memory usage, run:
             cnt=0; while [ $cnt -le 1000 ]; do
               pmap -x <glusterd-pid> | grep total;
               gluster volume profile distributed info > /dev/null;
               cnt=`expr $cnt + 1`;
             done
             After applying this patch, the leak is reduced significantly.
    > Reviewed-on:
    > Smoke: Gluster Build System <>
    > CentOS-regression: Gluster Build System <>
    > NetBSD-regression: NetBSD Build System <>
    > Reviewed-by: Atin Mukherjee <>
    > Reviewed-by: Prashanth Pai <>
    BUG: 1365743
    Change-Id: I52a0ca47adb20bfe4b1848a11df23e5e37c5cea9
    Signed-off-by: Mohit Agrawal <>
    Signed-off-by: Oleksandr Natalenko <>
    Reviewed-by: Atin Mukherjee <>
    Smoke: Gluster Build System <>
    Reviewed-by: Prashanth Pai <>
    NetBSD-regression: NetBSD Build System <>
    CentOS-regression: Gluster Build System <>

Comment 3 Niels de Vos 2016-08-12 09:48:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

