Bug 1367305 - thread CPU saturation limiting throughput on write workloads
Summary: thread CPU saturation limiting throughput on write workloads
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.7.14
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Oleksandr Natalenko
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-16 07:40 UTC by Oleksandr Natalenko
Modified: 2016-09-01 09:33 UTC

Fixed In Version: glusterfs-3.7.15
Doc Type: ---
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-01 09:21:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Oleksandr Natalenko 2016-08-16 07:40:59 UTC
Backport of the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1361678 to the release-3.7 branch.

Comment 1 Vijay Bellur 2016-08-16 07:47:07 UTC
REVIEW: http://review.gluster.org/15168 (cluster/afr: copy loc before passing to syncop) posted (#1) for review on release-3.7 by Oleksandr Natalenko (oleksandr@natalenko.name)

Comment 2 Vijay Bellur 2016-08-17 11:44:15 UTC
COMMIT: http://review.gluster.org/15168 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 318aacabbc482bcc2e1686988a77ad0bc054837e
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Tue Aug 2 15:19:00 2016 +0530

    cluster/afr: copy loc before passing to syncop
    
    Problem:
    When io-threads is enabled on the client side, io-threads destroys the
    call-stub in which the loc is stored as soon as the call stack unwinds.
    Because afr creates the syncop with the address of the loc passed in
    setxattr, io-threads has already freed the call-stub by the time the
    syncop tries to access it. This leads to a crash.
    
    Fix:
    Copy the loc into frame->local and use its address.
    
    > Reviewed-on: http://review.gluster.org/15070
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    
    BUG: 1367305
    Change-Id: I16987e491e24b0b4e3d868a6968e802e47c77f7a
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Signed-off-by: Oleksandr Natalenko <oleksandr@natalenko.name>
    Reviewed-on: http://review.gluster.org/15168
    Reviewed-by: Ravishankar N <ravishankar@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
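
To make the lifetime bug described in the commit above concrete, here is a minimal C sketch of the broken and fixed patterns. This is an illustration only, not the actual afr code: the types below are simplified stand-ins for the real GlusterFS structures (the actual loc_t, call frames, and loc_copy() live in libglusterfs and are more involved).

    #include <stdlib.h>
    #include <string.h>
    
    typedef struct { char *path; } loc_t;         /* simplified loc_t        */
    typedef struct { loc_t loc; } call_stub_t;    /* simplified call-stub    */
    typedef struct { void *local; } call_frame_t; /* simplified call frame   */
    typedef struct { loc_t loc; } afr_local_t;    /* per-frame local storage */
    
    /* Broken pattern: hand the syncop a pointer into the call-stub.
     * io-threads destroys the stub as soon as the call stack unwinds,
     * so this pointer dangles by the time the syncop dereferences it. */
    static loc_t *capture_loc_broken(call_stub_t *stub)
    {
        return &stub->loc;
    }
    
    /* Fixed pattern: deep-copy the loc into frame->local (analogous to
     * loc_copy() in libglusterfs) and hand the syncop that address. The
     * copy lives as long as the frame, not the short-lived stub. */
    static loc_t *capture_loc_fixed(call_frame_t *frame, const loc_t *loc)
    {
        afr_local_t *local = calloc(1, sizeof(*local));
        if (!local)
            return NULL;
        local->loc.path = strdup(loc->path);
        frame->local = local;
        return &local->loc;
    }
    
    int main(void)
    {
        loc_t loc = { .path = "/testfile" };
        call_frame_t frame = { 0 };
    
        /* 'safe' remains valid after the stub would have been destroyed. */
        loc_t *safe = capture_loc_fixed(&frame, &loc);
        (void)safe;
        (void)capture_loc_broken; /* broken variant shown for contrast only */
        return 0;
    }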

Comment 3 Kaushal 2016-09-01 09:21:42 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.7.15, please open a new bug report.

glusterfs-3.7.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-September/050714.html
[2] https://www.gluster.org/pipermail/gluster-users/


