Bug 1365463 - [Perf] : Poor metadata performance on Ganesha v4 mounts
Summary: [Perf] : Poor metadata performance on Ganesha v4 mounts
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: ganesha-nfs
Version: 3.8
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: Soumya Koduri
QA Contact: Ambarish
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-09 10:50 UTC by Ambarish
Modified: 2017-11-07 10:40 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:40:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Ambarish 2016-08-09 10:50:30 UTC
Description of problem:
-----------------------

Metadata performance on Ganesha v4 mounts is poor compared to Gluster NFS:

Throughput in files/sec:

Operation   Ganesha v4   Gluster NFS
stat             959         1596
chmod            820         1477
setxattr        1318         1750
getxattr         334         1535
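To quantify the gap, a quick calculation of each operation's Ganesha v4 rate as a percentage of its Gluster NFS rate, using only the files/sec figures reported above:

```shell
# Relative throughput computed from the files/sec figures reported above:
# Ganesha v4 rate as a percentage of the Gluster NFS rate, per operation.
OUT=$(awk 'BEGIN {
    n = split("stat chmod setxattr getxattr", op)
    split("959 820 1318 334", g4)        # Ganesha v4, files/sec
    split("1596 1477 1750 1535", gnfs)   # Gluster NFS, files/sec
    for (i = 1; i <= n; i++)
        printf "%-8s %5.1f%% of Gluster NFS\n", op[i], 100 * g4[i] / gnfs[i]
}')
echo "$OUT"
```

getxattr comes out worst, at roughly a fifth of the Gluster NFS rate.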


Both sets of results were collected with the "Smallfile Perf enhancements" tunings applied: cluster.lookup-optimize is on, and server.event-threads and client.event-threads are set to 4 each.
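For reference, those tunings map to the following gluster CLI commands (a sketch, run on any one of the servers; "testvol" matches the volume info in the additional info section below):

```shell
# Smallfile perf tunings referenced above, applied to the test volume.
gluster volume set testvol cluster.lookup-optimize on
gluster volume set testvol server.event-threads 4
gluster volume set testvol client.event-threads 4
```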

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

glusterfs-server-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-gluster-2.4-0.dev.26.el7rhgs.x86_64
pacemaker-libs-1.1.13-10.el7.x86_64
pcs-0.9.143-15.el7.x86_64


How reproducible:
-----------------

100%

Steps to Reproduce:
------------------
My setup consists of 4 servers and 4 clients. Each client mounts from 1 server.

Run the smallfile workload in a distributed, multithreaded way on Ganesha v4 and Gluster NFS mounts:

python /small-files/smallfile/smallfile_cli.py --operation stat --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

python /small-files/smallfile/smallfile_cli.py --operation chmod --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

...and similarly for the remaining metadata operations (setxattr, getxattr).
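The per-operation invocations above can be wrapped in a single loop. A sketch, assuming smallfile_cli.py lives at the path shown above and that $CLIENT holds the space-separated client list (the gqac* hostnames below are hypothetical placeholders):

```shell
# Iterate the smallfile metadata operations from the reproducer.
# $CLIENT: space-separated client hostnames (placeholders used if unset).
CLIENT="${CLIENT:-gqac001 gqac002 gqac003 gqac004}"
HOSTS=$(echo "$CLIENT" | tr ' ' ',')

for op in stat chmod setxattr getxattr; do
    # Commands are printed here; drop the leading 'echo' to actually run them.
    echo python /small-files/smallfile/smallfile_cli.py \
        --operation "$op" --threads 8 --file-size 64 \
        --files 10000 --top /gluster-mount --host-set "$HOSTS"
done
```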

Actual results:
--------------

Metadata performance is comparatively poor.

Expected results:
-----------------

Metadata performance on Ganesha v4 mounts should be on par with Gluster NFS.

Additional info:
----------------

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3ee2c046-939b-4915-908b-859bfcad0840
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable

Comment 2 Ambarish 2016-08-09 11:50:41 UTC
I see a problem with smallfile renames on Ganesha v3 as well:

On Ganesha v3 mounts : 156 files/sec
On GlusterNFS : 367 files/sec

The rest of the operations looked comparable.

Comment 3 Ambarish 2016-08-09 16:44:32 UTC
Ignore comment #2; it was for BZ#1365459.

What I meant to say was there's a problem with smallfile listing on v3 mounts.

On Ganesha v3 mounts : 940 files/sec
On GlusterNFS : 1703 files/sec

There's a downstream bug for the same :
https://bugzilla.redhat.com/show_bug.cgi?id=1345909

Comment 5 Niels de Vos 2016-09-12 05:39:43 UTC
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

Comment 6 Niels de Vos 2017-11-07 10:40:08 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

