Bug 1513486 - [Perf] Slow random reads performance when compared to random writes on SMB
Summary: [Perf] Slow random reads performance when compared to random writes on SMB
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Version: rhgs-3.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Poornima G
QA Contact: Karan Sandha
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-15 14:09 UTC by Karan Sandha
Modified: 2018-11-19 06:47 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 06:47:05 UTC


Attachments
Profiles with 8 combinations mentioned in the profiles (deleted)
2017-11-15 14:09 UTC, Karan Sandha

Description Karan Sandha 2017-11-15 14:09:09 UTC
Description of problem:
Random reads perform slower over SMB3 than random writes.

Version-Release number of selected component (if applicable):
3.8.4.52 (3.3.1) 

How reproducible:
100%

Steps to Reproduce:
1. Mount the gluster volume using SMB 
2. Run random reads and random writes with perf tuning. 
3. Compare both the workloads with each other 
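The steps above can be sketched as shell commands. The server, share, and user names (`server1`, `gvol`, `testuser`) are placeholders, not taken from the bug report, and fio is used here as one possible random I/O driver; the commands are echoed as a dry run so they can be reviewed before running against a live cluster.

```shell
MOUNTPOINT=/mnt/gvol-smb

# Step 1: mount the gluster volume over SMB (CIFS), SMB3 dialect.
mount_cmd="mount -t cifs -o vers=3.0,username=testuser //server1/gvol $MOUNTPOINT"
echo "$mount_cmd"

# Step 2: run a random write and a random read workload on the share,
# then compare the reported throughput of the two runs.
for rw in randwrite randread; do
  fio_cmd="fio --name=$rw --rw=$rw --bs=64k --size=1g --direct=1 --directory=$MOUNTPOINT"
  echo "$fio_cmd"
done
```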

Actual results:
Random reads are 100 Mb/s per node.
Random writes are 400 Mb/s per node.

Expected results:
Reads are expected to be faster than writes.

Additional info:

Comment 2 Karan Sandha 2017-11-15 14:09:59 UTC
Created attachment 1352631 [details]
Profiles with 8 combinations mentioned in the profiles

Comment 5 Poornima G 2018-02-20 06:28:54 UTC
Hi,

I ran the iozone test case on a 3*3 volume from a Windows client and have attached the output. Random reads are faster than random writes, except in two cases:
 file size 1024 and record size 128,
 file size 131072 and record size 512.

On rerunning, random reads are faster than random writes in these cases as well.

So this doesn't appear to be a generic random read-write issue; I think it can be deferred to 3.4.0?

@Karan did you run iozone or small_file.py? Is it possible to try on 3.4.0?
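The two iozone cases called out above can be rerun directly. This is a minimal sketch, assuming sizes are in KB (iozone's default unit) and that the volume is mounted at a placeholder path `/mnt/gvol-smb`; `-i 0` (write) is included because `-i 2` (random read/write) needs an existing file, and the commands are echoed as a dry run.

```shell
MOUNT=/mnt/gvol-smb   # placeholder mount point for the SMB share

# The two file-size/record-size combinations where random reads lagged:
for spec in "1024:128" "131072:512"; do
  fsize=${spec%%:*}   # file size in KB
  rsize=${spec##*:}   # record size in KB
  iozone_cmd="iozone -i 0 -i 2 -s ${fsize}k -r ${rsize}k -f $MOUNT/iozone.tmp"
  echo "$iozone_cmd"
done
```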

Comment 9 Poornima G 2018-11-19 06:47:05 UTC
Since the CIFS protocol is lower priority downstream, and Windows performance shows consistent numbers for the same testing, closing as WONTFIX.

