Bug 1689786 - du -sh output calculation still not complete even after about a week [NEEDINFO]
Summary: du -sh output calculation still not complete even after about a week
Keywords:
Status: NEW
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Raghavendra G
QA Contact: nchilaka
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-18 06:31 UTC by nchilaka
Modified: 2019-04-15 15:29 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
rhinduja: needinfo? (rgowdapp)



Description nchilaka 2019-03-18 06:31:44 UTC
Description of problem:
=========================
I triggered a du -sh on the root of the mounted volume, and even after a week it has still not returned any output. Only the starting timestamp in the paste below has printed; the closing date command never ran because du is still running:
[root@rhs-client18 rpcx3]# date;du -sh IOs;date
Mon Mar 11 12:24:17 IST 2019
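
(For anyone triaging: a quick way to check whether the du process is still crawling rather than wedged is to watch its syscalls. This is only a diagnostic sketch, not output from this setup; <PID> is a placeholder for the actual du process ID.)

[root@rhs-client18 rpcx3]# pgrep -af 'du -sh'
[root@rhs-client18 rpcx3]# strace -f -e trace=getdents64,newfstatat -p <PID>   # a live crawl shows a steady stream of these calls; silence suggests du is blocked on a reply
[root@rhs-client18 rpcx3]# ls -l /proc/<PID>/fd                                # shows which directory du currently has open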



Version-Release number of selected component (if applicable):
====================
3.12.2-43


How reproducible:
===============
Hit once on my system setup for RPC tests.

Steps to Reproduce:
====================
More details: https://docs.google.com/spreadsheets/d/17Yf9ZRWnWOpbRyFQ2ZYxAAlp9I_yarzKZdjN8idBJM0/edit#gid=1472913705
1. Ran system tests for about 3 weeks.
2. In the current state, a rebalance has been running for more than 2 weeks
(refer bz#1686425)
3. Apart from the above, I set client and server event threads to 8 as part of https://bugzilla.redhat.com/show_bug.cgi?id=1409568#c31 (the likely commands are sketched after this list)
4. IOs going on from the clients are as below:
 a) 4 clients: just appending to a file whose name is the same as the host name (all different)
 b) another client: only on this client I remounted the volume after setting the event threads; from this client, running IOs as explained in https://bugzilla.redhat.com/show_bug.cgi?id=1409568#c31 and previous comments
 c) from another client: running the IOs below
2109.lookup	(Detached) --> find * | xargs stat from the root of the mount
1074.top	(Detached) --> top and free output every minute, captured to a file on the mount in append mode
1058.rm-rf	(Detached) --> removal of old untarred linux directories
801.kernel	(Detached) --> linux untar into new directories, in the same parent dir as above

 d) from one of the clients only: du -sh on the root of the volume; not yet over even after a week
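
(For reference, the event-thread tuning mentioned in step 3 would have been applied with something like the commands below; the exact invocation is an assumption, but the resulting values are visible under Options Reconfigured in the gluster v info output further down.)

# gluster volume set rpcx3 client.event-threads 8
# gluster volume set rpcx3 server.event-threads 8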



Additional info:
===============
[root@rhs-client19 glusterfs]# gluster v status
Status of volume: rpcx3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client19.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3                        49152     0          Y       10824
Brick rhs-client25.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3                        49152     0          Y       5232 
Brick rhs-client32.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3                        49152     0          Y       10898
Brick rhs-client25.lab.eng.blr.redhat.com:/
gluster/brick2/rpcx3                        49153     0          Y       5253 
Brick rhs-client32.lab.eng.blr.redhat.com:/
gluster/brick2/rpcx3                        49153     0          Y       10904
Brick rhs-client38.lab.eng.blr.redhat.com:/
gluster/brick2/rpcx3                        N/A       N/A        N       N/A  
Brick rhs-client32.lab.eng.blr.redhat.com:/
gluster/brick3/rpcx3                        49154     0          Y       10998
Brick rhs-client38.lab.eng.blr.redhat.com:/
gluster/brick3/rpcx3                        49153     0          Y       8999 
Brick rhs-client19.lab.eng.blr.redhat.com:/
gluster/brick3/rpcx3                        49153     0          Y       10826
Brick rhs-client38.lab.eng.blr.redhat.com:/
gluster/brick3/rpcx3-newb                   49154     0          Y       8984 
Brick rhs-client19.lab.eng.blr.redhat.com:/
gluster/brick2/rpcx3-newb                   49155     0          Y       29805
Brick rhs-client25.lab.eng.blr.redhat.com:/
gluster/brick3/rpcx3-newb                   49155     0          Y       30021
Brick rhs-client19.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3-newb                   49156     0          Y       29826
Brick rhs-client25.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3-newb                   49156     0          Y       30042
Brick rhs-client32.lab.eng.blr.redhat.com:/
gluster/brick1/rpcx3-newb                   49156     0          Y       1636 
Snapshot Daemon on localhost                49154     0          Y       10872
Self-heal Daemon on localhost               N/A       N/A        Y       29849
Quota Daemon on localhost                   N/A       N/A        Y       29860
Snapshot Daemon on rhs-client25.lab.eng.blr
.redhat.com                                 49154     0          Y       9833 
Self-heal Daemon on rhs-client25.lab.eng.bl
r.redhat.com                                N/A       N/A        Y       30065
Quota Daemon on rhs-client25.lab.eng.blr.re
dhat.com                                    N/A       N/A        Y       30076
Snapshot Daemon on rhs-client38.lab.eng.blr
.redhat.com                                 49155     0          Y       9214 
Self-heal Daemon on rhs-client38.lab.eng.bl
r.redhat.com                                N/A       N/A        Y       8958 
Quota Daemon on rhs-client38.lab.eng.blr.re
dhat.com                                    N/A       N/A        Y       8969 
Snapshot Daemon on rhs-client32.lab.eng.blr
.redhat.com                                 49155     0          Y       11221
Self-heal Daemon on rhs-client32.lab.eng.bl
r.redhat.com                                N/A       N/A        Y       1658 
Quota Daemon on rhs-client32.lab.eng.blr.re
dhat.com                                    N/A       N/A        Y       1668 
 
Task Status of Volume rpcx3
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : 2cd252ed-3202-4c7f-99bd-6326058c797f
Status               : in progress         
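
(The rebalance progress can be polled with the standard status query; a minimal sketch. The per-node scanned/failed/skipped counts indicate whether it is actually advancing.)

# gluster volume rebalance rpcx3 status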
 

Note: One brick is down due to https://bugzilla.redhat.com/show_bug.cgi?id=1689785.
However, that brick has only been down since this morning, and du -sh was triggered a week ago.

[root@rhs-client19 glusterfs]# gluster v info
 
Volume Name: rpcx3
Type: Distributed-Replicate
Volume ID: f7532c65-63d0-4e4a-a5b5-c95238635eff
Status: Started
Snapshot Count: 0
Number of Bricks: 5 x 3 = 15
Transport-type: tcp
Bricks:
Brick1: rhs-client19.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3
Brick2: rhs-client25.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3
Brick3: rhs-client32.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3
Brick4: rhs-client25.lab.eng.blr.redhat.com:/gluster/brick2/rpcx3
Brick5: rhs-client32.lab.eng.blr.redhat.com:/gluster/brick2/rpcx3
Brick6: rhs-client38.lab.eng.blr.redhat.com:/gluster/brick2/rpcx3
Brick7: rhs-client32.lab.eng.blr.redhat.com:/gluster/brick3/rpcx3
Brick8: rhs-client38.lab.eng.blr.redhat.com:/gluster/brick3/rpcx3
Brick9: rhs-client19.lab.eng.blr.redhat.com:/gluster/brick3/rpcx3
Brick10: rhs-client38.lab.eng.blr.redhat.com:/gluster/brick3/rpcx3-newb
Brick11: rhs-client19.lab.eng.blr.redhat.com:/gluster/brick2/rpcx3-newb
Brick12: rhs-client25.lab.eng.blr.redhat.com:/gluster/brick3/rpcx3-newb
Brick13: rhs-client19.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3-newb
Brick14: rhs-client25.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3-newb
Brick15: rhs-client32.lab.eng.blr.redhat.com:/gluster/brick1/rpcx3-newb
Options Reconfigured:
client.event-threads: 8
server.event-threads: 8
cluster.rebal-throttle: aggressive
diagnostics.client-log-level: INFO
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
features.uss: enable
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
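
(Side note for triage: since features.quota and quota-deem-statfs are enabled on this volume, usage can also be read from the quota side without a client-side crawl; a possible cross-check while du is stuck. Only paths with a configured limit are listed.)

# gluster volume quota rpcx3 list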


#########################
sosreports and logs to follow

