Bug 1597876 - Inconsistency in Event messages
Summary: Inconsistency in Event messages
Keywords:
Status: NEW
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-commons
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: gowtham
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-03 18:43 UTC by Ju Lim
Modified: 2019-04-11 08:18 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments
WA - Events and Alerts screenshot (deleted)
2018-07-03 18:43 UTC, Ju Lim

Description Ju Lim 2018-07-03 18:43:30 UTC
Created attachment 1456318 [details]
WA - Events and Alerts screenshot

Description of problem:
I observed some inconsistencies in event messages:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install WA
2. Generate some events (e.g. brick near full condition)
3. Look at the Events page

Actual results:

Excerpt from Events page:

## Inconsistency #1:

Brick utilization on tendrl-node-1:|gluster|brick1|brick1 in vol1 at 87.8 % and nearing full capacity

Brick:tendrl-node-1:/gluster/brick1/brick1 in volume:vol1 has Started

*** Note: tendrl-node-1:/gluster/brick1/brick1 is also referred to as tendrl-node-1:|gluster|brick1|brick1
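
One way to address this (a minimal sketch only; the helper name is hypothetical, and it assumes the "|" form is an internal key encoding that escapes "/"):

    def display_brick_path(internal_path):
        # Internal form:    "tendrl-node-1:|gluster|brick1|brick1"
        # User-facing form: "tendrl-node-1:/gluster/brick1/brick1"
        host, _, path = internal_path.partition(":")
        return "%s:%s" % (host, path.replace("|", "/"))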

## Inconsistency #2:

Brick:tendrl-node-1:/gluster/brick1/brick1 in volume:vol1 has Started
Memory utilization on node tendrl-node-1 in ju_cluster back to normal
Service: glustershd is connected in cluster ju_cluster
Cluster:ju_cluster is healthy

*** Note: The object type prefix is sometimes followed by ":" or ": ", sometimes by a plain space, and sometimes omitted entirely.

## Inconsistency #3:
Memory utilization on node tendrl-node-1 in ju_cluster back to normal
Memory utilization on node tendrl-node-1 in ju_cluster at 84.44 % and running out of memory

*** Use of the word "node" is inconsistent, as we use the word "host" elsewhere in the product (e.g. the Hosts page).
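
A sketch of one convention that would resolve inconsistencies #2 and #3 (the template strings below are illustrative, not the actual tendrl-commons templates):

    # Uniform "<Type>: <name>" prefix, and "host" instead of "node":
    TEMPLATES = {
        "brick_started": "Brick: {brick} in volume: {volume} has started",
        "mem_high": "Memory utilization on host: {host} in cluster: "
                    "{cluster} at {percent} % and running out of memory",
        "mem_normal": "Memory utilization on host: {host} in cluster: "
                      "{cluster} back to normal",
    }

    def render_event(kind, **values):
        return TEMPLATES[kind].format(**values)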

Expected results:
1. It would be better for the device name to look like what users expect, e.g.
tendrl-node-1:/gluster/brick1/brick1
2. 


Additional info:

Comment 2 Ju Lim 2018-07-03 18:45:17 UTC
Release Information:

$ rpm -qa | grep tendrl | sort
tendrl-ansible-1.6.3-2.el7.centos.noarch
tendrl-api-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-api-httpd-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-commons-1.6.3-20180628T114340.d094568.noarch
tendrl-grafana-plugins-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-grafana-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-monitoring-integration-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-node-agent-1.6.3-20180618T083110.ba580e6.noarch
tendrl-notifier-1.6.3-20180618T083117.fd7bddb.noarch
tendrl-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-ui-1.6.3-20180625T085228.23f862a.noarch

Comment 3 Ju Lim 2018-07-03 18:46:39 UTC
Related to fixes made previously per https://github.com/Tendrl/commons/issues/843.

Comment 4 Nishanth Thomas 2018-07-11 14:16:22 UTC
This is an enhancement. Proposing to move it out of 3.4.

Comment 5 Filip Balák 2018-08-09 12:26:44 UTC
Here is a list of alerts I was able to get from WA during testing:

Service: glustershd is disconnected in cluster <cluster>
Brick utilization on <node>:<path> in <volume> at <%S> and nearing full capacity
Brick:<node>:<path> in volume:<volume> has Started
Brick:<node>:<path> in volume:<volume> has Stopped
Peer <node> in cluster <cluster> is Connected
Peer <node> in cluster <cluster> is Disconnected
Volume:<volume> is down
Volume:<volume> is (degraded)
Cluster:<cluster> is healthy
Geo-replication between <node>:<path> and <volume> is Active
Geo-replication between <node>:<path> and <volume> is Passive
Geo-replication between <node>:<path> and <volume> is faulty
Status of volume: <volume> in cluster <cluster> changed from Stopped to Started
Status of volume: <volume> in cluster <cluster> changed from Started to Stopped
Cpu utilization on node <node> in <cluster> at <%S> and running out of cpu
Cpu utilization on node <node> in <cluster> back to normal
Memory utilization on node <node> in <cluster> at <%S> and running out of memory
Memory utilization on node <node> in <cluster> back to normal
Swap utilization on node <node> in <cluster> back to normal
Swap utilization on node <node> in <cluster> at <%S> and running out of swap space

Tested with tendrl-notifier-1.6.3-4.el7rhgs.noarch
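
A sketch of a consistency check over the message templates above (illustrative only; the regex and helper are not part of any tendrl test suite):

    import re

    # Flag templates that use "Type:name" (no space after the colon)
    # instead of the "Type: name" style.
    MIXED_PREFIX = re.compile(r"\b(Brick|Volume|Cluster|Service):\S")

    def inconsistent(messages):
        return [m for m in messages if MIXED_PREFIX.search(m)]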

