Bug 1356137

Summary: doubled RBDs after import renamed cluster
Product: Red Hat Storage Console
Component: UI
Version: 2
Target Milestone: ---
Target Release: 3
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Reporter: Martin Kudlej <mkudlej>
Assignee: sankarshan <sankarshan>
QA Contact: sds-qe-bugs
CC: mkudlej, nthomas, sankarshan
Flags: nthomas: needinfo? (mkudlej)
Severity: unspecified
Priority: unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2017-03-23 04:03:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---

Attachments:
same issue after forget, rename, import - RBDs are doubled (Flags: none)

Description Martin Kudlej 2016-07-13 13:11:01 UTC
Created attachment 1179265 [details]
same issue after forget, rename, import - RBDs are doubled

Description of problem:
After importing a cluster with a renamed pool, I see doubled RBDs in the UI: two entries with the same RBD name but different pool names (one entry carries the old pool name, the other the new one). I don't see this duplication in the API:

$ ./clusters.sh | jq .
[
  {
    "id": "d4bc9ee7-e678-4de8-b495-6b18dd6cb066",
    "name": "rbd11",
    "tags": [],
    "clusterid": "0bdcdbc4-aadd-4965-b2d9-1cde97e7e936",
    "clustername": "cluster1",
    "storageid": "9581be3b-575b-4857-a261-ce680148e058",
    "storagename": "pool1x",
    "size": "1024MB",
    "snapshots_enabled": false,
    "snapshot_schedule_ids": [],
    "quota_enabled": false,
    "quota_params": {},
    "options": {},
    "usage": {
      "used": 0,
      "total": 1073741824,
      "percentused": 0,
      "updatedat": "2016-07-13 15:07:05.803104046 +0200 CEST"
    },
    "almstatus": 0,
    "almwarncount": 0,
    "almcritcount": 0
  },
  {
    "id": "5207ee4f-9065-4737-84dd-25ef18895c61",
    "name": "rbd12",
    "tags": [],
    "clusterid": "0bdcdbc4-aadd-4965-b2d9-1cde97e7e936",
    "clustername": "cluster1",
    "storageid": "9581be3b-575b-4857-a261-ce680148e058",
    "storagename": "pool1x",
    "size": "1024MB",
    "snapshots_enabled": false,
    "snapshot_schedule_ids": [],
    "quota_enabled": false,
    "quota_params": {},
    "options": {},
    "usage": {
      "used": 0,
      "total": 1073741824,
      "percentused": 0,
      "updatedat": "2016-07-13 15:07:05.80561469 +0200 CEST"
    },
    "almstatus": 0,
    "almwarncount": 0,
    "almcritcount": 0
  }
]

but I do see them doubled in the UI. I expect this is a UI issue.
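
For reference, the duplication can be ruled out on the API side by grouping the returned entries by RBD name and keeping only groups with more than one member. A minimal check, assuming clusters.sh emits the JSON array shown above:

$ ./clusters.sh | jq 'group_by(.name) | map(select(length > 1))'
[]

An empty array confirms that no two API entries share an RBD name, so the doubling is introduced somewhere on the client side.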


Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-25.el7scon.noarch
ceph-installer-1.0.12-4.el7scon.noarch
rhscon-ceph-0.0.31-1.el7scon.x86_64
rhscon-core-0.0.32-1.el7scon.x86_64
rhscon-core-selinux-0.0.32-1.el7scon.noarch
rhscon-ui-0.0.46-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. create a cluster in USM
2. unmanage and forget it
3. rename it from the CLI (see the sketch below)
4. import the cluster back
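
The summary mentions a renamed cluster and the description a renamed pool; for the pool-rename variant, step 3 could look like the following sketch, assuming the pool started out as pool1 before becoming the pool1x seen in the API output (the original name is not recorded in this report):

# rename the pool directly on the Ceph cluster, outside of USM
# (pool1 is an assumed original name)
$ ceph osd pool rename pool1 pool1x
pool 'pool1' renamed to 'pool1x'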

Expected results:
The list of RBDs in the UI is the same as the list returned by the API.

Comment 2 Nishanth Thomas 2016-07-13 20:33:22 UTC
Do you have a setup where this is reproducible?
This looks very unlikely to me (the API returns the correct data but the UI shows wrong data?). There is also a chance that your browser cache is causing this, so please try again after clearing the browser cache.

Comment 3 Martin Kudlej 2016-07-14 08:10:08 UTC
I don't think this is a browser cache problem (I've tried reloading after clearing the cache); it looks like an issue in the JavaScript application running in the browser.

Comment 4 Nishanth Thomas 2016-07-14 11:05:57 UTC
Please provide a setup where this is reproducible.