Bug 1356137 - doubled RBDs after importing a renamed cluster [NEEDINFO]
Summary: doubled RBDs after importing a renamed cluster
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: UI
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3
Assignee: sankarshan
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-13 13:11 UTC by Martin Kudlej
Modified: 2017-03-23 04:03 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 04:03:32 UTC
nthomas: needinfo? (mkudlej)


Attachments
same issue after forget, rename, import - RBDs are doubled (deleted)
2016-07-13 13:11 UTC, Martin Kudlej, no flags

Description Martin Kudlej 2016-07-13 13:11:01 UTC
Created attachment 1179265
same issue after forget, rename, import - RBDs are doubled

Description of problem:
After importing a cluster with a renamed pool, the UI shows doubled RBDs: two entries with the same RBD name but different pool names (one entry has the old pool name, the other has the new one). I don't see this duplication in the API:

$ ./clusters.sh | jq .
[
  {
    "id": "d4bc9ee7-e678-4de8-b495-6b18dd6cb066",
    "name": "rbd11",
    "tags": [],
    "clusterid": "0bdcdbc4-aadd-4965-b2d9-1cde97e7e936",
    "clustername": "cluster1",
    "storageid": "9581be3b-575b-4857-a261-ce680148e058",
    "storagename": "pool1x",
    "size": "1024MB",
    "snapshots_enabled": false,
    "snapshot_schedule_ids": [],
    "quota_enabled": false,
    "quota_params": {},
    "options": {},
    "usage": {
      "used": 0,
      "total": 1073741824,
      "percentused": 0,
      "updatedat": "2016-07-13 15:07:05.803104046 +0200 CEST"
    },
    "almstatus": 0,
    "almwarncount": 0,
    "almcritcount": 0
  },
  {
    "id": "5207ee4f-9065-4737-84dd-25ef18895c61",
    "name": "rbd12",
    "tags": [],
    "clusterid": "0bdcdbc4-aadd-4965-b2d9-1cde97e7e936",
    "clustername": "cluster1",
    "storageid": "9581be3b-575b-4857-a261-ce680148e058",
    "storagename": "pool1x",
    "size": "1024MB",
    "snapshots_enabled": false,
    "snapshot_schedule_ids": [],
    "quota_enabled": false,
    "quota_params": {},
    "options": {},
    "usage": {
      "used": 0,
      "total": 1073741824,
      "percentused": 0,
      "updatedat": "2016-07-13 15:07:05.80561469 +0200 CEST"
    },
    "almstatus": 0,
    "almwarncount": 0,
    "almcritcount": 0
  }
]

but I do see them in the UI, so I expect this is a UI issue.
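
For completeness, the API output above can be checked for duplicates mechanically. This jq filter is not part of the original report (it assumes the same clusters.sh script shown above); it groups the RBDs by name and keeps only groups with more than one entry:

$ ./clusters.sh | jq 'group_by(.name) | map(select(length > 1))'
[]

The empty list means every RBD name occurs exactly once in the API response, which supports the conclusion that the doubling is introduced in the UI layer.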


Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-25.el7scon.noarch
ceph-installer-1.0.12-4.el7scon.noarch
rhscon-ceph-0.0.31-1.el7scon.x86_64
rhscon-core-0.0.32-1.el7scon.x86_64
rhscon-core-selinux-0.0.32-1.el7scon.noarch
rhscon-ui-0.0.46-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a cluster in USM.
2. Unmanage and forget it.
3. Rename it from the CLI (see the sketch after this list).
4. Import the cluster back.
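
A minimal sketch of step 3, assuming the rename refers to the pool rename described in the problem statement, and assuming the pool was originally named pool1 (only the new name, pool1x, is visible in the API output):

$ ceph osd pool rename pool1 pool1x
pool 'pool1' renamed to 'pool1x'

The RBD images keep their own names across the rename; only the pool they are listed under changes, which is where the UI then appears to produce the duplicate entries.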

Expected results:
The list of RBDs in the UI is the same as the list returned by the API.

Comment 2 Nishanth Thomas 2016-07-13 20:33:22 UTC
Do you have a setup where this is reproducible?
This looks very unlikely to me (the API returns the correct data and the UI shows wrong data?). There is also a chance that your browser cache is causing this, so please try again after clearing the browser cache.

Comment 3 Martin Kudlej 2016-07-14 08:10:08 UTC
I think this is not a problem with the browser cache (I've tried reloading with the cache cleared) but with the JavaScript application in the browser.

Comment 4 Nishanth Thomas 2016-07-14 11:05:57 UTC
Please provide a setup where this is reproducible.

