Bug 1115845 - Enable sync of LUNs after storage domain activation for FC - duplicate LUNs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Daniel Erez
QA Contact: Elad
URL:
Whiteboard: storage
Depends On:
Blocks: rhev3.5beta 1156165
 
Reported: 2014-07-03 08:57 UTC by Paul Dwyer
Modified: 2018-12-06 17:13 UTC (History)
13 users

Fixed In Version: ovirt-3.5.0_rc1.1
Doc Type: Bug Fix
Doc Text:
LUN information synchronization [1] is now invoked whenever the status of a storage domain changes to 'Active' (for example, when a storage domain is detected as active upon activating the storage pool manager). Previously, this process was invoked only when a domain was manually activated. [1] The process of synchronizing LUN information from the underlying storage with the engine database, so that changes in storage, such as adding, removing, or extending a LUN, are properly reflected in the engine database and consequently in the user interface and REST API.
Clone Of:
Environment:
Last Closed: 2015-02-11 18:05:15 UTC
oVirt Team: Storage
Target Upstream Version:
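The fix described in the Doc Text can be sketched in Java (the language of ovirt-engine). This is a minimal illustrative model only; the class, enum, and method names below are hypothetical stand-ins, not the actual ovirt-engine API. The point it demonstrates is that the sync hook fires on every transition of a storage domain into Active status, not only on a manual activation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical status enum; names are illustrative.
enum StorageDomainStatus { INACTIVE, MAINTENANCE, ACTIVE }

class StorageDomain {
    private StorageDomainStatus status = StorageDomainStatus.INACTIVE;

    // Records each sync invocation so the behavior is observable.
    final List<String> syncLog = new ArrayList<>();

    void setStatus(StorageDomainStatus newStatus) {
        // Sync fires on ANY transition into Active (manual activation,
        // auto-detection on SPM activation, host reactivation, ...),
        // but not when the domain is already Active.
        boolean becameActive = newStatus == StorageDomainStatus.ACTIVE
                && status != StorageDomainStatus.ACTIVE;
        status = newStatus;
        if (becameActive) {
            syncLunsInfo();
        }
    }

    private void syncLunsInfo() {
        // Placeholder for refreshing LUN metadata (added, removed,
        // or extended LUNs) from storage into the engine database.
        syncLog.add("syncLunsInfo invoked");
    }

    StorageDomainStatus getStatus() {
        return status;
    }
}
```

Under this model, the customer's DR scenario works because moving the hosts back from maintenance drives the domain through an Inactive-to-Active transition, which triggers the sync without anyone manually reactivating the domain.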


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0158 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.5.0 2015-02-11 22:38:50 UTC
oVirt gerrit 29692 master MERGED core: sync LUNs on SD status changes to active Never
oVirt gerrit 31272 ovirt-engine-3.5 MERGED core: drop ConnectSingleAsyncOperation Never
oVirt gerrit 31273 ovirt-engine-3.5 MERGED core: renaming connectAllHostsToPool Never
oVirt gerrit 31274 ovirt-engine-3.5 MERGED core: SyncLunsInfo - avoidable executions/db deadlock Never
oVirt gerrit 31275 ovirt-engine-3.5 MERGED core: sync LUNs on SD status changes to active Never

Description Paul Dwyer 2014-07-03 08:57:52 UTC
Description of problem:
Customer has tested the feature included in RHEV 3.4 (https://bugzilla.redhat.com/show_bug.cgi?id=1066081). Following a process similar to the one tested by QA results in the original LUNs being listed alongside the new replicated LUNs; the new replicated LUNs are not synced.

Version-Release number of selected component (if applicable):
RHEV3.4

How reproducible:
Always

Steps to Reproduce:
Full details (& screenshots) in attached doc.

1) Attach the replicated LUNs to the hypervisors in DR (the hypervisors only see the replicated LUNs)
2) Edit the storage domain

Actual results:
Duplicate LUNs (old and new)

Expected results:
New Replicated LUNs listed after syncing

Additional info:
See attached doc for full details of customer testing and results

Comment 3 Daniel Erez 2014-07-03 09:10:22 UTC
Hi Paul,

* Have you reactivated (maintenance + activate) the storage domain after attaching the LUNs? (The sync operation should be performed on storage domain activation.)

* Can you please attach the full engine logs?

Comment 5 Daniel Erez 2014-07-03 12:56:59 UTC
According to the attached engine log, it seems that the SD hasn't been reactivated, and hence the LUN info synchronization wasn't invoked.
Please check whether the issue is resolved after reactivation (in any case, I think we should add it as a note in the documentation/release-notes).

Comment 15 Elad 2014-09-16 10:34:09 UTC
From what I got from the conversation above, the steps to reproduce are:
In a 2-DC env:
1) In DC1: have an FC domain residing on a LUN from the storage server
2) Have a replicated LUN which contains the exact same data as the LUN which is part of the storage domain in DC1
3) Expose the replicated LUN to a different host in DC2
4) Put the SD in DC1 into maintenance and activate it
5) The device list in DC2 should include the replicated LUN, and it should be available to be picked

Daniel, please correct me if I'm wrong.

Comment 16 Daniel Erez 2014-09-21 07:22:00 UTC
(In reply to Elad from comment #15)
> From what I got from the conversation above, the steps to reproduce are:
> In a 2-DC env:
> 1) In DC1: have an FC domain residing on a LUN from the storage server
> 2) Have a replicated LUN which contains the exact same data as the LUN
> which is part of the storage domain in DC1
> 3) Expose the replicated LUN to a different host in DC2
> 4) Put the SD in DC1 into maintenance and activate it
> 5) The device list in DC2 should include the replicated LUN, and it should
> be available to be picked
> 
> Daniel, please correct me if I'm wrong.

Instead of step (4), you should have a storage domain in DC2 ready for activation. The enhancement addressed in this bug is that the 'syncLunsInfo' operation should now be invoked whenever an SD is detected as Active. So, for example, try a similar scenario in which, instead of manually reactivating the domains, you reactivate the hosts. E.g. have DC2 up and running, move the hosts to maintenance, expose the replicated LUN, reactivate the hosts, and check whether the 'syncLunsInfo' action is invoked when the storage domain is back in Active status.

Comment 17 Elad 2014-10-13 14:09:57 UTC
SyncLunsInfoForBlockStorageDomainCommand is called on automatic activation of block domains, and not only on manual activation.

Verified using rhev3.5 vt5

Comment 19 errata-xmlrpc 2015-02-11 18:05:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html

