Bug 1122021 - there must be at most one instance of dwh per engine
Summary: there must be at most one instance of dwh per engine
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-dwh
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Yedidyah Bar David
QA Contact: Petr Matyáš
Whiteboard: integration
Depends On: 1118350
Blocks: rhev3.5beta 1156165
Reported: 2014-07-22 11:25 UTC by Yedidyah Bar David
Modified: 2015-02-11 18:16 UTC
CC: 10 users

Fixed In Version: vt3 - rhevm-dwh-3.5.0-3.el6ev
Doc Type: Bug Fix
Doc Text:
Clone Of: 1118350
Last Closed: 2015-02-11 18:16:05 UTC
oVirt Team: ---
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2015:0177 normal SHIPPED_LIVE rhevm-dwh 3.5 bug fix and enhancement update 2015-02-11 23:11:50 UTC
oVirt gerrit 31321 None None None Never
oVirt gerrit 31322 None None None Never
oVirt gerrit 31325 master MERGED packaging: setup: Prevent more than one dwh per engine Never

Description Yedidyah Bar David 2014-07-22 11:25:22 UTC
+++ This bug was initially created as a clone of Bug #1118350 +++

Description of problem:

Since we now allow running dwh and engine on separate hosts, it is possible to set up two (or more) dwh instances against a single engine.

This will seem to work well - no conflicts, failures, etc. are expected - but in practice each update on the engine will reach only one of the dwh servers, so the history will be scattered among them and no single server will have a correct view of the history.

For now, we should prevent that. During setup we should add a row somewhere in the engine db (Yaniv told me in which table but I don't remember currently) if it does not exist already, and do something if it does (abort, alert the user and ask for confirmation, etc.).

In the future we might decide that there is use for more than one dwh and add support for that.
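The setup-time guard described above can be sketched as a check-then-insert against a marker row. A minimal sketch, assuming the `dwh_history_timekeeping` table mentioned later in this bug; the column names and the `dwhHostname` variable name are illustrative, not the real engine schema, and sqlite stands in here for the engine's PostgreSQL database:

```python
import sqlite3
import uuid


def claim_dwh_slot(conn):
    """Try to register this DWH instance in the engine database.

    Returns a new key if no other instance is registered, or None if a
    marker row already exists (another DWH was set up against this
    engine), in which case setup should abort or ask for confirmation.
    """
    cur = conn.execute(
        "SELECT var_value FROM dwh_history_timekeeping"
        " WHERE var_name = 'dwhHostname'"
    )
    if cur.fetchone() is not None:
        # Another DWH instance already claimed this engine.
        return None
    key = str(uuid.uuid4())
    conn.execute(
        "INSERT INTO dwh_history_timekeeping (var_name, var_value)"
        " VALUES (?, ?)",
        ("dwhHostname", key),
    )
    conn.commit()
    return key
```

The first setup run claims the engine and stores a key; any later setup against the same engine sees the existing row and can refuse to proceed.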

Comment 1 Yaniv Lavi 2014-07-23 10:29:13 UTC
We need a fix for this for both setup and service start. Please consider options.
One option: register the service using a unique hash, and only allow a reinstall from another machine once that hash is cleared from dwh_history_timekeeping. Setup will put the hash into the DWH context, and the service will match it against the engine at startup.


Comment 2 Shirly Radco 2014-07-31 11:45:26 UTC
Proposed solution:

1. The engine database already stores the value of "DwhCurrentlyRunning" in the "dwh_history_timekeeping" table; it is updated each time the service starts.

2. We want to address the issue of running more than one instance of DWH on separate hosts.

During setup -
3. Create a key generated from the host name and a random number.
4. Store the key on both the engine side and on the host where the dwh process runs.
5. When the service starts, it checks that both sides are identical.

6. If a user tries to install another instance of dwh, setup will fail when it tries to connect to the engine.

  6.1. If the value of "DwhCurrentlyRunning" is true, the user gets an error message saying that a dwh instance is already running on <host name>, and that they must stop the process on the other host if they wish to replace it.
If the process is not actually running, the user will have to manually update "DwhCurrentlyRunning" to false.

  6.2. If the value of "DwhCurrentlyRunning" is false, the user gets a warning saying that another dwh installation exists on <host name>, asking whether they wish to replace it permanently and lose all data from the previous installation.

If the user chooses to replace the installation, the key in the engine is updated for the new host.

On cleanup - the data regarding the keys is removed as well.
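Steps 3-5 of the proposal above can be sketched as follows. This is a minimal illustration of the scheme, not the shipped implementation; the function names and the choice of SHA-256 over random salt are assumptions:

```python
import hashlib
import os
import socket


def generate_instance_key(hostname=None):
    """Derive a unique key from the host name plus random bytes,
    as proposed in step 3. Stored on both the engine side and the
    DWH host side during setup."""
    hostname = hostname or socket.gethostname()
    salt = os.urandom(16)  # the "random number" component
    return hashlib.sha256(hostname.encode() + salt).hexdigest()


def may_start(local_key, engine_key):
    """Step 5: at service start, run only if the key stored on the
    engine side matches the one stored on this host."""
    return local_key is not None and local_key == engine_key
```

Because the key includes random bytes, a second setup on another host (or even on the same host) produces a different key, so only the instance whose key matches the engine-side copy passes the startup check.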

Comment 3 Shirly Radco 2014-08-07 08:28:23 UTC
If the user chooses to replace the installation, we also need to add a check to the ETL process at startup: verify that it may collect data from the engine by comparing the key on the engine side with the key on the host it is currently running on, so that a stale ETL instance will fail.
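The fail-fast behaviour described above can be sketched as a startup guard. A hedged illustration only: the reader callables stand in for the real engine-database and local-config lookups, whose names are not given in this bug:

```python
import sys


def etl_startup_guard(read_engine_key, read_local_key):
    """Refuse to collect data when another DWH instance has taken
    over the engine (i.e. the engine-side key no longer matches the
    key stored on this host)."""
    engine_key = read_engine_key()  # e.g. a DB query on the engine side
    local_key = read_local_key()    # e.g. a value from local DWH config
    if engine_key != local_key:
        sys.exit(
            "Another DWH instance owns this engine; refusing to collect."
        )
```

A replaced (stale) ETL instance sees a mismatched key and exits immediately instead of silently pulling a partial history.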

Comment 4 Yaniv Lavi 2014-08-07 11:02:11 UTC
Sounds OK to me. Please move forward with this.


Comment 5 Yedidyah Bar David 2014-08-13 09:36:15 UTC
Moving to POST - for changes see upstream bug #1118350

Comment 7 Yedidyah Bar David 2014-11-04 13:57:37 UTC
Does not require doc text; this bug was needed only because we now allow separate hosts - see bug 1100200.

Comment 9 errata-xmlrpc 2015-02-11 18:16:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
