Bug 1114253 - PRD35 - [RFE] Allow to perform fence operations from a host in another DC
Summary: PRD35 - [RFE] Allow to perform fence operations from a host in another DC
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.5.0
Assignee: Eli Mesika
QA Contact: sefi litmanovich
URL:
Whiteboard: infra
Depends On: 1054778 1090803 1131411
Blocks: rhev3.5beta 1156165
 
Reported: 2014-06-29 09:00 UTC by Oved Ourfali
Modified: 2016-02-10 19:02 UTC
CC: 14 users

Fixed In Version: vt1.3
Doc Type: Enhancement
Doc Text:
Previously, a host performing a fencing operation had to be in the same data center as the host being fenced. Now, a host can be fenced by a host from a different data center.
Clone Of: 1054778
Environment:
Last Closed: 2015-02-11 18:05:02 UTC
oVirt Team: Infra
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0158 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.5.0 2015-02-11 22:38:50 UTC
oVirt gerrit 26513 None None None Never

Description Oved Ourfali 2014-06-29 09:00:08 UTC
+++ This bug was initially created as a clone of Bug #1054778 +++

Description of problem:

When you shut down a host in a data center with no other host, you are
unable to start it using the configured power management in oVirt.

Version-Release number of selected component (if applicable):

ovirt-engine 3.3.2 on EL 6

How reproducible:
shut down a host in a datacenter with a single host (e.g. "init 0" on a shell)

Steps to Reproduce:
1. shut down a host e.g. in a local storage DC
2. the host becomes "non responsive"
3. try to start the host via the configured power management

Actual results:

Error while executing action:

hostname:

    There is no other Host in the Data Center that can be used to test the Power Management settings.

Expected results:

The host starts; no power management test should be necessary.

Additional info:

No other action circumvents this test: putting the host into maintenance
and manually confirming that the host has been rebooted still end with the
failing power management test. (Why should it test with another host in
the first place?)

Related Bug: BZ1053434

--- Additional comment from Itamar Heim on 2014-01-17 12:56:07 EST ---

my first instinct was that this was similar to bug 837539, but it's not.

the engine doesn't perform fence operations itself; rather, it runs them from another host in the cluster/DC (by asking VDSM on that host to call the fence script), hence it needs "another running host"

eli, maybe until we can do this from engine, we can allow doing this from a host not in same DC?
(wouldn't work for an engine with really only a single host, but for most use cases should be good enough?)
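The proxy-based fencing described above can be sketched as a preference-ordered search over running hosts. This is a minimal illustration with hypothetical names and data structures, not the actual ovirt-engine implementation:

```python
# Minimal sketch of preference-ordered fence-proxy selection: the engine
# does not fence a host directly, but via another running host (the proxy).
# Host, find_fence_proxy, and the preference labels are hypothetical names.

from dataclasses import dataclass


@dataclass
class Host:
    name: str
    cluster: str
    dc: str
    up: bool


def find_fence_proxy(target, hosts, preferences=("cluster", "dc", "other_dc")):
    """Return the first running host matching the preference order, or None."""
    candidates = [h for h in hosts if h.up and h.name != target.name]
    for level in preferences:
        for h in candidates:
            if level == "cluster" and h.cluster == target.cluster:
                return h
            if level == "dc" and h.dc == target.dc:
                return h
            if level == "other_dc" and h.dc != target.dc:
                return h
    return None
```

With the original "cluster,DC" preference list, a single-host DC has no eligible proxy, which is exactly the failure this bug reports; appending an other-DC level makes a proxy available.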

--- Additional comment from Eli Mesika on 2014-01-26 10:30:23 EST ---

(In reply to Itamar Heim from comment #1)

> eli, maybe until we can do this from engine, we can allow doing this from a
> host not in same DC?
> (wouldn't work for an engine with really only a single host, but for most
> use cases should be good enough?)

Yes, we now have the pm_proxy_preferences field, which is set by default to "cluster,DC". Maybe we can support this by adding "other", such that for hosts that have this value set to "cluster,DC,other" we will search for a proxy in other DCs.

--- Additional comment from Itamar Heim on 2014-02-13 13:31:06 EST ---

pushing to target release 3.5, assuming it's not planned for 3.4 at this point...

--- Additional comment from Eli Mesika on 2014-04-07 16:19:18 EDT ---

For 3.5 we will address only the option to look for a proxy outside the DC where the host is located, trying other DCs.

This will be done by adding another option, named otherDC, to the pm_proxy_preferences field, which currently defaults to "cluster,DC".
(The pm_proxy_preferences value is available via the UI in the Host New/Edit Power Management tab, in the field named "Source"; in the API it is under <pm_proxies>.)

The default will stay "cluster,DC", and the admin can change this value per host using the API.
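Since the comment above says the value is exposed in the API under <pm_proxies>, a per-host update request body would look roughly like the following. This is a sketch only: the host ID is a placeholder, and the exact proxy type spellings should be checked against the oVirt 3.5 REST API documentation:

```
<!-- Hypothetical body for PUT /api/hosts/{host:id}; only the
     pm_proxies part is relevant here. The third entry extends the
     default "cluster,dc" proxy search order to other data centers. -->
<host>
  <power_management>
    <pm_proxies>
      <pm_proxy><type>cluster</type></pm_proxy>
      <pm_proxy><type>dc</type></pm_proxy>
      <pm_proxy><type>other_dc</type></pm_proxy>
    </pm_proxies>
  </power_management>
</host>
```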

Comment 1 Sven Kieske 2014-11-04 07:46:21 UTC
This doc text does not make any sense and is confusing imho:

"We currently limit the host that does the fencing operation to be on the same Dc as the fenced host, although a host in another DC can also do that."

The sentence contradicts itself.

Comment 2 Oved Ourfali 2014-11-04 07:50:20 UTC
(In reply to Sven Kieske from comment #1)
> This doc text does not make any sense and is confusing imho:
> 
> "We currently limit the host that does the fencing operation to be on the
> same Dc as the fenced host, although a host in another DC can also do that."
> 
> The sentence contradicts itself.

Where is the contradiction?
I wrote that the "reason" for the feature is that we limit the host to be in the same DC, while other DC hosts can do that.
And the result is that we now allow to use hosts from another DC as well.

Comment 3 Sven Kieske 2014-11-04 08:04:34 UTC
(In reply to Oved Ourfali from comment #2)
> (In reply to Sven Kieske from comment #1)
> > This doc text does not make any sense and is confusing imho:
> > 
> > "We currently limit the host that does the fencing operation to be on the
> > same Dc as the fenced host, although a host in another DC can also do that."
> > 
> > The sentence contradicts itself.
> 
> Where is the contradiction?
> I wrote that the "reason" for the feature is that we limit the host to be in
> the same DC, while other DC hosts can do that.
> And the result is that we now allow to use hosts from another DC as well.

Your spelling is misleading imho, you do not limit this anymore so imho the wording should be:
"we limitED the host[..]"

but this are just my 2 cents, feel free to keep your wording.
I'm also no native english speaker, so I might be wrong.

Comment 4 Oved Ourfali 2014-11-04 08:06:53 UTC
(In reply to Sven Kieske from comment #3)
> (In reply to Oved Ourfali from comment #2)
> > (In reply to Sven Kieske from comment #1)
> > > This doc text does not make any sense and is confusing imho:
> > > 
> > > "We currently limit the host that does the fencing operation to be on the
> > > same Dc as the fenced host, although a host in another DC can also do that."
> > > 
> > > The sentence contradicts itself.
> > 
> > Where is the contradiction?
> > I wrote that the "reason" for the feature is that we limit the host to be in
> > the same DC, while other DC hosts can do that.
> > And the result is that we now allow to use hosts from another DC as well.
> 
> Your spelling is misleading imho, you do not limit this anymore so imho the
> wording should be:
> "we limitED the host[..]"
> 
> but this are just my 2 cents, feel free to keep your wording.
> I'm also no native english speaker, so I might be wrong.

I see. Fixed.

Comment 6 errata-xmlrpc 2015-02-11 18:05:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html

