Bug 1177126 - [RHEVM][FOREMAN-INTEGRATION] after installing a discovered host via foreman provider, fails to find a nic to attach rhevm bridge to
Summary: [RHEVM][FOREMAN-INTEGRATION] after installing a discovered host via foreman provider, fails to find a nic to attach rhevm bridge to
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Yaniv Bronhaim
QA Contact: movciari
Whiteboard: infra
Depends On:
Blocks: rhev35rcblocker rhev35gablocker
Reported: 2014-12-24 10:02 UTC by sefi litmanovich
Modified: 2016-02-10 19:10 UTC
CC List: 16 users

Fixed In Version: vt13.7
Doc Type: Release Note
Doc Text:
When using bare-metal provisioning, the firewall definitions on the host will always be overwritten by the host bootstrapping (engine packages installation) process, to allow the engine to interact with VDSM.
Clone Of:
Last Closed: 2015-02-15 09:15:03 UTC
oVirt Team: Infra
Target Upstream Version:

Attachments (Terms of Use)
engine + host deployment + vdsm + supervdsm logs (deleted)
2014-12-24 10:02 UTC, sefi litmanovich

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0230 normal SHIPPED_LIVE Red Hat Enterprise Virtualization Manager 3.5.0-1 ASYNC 2015-02-16 19:50:27 UTC
oVirt gerrit 36632 ovirt-engine-3.5 MERGED Override firewall configurations on deploy for provisioned hosts Never

Description sefi litmanovich 2014-12-24 10:02:29 UTC
Created attachment 972720 [details]
engine + host deployment + vdsm + supervdsm logs

Description of problem:

Trying to add a host using the Foreman provider: adding a discovered host using host group rhel 6.6.
The host is provisioned on Satellite and, upon a successful build, sends RHEV-M the signal to install the host (per the Foreman integration plugin).
The host is installed and all stages succeed, but it ends up in a non-responsive state.
In the UI and in the logs you can see that RHEV-M did not find an interface to attach the rhevm bridge to, leaving the host with no network and non-responsive.
Upon re-installation, the NIC was found and attached to the bridge, and the host went up.

Version-Release number of selected component (if applicable):

engine: rhevm-3.5.0-0.26.el6ev.noarch
host: vdsm-

How reproducible:

Reproduced this scenario twice installing RHEL 6.6 on the host.
Will try with RHEL 7 to see if it reproduces there as well.

Steps to Reproduce:
1. Set up Satellite with foreman-discovery and Ovirt_provision_plugin, and set up RHEV-M as a compute resource.
2. In Satellite, set up a host group for RHEL 6.6 and verify provisioning works.
3. Discover a host on Satellite's network.
4. Add Satellite as an external provider for RHEV-M.
5. Add a host using the external provider -> choose the host from the discovered hosts list and choose your host group (in my case rhel6.6).
6. Wait for host re-provisioning to end and installation in RHEV-M to start.
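Since the failure in this bug manifests as the engine losing contact with the freshly installed host, a quick pre-check of engine-to-host connectivity can save a reinstall cycle. A minimal sketch, not part of the product; the host name is a placeholder, and VDSM's default port 54321 is assumed:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host name); VDSM listens on TCP 54321 by default:
# can_reach("provisioned-host.example.com", 54321)
```

If this returns False from the engine machine, the host will end up non-responsive regardless of how the provisioning itself went.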

Actual results:

Host becomes non-responsive.

Expected results:

Host is installed and up.

Additional info:

Comment 1 Yaniv Bronhaim 2014-12-24 10:08:06 UTC
I guess it relates to the OS installation itself. Something needs to be configured in the provisioning template to start dhclient or to apply a default configuration to the interfaces, I think. I'll check what is missing.

Comment 2 sefi litmanovich 2014-12-24 10:31:58 UTC
I don't think it's OS-related or a problem with the provisioning template, as installing this host on RHEV-M normally, or with the plugin but as provisioned (not discovered), works fine.
I will try to reproduce with RHEL 7 as well and update the results here.

Comment 3 Oved Ourfali 2014-12-25 07:20:28 UTC
In the log we also see failures of foreman to set the DNS name for your host.

2014-12-10 21:50:57,484 ERROR [org.ovirt.engine.core.bll.AddVdsCommand] (ajp-/ [6a4624c5] Command org.ovirt.engine.core.bll.AddVdsCommand throw Vdc Bll exception. With error message VdcBLLException: Create Reverse DNS record for task failed with the following error: ERF12-2357 [ProxyAPI::ProxyException]: Unable to set DNS entry ([RestClient::BadRequest]: 400 Bad Request) for proxy (Failed with error PROVIDER_FAILURE and code 5050)

Were you able to pass this step?
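The error above is a reverse-DNS record creation failure reported by the Foreman smart proxy. Independently of the proxy, forward and reverse resolution can be sanity-checked from the engine side with a short sketch (the host name in the example is hypothetical; this is not the code path the engine uses):

```python
import socket

def check_dns(fqdn):
    """Resolve fqdn forward, then try to resolve the address back to a name."""
    ip = socket.gethostbyname(fqdn)            # forward (A record); raises on failure
    try:
        name, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR record)
    except OSError:
        name = None                            # no reverse record exists
    return ip, name

# Example (hypothetical): check_dns("rose04.example.com")
```

A missing PTR record here would be consistent with the "Create Reverse DNS record" failure in the log.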

Comment 4 Oved Ourfali 2014-12-25 07:59:40 UTC
Also, please provide full vdsm.log file. The log seems incomplete.

Comment 5 Yaniv Bronhaim 2014-12-28 14:51:17 UTC
Just tried that and the host came up successfully. Your issue seems to be related to "No route to host". We need to check your setup again and verify that each installed host can be reached by the engine.

In this integration you can have a scenario where Foreman can reach some hosts, so you can provision them from RHEV-M, which can reach Foreman.
But that doesn't mean RHEV-M can reach the hosts as well, especially if you didn't configure Foreman as the resolver for the engine's machine; otherwise rose04 can't be resolved at all.

Ping me when you're around and we'll check that together. I think changing resolv.conf on your engine setup to forward requests to the Foreman address will solve this.

If you can try that, please update with the results.
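For reference, the suggestion above amounts to listing the Foreman/Satellite server first in the engine's /etc/resolv.conf. A hypothetical fragment, with placeholder addresses and domain rather than values from this bug:

```
# /etc/resolv.conf on the engine machine (illustrative values only)
search example.com
nameserver 192.0.2.10   # Foreman/Satellite server acting as DNS for provisioned hosts
nameserver 192.0.2.1    # existing site resolver, as fallback
```

With Foreman first in the list, names like rose04 that only Foreman's DNS knows about become resolvable from the engine.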

Comment 6 sefi litmanovich 2015-01-06 12:51:00 UTC
Reproduced the bug with installation of RHEL 7 as well as RHEL 6.6; the bug persists.
After this happens, re-installation works, so there's probably some other problem hiding.

Comment 7 movciari 2015-01-20 13:26:28 UTC
org.ovirt.engine-root-3.5.0-30 doesn't seem like the version of an RPM...
Could you provide the version of the RPM where this is fixed, so I can test it on the correct version, please?

Comment 9 Eyal Edri 2015-02-15 09:15:03 UTC
Bugs were moved by ERRATA to RELEASE PENDING; this bug was not closed, probably due to an errata error.
Closing, as 3.5.0 is released.
