
Bug 1057390

Summary: Fedora19: vdsm-4.14.1-2 doesn't create ovirtmgmt
Product: [Retired] oVirt
Component: vdsm
Version: 3.4
Hardware: All
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Reporter: Douglas Schilling Landgraf <dougsland>
Assignee: Dan Kenigsberg <danken>
QA Contact: Aharon Canan <acanan>
CC: abaron, acathrow, asegurap, bazulay, dougsland, gklein, iheim, mgoldboi, yeylon
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-01-24 13:45:59 UTC
Attachments:
  vdsm and supervdsm logs (flags: none)

Description Douglas Schilling Landgraf 2014-01-24 00:59:19 UTC
Description of problem:

- Fedora 19 minimal install for ovirt-engine and the node
- dhclient provides the em1 IP address on the node
- After the host deploy, oVirt Engine shows:

  "Failed to configure management network on host 192.168.0.15 due to setup networks failure."

# ifconfig 
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.15  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::7ae7:d1ff:fe55:fe46  prefixlen 64  scopeid 0x20<link>
        ether 78:e7:d1:55:fe:46  txqueuelen 1000  (Ethernet)
        RX packets 461  bytes 55088 (53.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 202  bytes 46006 (44.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


# rpm -qa | grep -i vdsm
vdsm-python-zombiereaper-4.14.1-2.fc19.noarch
vdsm-python-4.14.1-2.fc19.x86_64
vdsm-cli-4.14.1-2.fc19.noarch
vdsm-xmlrpc-4.14.1-2.fc19.noarch
vdsm-4.14.1-2.fc19.x86_64

# rpm -qa | grep -i ovirt-engine
ovirt-engine-setup-base-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-dbscripts-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-tools-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-cli-3.4.0.2-1.20140112.git01360ed.fc19.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-userportal-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-restapi-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-lib-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-websocket-proxy-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-webadmin-portal-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-sdk-python-3.4.0.2-1.20140121.git42b7d69.fc19.noarch
ovirt-engine-backend-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-0.5.beta1.fc19.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.0-0.5.beta1.fc19.noarch


# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN mode DEFAULT 
    link/ether aa:4d:79:5a:53:be brd ff:ff:ff:ff:ff:ff
3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 78:e7:d1:55:fe:46 brd ff:ff:ff:ff:ff:ff
4: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT 
    link/ether 1e:d1:94:ce:27:d0 brd ff:ff:ff:ff:ff:ff


Talked with Antoni over IRC; he shared:
=============================================
<apuimedo> ifcfg-ovirtmgmt doesn't have bootproto
<apuimedo> MainProcess|Thread-15::DEBUG::2014-01-23 19:08:09,441::configNetwork::589::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{'ovirtmgmt': {'nic': 'em1', 'STP': 'no', 'bridged': 'true'}}, bondings:{}, options:{'connectivityCheck': 'true', 'connectivityTimeout': 120}
<apuimedo> what vdsm received
<apuimedo> didn't define a bootproto
<apuimedo> thus the ovirtmgmt didn't get an IP
<apuimedo> didn't receive ping from engine
<apuimedo> and the connectivity check rolled back the changes
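For reference: the rollback happens because the ifcfg-ovirtmgmt that VDSM writes has no BOOTPROTO, so the new bridge never obtains an address over DHCP, the engine's connectivity check cannot reach the host, and the change is reverted. A sketch of what the working post-deploy configuration should look like on this host, assuming the standard /etc/sysconfig/network-scripts layout (values are illustrative, based on the interface data above, not copied from the attached logs):

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# The bridge owns the IP configuration and requests it via DHCP.
DEVICE=ovirtmgmt
TYPE=Bridge
DELAY=0
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1
# The NIC is enslaved to the bridge and carries no IP of its own.
DEVICE=em1
HWADDR=78:e7:d1:55:fe:46
BRIDGE=ovirtmgmt
ONBOOT=yes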

Comment 1 Douglas Schilling Landgraf 2014-01-24 01:01:16 UTC
Created attachment 854666 [details]
vdsm and supervdsm logs

Comment 2 Dan Kenigsberg 2014-01-24 12:49:09 UTC
Given the getCaps report, I assume that ifcfg-em1 was missing when you tried to install vdsm. If so, it is a dup of bug 987813.

Thread-14::DEBUG::2014-01-23 18:58:59,963::BindingXMLRPC::977::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:e707191b0e4'}], 'FC': []}, 'packages2': {'kernel': {'release': '200.fc19.x86_64', 'buildtime': 1389863891.0, 'version': '3.12.8'}, 'spice-server': {'release': '3.fc19', 'buildtime': 1383130020L, 'version': '0.12.4'}, 'vdsm': {'release': '2.fc19', 'buildtime': 1390310463L, 'version': '4.14.1'}, 'qemu-kvm': {'release': '2.fc19', 'buildtime': 1384762225L, 'version': '1.6.1'}, 'libvirt': {'release': '1.fc19', 'buildtime': 1387094943L, 'version': '1.1.3.2'}, 'qemu-img': {'release': '2.fc19', 'buildtime': 1384762225L, 'version': '1.6.1'}, 'mom': {'release': '20140120.gitfd877c5.fc19', 'buildtime': 1390225304L, 'version': '0.3.2'}}, 'cpuModel': 'Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz', 'hooks': {}, 'cpuSockets': '1', 'vmTypes': ['kvm'], 'supportedProtocols': ['2.2', '2.3'], 'networks': {}, 'bridges': {';vdsmdummy;': {'addr': '', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv6gateway': '::', 'gateway': '', 'ports': []}}, 'uuid': 'D5C89400-A35A-1015-A11F-95AD54AE5ADB', 'lastClientIface': 'em1', 'nics': {'em1': {'netmask': '255.255.255.0', 'addr': '192.168.0.15', 'hwaddr': '78:e7:d1:55:fe:46', 'cfg': {}, 'ipv6addrs': ['fe80::7ae7:d1ff:fe55:fe46/64'], 'speed': 1000, 'mtu': '1500'}}, 'software_revision': '2', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:e707191b0e4', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4'], 'reservedMem': '321', 'bondings': {'bond0': {'netmask': '', 'addr': '', 'slaves': [], 'hwaddr': '66:8d:51:44:6a:2b', 'cfg': {}, 'ipv6addrs': [], 'mtu': '1500'}}, 'software_version': '4.14', 'memSize': '7857', 'cpuSpeed': '2936.000', 'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': '2', 'kvmEnabled': 'true', 'guestOverhead': '65', 'management_ip': '0.0.0.0', 'cpuThreads': '2', 'emulatedMachines': [u'pc', u'pc-q35-1.4', u'pc-q35-1.5', u'q35', u'isapc', u'pc-0.10', u'pc-0.11', u'pc-0.12', u'pc-0.13', u'pc-0.14', u'pc-0.15', u'pc-1.0', u'pc-1.1', u'pc-1.2', u'pc-1.3', u'pc-i440fx-1.4', u'pc-i440fx-1.5', u'none'], 'rngSources': ['random'], 'operatingSystem': {'release': '6', 'version': '19', 'name': 'Fedora'}, 'lastClient': '192.168.0.14'}}
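In the getCaps output above, the em1 entry reports 'cfg': {}, i.e. VDSM found no ifcfg data for the NIC, which is what points at a missing /etc/sysconfig/network-scripts/ifcfg-em1 at install time. A minimal pre-deploy ifcfg-em1 for this setup would look something like the sketch below (hypothetical example matching the MAC reported above, not taken from the host):

# /etc/sysconfig/network-scripts/ifcfg-em1 (before ovirtmgmt is created)
# The NIC still owns the IP and gets it from dhclient.
DEVICE=em1
HWADDR=78:e7:d1:55:fe:46
BOOTPROTO=dhcp
ONBOOT=yes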

Comment 3 Douglas Schilling Landgraf 2014-01-24 13:45:59 UTC
(In reply to Dan Kenigsberg from comment #2)
> Given the getCaps report, I assume that ifcfg-em1 was missing when you tried
> to install vdsm. If so, it is a dup of bug 987813.

Correct, closing this one. Thanks!

*** This bug has been marked as a duplicate of bug 987813 ***