Bug 1687340 - VM migration fails with "Attempt to migrate guest to the same host" error [NEEDINFO]
Summary: VM migration fails with "Attempt to migrate guest to the same host" error
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: libvirt
Version: 4.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.3.3
Target Release: 4.3.0
Assignee: Michal Skrivanek
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks: 1687346
 
Reported: 2019-03-11 10:34 UTC by bipin
Modified: 2019-03-12 13:41 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1687346
Environment:
Last Closed: 2019-03-12 13:41:40 UTC
oVirt Team: Virt
Target Upstream Version:
michal.skrivanek: needinfo? (bshetty)



Description bipin 2019-03-11 10:34:08 UTC
Description of problem:
=======================
Migration of VMs errors out, reporting that the guest cannot be migrated to the same host. This is seen on RHV 4.3.


Version-Release number of selected component
============================================
rhvh-4.3.0.5-0.20190305.0+1
glusterfs-3.12.2-46
kernel 3.10.0-957.10.1.el7.x86_64
vdsm-4.30.10-1.el7ev.x86_64
libvirt-4.5.0-10.el7_6.6.x86_64
ovirt-engine-4.3.2-0.1.el7.noarch

How reproducible:
================
1/1


Steps to Reproduce:
==================
1. Set up an RHHI 1.6 environment
2. Create VMs using RHV-M (Compute --> Virtual machines --> New)
3. Once the VMs are up, try migrating them

Actual results:
==============
The migration fails

Expected results:
================
Migration shouldn't fail

Additional info:
===============

Comment 3 Michal Skrivanek 2019-03-11 13:18:26 UTC
So, is it the same host or not?

Comment 4 Michal Skrivanek 2019-03-11 13:33:55 UTC
I noticed it's not migrating over ovirtmgmt. Can you describe exactly how networking between the engine and the hosts is set up?

Comment 5 bipin 2019-03-11 17:44:27 UTC
Hi Michal,

Basically the host is configured with two networks: a. ovirtmgmt network (for VM migration) b. Gluster network (all storage activities).
It's really strange that it's not migrating via ovirtmgmt, because I see the IPs used by these VMs are on the same network as ovirtmgmt.
I have pinged you offline with the host access details. Please feel free to take a look.

Comment 6 Ryan Barry 2019-03-12 00:10:12 UTC
What are the odds that the ipv6 DNS configuration differs?

Comment 8 Michal Skrivanek 2019-03-12 07:35:42 UTC
Hi,
Both (if not all) hosts have the same BIOS UUID: 00000000-0000-0000-0000-ac1f6b400622. That means they are identified as the same host, and live migration is refused. Are those physical servers?
Either way, you can work around that, e.g. https://lists.ovirt.org/archives/list/users@ovirt.org/message/UMVDYTBZARKIPLDKG23BQONSJGIDHEX3/
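The duplicate-UUID situation described above can be sketched as a simple check. This is a hedged illustration, not part of the original report: on a real host the values would come from `dmidecode -s system-uuid` (the BIOS UUID) or from the `<uuid>` element in `virsh capabilities` output; here the all-hosts-identical value from this bug is hardcoded for demonstration.

```shell
# Hypothetical check for duplicate host UUIDs. On a real host, gather these with:
#   dmidecode -s system-uuid        (BIOS UUID)
#   virsh capabilities | grep uuid  (UUID libvirt actually uses)
uuid_host_a="00000000-0000-0000-0000-ac1f6b400622"  # value reported in this bug
uuid_host_b="00000000-0000-0000-0000-ac1f6b400622"  # same value on the peer host

if [ "$uuid_host_a" = "$uuid_host_b" ]; then
    # libvirt identifies both machines as the same host and refuses live migration
    echo "DUPLICATE"
else
    echo "OK"
fi
```

When the two values match, libvirt's "Attempt to migrate guest to the same host" error is expected, regardless of hostnames or IP addresses.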

Comment 9 bipin 2019-03-12 09:27:29 UTC
(In reply to bipin from comment #5)
> Hi Michal,
> 
> Basically the host is configured with two networks: a. ovirtmgmt network
> (for VM migration) b. Gluster network (all storage activities).
> It's really strange that it's not migrating via ovirtmgmt, because I see
> the IPs used by these VMs are on the same network as ovirtmgmt.
> I have pinged you offline with the host access details. Please feel free
> to take a look.

Correction: the Gluster network is used for VM migration, not the ovirtmgmt network.

Comment 11 bipin 2019-03-12 10:50:26 UTC
All three hosts mentioned above are physical servers.

Comment 12 Sandro Bonazzola 2019-03-12 12:42:29 UTC
4.3.0 has already been released; automatically re-targeting to 4.3.3 for re-evaluation.

Comment 14 Michal Skrivanek 2019-03-12 13:12:08 UTC
Ah, sorry, I see it's actually overridden in libvirtd.conf. Both (all?) hosts have the same host_uuid="c51f28d8-cd98-4a0d-9f1f-8da7e996106f" set. Someone must have synced those conf files manually, as I find it unlikely that they got the same random UUID.

Comment 15 Michal Skrivanek 2019-03-12 13:26:36 UTC
If you don't have the initial deployment logs, I can't really identify how this happened and will have to close the bug again. Please reinstall one of the hosts from scratch and check the host_uuid setting; it should be set randomly.
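The fix implied above (each host needing a distinct host_uuid) can be sketched as follows. This is a hedged example, not an official procedure from this bug: it assumes `uuidgen` (util-linux) is available, and that /etc/libvirt/libvirtd.conf is where the override lives, as comment 14 indicates.

```shell
# Hypothetical sketch: generate a fresh, unique libvirt host UUID.
# Assumes the uuidgen utility (util-linux) is installed.
new_uuid=$(uuidgen)
conf_line="host_uuid = \"$new_uuid\""
echo "$conf_line"
# On a real host, this line would replace the duplicated host_uuid entry in
# /etc/libvirt/libvirtd.conf (a DIFFERENT value per host), followed by a
# libvirtd restart. Alternatively, removing the override entirely lets
# libvirt fall back to the host's SMBIOS UUID.
```

Running this on each affected host (rather than copying one host's conf file to the others, which is what apparently happened here) keeps the UUIDs unique and lets live migration proceed.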

