Bug 1599073 - Running the same VM on two different hosts at the same time
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.2.5
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Michal Skrivanek
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-08 14:16 UTC by Yosi Ben Shimon
Modified: 2018-07-09 10:33 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-09 10:33:16 UTC
oVirt Team: Virt


Attachments
logs (deleted), 2018-07-08 14:16 UTC, Yosi Ben Shimon

Description Yosi Ben Shimon 2018-07-08 14:16:51 UTC
Created attachment 1457281
logs

Description of problem:
While trying to reproduce a bug, I came across a situation where a single VM was running on two different hosts at the same time.

running "ps -ef | grep qemu" on host A:
root       567     1  0 Jul04 ?        00:00:05 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook
qemu      7679     1  2 16:22 ?        00:00:34 /usr/libexec/qemu-kvm -name guest=new_VM,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-new_VM/master-key.aes -machine pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off -cpu Conroe,vme=on,x2apic=on,hypervisor=on -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 08239417-1b9e-4056-ad1e-fb4006d45da7 -smbios type=1,manufacturer=oVirt,product=RHEV Hypervisor,version=7.5-8.el7,serial=B2B44EA8-75E6-4692-ACE6-702DCDE89289,uuid=08239417-1b9e-4056-ad1e-fb4006d45da7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-new_VM/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-07-08T13:22:35,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=ua-764acaa8-4367-4b1c-ad6b-a8e770fc8010,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ua-14ec153f-1aa8-47e6-8442-ae692f281bb7,readonly=on,werror=report,rerror=report -device ide-cd,bus=ide.1,unit=0,drive=drive-ua-14ec153f-1aa8-47e6-8442-ae692f281bb7,id=ua-14ec153f-1aa8-47e6-8442-ae692f281bb7 -drive file=/rhev/data-center/mnt/blockSD/d35af2ac-4c7f-4e1b-8caa-a46c9e1af668/images/5721d449-3e58-4b84-a089-82392d2783ec/a8507d59-3833-43a1-afed-f9ca2c39d550,format=qcow2,if=none,id=drive-ua-5721d449-3e58-4b84-a089-82392d2783ec,serial=5721d449-3e58-4b84-a089-82392d2783ec,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=ua-764acaa8-4367-4b1c-ad6b-a8e770fc8010.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-5721d449-3e58-4b84-a089-82392d2783ec,id=ua-5721d449-3e58-4b84-a089-82392d2783ec,bootindex=1 -netdev tap,fd=33,id=hostua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=hostua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,id=ua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,mac=00:1a:4a:16:26:6f,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/08239417-1b9e-4056-ad1e-fb4006d45da7.ovirt-guest-agent.0,server,nowait -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/08239417-1b9e-4056-ad1e-fb4006d45da7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=10.35.83.180,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=ua-3292a2ec-0d81-4215-a2d7-513c4072d4a2,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=ua-1be00b1c-71b9-4c9d-a110-7df838def9a9,bus=pci.0,addr=0x6 -object rng-random,id=objua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,filename=/dev/urandom -device virtio-rng-pci,rng=objua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,id=ua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,bus=pci.0,addr=0x7 -msg timestamp=on
root     11698 24058  0 16:43 pts/0    00:00:00 grep --color=auto qemu


running "ps -ef | grep qemu" on host B:
root       571     1  0 Jul04 ?        00:00:05 /usr/bin/qemu-ga --method=virtio-serial --path=/dev/virtio-ports/org.qemu.guest_agent.0 --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status -F/etc/qemu-ga/fsfreeze-hook
qemu      9106     1  2 16:29 ?        00:00:23 /usr/libexec/qemu-kvm -name guest=new_VM,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-new_VM/master-key.aes -machine pc-i440fx-rhel7.5.0,accel=kvm,usb=off,dump-guest-core=off -cpu Conroe -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 08239417-1b9e-4056-ad1e-fb4006d45da7 -smbios type=1,manufacturer=oVirt,product=RHEV Hypervisor,version=7.5-8.el7,serial=B2B44EA8-75E6-4692-ACE6-702DCDE89289,uuid=08239417-1b9e-4056-ad1e-fb4006d45da7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-new_VM/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-07-08T13:29:23,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=ua-764acaa8-4367-4b1c-ad6b-a8e770fc8010,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ua-14ec153f-1aa8-47e6-8442-ae692f281bb7,readonly=on,werror=report,rerror=report -device ide-cd,bus=ide.1,unit=0,drive=drive-ua-14ec153f-1aa8-47e6-8442-ae692f281bb7,id=ua-14ec153f-1aa8-47e6-8442-ae692f281bb7 -drive file=/rhev/data-center/mnt/blockSD/d35af2ac-4c7f-4e1b-8caa-a46c9e1af668/images/5721d449-3e58-4b84-a089-82392d2783ec/a8507d59-3833-43a1-afed-f9ca2c39d550,format=qcow2,if=none,id=drive-ua-5721d449-3e58-4b84-a089-82392d2783ec,serial=5721d449-3e58-4b84-a089-82392d2783ec,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=ua-764acaa8-4367-4b1c-ad6b-a8e770fc8010.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-5721d449-3e58-4b84-a089-82392d2783ec,id=ua-5721d449-3e58-4b84-a089-82392d2783ec,bootindex=1 -netdev tap,fd=32,id=hostua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,id=ua-2991f701-d8f4-4860-b6c3-9d94ddc9e2bb,mac=00:1a:4a:16:26:6f,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/08239417-1b9e-4056-ad1e-fb4006d45da7.ovirt-guest-agent.0,server,nowait -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/08239417-1b9e-4056-ad1e-fb4006d45da7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=ua-eda60c70-9355-432a-8d5b-892e23d3b2ed.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=10.35.83.182,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=ua-3292a2ec-0d81-4215-a2d7-513c4072d4a2,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=ua-1be00b1c-71b9-4c9d-a110-7df838def9a9,bus=pci.0,addr=0x6 -object rng-random,id=objua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,filename=/dev/urandom -device virtio-rng-pci,rng=objua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,id=ua-d3edbaec-9e4b-4c56-ab4b-6e51ffeae2ae,bus=pci.0,addr=0x7 -msg timestamp=on
root     12050 28777  0 16:43 pts/0    00:00:00 grep --color=auto qemu
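Note that both qemu-kvm processes above carry the same domain UUID (-uuid 08239417-1b9e-4056-ad1e-fb4006d45da7), the same MAC address (00:1a:4a:16:26:6f), and the same qcow2 disk path on the block storage domain, i.e. this really is one VM writing to shared storage from two hosts at once. As a quick way to spot such duplicates across hypervisors, here is a minimal sketch (the host names hostA/hostB are placeholders; it assumes SSH access to both hosts and GNU grep/coreutils):

# List the -uuid argument of every qemu-kvm process on each host,
# then print any UUID that shows up on more than one host.
for h in hostA hostB; do
  ssh "$h" "ps -eo args= | grep -oP '(?<=-uuid )[0-9a-f-]+'" | sed "s/^/$h /"
done | sort -k2 | uniq -f1 -D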

Version-Release number of selected component (if applicable):
ovirt-engine-4.2.5-0.1.el7ev.noarch
vdsm-4.20.33-1.el7ev.x86_64

How reproducible:
Unknown.

Steps to Reproduce:
1.
2.
3.

Actual results:
A VM is running on two different hosts.

Expected results:
The VM should run on only one host at a time.

Additional info:
Attached engine + vdsm + qemu logs.

Comment 2 Michal Skrivanek 2018-07-09 07:14:56 UTC
You clicked on "Confirm Host has been rebooted", didn't you?

Comment 3 Yosi Ben Shimon 2018-07-09 07:57:08 UTC
(In reply to Michal Skrivanek from comment #2)
> You clicked on "Confirm Host has been rebooted", didn't you?

Yes, I did, while testing an SPM re-election scenario in which the SPM is in a non-responsive state.

Comment 4 Doron Fediuck 2018-07-09 10:33:16 UTC
(In reply to Yosi Ben Shimon from comment #3)
> (In reply to Michal Skrivanek from comment #2)
> > You clicked on "Confirm Host has been rebooted", didn't you?
> 
> Yes, I did, while testing an SPM re-election scenario in which the SPM is
> in a non-responsive state.

As you can see in [1]:

WARNING
Do not use the Confirm host has been rebooted option unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/administration_guide/#Manually_fencing_or_isolating_a_nonresponsive_host
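The failure mode reported here is exactly that corruption scenario: "Confirm host has been rebooted" tells the engine the host is guaranteed to be running nothing, so the engine releases the host's resources (including the SPM role) and is free to start its VMs elsewhere; if the host is in fact still up, its qemu-kvm processes keep writing to the shared storage and you get the duplicate VM shown above. Before using the option on a non-responsive host, it is worth verifying that the machine is really down, for example with a check like the following sketch (the addresses and IPMI credentials are placeholders; it assumes ipmitool is installed and the host has a reachable BMC):

HOST=hostA          # management address of the non-responsive host (placeholder)
BMC=hostA-ipmi      # its IPMI/BMC address (placeholder)

# The host must not answer on the network...
if ping -c 3 -W 2 "$HOST" >/dev/null; then
  echo "host still responds, do NOT confirm reboot"
  exit 1
fi
# ...and power management should report it powered off.
ipmitool -I lanplus -H "$BMC" -U admin -P secret chassis power status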

