Bug 909708 - [rhevm] engine: After adding device to vm with snapshot in preview cannot run the vm
Summary: [rhevm] engine: After adding device to vm with snapshot in preview cannot run the vm
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.1.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Shahar Havivi
QA Contact: meital avital
URL:
Whiteboard: virt
Depends On:
Blocks: rhev3.5beta 1156165
 
Reported: 2013-02-10 16:34 UTC by vvyazmin@redhat.com
Modified: 2014-10-23 17:42 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-25 11:56:50 UTC
oVirt Team: ---
Target Upstream Version:


Attachments
## Logs vdsm, rhevm (deleted)
2013-02-10 16:34 UTC, vvyazmin@redhat.com


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 11179 None None None Never
oVirt gerrit 13327 None None None Never

Description vvyazmin@redhat.com 2013-02-10 16:34:40 UTC
Created attachment 695816 [details]
## Logs vdsm, rhevm

Description of problem:
After adding a disk to a VM with a snapshot in preview, the VM cannot be run.

Version-Release number of selected component (if applicable):
RHEVM 3.1 - SI27 environment:
RHEVM: rhevm-3.1.0-46.el6ev.noarch
VDSM: vdsm-4.10.2-7.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-18.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:

Steps to Reproduce:
1. Create a VM with a snapshot
2. Preview the snapshot
3. Add a new disk to the VM
4. Run the VM

Actual results:
The VM fails to run.

Expected results:
Adding a disk to a VM with a snapshot in preview mode should be blocked, with a relevant pop-up: “Error while executing action: Cannot add disk to Virtual Machine. VM is previewing a Snapshot.”

Additional info:

/var/log/ovirt-engine/engine.log
2013-02-10 20:12:08,451 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-74) START, DestroyVDSCommand(HostName = green-vdsb, HostId = e6b1d3d2-739b-11e2-bab8-001a4a16974a, vmId=2226226c-179d-45d6-a840-1e7769ad41a1, force=false, secondsToWait=0, gracefully=false), log id: 3c689f9a
2013-02-10 20:12:08,536 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (QuartzScheduler_Worker-74) FINISH, DestroyVDSCommand, log id: 3c689f9a
2013-02-10 20:12:08,588 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-74) Running on vds during rerun failed vm: null
2013-02-10 20:12:08,589 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-74) vm 100-vm-7 running in db and not running in vds - add to rerun treatment. vds green-vdsb
2013-02-10 20:12:08,613 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-74) Rerun vm 2226226c-179d-45d6-a840-1e7769ad41a1. Called from vds green-vdsb

/var/log/vdsm/vdsm.log

Thread-5225::DEBUG::2013-02-10 20:12:01,799::vm::676::vm.Vm::(_startUnderlyingVm) vmId=`2226226c-179d-45d6-a840-1e7769ad41a1`::_ongoingCreations released
Thread-5225::ERROR::2013-02-10 20:12:01,800::vm::700::vm.Vm::(_startUnderlyingVm) vmId=`2226226c-179d-45d6-a840-1e7769ad41a1`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 662, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1518, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 104, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/ace217ef-82f0-4d92-a031-873545d5fa5a/5b5acf7c-0243-4568-8822-f1edd23be806/images/e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d/06142413-7b0b-4238-9b7b-cb8f722cc3f4,if=none,id=drive-virtio-disk0,format=qcow2,serial=e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/ace217ef-82f0-4d92-a031-873545d5fa5a/5b5acf7c-0243-4568-8822-f1edd23be806/images/e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d/06142413-7b0b-4238-9b7b-cb8f722cc3f4: Operation not permitted

Thread-5225::DEBUG::2013-02-10 20:12:01,866::vm::1047::vm.Vm::(setDownStatus) vmId=`2226226c-179d-45d6-a840-1e7769ad41a1`::Changed state to Down: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/ace217ef-82f0-4d92-a031-873545d5fa5a/5b5acf7c-0243-4568-8822-f1edd23be806/images/e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d/06142413-7b0b-4238-9b7b-cb8f722cc3f4,if=none,id=drive-virtio-disk0,format=qcow2,serial=e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/ace217ef-82f0-4d92-a031-873545d5fa5a/5b5acf7c-0243-4568-8822-f1edd23be806/images/e000f44b-9d25-4c11-8fb8-8e2ecb4b2f8d/06142413-7b0b-4238-9b7b-cb8f722cc3f4: Operation not permitted

Comment 1 Allon Mureinik 2013-02-13 09:48:37 UTC
This is not specific to disks. Adding any device should be blocked across the board; hardware profiles should not be changed while a snapshot is in preview.

Comment 2 Libor Spevak 2013-03-25 12:11:07 UTC
The canDoAction check for adding a disk to a VM with a snapshot in preview is already handled by this upstream commit:

core: AddDisk preview validation
    
In AddDiskCommand, moved the validation that the VM is not in preview
from ImagesHandler to SnapshotValidator, which is a more logical place
for it.
    
This patch contains the following:
* A new method in SnapshotValidator, vmNotInPreview(vmId)
* Tests for the aforementioned method in SnapshotValidatorTest.
* The usage of the aforementioned method in AddDiskCommand
* Minor amendments to AddDiskToVmCommandTest's mocking in light of the
  previous change.
    
Note: This patch is part of a series of patches aimed at removing the
      preview validation from ImagesHandler altogether.
    
Change-Id: Ib282279a4b938d6fb3b08e9b2d127af4653bd51c
Signed-off-by: Allon Mureinik <amureini@redhat.com>
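
For reference, a minimal sketch of the validation described above. The vmNotInPreview name comes from the commit text; the DAO lookup and the message constant are assumptions, and the merged ovirt-engine code may differ in detail:

    public ValidationResult vmNotInPreview(Guid vmId) {
        // Fail validation if the VM currently has a snapshot in preview
        // (DAO call and message constant are assumed names).
        if (getSnapshotDao().exists(vmId, SnapshotStatus.IN_PREVIEW)) {
            return new ValidationResult(VdcBllMessages.ACTION_TYPE_FAILED_VM_IN_PREVIEW);
        }
        return ValidationResult.VALID;
    }

    // Illustrative use in AddDiskCommand.canDoAction():
    if (!validate(snapshotValidator.vmNotInPreview(getVmId()))) {
        return false;
    }

Keeping the check in a single validator lets other device-add commands reuse it, which matches the intent stated in comment 1.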

--

I tested adding a network interface (NIC). This works without problems:

2013-03-25 11:42:38,660 INFO  [org.ovirt.engine.core.bll.network.vm.AddVmInterfaceCommand] (http--0.0.0.0-8700-1) [417e5f57] Running command: AddVmInterfaceCommand internal: false. Entities affected :  ID: f542882a-eb1e-401b-a52e-3b47a3ae7bc2 Type: VM,  ID: 00000000-0000-0000-0000-000000000009 Type: Network
2013-03-25 11:42:38,802 INFO  [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (http--0.0.0.0-8700-1) [417e5f57] Running command: ActivateDeactivateVmNicCommand internal: true. Entities affected :  ID: f542882a-eb1e-401b-a52e-3b47a3ae7bc2 Type: VM
2013-03-25 11:42:38,805 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (http--0.0.0.0-8700-1) [417e5f57] START, HotPlugNicVDSCommand(HostName = XXXXXXXXXX.redhat.com, HostId = 65cab951-e838-4702-a145-98842332b026, vm.vm_name=Fedora17_test1_imported2, nic=VmNetworkInterface {id=a19a6380-bde0-4dd3-aa93-421abdd10962, networkName=ovirtmgmt, speed=1000, type=3, name=nic3, macAddress=00:1a:4a:16:01:b2, active=true, linked=true, portMirroring=false, vmId=f542882a-eb1e-401b-a52e-3b47a3ae7bc2, vmName=null, vmTemplateId=null}, vmDevice=VmDevice {vmId=f542882a-eb1e-401b-a52e-3b47a3ae7bc2, deviceId=a19a6380-bde0-4dd3-aa93-421abdd10962, device=bridge, type=interface, bootOrder=0, specParams={}, address=, managed=true, plugged=true, readOnly=false, deviceAlias=}), log id: 5f883b57
2013-03-25 11:42:39,207 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (http--0.0.0.0-8700-1) [417e5f57] FINISH, HotPlugNicVDSCommand, log id: 5f883b57

--

I also tried hot plugging a LUN (iSCSI) disk, with different results; I am not sure whether this should be blocked as well (class HotPlugDiskToVmCommand)?

Comment 4 Libor Spevak 2013-03-25 15:03:47 UTC
External Trackers (cannot add):
Already merged:
http://gerrit.ovirt.org/#/c/11179/
Proposal:
http://gerrit.ovirt.org/#/c/13327

Comment 5 Libor Spevak 2013-04-17 14:05:15 UTC
Merged u/s: 5152cf50911e441dbc775921ee5bd06051ea9529

Comment 7 vvyazmin@redhat.com 2013-05-26 02:52:16 UTC
Tested on RHEVM 3.2 - SF17.1 environment:

RHEVM: rhevm-3.2.0-11.28.el6ev.noarch
VDSM: vdsm-4.10.2-21.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-18.el6_4.5.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.355.el6_4.3.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

When adding a new disk to a VM with a snapshot in preview mode, I get the error “Cannot add Virtual Machine Disk. VM is previewing a Snapshot.” and the action is blocked; this is OK.

But when I attach an existing disk to a VM with a snapshot in preview mode, the action is not blocked; I can attach the disk and power on the VM. Is that OK?
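
A hypothetical sketch of how the attach flow could reuse the same check; the placement and helper calls below are assumptions, not code that was actually merged:

    // In the attach-disk command's canDoAction() (hypothetical):
    @Override
    protected boolean canDoAction() {
        // Block attaching an existing disk while a snapshot is in preview.
        if (!validate(new SnapshotValidator().vmNotInPreview(getVmId()))) {
            return false;
        }
        // ... remaining attach-disk validations ...
        return true;
    }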

Comment 8 Michal Skrivanek 2013-09-13 09:12:29 UTC
Indeed, that doesn't sound right...

Comment 12 Shahar Havivi 2014-05-25 11:56:50 UTC
Adding a disk during preview is currently blocked.
Closing the bug.

