Bug 1114587 - Failed VM migrations do not release VM resource lock properly leading to failures in subsequent migration attempts
Summary: Failed VM migrations do not release VM resource lock properly leading to failures in subsequent migration attempts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.4.1
Assignee: Arik
QA Contact: Lukas Svaty
URL:
Whiteboard: virt
Depends On: 1104030
Blocks:
 
Reported: 2014-06-30 12:37 UTC by rhev-integ
Modified: 2018-12-06 17:05 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, if a thread migrating a virtual machine acquired a resource lock and then failed, it did not release the lock properly. Other threads then failed to acquire the lock for subsequent operations such as migrating or running the virtual machine. A patch now fixes the regression, so virtual machines no longer remain locked after a failed migration. (A minimal sketch of the corrected locking pattern follows the metadata below.)
Clone Of: 1104030
Environment:
Last Closed: 2014-07-29 16:24:35 UTC
oVirt Team: ---
Target Upstream Version:
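
The fix restores the usual acquire/release discipline around the per-VM lock. Below is a minimal sketch of that pattern in Java; VmLockRegistry, MigrateVmCommand, and doMigrate are simplified illustrative stand-ins, not the actual ovirt-engine EngineLock API:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative stand-in for the engine's lock manager: one lock per VM id.
class VmLockRegistry {
    private final Map<UUID, ReentrantLock> locks = new ConcurrentHashMap<>();

    ReentrantLock lockFor(UUID vmId) {
        return locks.computeIfAbsent(vmId, id -> new ReentrantLock());
    }
}

class MigrateVmCommand {
    private final VmLockRegistry registry;

    MigrateVmCommand(VmLockRegistry registry) {
        this.registry = registry;
    }

    boolean migrate(UUID vmId) {
        ReentrantLock lock = registry.lockFor(vmId);
        if (!lock.tryLock()) {
            return false; // lock held by another operation on this VM
        }
        try {
            doMigrate(vmId); // may throw when the migration fails
            return true;
        } finally {
            // Release on both the success and failure paths. The regression
            // in this bug was equivalent to skipping this release on failure,
            // which blocked later migrate/run attempts on the same VM.
            lock.unlock();
        }
    }

    private void doMigrate(UUID vmId) {
        // placeholder for the actual migration work
    }
}

The point of the try/finally is that the release runs no matter how the migration ends, so a failed attempt cannot leave the VM permanently locked.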


Attachments:


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0960 normal SHIPPED_LIVE Red Hat Enterprise Virtualization Manager 3.4.1 2014-07-29 20:23:10 UTC
oVirt gerrit 26686 None None None Never
oVirt gerrit 28403 master MERGED core: fix reattempt to go to maintenance mechanism Never

Comment 1 Michal Skrivanek 2014-06-30 12:39:23 UTC
Already fixed in 3.4; "just" needs testing.

Comment 3 Ilanit Stein 2014-07-16 08:33:36 UTC
How should this bug be verified, please?

Comment 4 Lukas Svaty 2014-07-18 09:09:17 UTC
For verification, used av10.2.
Hosts: rhel6 and rhel7 with up-to-date VDSM

Steps:
1. Migrate 2 VMs from host1 (SPM) to host2
2. While the migration is in progress, move host2 to maintenance
3. Wait for the migration to fail and check in the logs whether the locks were successfully freed (a small log-checker sketch follows step 6 below)

2014-07-18 11:03:03,168 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-4-thread-25) [47572d25] Lock Acquired to object EngineLock [exclusiveLocks= key: e3c73b01-1900-4b24-bfa7-e75b17e9dc0d value: VM
, sharedLocks= ]
2014-07-18 11:03:03,168 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-4-thread-36) [6e952ea4] Lock Acquired to object EngineLock [exclusiveLocks= key: 4d7aa507-1b32-4618-a5a2-884500dbbbc1 value: VM
, sharedLocks= ]
2014-07-18 11:03:07,433 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-4-thread-2) [23c7912b] Lock freed to object EngineLock [exclusiveLocks= key: 4d7aa507-1b32-4618-a5a2-884500dbbbc1 value: VM
, sharedLocks= ]
2014-07-18 11:03:07,435 INFO  [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-4-thread-5) [23c7912b] Lock freed to object EngineLock [exclusiveLocks= key: e3c73b01-1900-4b24-bfa7-e75b17e9dc0d value: VM
, sharedLocks= ]

4. Activate host2 again and try to migrate the VMs to it (successful)
5. Tried migration to host3 (rhevh), also successful
6. Actions such as Shutdown/Pause/Start VM should pass as well, since the lock is freed
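
To make the log check in step 3 mechanical, here is a minimal sketch that pairs each "Lock Acquired" line with the matching "Lock freed" line for the same VM id and reports anything left held. The line format is taken from the excerpt above; the class name and the command-line argument are my own assumptions, not part of the actual verification run:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LockCheck {
    // Matches the acquire/free lines shown above and captures the VM id.
    private static final Pattern LOCK_LINE =
            Pattern.compile("Lock (Acquired|freed) to object EngineLock.*key: ([0-9a-f-]{36})");

    public static void main(String[] args) throws IOException {
        Set<String> held = new HashSet<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = LOCK_LINE.matcher(line);
            if (m.find()) {
                if ("Acquired".equals(m.group(1))) {
                    held.add(m.group(2));    // lock taken
                } else {
                    held.remove(m.group(2)); // lock released
                }
            }
        }
        // Any VM id left in the set was acquired but never freed -- the
        // symptom this bug describes.
        System.out.println(held.isEmpty() ? "all locks freed" : "still held: " + held);
    }
}

Run it against the engine log (e.g. java LockCheck /var/log/ovirt-engine/engine.log); "all locks freed" corresponds to the behaviour in the excerpt above.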


Please verify these steps; once they are approved by Arik, I will move this bug to VERIFIED.

Thanks

Comment 5 Michal Skrivanek 2014-07-23 07:29:36 UTC
looks ok

Comment 7 errata-xmlrpc 2014-07-29 16:24:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0960.html

