Bug 1509807 - [BLOCKED] Memory volumes are left behind when VMs disks are moved
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.1.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.3.0
Target Release: ---
Assignee: Tal Nisan
QA Contact: Elad
Depends On: 1150245
Reported: 2017-11-06 06:48 UTC by Germano Veit Michel
Modified: 2018-08-06 13:53 UTC
CC List: 8 users

Last Closed: 2018-08-06 09:01:28 UTC
oVirt Team: Storage




Internal Links: 1509588

Description Germano Veit Michel 2017-11-06 06:48:57 UTC
Description of problem:

When a snapshot (with memory) is created, the engine runs MemoryStorageHandler to decide where to place the memory volume. But when the VM's disks are moved, the memory volume is not relocated (MemoryStorageHandler is not evaluated again). This can leave the VM on a completely different set of SDs than its memory volume(s), causing a few problems:

1) The SD holding the memory volume can be detached: if I detach that SD, preview the snapshot, and then start the VM, I get the following error, which doesn't make any sense:

2017-11-06 16:44:02,510+10 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (default task-1) [8ffaa3e8-45ea-4c7f-84cb-aea0e928195a] Validation of action 'RunVm' failed for user admin@internal. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS

2) If I detach both the SD of the memory volume (A) and the SD of the VM (B), then re-attach B and try to import the VM, I get "Cannot import VM. Storage Domain doesn't exist" because the SD holding the memory volume is missing.

I'm not sure whether the memory volumes should follow the disks via MemoryStorageHandler, but the current behavior doesn't seem right.

Version-Release number of selected component (if applicable):
ovirt-engine-4.1.6

How reproducible:
100%

Steps to Reproduce:
1. Create VM with disk on SD A
2. Snapshot it (include memory)
3. Move VM disk to SD B
4. Memory volume still on SD A
5. Detach SD A and B
6. Attach SD B and try to import the VM

AND

1. Create VM with disk on SD A
2. Snapshot it (include memory)
3. Move VM disk to SD B
4. Memory volume still on SD A
5. Detach SD A
6. Try to preview the VM and start it
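The reproduction steps above can be sketched as a toy model. This is not oVirt engine code; the class, fields, and error message are hypothetical stand-ins that illustrate how moving a disk does not relocate the snapshot memory volume, so detaching the original SD later breaks preview:

```python
# Toy model (hypothetical names, not oVirt code): moving a VM's disk does
# not relocate the snapshot memory volume, leaving it on the original SD.

class VM:
    def __init__(self, name, disk_sd):
        self.name = name
        self.disk_sd = disk_sd      # SD holding the VM's disk
        self.memory_sd = None       # SD holding the snapshot memory volume

    def snapshot_with_memory(self):
        # MemoryStorageHandler-like decision at snapshot time: place the
        # memory volume on the SD the disk currently lives on.
        self.memory_sd = self.disk_sd

    def move_disk(self, target_sd):
        # Only the disk moves; the memory volume is NOT re-evaluated.
        self.disk_sd = target_sd

    def preview_snapshot(self, attached_sds):
        # Previewing requires the memory volume's SD to still be attached.
        if self.memory_sd not in attached_sds:
            raise RuntimeError("Storage Domain doesn't exist")
        return "preview ok"

vm = VM("vm1", disk_sd="SD_A")
vm.snapshot_with_memory()    # memory volume lands on SD_A
vm.move_disk("SD_B")         # disk now on SD_B; memory volume still on SD_A

# Detach SD_A: preview fails even though the disk's SD (B) is attached.
try:
    vm.preview_snapshot(attached_sds={"SD_B"})
except RuntimeError as e:
    print(e)  # prints: Storage Domain doesn't exist
```

The sketch only captures the state mismatch, not the scheduling failure (SCHEDULING_NO_HOSTS) that the engine actually reports.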

Comment 2 Tal Nisan 2017-11-06 10:27:06 UTC
The memory volumes should be treated as any other disk and not be moved automatically but the option to move them manually should indeed be added as requested in RFE 1150245

*** This bug has been marked as a duplicate of bug 1150245 ***

Comment 3 Yaniv Lavi 2017-11-06 11:07:43 UTC
(In reply to Tal Nisan from comment #2)
> The memory volumes should be treated as any other disk and not be moved
> automatically but the option to move them manually should indeed be added as
> requested in RFE 1150245
> 
> *** This bug has been marked as a duplicate of bug 1150245 ***

Reopening to track issue downstream

Comment 8 Yaniv Lavi 2018-08-06 09:01:28 UTC
We now display memory disks as virtual disks and allow moving memory volumes like virtual disks.
Please use these flows.

