Bug 1160743 - [SnapshotOverview][UX] Removing the last snapshot volume via Snapshot Overview should also wipe the snapshot entry from the Snapshots tab
Summary: [SnapshotOverview][UX] Removing the last snapshot volume via Snapshot Overview should also wipe the snapshot entry from the Snapshots tab
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine-webadmin-portal
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.6.0
Assignee: nobody nobody
QA Contact: Pavel Stehlik
URL:
Whiteboard: virt
Depends On:
Blocks: 1034885
 
Reported: 2014-11-05 14:33 UTC by Ori Gofen
Modified: 2016-05-26 01:49 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-02 11:03:03 UTC
oVirt Team: ---
Target Upstream Version:


Attachments
images (deleted): 2014-11-05 14:33 UTC, Ori Gofen
logs (deleted): 2014-12-01 11:21 UTC, Ori Gofen

Description Ori Gofen 2014-11-05 14:33:19 UTC
Created attachment 954065 [details]
images

Description of problem:

Deleting a single snapshot that is the only volume in the image's chain is equivalent to removing the whole snapshot (see the attached image).
The view in Snapshot Overview is correct, but the snapshot table should also be updated.

The volume is wiped successfully; no snapshot remains on storage:

root@camel-vdsb /rhev/data-center
 # tree
.
├── 0517903a-67b4-45d0-a751-d34130de5fd0
│   ├── c329ac69-6d02-4f79-bcfb-81b0b3c3fe04 -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_3/c329ac69-6d02-4f79-bcfb-81b0b3c3fe04
│   └── mastersd -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_3/c329ac69-6d02-4f79-bcfb-81b0b3c3fe04
└── mnt
    └── 10.35.160.108:_RHEV_ogofen_3
        ├── c329ac69-6d02-4f79-bcfb-81b0b3c3fe04
        │   ├── dom_md
        │   │   ├── ids
        │   │   ├── inbox
        │   │   ├── leases
        │   │   ├── metadata
        │   │   └── outbox
        │   ├── images
        │   │   └── c1084b0b-818d-4c70-ad84-2aa843a8bbbe
        │   │       ├── 8ba32151-74c2-4a98-bda1-fd6100687b47
        │   │       ├── 8ba32151-74c2-4a98-bda1-fd6100687b47.lease
        │   │       └── 8ba32151-74c2-4a98-bda1-fd6100687b47.meta
        │   └── master
        │       ├── tasks
        │       └── vms
        └── __DIRECT_IO_TEST__

12 directories, 9 files
Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Remove a single snapshot that is the last in the volume chain.

Actual results:
The VM snapshot table still contains that snapshot even though it is empty/null.

Expected results:
The VM snapshot table should not contain that snapshot.


Additional info:

Comment 1 Daniel Erez 2014-11-09 08:47:42 UTC
This is actually the correct behavior; removing a disk snapshot merely removes a volume from the chain. Deleting all disk snapshots of a disk just removes the disk from each snapshot. I.e., the VM configuration of the snapshot may still be relevant and hence should be displayed in the Snapshots sub-tab.
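
To illustrate the model (a simplified Python sketch with invented names, not the actual engine code):

    # Simplified model: the snapshot entry owns the VM configuration,
    # while the disk volumes in the image chain hang off it separately.
    class Snapshot:
        def __init__(self, description, vm_configuration):
            self.description = description
            self.vm_configuration = vm_configuration  # memory, CPUs, etc.
            self.disk_volumes = []  # volumes in the image chain

    def remove_disk_snapshot(snapshot, volume):
        # Removing a disk snapshot drops one volume from the chain;
        # the snapshot entry and its VM configuration survive, which is
        # why it is still listed in the Snapshots sub-tab.
        snapshot.disk_volumes.remove(volume)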

Comment 2 Ori Gofen 2014-11-09 09:08:43 UTC
Erez, this is not a live snapshot but a regular snapshot; there is no VM configuration image that is part of the snapshot. It's just an entry in the oVirt DB without further information; see the /rhev/data-center tree.

Comment 3 Daniel Erez 2014-11-09 09:13:37 UTC
(In reply to Ori Gofen from comment #2)
> Erez, this is not a live snapshot but a regular snapshot; there is no VM
> configuration image that is part of the snapshot. It's just an entry in the
> oVirt DB without further information; see the /rhev/data-center tree.

The VM configuration is stored in the DB for every snapshot (regular/live) and includes various properties of the VM (memory/CPUs/etc.). It is saved in case the user wants to restore the old VM properties.
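
For illustration, the stored configuration can be checked directly in the engine database; a minimal sketch (assuming a PostgreSQL database named 'engine' and a 'snapshots' table with a 'vm_configuration' column; names and credentials are assumptions and may differ per version):

    import psycopg2  # assumes the psycopg2 driver is installed

    vm_id = '390fc8a6-f63d-41e5-b4e3-28fb056e732a'  # hypothetical VM id
    conn = psycopg2.connect(dbname='engine', user='engine')  # hypothetical credentials
    cur = conn.cursor()
    # Every snapshot row (regular or live) should carry a serialized VM configuration.
    cur.execute("SELECT snapshot_id, description, vm_configuration IS NOT NULL "
                "FROM snapshots WHERE vm_id = %s", (vm_id,))
    for row in cur.fetchall():
        print(row)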

Comment 4 Ori Gofen 2014-11-09 09:29:52 UTC
If the VM configuration is represented in the DB, what is the point of creating an image that contains the data on the hosts upon live snapshot?

Comment 5 Allon Mureinik 2014-11-09 10:02:54 UTC
(In reply to Ori Gofen from comment #4)
> If the VM configuration is represented in the DB, what is the point of
> creating an image that contains the data on the hosts upon live snapshot?
The main point of a live snapshot is the memory dump, which is not stored in the DB. The necessity to save the VM's configuration is, AFAIK, a limitation of the current implementation, not a theoretical must.

Anyway, this is the virt team's domain - moving to them to consider whether it's possible (and/or worthwhile) to remove this volume.

Comment 6 Ori Gofen 2014-11-09 10:22:55 UTC
Well, per your comment, Allon, it seems like a bug. I expect only VMs with a configuration image to be able to restore that data; otherwise, if that data exists anyway, the configuration image is redundant data that not only takes up space without reason but also confuses QA.

Comment 7 Ori Gofen 2014-12-01 11:21:13 UTC
Created attachment 963241 [details]
logs

I have discussed the "configuration snapshot" issue with the virt team and opened a bug about it; please see BZ#1164852.

But per Derez's comment #3, as a user I expect to be able to restore the old VM properties in our case (otherwise there is no use in keeping the configuration, right?), and this is not possible because the behavior Derez suggested is not supported by libvirt.

I will walk you through my flow, with images, to explain it better:

1. Created 4 snapshots.
Snapshot Overview ->
https://www.dropbox.com/s/kbsw0ilfbn65sog/4snapshots_snapshotoverview.png?dl=0

VM view ->
https://www.dropbox.com/s/eccv8ylmt7ic75r/4snapshot_vm_view.png?dl=0

2. Removed all single snapshots. Now the Snapshot Overview is empty while the VM view is still "full"; this is the behavior we want to avoid (because it causes errors and failures).

Snapshot Overview ->
https://www.dropbox.com/s/5xxckp6lrdcf2hz/snapshot_overview_after_remove.png?dl=0

VM view ->
https://www.dropbox.com/s/p6mydvndt1bi8mn/vm_view_after_remove.png?dl=0

3. Attempted to preview a snapshot; the result is that the engine complains:

"message: unsupported configuration: boot order '1' used for more than one device."

2014-12-01 12:47:39,207 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-24) [22d6a97c] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM vm_snap_1 is down with error. Exit message: unsupported configuration: boot order '1' used for more than one device.
2014-12-01 12:47:39,207 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-24) [22d6a97c] Running on vds during rerun failed vm: null
2014-12-01 12:47:39,209 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-24) [22d6a97c] VM vm_snap_1 (390fc8a6-f63d-41e5-b4e3-28fb056e732a) is running in db and not running in VDS ogofen-2
2014-12-01 12:47:39,209 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-24) [22d6a97c] START, FullListVdsCommand(HostName = ogofen-2, HostId = bf25d6e9-26c7-4821-af71-c457803d8d73, vds=Host[ogofen-2,bf25d6e9-26c7-4821-af71-c457803d8d73], vmIds=[390fc8a6-f63d-41e5-b4e3-28fb056e732a]), log id: 7355b7dd
2014-12-01 12:47:39,214 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-24) [22d6a97c] FINISH, FullListVdsCommand, return: [], log id: 7355b7dd
2014-12-01 12:47:39,221 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-24) [22d6a97c] Rerun vm 390fc8a6-f63d-41e5-b4e3-28fb056e732a. Called from vds ogofen-2
2014-12-01 12:47:39,233 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] Correlation ID: 3e3618a8, Job ID: 23f89fa2-da31-4893-82d5-eb00053a1097, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM vm_snap_1 on Host ogofen-2.
2014-12-01 12:47:39,240 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] Lock Acquired to object EngineLock [exclusiveLocks= key: 390fc8a6-f63d-41e5-b4e3-28fb056e732a value: VM
, sharedLocks= ]
2014-12-01 12:47:39,272 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] START, IsVmDuringInitiatingVDSCommand( vmId = 390fc8a6-f63d-41e5-b4e3-28fb056e732a), log id: 1897bf44
2014-12-01 12:47:39,272 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 1897bf44
2014-12-01 12:47:39,278 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2014-12-01 12:47:39,278 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] Lock freed to object EngineLock [exclusiveLocks= key: 390fc8a6-f63d-41e5-b4e3-28fb056e732a value: VM
, sharedLocks= ]
2014-12-01 12:47:39,287 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-19) [22d6a97c] Correlation ID: 3e3618a8, Job ID: 23f89fa2-da31-4893-82d5-eb00053a1097, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM vm_snap_1 (User: admin).

UI ->
https://www.dropbox.com/s/sf1at7a66xjt17a/snapshot_preview_fail.png?dl=0

vdsm also complains: "Unknown libvirterror: ecode: 67 edom: 20 level: 2 message: unsupported configuration".

I guess it does not understand why anyone would need to preview a redundant configuration with no bootable disk, which actually makes sense.

Thread-7636::DEBUG::2014-11-30 19:05:25,805::libvirtconnection::143::root::(wrapper) Unknown libvirterror: ecode: 67 edom: 20 level: 2 message: unsupported configuration: boot order '1' used for more than one device
Thread-7636::DEBUG::2014-11-30 19:05:25,805::vm::2294::vm.Vm::(_startUnderlyingVm) vmId=`390fc8a6-f63d-41e5-b4e3-28fb056e732a`::_ongoingCreations released
Thread-7636::ERROR::2014-11-30 19:05:25,805::vm::2331::vm.Vm::(_startUnderlyingVm) vmId=`390fc8a6-f63d-41e5-b4e3-28fb056e732a`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2271, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3385, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2665, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: boot order '1' used for more than one device
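
The libvirt error means two devices in the generated domain XML claim the same <boot order='1'/>. A minimal sketch that illustrates the collision (the XML fragment is invented, not the actual domxml from this run):

    import xml.etree.ElementTree as ET

    # Invented fragment: both the disk and the NIC claim boot order '1'.
    domxml = """
    <domain type='kvm'>
      <devices>
        <disk device='disk'><boot order='1'/></disk>
        <interface type='bridge'><boot order='1'/></interface>
      </devices>
    </domain>
    """

    # Count how many devices claim each boot order; libvirt rejects duplicates.
    orders = {}
    for boot in ET.fromstring(domxml).iter('boot'):
        order = boot.get('order')
        orders[order] = orders.get(order, 0) + 1
    for order, count in orders.items():
        if count > 1:
            print("boot order '%s' used for more than one device" % order)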

Expected behavior:
This operation should be supported, as Erez suggested in comment #3, or removing the last single snapshot should also wipe the snapshot entry from the Snapshots tab.

Comment 8 Allon Mureinik 2014-12-01 12:15:46 UTC
You removed part of the VM and it doesn't boot anymore - this is hardly surprising.
There's no way to validate this safely from the engine without some introspection from the guest side.

Comment 9 Allon Mureinik 2014-12-01 12:16:24 UTC
closed by mistake, sorry.

Comment 11 Michal Skrivanek 2015-06-02 11:03:03 UTC
Closing old bugs. If this issue is still relevant/important in the current version, please re-open the bug.

