Bug 1050838 - [engine-backend] snapshot isn't removed from DB although its creation failed due to read only disk
Summary: [engine-backend] snapshot isn't removed from DB although its creation failed due to read only disk
Status: CLOSED DUPLICATE of bug 1056169
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-core
Version: 3.4
Hardware: x86_64
OS: Unspecified
Target Milestone: ---
Target Release: 3.4.0
Assignee: Sergey Gotliv
QA Contact: Aharon Canan
Whiteboard: storage
Depends On:
Reported: 2014-01-09 08:20 UTC by Elad
Modified: 2016-02-10 17:52 UTC
CC: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-02-04 10:33:22 UTC
oVirt Team: Storage

Attachments (Terms of Use)
engine log, screenshot and snapshots table (deleted)
2014-01-09 08:20 UTC, Elad

Description Elad 2014-01-09 08:20:35 UTC
Created attachment 847523 [details]
engine log, screenshot and snapshots table

Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a snapshot from a VM that has a read-only (RO) disk attached
2. The operation fails (as reported in BZ#1050835)

Actual results:

Snapshot operation failure:

2014-01-08 21:49:28,618 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-33) Failed in SnapshotVDS method
2014-01-08 21:49:28,620 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-33) Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=48, mMessage=Snapshot failed]]
2014-01-08 21:49:28,621 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-33) HostName = nott-vds2
2014-01-08 21:49:28,622 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-33) Command SnapshotVDSCommand(HostName = nott-vds2, HostId = 1b516908-48a3-4796-aba6-a4c598bd5d2f, vmId=476c7b76-706d-4799-9c3f-98c58f12fc03) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48
2014-01-08 21:49:28,623 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-33) FINISH, SnapshotVDSCommand, log id: 8b4ee15
2014-01-08 21:49:28,625 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (org.ovirt.thread.pool-6-thread-33) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot
2014-01-08 21:49:28,640 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-33) Correlation ID: 38912d67, Job ID: 7438310a-c91d-4651-be5d-41994181acba, Call Stack: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error SNAPSHOT_FAILED and code 48)

Snapshot a2b6ba4b-d0cf-4074-b481-55cf2f5176a6 still exists on DB:

### su - postgres -c "psql -U postgres engine -c  'select snapshot_id,vm_id,snapshot_type,status,creation_date,description   from snapshots;'"  | less -S

             snapshot_id              |                vm_id                 | snapshot_type | status |       creation_date        | description
--------------------------------------+--------------------------------------+---------------+--------+----------------------------+-------------
 2c1b25a9-e0c2-4852-939f-3bf948ee64c7 | cd573134-f347-4865-b714-8dc27914636b | ACTIVE        | OK     | 2014-01-08 21:35:01.015+02 | Active VM
 6141f4e8-26fd-48ec-8d57-bbaee888b8fc | 476c7b76-706d-4799-9c3f-98c58f12fc03 | ACTIVE        | OK     | 2014-01-08 21:01:58.43+02  | Active VM
 a2b6ba4b-d0cf-4074-b481-55cf2f5176a6 | 476c7b76-706d-4799-9c3f-98c58f12fc03 | REGULAR       | OK     | 2014-01-08 21:48:56.192+02 | 1

Expected results:
If snapshot creation fails, the snapshot should be removed from the database.
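
The expected rollback behaviour can be sketched as follows. This is a minimal illustrative Python sketch, not the actual ovirt-engine (Java) code; the names `create_snapshot` and `run_vds_snapshot` are hypothetical, and a plain dict stands in for the `snapshots` table:

```python
# Sketch of the expected behaviour: the engine persists the snapshot row
# before calling VDSM, so a failed VDSM call must roll that row back.
# The "database" here is an in-memory dict keyed by snapshot_id.

class SnapshotFailed(Exception):
    """Stands in for the VDSErrorException (code 48) seen in the log."""

def create_snapshot(db, snapshot_id, vm_id, run_vds_snapshot):
    # 1. Persist the snapshot record first (as the engine does).
    db[snapshot_id] = {"vm_id": vm_id, "snapshot_type": "REGULAR"}
    try:
        # 2. Ask the host (VDSM) to actually take the snapshot.
        run_vds_snapshot()
    except SnapshotFailed:
        # 3. On failure, remove the record again -- the step this
        #    bug reports as missing.
        del db[snapshot_id]
        raise

db = {}

def failing_vds_call():
    raise SnapshotFailed("Snapshot failed, code = 48")

try:
    create_snapshot(db, "a2b6ba4b-d0cf-4074-b481-55cf2f5176a6",
                    "476c7b76-706d-4799-9c3f-98c58f12fc03",
                    failing_vds_call)
except SnapshotFailed:
    pass

print(db)  # -> {} : no leftover snapshot row
```

With the bug present, step 3 is effectively skipped, leaving the REGULAR snapshot row in the `snapshots` table as shown above.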

Additional info:
engine log, screenshot and snapshots table

Comment 1 Elad 2014-01-09 08:22:10 UTC
Version-Release number of selected component (if applicable):

How reproducible:

Comment 2 Maor 2014-01-12 12:21:03 UTC
This scenario was also discussed at BZ870928

The problem described here is that the running qemu process cannot reference the newly created volume, even though the snapshot was created successfully.

While the VM has no problem running on the new active volume after a reboot, it might be better to leave the snapshot in place for now and handle the remaining issues via delete snapshot.
