Bug 1097820 - Don't wipe disks when deleting images on file storage domains
Summary: Don't wipe disks when deleting images on file storage domains
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Idan Shaby
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks: 1070823 rhev3.5beta 1156165
 
Reported: 2014-05-14 15:20 UTC by Federico Simoncelli
Modified: 2016-02-10 16:56 UTC
10 users

Fixed In Version: vt3
Doc Type: Bug Fix
Doc Text:
Cause: Irrelevant.
Consequence: Irrelevant.
Fix: Since the file system is responsible for handling block allocation, there is no need to wipe disks on file domains. The engine therefore passes wipe after delete = false to VDSM for file domains, even when the disk's wipe after delete property is true.
Result: On file-based storage domains (NFS, POSIX, GlusterFS and local FS), disks are never wiped. The disk nevertheless retains its wipe after delete property, so the setting takes effect again if the disk is moved to block storage.
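The fix described above can be sketched as follows. This is a hypothetical illustration in Python, not the actual ovirt-engine code (which is Java); the function name and the storage-type set are assumptions chosen for clarity:

```python
# Hypothetical sketch of the fix; names are illustrative,
# not actual engine identifiers.

FILE_DOMAIN_TYPES = {"NFS", "POSIXFS", "GLUSTERFS", "LOCALFS"}

def effective_post_zero(wipe_after_delete: bool, storage_type: str) -> bool:
    """Return the postZero value the engine should send to VDSM.

    On file domains the file system handles block allocation, so wiping
    is unnecessary and postZero is forced to False. The disk's own
    wipe-after-delete property is left untouched, so it takes effect
    again if the disk is moved to a block domain (e.g. iSCSI or FCP).
    """
    if storage_type in FILE_DOMAIN_TYPES:
        return False
    return wipe_after_delete
```

With this logic, a disk created with "Wipe After Delete" on an NFS domain keeps the flag but is deleted without wiping; moving the same disk to a block domain makes the flag effective again.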
Clone Of:
Environment:
Last Closed: 2015-02-16 19:10:40 UTC
oVirt Team: Storage
Target Upstream Version:
scohen: Triaged+


Attachments
logs (deleted)
2014-09-14 11:42 UTC, Aharon Canan


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 31687 master MERGED core: WAD should be Ignored on File Domain Disks Never
oVirt gerrit 31905 ovirt-engine-3.5 MERGED core: WAD should be Ignored on File Domain Disks Never

Description Federico Simoncelli 2014-05-14 15:20:12 UTC
Description of problem:
When we delete volumes/images from file storage domains we shouldn't wipe them (postZero='false').

Version-Release number of selected component (if applicable):
rhevm-3.4.0-0.20.el6ev

How reproducible:
100%

Steps to Reproduce:
1. create a disk on a file domain enabling "Wipe After Delete"
2. delete the disk

Actual results:
Engine sends deleteImage with postZero='true' and a long task to wipe the image is initiated.

Expected results:
Engine shouldn't try to wipe the image on file domains (postZero='false')

Additional info:
This may affect some other flows such as deleting a snapshot.

Comment 4 Eyal Edri 2014-09-10 20:21:47 UTC
fixed in vt3, moving to on_qa.
if you believe this bug isn't released in vt3, please report to rhev-integ@redhat.com

Comment 5 Aharon Canan 2014-09-14 11:41:39 UTC
I am not sure why we need this tag in such a case;
I think it would be better to block it (greyed out), since in any case we are not really wiping.

Anyway, removing a disk with the "wipe after delete" flag marked really sends postZeros = false:

2014-09-14 14:27:55,133 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (org.ovirt.thread.pool-6-thread-17) [61cd8ee0] START, DeleteImageGroupVDSCommand( storagePoolId = 00000002-0002-0002-0002-0000000002d3, ignoreFailoverLimit = false, storageDomainId = 2d70ca06-a401-4113-bc94-b1fef8ca8efe, imageGroupId = 5fd3c90b-dfc5-4e52-b9bc-094113969055, postZeros = false, forceDelete = false), log id: 478e45b6


But, in the case of removing a snapshot of a disk (NFS domain), it still sends postZeros = true:

2014-09-14 14:33:35,660 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-10) [70904fe7] START, DeleteImageGroupVDSCommand( storagePoolId = 00000002-0002-0002-0002-0000000002d3, ignoreFailoverLimit = false, storageDomainId = 9a875929-df82-4d50-9659-617a2423ef13, imageGroupId = 662c44ca-f2d9-43bd-b69d-0f87349dbb2d, postZeros = true, forceDelete = false), log id: 570e5f39
2014-09-14 14:33:36,232 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-10) [70904fe7] START, DeleteImageGroupVDSCommand( storagePoolId = 00000002-0002-0002-0002-0000000002d3, ignoreFailoverLimit = false, storageDomainId = 9a875929-df82-4d50-9659-617a2423ef13, imageGroupId = fca441fc-8940-4c97-9092-a1b4a204b19e, postZeros = true, forceDelete = false), log id: 23622497
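The engine.log lines quoted above can be checked with a small helper like the following. This is a hypothetical sketch, not a shipped tool; the regular expression assumes the exact DeleteImageGroupVDSCommand START-line format shown in this comment:

```python
import re

# Matches the second "DeleteImageGroupVDSCommand(" occurrence in a START
# line (the first one, in the logger name, is followed by "]" not "(")
# and captures the image group id and the postZeros flag.
LOG_RE = re.compile(
    r"DeleteImageGroupVDSCommand\(.*?"
    r"imageGroupId = (?P<image>[0-9a-f-]+), "
    r"postZeros = (?P<postzeros>true|false)"
)

def extract_post_zeros(line: str):
    """Return (imageGroupId, postZeros) for a matching log line, else None."""
    m = LOG_RE.search(line)
    if not m:
        return None
    return m.group("image"), m.group("postzeros") == "true"
```

Running it over engine.log makes it easy to spot which delete flows still send postZeros = true.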

Comment 6 Aharon Canan 2014-09-14 11:42:04 UTC
Created attachment 937315 [details]
logs

Comment 7 Allon Mureinik 2014-09-14 12:54:32 UTC
(In reply to Aharon Canan from comment #5)
> I am not sure why we need this tag in such case, 
> i think it will be better to block it (greyed out) as in any case we are not
> really wiping.
This has been discussed, re-discussed and triple-discussed already.
See bug 1122510.


> anyway, 
> remove disk with "wipe after delete" flag marked really send postZeros =
> false
> [...]
> 
> but, 
> in case of removing snapshot of a disk (NFS domain) it still send postZeros
> = true
> [...]
Idan - treatment seems to be missing in RemoveDiskSnapshotTaskHandler - please take a look?

Comment 8 Allon Mureinik 2014-09-15 08:31:05 UTC
Although a VM's disk may be on NFS domains, this does not mean the MEMORY VOLUMES are necessarily on the same domain. 

In this case, domain 9a875929-df82-4d50-9659-617a2423ef13 referenced above is clearly a block domain, and thus should definitely be wiped:

2014-09-14 11:40:36,930 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp-/127.0.0.1:8702-8) [6dd4dae2] START, CreateVGVDSCommand(HostName = purple-vds1.qa.lab.tlv.redhat.com, HostId = c72d5509-b934-4757-ab28-b1cc107026b8, storageDomainId=9a875929-df82-4d50-9659-617a2423ef13, deviceList=[360060160f4a03000ddbee0108fdbe311, 360060160f4a03000debee0108fdbe311], force=true), log id: 79497d9b
2014-09-14 11:40:38,422 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp-/127.0.0.1:8702-8) [6dd4dae2] FINISH, CreateVGVDSCommand, return: JyucZr-vAxa-kDg5-sqmJ-oVf8-N0sZ-nA7tZp, log id: 79497d9b

Comment 9 Allon Mureinik 2014-09-15 08:32:08 UTC
(In reply to Allon Mureinik from comment #8)
> In this case, domain 9a875929-df82-4d50-9659-617a2423ef13 referenced above
> is clearly a block domain, and thus should definitely be wiped:
The volume on it, that is.

Since the VDSCommand is DeleteImageGroupVDSCommand, this is clearly a memory volume and not a disk volume.

Comment 10 Aharon Canan 2014-09-15 13:10:59 UTC
rechecked 

for snapshot - 
------------------
2014-09-15 16:07:32,240 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand] (ajp-/127.0.0.1:8702-2) [5cb9665a] START, MergeSnapshotsVDSCommand( storagePoolId = b347116f-be37-4dd3-a0f2-0fb229c0953d, ignoreFailoverLimit = false, storageDomainId = 37e6c10a-691b-42df-92d9-58449018d32a, imageGroupId = 7a8bcd96-66f7-4c04-9aa4-604b47bbf3fb, imageId = 40bd459a-f459-4901-84e3-aa53ea985eda, imageId2 = 687d59d1-3b5c-4594-a1d4-b0d8bea690fd, vmId = 16428ee6-cc30-4c4b-8863-b78280fb89bf, postZero = false), log id: 2b75b155

Comment 11 Allon Mureinik 2015-02-16 19:10:40 UTC
RHEV-M 3.5.0 has been released, closing this bug.

