Bug 1602007 - Detaching a storage domain that is in maintenance fails with: Cannot detach Storage. Related operation is currently in progress.
Summary: Detaching a storage domain that is in maintenance fails with: Cannot detach Storage. Related operation is currently in progress.
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.10
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: ---
Assignee: nobody nobody
QA Contact: Elad
Depends On:
Reported: 2018-07-17 15:13 UTC by Natalie Gavrielov
Modified: 2018-07-17 17:10 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-07-17 17:10:34 UTC
oVirt Team: Storage
Target Upstream Version:


Description Natalie Gavrielov 2018-07-17 15:13:44 UTC
Description of problem:
A strange case: detaching a storage domain (iSCSI, in this case) fails when attempted right after the domain is moved to maintenance.

Version-Release number of selected component (if applicable):

How reproducible:
Not sure, seems like an edge case.

Steps to Reproduce:
1. Move an iSCSI storage domain to maintenance.
2. Verify via the REST API that the storage domain's status is maintenance.
3. Detach the storage domain.
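The steps above can be sketched as a small client-side workaround for this race: poll the domain status until it reports maintenance, then retry the detach when the engine answers with the "Related operation is currently in progress" conflict. This is an illustrative sketch, not the oVirt SDK: `fetch_state` and `detach` are hypothetical callables standing in for the actual REST calls (GET on the data center's storagedomains collection, DELETE on the attachment).

```python
import time

def wait_for_state(fetch_state, wanted="maintenance", attempts=30, delay=2.0):
    """Poll fetch_state() until it returns `wanted`. Returns True on success,
    False if the state never appears within the attempt budget."""
    for _ in range(attempts):
        if fetch_state() == wanted:
            return True
        time.sleep(delay)
    return False

def detach_with_retry(detach, fetch_state, attempts=5, delay=3.0):
    """Call detach(); if the engine reports the in-progress conflict seen in
    this bug, back off and retry. RuntimeError is a stand-in for whatever
    error type the real client raises on an HTTP 409/conflict response."""
    for _ in range(attempts):
        if not wait_for_state(fetch_state):
            return False
        try:
            detach()
            return True
        except RuntimeError as err:
            if "Related operation is currently in progress" not in str(err):
                raise  # unrelated failure: propagate
            time.sleep(delay)  # engine still finalizing the deactivate; retry
    return False
```

The point of the retry is that, as this report shows, a REST status of `maintenance` does not guarantee the engine has finished the related internal operation, so a single immediate detach can still be rejected.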

Actual results:
Detach storage domain fails with:
2018-07-16 23:37:53,374 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (ajp-/ [] Operation Failed: [Cannot detach Storage. Related operation is currently in progress. Please try again later.]	

Expected results:
For the operation to succeed.
Prior to the detach operation we perform a GET via the REST API to check the storage domain's status and confirm it is in maintenance mode:
From art.log:
2018-07-16 23:37:53,244 - MainThread - storagedomains - DEBUG - GET request content is --  url:/api/datacenters/498e5036-bd7a-47e5-8e41-869032a56da3/storagedomains 
2018-07-16 23:37:53,310 - MainThread - storagedomains - DEBUG - Skipping duplicate log-messages...
2018-07-16 23:37:53,311 - MainThread - storagedomains - DEBUG - Response body for GET request is: 
    <storage_domain href="/api/datacenters/498e5036-bd7a-47e5-8e41-869032a56da3/storagedomains/aa7186d4-460a-4254-929a-84098b191fa8" id="aa7186d4-460a-4254-929a-84098b191fa8">
            <link href="/api/datacenters/498e5036-bd7a-47e5-8e41-869032a56da3/storagedomains/aa7186d4-460a-4254-929a-84098b191fa8/deactivate" rel="deactivate"/>
            <link href="/api/datacenters/498e5036-bd7a-47e5-8e41-869032a56da3/storagedomains/aa7186d4-460a-4254-929a-84098b191fa8/activate" rel="activate"/>
        <data_center href="/api/datacenters/498e5036-bd7a-47e5-8e41-869032a56da3" id="498e5036-bd7a-47e5-8e41-869032a56da3"/>
            <data_center id="498e5036-bd7a-47e5-8e41-869032a56da3"/>
            <state>maintenance</state>       <-- Here
            <volume_group id="CIx1zQ-CfH7-KA1m-QCNJ-wVd4-NLN4-e3cTQd">
                <logical_unit id="3600a09803830447a4f244c4657595044">
                    <product_id>LUN C-Mode</product_id>

Additional info:
3 seconds prior to detaching the storage domain we get the following in engine’s log:
2018-07-16 23:37:50,527 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-9) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Domain iscsi_0 (Data Center golden_env_mixed) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center.

Comment 2 Tal Nisan 2018-07-17 17:10:34 UTC
It's 3.6, and this seems like a race and an edge case. I'm closing it for that reason; please reopen if it occurs again.
