
Bug 1516330

Summary: [RFE][Cinder] Allow moving disk images between Cinder "volume types".
Product: [oVirt] ovirt-engine
Reporter: Konstantin Shalygin <k0ste>
Component: BLL.Storage
Assignee: Tal Nisan <tnisan>
Status: NEW ---
QA Contact: Elad <ebenahar>
Severity: medium
Docs Contact:
Priority: high
Version: 4.1.6
CC: bugs, ebenahar
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: ylavi: ovirt-4.3?
       ylavi: planning_ack+
       rule-engine: devel_ack?
       rule-engine: testing_ack?
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1539837
Attachments:
  How it looks on oVirt. (flags: none)

Description Konstantin Shalygin 2017-11-22 13:53:29 UTC
Description of problem:

At this time it is impossible to move an oVirt disk from one Cinder "volume type" to another.
For example: moving a disk from fast (NVMe-based) storage to cold (HDD-based) storage.

Version-Release number of selected component (if applicable):
4.1.6

Actual results:
Moving the disk is not possible.

Expected results:
Moving a disk between Cinder "volume types" can be done from oVirt.
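
For reference: on the pure Cinder side this kind of move already exists as a volume "retype"
with an optional migration between backends. A minimal sketch, assuming a target volume type
named "solid-rbd" and <volume-id> as a placeholder:

# retype the volume; Cinder migrates the data to the new backend if needed
cinder retype --migration-policy on-demand <volume-id> solid-rbd

Presumably oVirt would have to drive something like this through its Cinder integration.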

Additional info:
For now I do the move manually: create a new disk on the target pool, migrate the data via cp/rsync or qemu-img, then delete the old disk.
This is actually the only critical missing feature after a year of oVirt + Ceph usage.
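
A minimal sketch of the qemu-img variant of that workaround (image names are placeholders;
it assumes the new disk was already created on the target pool through oVirt/Cinder and the
VM is shut down during the copy):

# -n skips creating the target, because the destination volume already exists in Cinder
qemu-img convert -p -n -f raw -O raw \
    rbd:replicated_rbd/volume-<old-uuid>:id=cinder:conf=/etc/ceph/ceph.conf \
    rbd:solid_rbd/volume-<new-uuid>:id=cinder:conf=/etc/ceph/ceph.conf

The old disk is then detached and deleted in oVirt and the new one attached in its place.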

Comment 1 Konstantin Shalygin 2017-11-22 14:06:50 UTC
Created attachment 1357527 [details]
How it looks on oVirt.

Screenshot from oVirt. On the Cinder side it looks like this:

[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true

[ec-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ec-rbd
rbd_pool = ec_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ab3b9537-c7ee-4ffb-af47-5ae3243acf70
report_discard_supported = true

[solid-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = solid-rbd
rbd_pool = solid_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = f420a0d4-1681-463f-ab2a-f85e216ada77
report_discard_supported = true
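
Each [section] above is a separate Cinder backend, and each backend is normally exposed to
users as a volume type tied to it via volume_backend_name. A rough sketch of the matching
type definitions, assuming enabled_backends in cinder.conf lists all three sections:

cinder type-create replicated-rbd
cinder type-key replicated-rbd set volume_backend_name=replicated-rbd
cinder type-create ec-rbd
cinder type-key ec-rbd set volume_backend_name=ec-rbd
cinder type-create solid-rbd
cinder type-key solid-rbd set volume_backend_name=solid-rbd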


And on the Ceph side:

[root@ceph-mon0 ceph]# ceph osd pool ls
replicated_rbd
ec_rbd
ec_cache
solid_rbd

Comment 3 Sandro Bonazzola 2019-01-28 09:34:24 UTC
This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing 4.3.0 tomorrow, January 29th, this bug has been re-targeted to 4.3.1.