Bug 1516330 - [RFE][Cinder] Allow to move disk images between "volume types" for Cinder.
Summary: [RFE][Cinder] Allow to move disk images between "volume types" for Cinder.
Keywords:
Status: NEW
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.6
Hardware: All
OS: All
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Tal Nisan
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks: 1539837
 
Reported: 2017-11-22 13:53 UTC by Konstantin Shalygin
Modified: 2019-03-28 15:16 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
oVirt Team: Storage
ylavi: ovirt-4.3?
ylavi: planning_ack+
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
How it looks on oVirt. (deleted), 2017-11-22 14:06 UTC, Konstantin Shalygin

Description Konstantin Shalygin 2017-11-22 13:53:29 UTC
Description of problem:

At this time it is impossible to move an oVirt disk from one Cinder "volume type" to another.
For example: moving a disk from fast (NVMe-based) storage to cold (HDD-based) storage.

Version-Release number of selected component (if applicable):
4.1.6

Actual results:
Moving a disk between volume types is not possible.

Expected results:
Moving a disk between Cinder "volume types" can be done from oVirt.
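
For context, Cinder itself exposes this operation as a volume retype; a minimal sketch of the equivalent manual call, with a placeholder volume ID and one of my backends assumed as the target type:

# Illustrative only: retype a volume to a different volume type,
# migrating its data between backends on demand.
cinder retype --migration-policy on-demand <volume-id> solid-rbd

The RFE is about being able to trigger this retype from oVirt.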

Additional info:
For now I work around this by creating a new disk on the target pool, migrating the data with cp/rsync or qemu-img, and then deleting the old disk (see the sketch below).
This is the only critical missing feature after a year of oVirt + Ceph usage.
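
A rough sketch of that workaround at the qemu-img/rbd level (the volume ID is a placeholder, and the disk is assumed to be detached while copying):

# Sketch of the manual workaround, not an exact transcript of my commands.
# Copy the image from the fast pool to the cold pool, then remove the original.
qemu-img convert -p -f raw -O raw rbd:replicated_rbd/volume-<id> rbd:solid_rbd/volume-<id>
rbd rm replicated_rbd/volume-<id>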

Comment 1 Konstantin Shalygin 2017-11-22 14:06:50 UTC
Created attachment 1357527 [details]
How it looks on oVirt.

Screenshot from oVirt. On the Cinder side it looks like this:

[replicated-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = replicated-rbd
rbd_pool = replicated_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e0828f39-2832-4d82-90ee-23b26fc7b20a
report_discard_supported = true

[ec-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ec-rbd
rbd_pool = ec_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = ab3b9537-c7ee-4ffb-af47-5ae3243acf70
report_discard_supported = true

[solid-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = solid-rbd
rbd_pool = solid_rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = f420a0d4-1681-463f-ab2a-f85e216ada77
report_discard_supported = true


And on ceph:

[root@ceph-mon0 ceph]# ceph osd pool ls
replicated_rbd
ec_rbd
ec_cache
solid_rbd
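
For reference, these backends are tied to Cinder volume types roughly as follows (the type names here are illustrative, not copied from this deployment):

# Illustrative: one volume type per backend, bound via volume_backend_name.
cinder type-create replicated-rbd
cinder type-key replicated-rbd set volume_backend_name=replicated-rbd
cinder type-create ec-rbd
cinder type-key ec-rbd set volume_backend_name=ec-rbd
cinder type-create solid-rbd
cinder type-key solid-rbd set volume_backend_name=solid-rbd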

Comment 3 Sandro Bonazzola 2019-01-28 09:34:24 UTC
This bug has not been marked as a blocker for oVirt 4.3.0.
Since we are releasing 4.3.0 tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

