Bug 1081536 - [RFE] Making VM pools able to allocate VMs to multiple storage domains to balance disk usage
Summary: [RFE] Making VM pools able to allocate VMs to multiple storage domains to balance disk usage
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: All
OS: Linux
Target Milestone: ovirt-4.1.0-beta
Assignee: Shahar Havivi
QA Contact: sefi litmanovich
Duplicates: 1062441 (view as bug list)
Depends On: 1356488
Blocks: 1066135 1415559
Reported: 2014-03-27 14:25 UTC by Luca Villa
Modified: 2018-03-12 16:23 UTC (History)
19 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
With this release, when creating virtual machine pools using a template that is present in more than one storage domain, virtual machine disks can be distributed to multiple storage domains by selecting "Auto select target" in New Pool -> Resource Allocation -> Disk Allocation.
Clone Of:
Last Closed: 2017-04-25 00:47:13 UTC
oVirt Team: Virt
Target Upstream Version:
sherold: Triaged+
mavital: testing_plan_complete+

Attachments

System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1129251 None None None Never
Red Hat Bugzilla 1425493 None None None Never
Red Hat Knowledge Base (Solution) 768663 None None None Never
Red Hat Product Errata RHEA-2017:0997 normal SHIPPED_LIVE Red Hat Virtualization Manager (ovirt-engine) 4.1 GA 2017-04-18 20:11:26 UTC
oVirt gerrit 61274 master MERGED core: [RFE] Allocate VmPools disks to multiple storage domains 2016-08-15 12:55:40 UTC
oVirt gerrit 61275 master MERGED ui: [RFE] Allocate VmPools disks to multiple storage domains 2016-08-15 12:56:34 UTC

Internal Links: 1129251 1425493

Description Luca Villa 2014-03-27 14:25:43 UTC
Nature and description of the request:

The same template in RHEV can be copied to more than one storage domain (SD).
When a VM is created from such a template, its disks are allocated by default
on the least-used SD. The same applies when a pool is created; however, when a
new VM within the pool is instantiated, its disks are allocated on the same SD
as the pool, even if there are less-used SDs where the template also resides.
This request is to make RHEV capable of dynamically allocating the disks
of a VM in a pool based on the level of usage among SDs.
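The "least used SD" behavior described above for standalone VMs can be sketched as follows. This is an illustrative Python sketch only, not ovirt-engine code: StorageDomain, the field names, and pick_least_used are all hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical model of a storage domain's capacity; not an actual
# ovirt-engine type.
@dataclass
class StorageDomain:
    name: str
    used_gb: int
    total_gb: int

    @property
    def free_gb(self) -> int:
        return self.total_gb - self.used_gb

def pick_least_used(template_domains: list[StorageDomain]) -> StorageDomain:
    # Among the SDs that hold a copy of the template, choose the one
    # with the most free space ("least used").
    return max(template_domains, key=lambda sd: sd.free_gb)

domains = [
    StorageDomain("sd1", used_gb=800, total_gb=1000),
    StorageDomain("sd2", used_gb=300, total_gb=1000),
]
print(pick_least_used(domains).name)  # sd2
```

The RFE asks that pool VMs get the same treatment instead of always landing on the pool's own SD.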

Comment 5 Michal Skrivanek 2014-08-22 11:07:39 UTC
Complexity depends on how fancy this needs to be.
If we just do a dumb round robin at pool creation (or extension) time, it's not difficult,
as long as we keep it simple, i.e. at pool creation time only.
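The "dumb round robin" proposed above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the merged implementation; assign_round_robin and the names are made up.

```python
from itertools import cycle

def assign_round_robin(vm_names, template_domains):
    # At pool creation (or extension) time, hand each new VM's disks to
    # the next SD holding the template, in rotation.
    rotation = cycle(template_domains)
    return {vm: next(rotation) for vm in vm_names}

assignment = assign_round_robin(["pool-1", "pool-2", "pool-3"], ["sd1", "sd2"])
print(assignment)  # {'pool-1': 'sd1', 'pool-2': 'sd2', 'pool-3': 'sd1'}
```

Because the assignment happens only once, at creation time, no live rebalancing logic is needed, which keeps it simple as suggested.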

Comment 8 Michal Skrivanek 2015-03-31 09:57:01 UTC
*** Bug 1062441 has been marked as a duplicate of this bug. ***

Comment 11 Michal Skrivanek 2015-06-05 12:13:36 UTC
This bug did not make it in time for the 3.6 release; moving it out.

Comment 12 sefi litmanovich 2016-12-04 11:06:10 UTC
1. Please review the attached test cases for this RFE and let me know, either here or by private message, whether I should change/remove/add anything.

2. Please see related bz -

3. Maybe we could add a feature that lets the user remove a VM's disk from one SD, which would immediately re-create the disk on the other SD? Not sure whether this is useful, just a thought.

Comment 13 Yaniv Lavi 2016-12-14 16:18:40 UTC
This bug had the requires_doc_text flag set, yet no documentation text was provided. Please add the documentation text and only then set this flag.

Comment 14 sefi litmanovich 2017-02-21 16:52:24 UTC
Verified on rhevm-, on a host with vdsm-4.19.6-1.el7ev.x86_64, according to the attached test cases.

Test run:

There's only one bug, which doesn't block the feature but does limit the ability to use it:
