Bug 1051037 - VM starts on the wrong host under power saving policy
Summary: VM starts on the wrong host under power saving policy
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Martin Sivák
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On:
Blocks: rhev3.5beta 1156165
 
Reported: 2014-01-09 15:32 UTC by Artyom
Modified: 2016-02-10 20:17 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-27 09:16:18 UTC
oVirt Team: SLA
Target Upstream Version:



Description Artyom 2014-01-09 15:32:44 UTC
Description of problem:
With two hosts, one at 50% CPU load and the other at 0%, starting a new VM places it on the host with 0% CPU load.

Version-Release number of selected component (if applicable):
is31

How reproducible:
Always

Steps to Reproduce:
1. Add two hosts, create a new VM, run the VM, note which host it started on (for example host_1), then power off the VM.
2. Change the cluster policy to power saving (with default parameters) and load host_2's CPU to 50%.
3. Run the VM.

Actual results:
The VM runs on host_1.

Expected results:
The VM runs on host_2 (because the power_saving weight module is active).

Additional info:
Both hosts have identical CPUs (same number of CPUs and cores per CPU).
No logs are attached, because they contain no useful information about the internal scheduling process.

Comment 1 Artyom 2014-01-22 15:57:30 UTC
After some discussion with Martin, it turned out that the weight module only looks at a host's CPU usage once that host runs at least one VM. As a result, you can quickly start many VMs on the host with 0% CPU usage, after which the balancing policy migrates them to the host with 50% CPU usage, one VM per balancing round. The question now is whether this is the desired behavior: if it is, I will simply close this bug; if not, I will close it and open an RFE instead.
Martin, your decision?
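The behavior described above can be illustrated with a small sketch. This is hypothetical Python, not the actual ovirt-engine code; the function and field names (effective_cpu_load, power_saving_pick, cpu_load, vm_count) are made up for illustration. It models the reported logic: the weight module ignores a host's CPU usage until that host runs at least one VM, so external (non-VM) load is invisible to the scheduler.

```python
def effective_cpu_load(host):
    """CPU load as seen by the scheduler, per the behavior in this report."""
    if host["vm_count"] == 0:
        return 0  # non-VM load is ignored until the first VM starts
    return host["cpu_load"]

def power_saving_pick(hosts):
    """Power saving prefers the *most* loaded host (consolidation),
    so fewer hosts stay busy and the rest can be powered down."""
    return max(hosts, key=effective_cpu_load)["name"]

hosts = [
    {"name": "host_1", "cpu_load": 0,  "vm_count": 0},
    {"name": "host_2", "cpu_load": 50, "vm_count": 0},  # loaded by an external process
]

# Both hosts report an effective load of 0, so the tie falls back to
# ordering and the VM lands on host_1 instead of the expected host_2.
print(power_saving_pick(hosts))  # prints "host_1"
```

Once host_2 runs at least one VM, its real load becomes visible and the same selection would pick host_2, which matches the "works after the first VM" observation.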

Comment 2 Martin Sivák 2014-02-18 09:22:09 UTC
Probably close or repurpose this bug and file the RFE. This issue only happens when you start the first VM in the cluster, so it can be considered a corner case.

Comment 3 Martin Sivák 2014-08-27 09:16:18 UTC
Technically, we do not support loading a host by anything other than VMs. The logic will work properly after the first VM is started.

If you want to change this behaviour, file an RFE and propose a new behaviour.

