Bug 1359947 - Very slow undercloud while scaling up an overcloud
Summary: Very slow undercloud while scaling up an overcloud
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: instack-undercloud
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: async
Target Release: 7.0 (Kilo)
Assignee: James Slagle
QA Contact: Arik Chernetsky
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-25 21:36 UTC by David Hill
Modified: 2018-02-28 17:45 UTC (History)
CC List: 8 users

Fixed In Version:
Doc Type:
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-28 17:45:11 UTC


Attachments: none

Description David Hill 2016-07-25 21:36:27 UTC
Description of problem:
The undercloud becomes very slow while scaling up an overcloud from 31 to 42 compute nodes.

There doesn't appear to be any I/O contention on the disks or memory pressure, and yet it takes 2 to 4 hours before we get a success/failure while adding those 11 new compute nodes.

Version-Release number of selected component (if applicable):


How reproducible:
Every time

Steps to Reproduce:
1. Scale up/down the computes.
2.
3.

Actual results:
The operation is very slow before we eventually get a failure.

Expected results:
The scale-up should complete faster.

Additional info:
This is an undercloud running in KVM with 4 vCPUs, 26 GB of RAM, and disks mounted from an HP 3PAR array. It seems like MySQL needs a bit of cleanup: the Heat raw_template table is 16 GB, and the Ceilometer database is also quite large (around 30 GB).
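A possible way to confirm and clean up the database bloat described above, sketched for a Kilo undercloud. The table sizes, granularity, and retention period below are assumptions for illustration; verify them on the affected system before purging anything, and take a database backup first.

```shell
# List the ten largest tables by on-disk size (data + indexes) to confirm
# that raw_template and the Ceilometer tables are the main offenders.
mysql -e "
  SELECT table_schema, table_name,
         ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
  ORDER BY (data_length + index_length) DESC
  LIMIT 10;"

# Purge Heat records for stacks deleted more than 30 days ago; this is
# what trims raw_template, since templates of deleted stacks are kept
# until purged. The 30-day age is an example value.
heat-manage purge_deleted 30

# Expire old Ceilometer samples. This only deletes data if a
# time_to_live is configured in the [database] section of
# ceilometer.conf; otherwise it is a no-op.
ceilometer-expirer
```

Running the purge commands from cron periodically would keep the tables from growing back to this size, but they operate on a live OpenStack database, so they are shown here only as a sketch.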

Comment 4 Alex Schultz 2018-02-28 17:45:11 UTC
Closing as we won't be backporting this to 7.  If this is still happening on newer versions, please feel free to open up a new BZ or reopen this one with additional details.

