Bug 1512703 - Long time to refresh network provider on OpenStack
Summary: Long time to refresh network provider on OpenStack
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.10.0
Assignee: Sam Lucidi
QA Contact: Jadh
URL:
Whiteboard:
Depends On:
Blocks: 1468726 1554541 1554543
 
Reported: 2017-11-13 21:37 UTC by Andrea Perotti
Modified: 2018-08-01 02:48 UTC
CC: 9 users

Fixed In Version: 5.10.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1554541 1554543
Environment:
Last Closed: 2018-08-01 02:48:08 UTC
Category: ---
Cloudforms Team: Openstack



Description Andrea Perotti 2017-11-13 21:37:43 UTC
Description of problem:
When CloudForms refreshes an OpenStack network manager, it takes more than 5 minutes; refreshing a cloud manager or storage manager takes about one minute.

Compare the two refresh log excerpts below:

Worker PID:             2930
Message ID:             1000000461622
Message fetch time:     2017-10-30T10:17:48.535652
Message time in queue:  11.466717693 seconds
Provider:               Openstack::CloudManager
EMS Name:               OSP9
Refresh type:           full
Refresh start time:     2017-10-30T10:17:48.623460
Refresh timings:
  collect_inventory_for_targets:       0.002257 seconds
  parse_legacy_inventory:              25.098824 seconds
  parse_targeted_inventory:            25.099295 seconds
  save_inventory:                      3.148565 seconds
  ems_refresh:                         28.250349 seconds
Refresh end time:       2017-10-30T10:18:16.874024
Message delivered time: 2017-10-30T10:18:17.001680
Message state:          ok
Message delivered in:   28.465715544 seconds


---
Worker PID:             2940
Message ID:             1000000461781
Message fetch time:     2017-10-30T10:22:11.088119
Message time in queue:  234.240138968 seconds
Provider:               Openstack::NetworkManager
EMS Name:               OSP9 Network Manager
Refresh type:           full
Refresh start time:     2017-10-30T10:22:11.092803
Refresh timings:
  collect_inventory_for_targets:       0.002506 seconds
  parse_legacy_inventory:              241.966221 seconds
  parse_targeted_inventory:            241.966239 seconds
  save_inventory:                      5.755467 seconds
  ems_refresh:                         247.724421 seconds
Refresh end time:       2017-10-30T10:26:18.817420
Message delivered time: 2017-10-30T10:26:18.829892
Message state:          ok
Message delivered in:   247.741669532 seconds
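For scale, the slowdown can be quantified directly from the figures in the two excerpts above (a quick arithmetic check, using only the logged timings):

```python
# Timings copied from the two refresh log excerpts above (seconds).
cloud_manager = {
    "parse_legacy_inventory": 25.098824,
    "save_inventory": 3.148565,
    "ems_refresh": 28.250349,
}
network_manager = {
    "parse_legacy_inventory": 241.966221,
    "save_inventory": 5.755467,
    "ems_refresh": 247.724421,
}

# How many times slower each step is on the network manager.
for step in cloud_manager:
    ratio = network_manager[step] / cloud_manager[step]
    print(f"{step}: {ratio:.1f}x slower")
```

The bulk of the difference is in parse_legacy_inventory (roughly 9-10x slower), which dominates the network manager's total ems_refresh time.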

Version-Release number of selected component (if applicable):
CFME 4.5

How reproducible:
Always

Comment 3 Sam Lucidi 2017-11-14 18:01:49 UTC
There is a pull request waiting to be backported that should resolve the slow refresh of the network manager. https://github.com/ManageIQ/manageiq/pull/16427

Comment 6 Sam Lucidi 2017-11-28 19:23:25 UTC
The PR for Fine has been merged, moving to POST.

Comment 19 Jadh 2018-01-25 13:38:59 UTC
Verified on RHOS 10, CFME 5.8.3.1

1. Created 100 cloud tenants in OSP
2. Added admin as a member to all tenants
3. Performed a refresh
4. Measured the time until all tenants appeared on the Cloud Tenants page


Measured time: 4 minutes

Comment 20 Jadh 2018-01-25 15:26:17 UTC
Also set is_admin to true in settings.yaml:

:ems:
  :ems_openstack:
    :excon:
      :omit_default_port: true
      :read_timeout: 60
    :refresh:
      :is_admin: true

Comment 21 Sam Lucidi 2018-03-08 17:29:46 UTC
For master: https://github.com/ManageIQ/manageiq-providers-openstack/pull/216 (already merged)

Fine backport: https://github.com/ManageIQ/manageiq/pull/16695

