Bug 1689674 - Cannot recreate Octavia Load Balancer in ERROR state [NEEDINFO]
Summary: Cannot recreate Octavia Load Balancer in ERROR state
Keywords:
Status: ON_QA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: zstream
Target Release: 14.0 (Rocky)
Assignee: Nir Magnezi
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On: 1697794
Blocks: 1689679
 
Reported: 2019-03-17 14:58 UTC by Bruna Bonguardo
Modified: 2019-04-15 10:41 UTC
CC List: 7 users

Fixed In Version: openstack-octavia-3.0.2-0.20181219195055.ec4c88e.el7ost
Doc Type: Bug Fix
Doc Text:
Fix load balancers that could not be failed over when in ERROR provisioning status.
Clone Of:
Clones: 1689679
Environment:
Last Closed:
Target Upstream Version:
Flags: bbonguar: needinfo? (nmagnezi)


Attachments


Links
System            ID      Priority  Status  Summary  Last Updated
OpenStack gerrit  638790  None      None    None     2019-03-17 15:51:55 UTC
OpenStack gerrit  643005  None      None    None     2019-03-17 15:51:55 UTC

Description Bruna Bonguardo 2019-03-17 14:58:16 UTC
Description of problem:

After restarting a controller node, all the Octavia Load Balancers and their respective amphorae go to an ERROR state. There is no way to bring the Load Balancers back up without deleting and recreating them manually.
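
For reference, the manual workaround mentioned above looks roughly like the sketch below. The load balancer name (lb1) and the VIP subnet ID are placeholders, and the recreate options have to match the original configuration (listeners, pools, and members are not restored by this):

# Delete the broken load balancer together with its child objects
# (listeners, pools, members), then recreate it from scratch.
openstack loadbalancer delete --cascade lb1
openstack loadbalancer create --name lb1 --vip-subnet-id <vip-subnet-id>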


Version-Release number of selected component (if applicable):

OpenStack version 14. Puddle from January 17, 2019.


How reproducible: 100%


Steps to Reproduce:

1. Create an environment with one Undercloud, one Controller and one Compute.
2. Create multiple Octavia Load Balancers (see the command sketch after these steps).
3. Restart the Controller node. All Load Balancers and amphorae will go to an ERROR state; the controller restart triggers a scheduling issue between the Compute and Controller nodes.
4. Restart the Compute node.
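
A minimal command sketch for step 2 and the subsequent status checks, assuming the standard python-octaviaclient CLI; lb1, lb2, and the VIP subnet ID are placeholders:

# Step 2: create a few load balancers
openstack loadbalancer create --name lb1 --vip-subnet-id <vip-subnet-id>
openstack loadbalancer create --name lb2 --vip-subnet-id <vip-subnet-id>

# After the controller restart (step 3), check the provisioning status of
# the load balancers and their amphorae
openstack loadbalancer list -c id -c name -c provisioning_status
openstack loadbalancer amphora list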

Actual results:

All EXISTING Octavia Load Balancers and Amphorae go to an ERROR state.
There is no way to bring the Load Balancers and Amphorae back up, even after restarting the Octavia containers on the controller (see the sketch below).
All NEW Load Balancers are created without errors. 
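
For clarity, restarting the Octavia containers on the controller means roughly the following; apart from octavia_api (shown later in this bug), the container names are assumptions based on the usual OSP 14 naming and may differ in a given deployment:

# Restart the Octavia service containers on the controller
docker restart octavia_api octavia_worker octavia_health_manager octavia_housekeeping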

Expected results:

We should be able to RECREATE/REBUILD Load Balancers that are in ERROR state; once the Compute issue is resolved, the recreated Load Balancers should be in ACTIVE state.
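
With the fix referenced in the Doc Text (load balancers in ERROR provisioning status can be failed over), the recovery path should look roughly like this sketch; <lb-id> is a placeholder:

# Trigger a failover of the amphora(e) backing a load balancer in ERROR
openstack loadbalancer failover <lb-id>

# Once the failover completes, the load balancer should report ACTIVE
openstack loadbalancer show <lb-id> -c provisioning_status -c operating_status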

Comment 13 Bruna Bonguardo 2019-04-14 13:49:10 UTC
FYI, this bug depends on bug https://bugzilla.redhat.com/show_bug.cgi?id=1697794
The octavia_api container is in "Restarting" state and the Octavia API returns "Service Unavailable (HTTP 503)", as follows:

(undercloud) [stack@undercloud-0 ~]$ cat /etc/yum.repos.d/latest-installed 
14  -p 2019-04-05.1

(overcloud) [stack@undercloud-0 ~]$ openstack service list
+----------------------------------+------------+----------------+
| ID                               | Name       | Type           |
+----------------------------------+------------+----------------+
| 0afcb8db5e25487a833135266c4c8296 | octavia    | load-balancer  |
| 2693e14bfe8e4bfdba26e024abd5f890 | neutron    | network        |
| 385e9a5cbe6f4677b5ef95fb6d836a64 | placement  | placement      |
| 39bbe7e9674d4a84affb550f732fb3dc | swift      | object-store   |
| 3e9fb9d5bf2f4360b0081f7098aa41a5 | panko      | event          |
| 4096bd4f408647d1843cf97372633ae3 | aodh       | alarming       |
| 49286be8ea974495b91a6b45555826c1 | gnocchi    | metric         |
| 4f60162ba8314f93b4e022dff6ae4de5 | ceilometer | metering       |
| 503f647797fe4ebfb7b32844ec7814e0 | glance     | image          |
| 6875c54de16d45fa86fa184b9ff0c494 | heat-cfn   | cloudformation |
| 6f2188f955764c85bfccc40c8ed176ba | heat       | orchestration  |
| 8b88a1251f3e4122a782dbf6f380f89a | keystone   | identity       |
| 8f77467ac3c2416a85d270945c6d7e56 | cinderv2   | volumev2       |
| 9b45e9eaf66a4336940744185ad823bb | cinder     | volume         |
| 9b8f77fbc98c4a2197732054bb37ea1f | cinderv3   | volumev3       |
| ebf0ebf9c49a4bcd9f1d9d483b3ab298 | nova       | compute        |
+----------------------------------+------------+----------------+

On controller:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer list
Service Unavailable (HTTP 503)

[root@controller-0 ~]# docker ps | grep octavia_api
b7ee0425d9c0        192.168.24.1:8787/rhosp14/openstack-octavia-api:2019-03-28.1                 "kolla_start"            4 days ago          Restarting (126) 40 hours ago                       octavia_api
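
To see why octavia_api keeps restarting, the container logs and restart count can be inspected on the controller; these are generic docker commands, not output captured from this environment:

# Last log lines and restart count of the octavia_api container
docker logs --tail 50 octavia_api
docker inspect -f '{{ .RestartCount }} ({{ .State.Status }})' octavia_api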

