Bug 1512375 - healthmonitor did not take effect
Summary: healthmonitor did not take effect
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron-lbaas
Version: 10.0 (Newton)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: z7
Target Release: 10.0 (Newton)
Assignee: Carlos Goncalves
QA Contact: Alexander Stafeyev
URL:
Whiteboard:
Depends On:
Blocks: 1522711 1522713
Reported: 2017-11-13 03:28 UTC by Meiyan Zheng
Modified: 2018-02-27 16:41 UTC
CC: 10 users

Fixed In Version: openstack-neutron-lbaas-9.2.2-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1522711 1522713
Environment:
Last Closed: 2018-02-27 16:41:21 UTC




Links
System                  ID              Priority  Status        Summary                             Last Updated
Red Hat Product Errata  RHBA-2018:0357  normal    SHIPPED_LIVE  openstack-neutron bug fix advisory  2018-02-27 21:38:41 UTC
OpenStack gerrit        521250          None      None          None                                2017-11-19 10:48:06 UTC

Description Meiyan Zheng 2017-11-13 03:28:47 UTC
Description of problem:

Create a health monitor and attach it to a pool, then shut down or delete one of the pool members. The member's operating_status is never updated (e.g. to OFFLINE or ERROR), even after the health monitor's maximum number of failed retries is reached for that instance.



Version-Release number of selected component (if applicable):
RHOSP10
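
Whether a given deployment already carries the fix can be checked with a standard RPM query; the comparison against the Fixed In Version above is the only assumption here:

$ rpm -q openstack-neutron-lbaas
# Builds older than openstack-neutron-lbaas-9.2.2-2.el7ost (the Fixed In
# Version noted above) are expected to exhibit this behavior.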


How reproducible:


Steps to Reproduce:
1. Create an LBaaS v2 load balancer with a listener, a pool, two members, and an HTTP health monitor (a hedged command sketch follows the instance listing below):

$ nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| e7e10318-ac85-4191-8a56-fed46f2422f9 | node1 | ACTIVE | -          | Running     | private=10.10.1.103 |
| 8d691b56-cedb-4723-82f4-65938e2f71a8 | node2 | ACTIVE | -          | Running     | private=10.10.1.102 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
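
The report does not show the creation commands themselves. The following is a minimal sketch of how such a topology is typically built with the neutron LBaaS v2 CLI; the subnet name (private-subnet) and the health monitor timing values are assumptions, not taken from this report:

$ neutron lbaas-loadbalancer-create --name lb1 private-subnet
$ neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
    --protocol HTTP --protocol-port 80
$ neutron lbaas-pool-create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
$ neutron lbaas-member-create --subnet private-subnet --address 10.10.1.103 \
    --protocol-port 80 pool1
$ neutron lbaas-member-create --subnet private-subnet --address 10.10.1.102 \
    --protocol-port 80 pool1
# Delay/timeout/max-retries values below are illustrative assumptions.
$ neutron lbaas-healthmonitor-create --pool pool1 --type HTTP \
    --delay 5 --timeout 5 --max-retries 3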

$ neutron lbaas-loadbalancer-status  041b24a3-55e1-4880-b04f-cf22ab057b30
{
    "loadbalancer": {
        "name": "lb1", 
        "provisioning_status": "ACTIVE", 
        "listeners": [
            {
                "name": "listener1", 
                "provisioning_status": "ACTIVE", 
                "pools": [
                    {
                        "name": "pool1", 
                        "provisioning_status": "ACTIVE", 
                        "healthmonitor": {
                            "provisioning_status": "ACTIVE", 
                            "type": "HTTP", 
                            "id": "083f068f-3ee1-471b-a6ba-2438648078c0", 
                            "name": ""
                        }, 
                        "members": [
                            {
                                "name": "", 
                                "provisioning_status": "ACTIVE", 
                                "address": "10.10.1.103", 
                                "protocol_port": 80, 
                                "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f", 
                                "operating_status": "ONLINE"
                            }, 
                            {
                                "name": "", 
                                "provisioning_status": "ACTIVE", 
                                "address": "10.10.1.102", 
                                "protocol_port": 80, 
                                "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a", 
                                "operating_status": "ONLINE"
                            }
                        ], 
                        "id": "17c63199-b63e-4f22-83a3-baea98345923", 
                        "operating_status": "ONLINE"
                    }
                ], 
                "l7policies": [], 
                "id": "2c1f8311-d834-45af-8efb-05ff1684e2a6", 
                "operating_status": "ONLINE"
            }
        ], 
        "pools": [
            {
                "name": "pool1", 
                "provisioning_status": "ACTIVE", 
                "healthmonitor": {
                    "provisioning_status": "ACTIVE", 
                    "type": "HTTP", 
                    "id": "083f068f-3ee1-471b-a6ba-2438648078c0", 
                    "name": ""
                }, 
                "members": [
                    {
                        "name": "", 
                        "provisioning_status": "ACTIVE", 
                        "address": "10.10.1.103", 
                        "protocol_port": 80, 
                        "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f", 
                        "operating_status": "ONLINE"
                    }, 
                    {
                        "name": "", 
                        "provisioning_status": "ACTIVE", 
                        "address": "10.10.1.102", 
                        "protocol_port": 80, 
                        "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a", 
                        "operating_status": "ONLINE"
                    }
                ], 
                "id": "17c63199-b63e-4f22-83a3-baea98345923", 
                "operating_status": "ONLINE"
            }
        ], 
        "id": "041b24a3-55e1-4880-b04f-cf22ab057b30", 
        "operating_status": "ONLINE"
    }
}
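
On RHOSP 10 the LBaaS v2 provider is typically the haproxy namespace driver, which renders the health monitor into haproxy check directives. One way to confirm the monitor actually reached the data plane is to inspect the generated haproxy configuration; the state path below follows the driver's usual layout but is an assumption, not taken from this report:

$ sudo grep -E 'httpchk|check' \
    /var/lib/neutron/lbaas/v2/041b24a3-55e1-4880-b04f-cf22ab057b30/haproxy.conf
# Expect an "option httpchk" line on the backend and "check inter ... fall ..."
# on each member's server line if the monitor was rendered.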



2. Delete one of the members:
  
$ nova delete e7e10318-ac85-4191-8a56-fed46f2422f9

$ nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks            |
+--------------------------------------+-------+--------+------------+-------------+---------------------+
| 8d691b56-cedb-4723-82f4-65938e2f71a8 | node2 | ACTIVE | -          | Running     | private=10.10.1.102 |
+--------------------------------------+-------+--------+------------+-------------+---------------------+


3. Check the LBaaS status again:

$ neutron lbaas-loadbalancer-status  041b24a3-55e1-4880-b04f-cf22ab057b30
{
    "loadbalancer": {
        "name": "lb1", 
        "provisioning_status": "ACTIVE", 
        "listeners": [
            {
                "name": "listener1", 
                "provisioning_status": "ACTIVE", 
                "pools": [
                    {
                        "name": "pool1", 
                        "provisioning_status": "ACTIVE", 
                        "healthmonitor": {
                            "provisioning_status": "ACTIVE", 
                            "type": "HTTP", 
                            "id": "083f068f-3ee1-471b-a6ba-2438648078c0", 
                            "name": ""
                        }, 
                        "members": [
                            {
                                "name": "", 
                                "provisioning_status": "ACTIVE", 
                                "address": "10.10.1.103", 
                                "protocol_port": 80, 
                                "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f", 
                                "operating_status": "ONLINE"
                            }, 
                            {
                                "name": "", 
                                "provisioning_status": "ACTIVE", 
                                "address": "10.10.1.102", 
                                "protocol_port": 80, 
                                "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a", 
                                "operating_status": "ONLINE"
                            }
                        ], 
                        "id": "17c63199-b63e-4f22-83a3-baea98345923", 
                        "operating_status": "ONLINE"
                    }
                ], 
                "l7policies": [], 
                "id": "2c1f8311-d834-45af-8efb-05ff1684e2a6", 
                "operating_status": "ONLINE"
            }
        ], 
        "pools": [
            {
                "name": "pool1", 
                "provisioning_status": "ACTIVE", 
                "healthmonitor": {
                    "provisioning_status": "ACTIVE", 
                    "type": "HTTP", 
                    "id": "083f068f-3ee1-471b-a6ba-2438648078c0", 
                    "name": ""
                }, 
                "members": [
                    {
                        "name": "", 
                        "provisioning_status": "ACTIVE", 
                        "address": "10.10.1.103", 
                        "protocol_port": 80, 
                        "id": "97e4de09-7e88-41fb-9e7f-5b061fd4d44f", 
                        "operating_status": "ONLINE"
                    }, 
                    {
                        "name": "", 
                        "provisioning_status": "ACTIVE", 
                        "address": "10.10.1.102", 
                        "protocol_port": 80, 
                        "id": "9bdc780f-2553-40cf-82bf-aa43ab94ce8a", 
                        "operating_status": "ONLINE"
                    }
                ], 
                "id": "17c63199-b63e-4f22-83a3-baea98345923", 
                "operating_status": "ONLINE"
            }
        ], 
        "id": "041b24a3-55e1-4880-b04f-cf22ab057b30", 
        "operating_status": "ONLINE"
    }
}


Actual results:
The operating_status of member 10.10.1.103 remains ONLINE even after the instance has been deleted.

Expected results:
The operating_status of member 10.10.1.103 should change to OFFLINE or ERROR once the health monitor's maximum number of failed retries is exhausted.
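
One way to show that haproxy itself has already marked the member DOWN while the neutron API still reports ONLINE is to query haproxy's stats socket for this load balancer. The socket path below follows the haproxy namespace driver's conventions and is an assumption, not taken from this report:

$ echo "show stat" | sudo socat stdio \
    /var/lib/neutron/lbaas/v2/041b24a3-55e1-4880-b04f-cf22ab057b30/haproxy_stats.sock \
    | cut -d, -f1,2,18
# Column 18 ("status") should read DOWN for the deleted member's server entry,
# which is exactly what the API's operating_status fails to reflect here.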


Additional info:
Similar issues have been reported upstream:
https://bugs.launchpad.net/neutron/+bug/1548774
https://bugs.launchpad.net/octavia/+bug/1607309

Comment 1 Jakub Libosvar 2017-11-13 14:35:13 UTC
Nir is going to look at this one

Comment 11 errata-xmlrpc 2018-02-27 16:41:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0357

