Bug 1595391 - OCP 3.9 on OSP 10 Ref Arch Indicates setting up LB for the application routes but never follows through
Summary: OCP 3.9 on OSP 10 Ref Arch Indicates setting up LB for the application routes but never follows through
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Reference Architecture
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: rlopez
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-26 19:54 UTC by Eric Jones
Modified: 2018-07-13 18:02 UTC
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-13 18:02:06 UTC
Target Upstream Version:



Description Eric Jones 2018-06-26 19:54:08 UTC
Description of problem:
The Reference Architecture for installing OCP 3.9 on OSP 10 [0] indicates early on [1] that it will create a load balancer that will be used to direct application traffic to the router pods [2].

However, later, when setting up HAProxy [3] and the bastion host [4], nothing is ever done that would allow traffic to be directed to the infra nodes.

Per my customer, we simply need to open the iptables firewall to allow traffic on ports 80 and 443 through. This likely just needs to be added to the documentation (I think post-install, since it may be the OCP install that adds the iptables firewall rules to the load balancer).
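
As a rough sketch of what that firewall change looks like (the exact chain and rule ordering will depend on how the load balancer VM is configured, so treat this as an assumption rather than the documented fix):

  # allow application traffic to reach HAProxy on the load balancer VM
  iptables -I INPUT -p tcp --dport 80 -j ACCEPT
  iptables -I INPUT -p tcp --dport 443 -j ACCEPT
  # persist the rules across reboots (assumes the iptables-services package)
  service iptables save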

[0] https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_red_hat_openstack_platform_10/

[1] https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_red_hat_openstack_platform_10/#node_instances_and_components

[2] In order to have a single entry point for applications, a load balancer is created as part of the installation in the same Red Hat OpenStack Platform tenant where it can reach the infrastructure nodes for load balancing the incoming traffic to the routers running in the infra nodes.

[3] https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_red_hat_openstack_platform_10/#haproxy

[4] https://access.redhat.com/documentation/en-us/reference_architectures/2018/html-single/deploying_and_managing_openshift_3.9_on_red_hat_openstack_platform_10/#bastion_configuration_for_red_hat_openshift_container_platform

Comment 1 Eric Jones 2018-06-27 17:17:45 UTC
More specifically, the customer that identified this issue has elaborated on exactly what they had to do to get it all working:

* Add 2 new TCP frontends & backends on the LB HAProxy, pointing to the OCP routers running on the infra nodes (a sketch of such entries follows below). These had not been added by the openshift-ansible installer.
* Open the LB HAProxy iptables firewall to allow that 80/443 traffic to the OCP routers. The reference architecture already had security groups for this traffic on the LB HAProxy, but the on-VM iptables firewall was blocking it.
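
A minimal sketch of the kind of haproxy.cfg entries being described (the infra node addresses below are placeholders, not values taken from the reference architecture):

  frontend apps-http
      bind *:80
      mode tcp
      option tcplog
      default_backend apps-http-nodes

  backend apps-http-nodes
      mode tcp
      balance source
      server infra0 10.0.0.10:80 check
      server infra1 10.0.0.11:80 check

  frontend apps-https
      bind *:443
      mode tcp
      option tcplog
      default_backend apps-https-nodes

  backend apps-https-nodes
      mode tcp
      balance source
      server infra0 10.0.0.10:443 check
      server infra1 10.0.0.11:443 check

Using mode tcp keeps HAProxy from terminating TLS and simply passes connections through to the OCP routers on the infra nodes.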

Comment 2 rlopez 2018-06-29 19:54:45 UTC
Hi Eric,

I added the info to the RA to make it clearer. In order to have a single entry point for applications, either the existing DNS must be changed so that the wildcard entries round-robin across the different infra nodes, *or* backend & frontend entries must be added to the HAProxy instance, specifically within the /etc/haproxy/haproxy.cfg file.
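
For the DNS option, a hedged example: assuming the application wildcard domain is *.apps.example.com (a placeholder, not the domain used in the reference architecture), round-robin across the infra nodes can be achieved by publishing one A record per infra node for the wildcard, e.g. in a BIND zone file:

  ; the wildcard resolves to every infra node; answers are rotated round-robin
  *.apps    IN    A    10.0.0.10
  *.apps    IN    A    10.0.0.11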

