Bug 1360895 - Unattended Install fails on the loadbalancer
Summary: Unattended Install fails on the loadbalancer
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Samuel Munilla
QA Contact: liujia
Depends On:
Reported: 2016-07-27 18:20 UTC by Eric Jones
Modified: 2016-08-18 19:30 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
OpenShift 3.2
Last Closed: 2016-08-18 19:30:13 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1639 normal SHIPPED_LIVE OpenShift Enterprise atomic-openshift-utils bug fix and enhancement update 2016-08-18 23:26:45 UTC

Description Eric Jones 2016-07-27 18:20:43 UTC
Description of problem:
Customer attempts to run the quick install in unattended mode and runs into problems producing the correct LB (load balancer) group in the hosts file.

Customer has put the issue (and further details) on github:

He has also verified that the bug is still present when running ooinstall from HEAD of the enterprise-3.2 branch (SHA fe61946d893deab8f0937f8222ea3fb9c11722fc).

Comment 2 liujia 2016-08-05 10:51:22 UTC
[Host Info]

Package from latest:
Build Time: 2016-08-04.2

[Verify Steps]:
on host-194
1. scp hosts and installer.cfg.yml to host-194 from a host which has been quick installed successfully.

2. edit the info (hostname and ip) in hosts and installer.cfg.yml to be consistent with host-194.

3. run "atomic-openshift-installer --unattended install"; got the following output:
Any roles assigned to hosts must be defined.

4. According to the workaround, edit installer.cfg.yml to:
    containerized: false
    containerized: false
    containerized: false
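
A sketch of how that edit might look inside installer.cfg.yml, assuming `containerized: false` is set on each host entry (the hostnames and IPs below are hypothetical, not taken from the actual run):

```yaml
# installer.cfg.yml (fragment; host values hypothetical)
hosts:
- connect_to: master.example.com
  hostname: master.example.com
  ip: 10.0.0.1
  containerized: false    # workaround: set explicitly per host
- connect_to: node.example.com
  hostname: node.example.com
  ip: 10.0.0.2
  containerized: false
```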

5. run "atomic-openshift-installer --unattended install" again; got the following output:
*** Installation Summary ***

  - OpenShift Master
  - OpenShift Node
  - Etcd (Embedded)
  - Storage

Total OpenShift Masters: 1
Total OpenShift Nodes: 1

NOTE: Add a total of 3 or more Masters to perform an HA installation.

Gathering information from hosts...
 [WARNING]: Failure using method (v2_runner_on_ok) in callback plugin (</usr/lib/python2.7/site-packages/ooinstall/ansible_plugins/facts_callback.CallbackModule object at
0x1923d90>): runner_on_ok() takes exactly 3 arguments (2 given)
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{g_all_hosts}}'). This
feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
There was a problem fetching the required information. Please see /tmp/ansible.log for details.

This seems to be the same error as bug 1364398, so I think the remaining verify steps can only be run after bug 1364398 has been fixed.
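
Steps 1-2 above can be sketched as a small shell snippet. The source host, hostnames, and IPs here are hypothetical stand-ins, and `sed` is just one way to do the edit; in the real run the config is copied via scp from a working host:

```shell
# Stand-in for copying a known-good config from a previously installed host:
#   scp root@good-host:/root/.config/openshift/installer.cfg.yml .
# Here we create a tiny fragment locally instead (values hypothetical).
cat > installer.cfg.yml <<'EOF'
- connect_to: old-host.example.com
  hostname: old-host.example.com
  ip: 10.0.0.10
EOF

# Step 2: rewrite hostname/ip so they match the new target host
# (host-194 stand-in values).
sed -i \
  -e 's/old-host\.example\.com/host-194.example.com/g' \
  -e 's/10\.0\.0\.10/10.0.0.194/g' \
  installer.cfg.yml
```

After the edit, the config references only the new host, and the unattended install can be run against it.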

Comment 3 liujia 2016-08-11 03:05:39 UTC
Host Info:

Build Time: 2016-08-09.2

Verify Steps:
on host-15
1. scp installer.cfg.yml to host-15 from a host which has been quick installed successfully.

2. edit installer.cfg.yml's info (hostname and ip) as follows:

ansible_callback_facts_yaml: /root/.config/openshift/.ansible/callback_facts.yaml
ansible_config: /usr/share/atomic-openshift-utils/ansible.cfg
ansible_inventory_path: /root/.config/openshift/hosts
ansible_log_path: /tmp/ansible.log
deployment:
  ansible_ssh_user: root
  hosts:
  - connect_to:
    hostname: jliu03.novalocal
    ip: x.x.x.189
    public_hostname: jliu03.novalocal
    public_ip: x.x.x.190
    roles:
    - master
    - etcd
    - node
    - storage
  - connect_to:
    hostname: jliu04.novalocal
    ip: x.x.x.191
    public_hostname: jliu04.novalocal
    public_ip: x.x.x.191
    roles:
    - master
    - etcd
    - node
  - connect_to:
    hostname: jliu05.novalocal
    ip: x.x.x.194
    public_hostname: jliu05.novalocal
    public_ip: x.x.x.192
    roles:
    - master
    - etcd
    - node
  - connect_to:
    preconfigured: true
    roles:
    - master_lb
  master_routingconfig_subdomain: ''
  proxy_exclude_hosts: ''
  proxy_http: ''
  proxy_https: ''
  roles:
    etcd: {}
    master: {}
    master_lb: {}
    node: {}
    storage: {}
variant: openshift-enterprise
variant_version: '3.2'
version: v2

3. run "atomic-openshift-installer --unattended install"; got the following output:

*** Installation Summary ***

  - OpenShift Master
  - OpenShift Node
  - Etcd Member
  - Storage
  - OpenShift Master
  - OpenShift Node
  - Etcd Member
  - OpenShift Master
  - OpenShift Node
  - Etcd Member
  - Load Balancer (Preconfigured)

Total OpenShift Masters: 3
Total OpenShift Nodes: 3

NOTE: Multiple Masters specified, this will be an HA deployment with a separate
etcd cluster. You will be prompted to provide the FQDN of a load balancer and
a host for storage once finished entering hosts.

WARNING: Dedicated Nodes are recommended for an HA deployment. If no dedicated
Nodes are specified, each configured Master will be marked as a schedulable
Node.
Gathering information from hosts...

Then it starts the install with ansible:

Installed environment detected.
is already an OpenShift Master
is already an OpenShift Master
is already an OpenShift Master
is currently uninstalled
Adding additional nodes...

Wrote atomic-openshift-installer config: /root/.config/openshift/installer.cfg.yml
Wrote ansible inventory: /root/.config/openshift/hosts

Ready to run installation process.

PLAY [localhost] ***************************************************************

4. check that the hosts file is created successfully.

Verify result:
The installation proceeds with ansible and the hosts file is created correctly.
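
Step 4's check can be sketched as below. This assumes the quick installer writes the preconfigured load balancer into an `[lb]` group of the generated inventory at /root/.config/openshift/hosts; the inventory contents and hostnames here are illustrative stand-ins, not taken from the real run:

```shell
# Stand-in for the generated inventory at /root/.config/openshift/hosts
# (contents illustrative; the real file is written by atomic-openshift-installer).
cat > hosts <<'EOF'
[OSEv3:children]
masters
nodes
etcd
lb

[lb]
lb.example.com
EOF

# The original failure produced a wrong/missing LB group, so check that the
# [lb] group exists and names a host.
grep -q '^\[lb\]$' hosts && echo "lb group present"
```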

Comment 5 errata-xmlrpc 2016-08-18 19:30:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
