Bug 1518511 - [quick installer][3.7]installer failed during configure nfs when nfs host not belong to OCP nodes
Summary: [quick installer][3.7]installer failed during configure nfs when nfs host not belong to OCP nodes
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.7.z
Assignee: Scott Dodson
QA Contact: Wenkai Shi
Depends On:
Blocks: 1610417
Reported: 2017-11-29 06:37 UTC by Wenkai Shi
Modified: 2018-07-31 14:57 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1610417 (view as bug list)
Last Closed: 2018-07-31 14:57:39 UTC
Target Upstream Version:

Attachments

Description Wenkai Shi 2017-11-29 06:37:08 UTC
Description of problem:
Installing OCP with the quick installer fails when the NFS role is assigned to an extra host that is not one of the OCP nodes.
It works well when the NFS host also has the node role.

Version-Release number of the following components:

How reproducible:

Steps to Reproduce:
1. Install OCP with the quick installer, setting the NFS role to an extra host that is not one of the OCP nodes.

Actual results:
# atomic-openshift-installer -u install
Play 22/88 (Configure nfs)
...........................fatal: []: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'storage'\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/roles/openshift_storage_nfs/tasks/main.yml': line 20, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Ensure exports directory exists\n  ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'storage'"}
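For what it's worth, the failure above reproduces outside Ansible with plain Jinja2 (which Ansible templates use underneath). The variable path below is hypothetical, but dereferencing a key that was never populated in a dict fact produces exactly this "'dict object' has no attribute 'storage'" error:

```python
# Jinja2 reduction of the Ansible error (the variable path is hypothetical).
# When the 'storage' key is missing from the facts dict, attribute access
# fails with the same "'dict object' has no attribute 'storage'" message.
from jinja2 import Environment, StrictUndefined
from jinja2.exceptions import UndefinedError

env = Environment(undefined=StrictUndefined)
template = env.from_string("{{ openshift.hosted.registry.storage.nfs.directory }}")

err_msg = None
try:
    # Facts as generated when the NFS host is not also an OCP node:
    # the 'storage' key was never populated.
    template.render(openshift={"hosted": {"registry": {}}})
except UndefinedError as err:
    err_msg = str(err)

print(err_msg)
```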

Expected results:
The installation should succeed.

Additional info:
Set an extra host for the NFS role:
# atomic-openshift-installer install

Check installer.cfg.yml; there is no "openshift_hosted_registry_storage_kind" entry here:
# vim /root/.config/openshift/installer.cfg.yml
  - connect_to:
    ip: xx.xx.xx.xx
    public_ip: xx.xx.xx.xx
    roles:
    - storage

This part of the code seems to have an issue, since the Host class does not read the parameter "openshift_hosted_registry_storage_kind":
# cat /usr/lib/python2.7/site-packages/ooinstall/
        host_props['connect_to'] = hostname_or_ip
        host_props['preconfigured'] = False
        host_props['roles'] = ['storage']
        host_props['openshift_hosted_registry_storage_kind'] = 'nfs'
        storage = Host(**host_props)
# cat /usr/lib/python2.7/site-packages/ooinstall/
class Host(object):
    """ A system we will or have installed OpenShift on. """
    def __init__(self, **kwargs):
        self.ip = kwargs.get('ip', None)
        self.hostname = kwargs.get('hostname', None)
        self.public_ip = kwargs.get('public_ip', None)
        self.public_hostname = kwargs.get('public_hostname', None)
        self.connect_to = kwargs.get('connect_to', None)

        self.preconfigured = kwargs.get('preconfigured', None)
        self.schedulable = kwargs.get('schedulable', None)
        self.new_host = kwargs.get('new_host', None)
        self.containerized = kwargs.get('containerized', False)
        self.node_labels = kwargs.get('node_labels', '')

        # allowable roles: master, node, etcd, storage, master_lb
        self.roles = kwargs.get('roles', [])

        self.other_variables = kwargs.get('other_variables', {})

        if self.connect_to is None:
            raise OOConfigInvalidHostError(
                "You must specify either an ip or hostname as 'connect_to'")
