Bug 1365691 - rhel-osp-director: 8.0-9.0 upgrade fails during wsgi migration step. Error: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: keystone-manage bootstrap --bootstrap-password nsgAW3mK4UCg99AZK7fmZ4RYt returned 2 instead of one of [0]
Summary: rhel-osp-director: 8.0-9.0 upgrade fails during wsgi migration step. Error: ...
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Target Milestone: ga
Target Release: 9.0 (Mitaka)
Assignee: Angus Thomas
QA Contact: Omri Hochman
Depends On:
Reported: 2016-08-09 22:16 UTC by Alexander Chuzhoy
Modified: 2016-08-26 10:25 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2016-08-15 17:45:45 UTC


Description Alexander Chuzhoy 2016-08-09 22:16:56 UTC
rhel-osp-director: 8.0-9.0 upgrade fails during wsgi migration step. Error: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: keystone-manage bootstrap --bootstrap-password nsgAW3mK4UCg99AZK7fmZ4RYt returned 2 instead of one of [0]


Steps to reproduce:
1. Deploy 8.0 with:
openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e network-environment.yaml --ceph-storage-scale 1

2. Attempt to upgrade to 9.0

The overcloud upgrade fails during wsgi migration step.
2016-08-09 21:39:58 [overcloud-ControllerAllNodesValidationDeployment-skhbl6z2ggbx]: UPDATE_COMPLETE Stack UPDATE completed successfully
2016-08-09 21:39:58 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-08-09 21:39:59 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-08-09 21:40:01 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-08-09 21:40:02 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-08-09 21:40:02 [2]: SIGNAL_COMPLETE Unknown
2016-08-09 21:40:03 [0]: SIGNAL_COMPLETE Unknown
Stack overcloud UPDATE_FAILED
Deployment failed:  Heat Stack update failed.

Debugging with heat:
Error: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: Failed to call refresh: keystone-manage bootstrap --bootstrap-password nsgAW3mK4UCg99AZK7fmZ4RYt returned 2 instead of one of [0]
Error: /Stage[main]/Keystone/Exec[keystone-manage bootstrap]: keystone-manage bootstrap --bootstrap-password nsgAW3mK4UCg99AZK7fmZ4RYt returned 2 instead of one of [0]
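One way to dig further is to re-run the failing bootstrap command by hand on a controller and capture its exit status. A minimal sketch, assuming keystone-manage accepts the standard oslo `--debug` flag and using a placeholder password (both are assumptions, not values from this bug):

```shell
# Re-run the bootstrap with debug logging and report the exit status.
# The password argument is a placeholder, not the real deployment value.
run_bootstrap() {
  keystone-manage --debug bootstrap --bootstrap-password "$1"
  echo "exit status: $?"
}

# Only attempt this where keystone-manage is actually installed
# (i.e. on a controller, as root); elsewhere the sketch is a no-op.
if command -v keystone-manage >/dev/null 2>&1; then
  run_bootstrap "placeholder-password"
fi
```

With debug logging enabled, the underlying database or configuration error behind the exit code 2 should appear in the output rather than just in the Puppet report.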

Checking the pcs resources on controller:

[root@overcloud-controller-0 ~]# pcs status|grep -i stop -B1
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Comment 4 Jiri Stransky 2016-08-10 13:00:25 UTC
The root cause, I think, is that the environment already contains the OSP 9 openstack-puppet-modules (openstack-puppet-modules-8.x) when at this stage it should still contain the OSP 8 ones (openstack-puppet-modules-7.x).

[root@overcloud-controller-0 ~]# rpm -q openstack-puppet-modules

Is it possible that the environment was deployed with OSP 9 image rather than OSP 8 image?

There might also be one other environment misconfig that i noticed -- the OSP 8 repos aren't enabled on the controllers:

[root@overcloud-controller-0 ~]# ls /etc/yum.repos.d/

Having OSP 8 repos enabled is a necessity for the AODH migration step, which should be done before the Keystone migration. So this environment is generally in a peculiar state :)
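A quick way to sanity-check this state across the controllers is to query the OPM version and the enabled repos on each one. A minimal sketch, assuming SSH access as heat-admin and a CONTROLLERS variable listing controller addresses (both are assumptions, not part of the documented upgrade flow):

```shell
# Extract the major version from an openstack-puppet-modules NVR,
# e.g. "openstack-puppet-modules-7.0.19-1.el7ost.noarch" -> "7".
opm_series() {
  echo "$1" | sed 's/^openstack-puppet-modules-\([0-9][0-9]*\)\..*/\1/'
}

# Hypothetical check loop: at this stage every controller should still
# report series 7 (OSP 8) and have the OSP 8 repos enabled. Skipped
# entirely when CONTROLLERS is unset.
if [ -n "${CONTROLLERS:-}" ]; then
  for host in $CONTROLLERS; do
    nvr=$(ssh -o StrictHostKeyChecking=no heat-admin@"$host" \
      "rpm -q openstack-puppet-modules")
    echo "$host: $nvr (series $(opm_series "$nvr"))"
    ssh -o StrictHostKeyChecking=no heat-admin@"$host" \
      "yum -q repolist enabled" | grep -i 'osp\|openstack' || true
  done
fi
```

Any controller reporting series 8 at this point is in the state described above and would hit the Keystone migration failure.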

Comment 5 Jiri Stransky 2016-08-10 13:41:57 UTC
Correction: it probably wasn't an OSP 9 image; rather, something updated OPM specifically. Here's the yum log from controller 0:

Aug 09 21:13:17 Installed: rhn-org-trusted-ssl-cert-1.0-1.noarch
Aug 09 21:23:07 Erased: 1:openstack-ceilometer-alarm-5.0.2-2.el7ost.noarch
Aug 09 21:24:43 Updated: 2:python-oslo-config-3.9.0-1.el7ost.noarch
Aug 09 21:24:44 Installed: python-gnocchiclient-2.2.0-1.el7ost.noarch
Aug 09 21:24:44 Updated: python-wsme-0.8.0-1.el7ost.noarch
Aug 09 21:24:44 Installed: python-pika-0.10.0-3.el7ost.noarch
Aug 09 21:24:44 Installed: python-pika_pool-0.1.3-3.el7ost.noarch
Aug 09 21:24:45 Updated: python-oslo-messaging-4.5.0-2.el7ost.noarch
Aug 09 21:24:45 Installed: python-aodh-2.0.3-2.el7ost.noarch
Aug 09 21:24:45 Installed: openstack-aodh-common-2.0.3-2.el7ost.noarch
Aug 09 21:24:46 Installed: openstack-aodh-evaluator-2.0.3-2.el7ost.noarch
Aug 09 21:24:52 Installed: openstack-aodh-notifier-2.0.3-2.el7ost.noarch
Aug 09 21:24:58 Installed: openstack-aodh-listener-2.0.3-2.el7ost.noarch
Aug 09 21:25:05 Installed: openstack-aodh-api-2.0.3-2.el7ost.noarch
Aug 09 21:34:39 Updated: 1:openstack-puppet-modules-8.1.7-2.el7ost.noarch

By the timestamp, it doesn't look like the OPM update was pulled in by the AODH migration, though, and the AODH migration should be the step in the upgrade workflow right before the Keystone migration.

Comment 7 Jiri Stransky 2016-08-10 13:55:21 UTC
Another piece of info -- the openstack-puppet-modules upgrade wasn't triggered by Heat, it happened at a time when os-collect-config was idling:

Comment 8 Mike Burns 2016-08-15 17:45:45 UTC
Talked to @sasha and this hasn't been reproduced. Given that it appears to have been caused by something updating OPM outside of the upgrade process, let's call this NOTABUG until it reproduces.

Comment 9 Charlie Llewellyn 2016-08-25 21:34:33 UTC
Hi, I am also experiencing this error. I can confirm that the puppet modules were updated as per the upgrade documentation (section 3.4.3):

Could this be a mistake in the documentation?

I can confirm that downgrading the OPM to 7.x

 for i in $(nova list | grep ctlplane | awk '{ print $12 }' | awk -F'=' '{ print $2 }'); do
     ssh -o StrictHostKeyChecking=no heat-admin@$i \
         "sudo yum -y downgrade openstack-puppet-modules-7.0.19-1.el7ost.noarch"
 done

and running 'pcs resource cleanup' on the controllers before redeploying appears to have resolved the issue.
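For reference, the downgrade and cleanup steps above can be combined into one pass over the controllers. A minimal sketch, assuming the undercloud credentials are sourced and that the ctlplane IP appears as the 12th whitespace-separated field of `nova list` output, as in the one-liner above:

```shell
# Pull the ctlplane IPs out of `nova list` output (same field positions
# as the one-liner above).
extract_ctlplane_ips() {
  grep ctlplane | awk '{ print $12 }' | awk -F'=' '{ print $2 }'
}

# Hypothetical remediation pass: downgrade OPM back to the 7.x series,
# then clean up failed Pacemaker resources before redeploying. Guarded
# so the sketch is a no-op where the nova CLI is not available.
if command -v nova >/dev/null 2>&1; then
  for ip in $(nova list | extract_ctlplane_ips); do
    ssh -o StrictHostKeyChecking=no heat-admin@"$ip" \
      "sudo yum -y downgrade openstack-puppet-modules-7.0.19-1.el7ost.noarch &&
       sudo pcs resource cleanup"
  done
fi
```

Running `pcs resource cleanup` after the downgrade clears the failed-action history so Pacemaker retries starting the stopped resource clones.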

Comment 10 Jiri Stransky 2016-08-26 08:27:35 UTC
Hello, could you please post the version of the openstack-puppet-modules package before the downgrade to 7.0.19-1? Was the overcloud already receiving OSP 9 packages?

At the point of "3.4.3. Upgrading Keystone" in the docs, only the undercloud (the Director node) should be on OSP 9 repositories. The overcloud servers should still be on OSP 8 repositories, so updating the openstack-puppet-modules on overcloud should result in getting the latest OPM in 7.x series, but not 8.x yet.

The overcloud should switch to OSP 9 repositories during "3.4.4. Installing the Upgrade Scripts".

Is there a chance that the repository switching commands in "3.2. Upgrading the Director" were run on the overcloud too perhaps?

Comment 11 Charlie Llewellyn 2016-08-26 08:42:02 UTC
This is very likely, as I explicitly enabled the OSP 9 repos on the controllers, presumably because they are required to install the aodh packages in 3.4.2? With them disabled, the puppet update fails because it cannot satisfy the package requirements. It could be that I'm missing something, though.

Comment 12 Jiri Stransky 2016-08-26 08:56:57 UTC
The AODH packages are in OSP 8 repos too, so the replacement of ceilometer alarm with AODH should succeed while the controllers are on OSP 8 repos.

Comment 13 Charlie Llewellyn 2016-08-26 09:28:18 UTC
Okay, this must be my error; apologies for the confusion. Thanks, Charlie
