Bug 1511842 - cinder-volume stays down since Ocata
Summary: cinder-volume stays down since Ocata
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 11.0 (Ocata)
Hardware: All
OS: All
Target Milestone: z4
Target Release: 11.0 (Ocata)
Assignee: Eric Harney
QA Contact: Avi Avraham
Depends On:
Reported: 2017-11-10 09:30 UTC by Nilesh
Modified: 2018-02-13 16:29 UTC
CC List: 12 users

Fixed In Version: openstack-cinder-10.0.6-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-02-13 16:29:16 UTC
Target Upstream Version:


System                  ID              Priority  Status        Summary                            Last Updated
OpenStack gerrit        501325          None      None          None                               2017-11-15 14:39:47 UTC
Red Hat Product Errata  RHBA-2018:0306  normal    SHIPPED_LIVE  openstack-cinder bug fix advisory  2018-02-14 00:16:06 UTC

Description Nilesh 2017-11-10 09:30:17 UTC
While upgrading to Ocata, we hit this bug:

The customer's Ceph cluster is large. It has close to 400 volumes, some of which are very large.

This is fixed upstream in the OpenStack gerrit change tracked above (501325).

The customer wants the openstack-cinder package updated to include this patch.

Comment 4 Tzach Shefi 2017-11-20 13:13:46 UTC
I'm again unsure about the needed verification steps.

1. Deploy with Ceph (external in my case)

2. Create a few volumes. Do we know how many were needed to trip this: 10, 50, 100, 400?
Does it matter whether they are many 1G volumes, or fewer but larger ones?

3. Then see how long the restart takes (a timing sketch follows below):
systemctl restart openstack-cinder-volume
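
A minimal sketch for step 3, assuming a single controller node with the cinder client available on it; these exact checks are my suggestion, not confirmed verification steps:

    # Time the restart of the volume service.
    time systemctl restart openstack-cinder-volume

    # Confirm systemd considers the unit active again.
    systemctl is-active openstack-cinder-volume

    # Check the state Cinder itself reports; the service should show "up"
    # once startup (including volume stats gathering) has finished.
    cinder service-list --binary cinder-volume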

Comment 6 Tzach Shefi 2017-11-22 17:09:47 UTC
Verified on: openstack-cinder-10.0.6-2.el7ost

Gorka suggested that decreasing these cinder.conf options
would speed things up a bit:
periodic_interval: 60 -> 30
periodic_fuzzy_delay: 60 -> 5
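
A sketch of the corresponding cinder.conf change, assuming the options sit in the [DEFAULT] section as in a stock configuration:

    [DEFAULT]
    # Seconds between runs of periodic tasks (default 60).
    periodic_interval = 30
    # Maximum seconds to randomly delay the start of the periodic task
    # scheduler, used to reduce stampeding (default 60).
    periodic_fuzzy_delay = 5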

Restarted the service to apply the settings.

I created 12+ volumes, totaling ~1T+ in provisioned capacity,
filled from /dev/random or with large ISO/qcow2 images.
I also cloned volumes and changed data; nothing I did caused the service state to change.
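
A rough sketch of that setup under stated assumptions: the openstack CLI is available, a volume gets attached to a test instance, and the names, sizes, and device path below are illustrative rather than what was actually used:

    # Create a batch of volumes (count and size are illustrative).
    for i in $(seq 1 12); do
        openstack volume create --size 100 "verify-vol-$i"
    done

    # After attaching a volume to an instance, fill it with random data;
    # the device path /dev/vdb is an assumption.
    dd if=/dev/urandom of=/dev/vdb bs=1M count=1024

    # Clone one of the filled volumes.
    openstack volume create --source verify-vol-1 --size 100 verify-clone-1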

openstack-cinder-volume has remained up since the system was installed, an uptime of ~24h.
All the volumes I added were created within the last 4h.
No glitch in service status while watching with watch -d -n 10.
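
The comment leaves out the command passed to watch; one plausible invocation, highlighting changes every 10 seconds, would be:

    watch -d -n 10 'cinder service-list --binary cinder-volume'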

Comment 28 errata-xmlrpc 2018-02-13 16:29:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, see RHBA-2018:0306 in the tracker table above.

If the solution does not work for you, open a new bug report.
