Bug 1061002 - Error "Failed to schedule_create_volume: No valid host was found" during Cinder create
Summary: Error "Failed to schedule_create_volume: No valid host was found" during Cind...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 5.0 (RHEL 7)
Assignee: Flavio Percoco
QA Contact: Dafna Ron
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-04 07:05 UTC by jliberma@redhat.com
Modified: 2016-04-27 03:50 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-26 08:06:52 UTC



Description jliberma@redhat.com 2014-02-04 07:05:21 UTC
Description of problem:
Multiple Cinder volume-create attempts failed with the error "Failed to schedule_create_volume: No valid host was found" in /var/log/cinder/scheduler.log.


Version-Release number of selected component (if applicable):
2014-01-31.1 puddle

How reproducible:
Intermittent, 80% of tests failed initially.

Steps to Reproduce:
1. Deploy Foreman.
2. Attach block storage server to iSCSI, discover a target, and create a cinder-volumes VG on it.
3. Set cinder_backend_iscsi = true in the LVM Block Storage host group parameters.
4. Add servers to the Neutron Controller, Compute, Networker, and LVM Block Storage host groups via Foreman.
5. Source keystonerc_admin on the controller node.
6. Run cinder create --display_name test0 5 (see the command sketch below).
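
For reference, a minimal command-line sketch of steps 2-6. The iSCSI portal address and backing device (/dev/sdb) are placeholders for illustration, not values taken from this report:

# On the block storage node (placeholder portal IP and device):
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -l
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# On the controller node, once the Foreman host groups have applied:
source ~/keystonerc_admin
cinder create --display_name test0 5
cinder list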

Actual results:
About 80% of the time, the following errors appear in /var/log/cinder/scheduler.log on the controller node:
2014-02-04 00:28:06.440 12068 WARNING cinder.scheduler.host_manager [req-2522324a-e34f-4bc8-8c64-962302b8f3ff a84a428dbe244b9ca066cbdd51ff009d d6b8b8f09e4a4c5d9b6d8649d5cedf30] volume service is down or disabled. (host: rhos7.cloud.lab.eng.bos.redhat.com)
2014-02-04 00:28:06.457 12068 ERROR cinder.volume.flows.create_volume [req-2522324a-e34f-4bc8-8c64-962302b8f3ff a84a428dbe244b9ca066cbdd51ff009d d6b8b8f09e4a4c5d9b6d8649d5cedf30] Failed to schedule_create_volume: No valid host was found.

cinder service-list shows the volume service as down, even though service openstack-cinder-volume status on the storage node reports it running.

Repeatedly restarting cinder-scheduler and then cinder-volume eventually shows both services as up in cinder service-list.
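
A sketch of the restart-and-verify loop described above, assuming the standard RHOS service names (openstack-cinder-scheduler on the controller, openstack-cinder-volume on the storage node):

# On the controller node:
service openstack-cinder-scheduler restart
# On the block storage node:
service openstack-cinder-volume restart
# Back on the controller, repeat until both services report "up":
cinder service-list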

Expected results:
Volume created successfully.

Additional info:
Filing this bug for tracking purposes. I have redeployed and reproduced this 3-4 times in the past 24 hours. Hopefully it is particular to my setup and no one else will see this error. If you see it too, please comment.

Comment 1 Eric Harney 2014-02-04 15:42:10 UTC
Is there more than one cinder-volume node involved here?

Main things to check are that the cinder-volume driver is initialized (logged at debug level) and that the scheduler log shows information coming back from update_volume_stats indicating how much free space it has.

Since your log indicates that the volume service is down, it sounds like the scheduler wasn't able to reach the volume service.
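
A rough sketch of those checks, assuming the stock RHOS 4.0 config paths; the grep patterns below are illustrative search terms, not exact log strings:

# On the block storage node, enable debug logging (openstack-config is provided by openstack-utils):
openstack-config --set /etc/cinder/cinder.conf DEFAULT debug true
service openstack-cinder-volume restart

# Confirm the volume driver initialized:
grep -i "driver init" /var/log/cinder/volume.log

# On the controller, check that the scheduler is receiving capacity updates:
grep -i "free_capacity" /var/log/cinder/scheduler.log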

Comment 3 jliberma@redhat.com 2014-02-13 17:49:40 UTC
Eric -- I have not been able to reproduce this error recently. Previously, the cinder-volume service was unreachable for 4 out of 5 build attempts. I redeployed the entire setup (including the Foreman server and all OpenStack servers) and saw the same problem: the cinder-volume service would die 80% of the time when I attempted to create a new volume, whether via a Heat template or manually.

However, in the last week or so I have not seen the problem return. Thanks!

Comment 4 Flavio Percoco 2014-02-18 11:29:20 UTC
Can we close this bug?

It looks like it can't be reproduced anymore.

Comment 5 jliberma@redhat.com 2014-02-18 18:38:18 UTC
Fine with me.

