Bug 1062693 - [RFE] foreman should configure nova for libgfapi access to cinder volumes
Summary: [RFE] foreman should configure nova for libgfapi access to cinder volumes
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: async
Target Release: 4.0
Assignee: John Eckersberg
QA Contact: nlevinki
URL:
Whiteboard:
Duplicates: 1020483
Depends On: 1020483
Blocks: 1040649 1045196
 
Reported: 2014-02-07 18:00 UTC by Steve Reichard
Modified: 2016-04-26 16:22 UTC (History)
9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-08 16:38:00 UTC



Description Steve Reichard 2014-02-07 18:00:39 UTC
Description of problem:


With RHEL 6.5, cinder volumes can be accessed using libgfapi, which, as I understand it, provides better performance and supports the volume snapshot features.
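For context, libgfapi lets qemu talk to a Gluster volume directly over the network instead of going through a FUSE mount. A rough sketch of what the resulting libvirt disk definition might look like (the host, volume, and image names here are made-up examples, not taken from this bug):

```xml
<!-- Hypothetical libvirt disk element for a Gluster-backed cinder volume
     accessed via libgfapi (type='network') rather than a FUSE mount.
     Host, volume, and image names are illustrative only. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='cinder-volumes/volume-1234.img'>
    <host name='gluster.example.com' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```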

relates to packstack BZ 1020483

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Martin Magr 2014-03-27 11:24:46 UTC
*** Bug 1020483 has been marked as a duplicate of this bug. ***

Comment 3 John Eckersberg 2014-03-27 19:38:03 UTC
Changed title to reflect that nova, not cinder, should use libgfapi.

Cinder does not support using libgfapi as far as I can tell.  The class cinder.volumes.drivers.GlusterfsDriver explicitly requires mount.glusterfs and uses the fuse layer to interface with gluster.

Foreman already has all the necessary knobs exposed to use cinder in this way.  Set cinder_backend_gluster to true, and then customize cinder_gluster_servers and cinder_gluster_volume for the environment.  I've verified that I can point this at a dummy gluster server and successfully do 'cinder create 1' to create a test volume.
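The knobs named above might be wired up in Puppet roughly like this. This is a sketch only: the enclosing class name is hypothetical, the parameter names are taken from the comment, and the server/volume values are placeholders.

```puppet
# Sketch only: 'quickstack::storage' is a hypothetical class name;
# the parameter names come from the comment above, and the gluster
# server address and volume name are placeholder values.
class { 'quickstack::storage':
  cinder_backend_gluster => true,
  cinder_gluster_servers => ['192.0.2.10'],
  cinder_gluster_volume  => 'cinder-volumes',
}
```

With that in place, the smoke test from the comment is simply `cinder create 1`, which should produce a 1 GB test volume.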

Nova however, via libvirt/qemu, can use libgfapi to access gluster on the compute nodes.  At a glance, that does not appear configurable in foreman.  I'm investigating that piece more now.

Comment 4 Steve Reichard 2014-03-28 16:40:58 UTC
Not sure that title is accurate either.

Nova uses libgfapi (via libvirtd) for cinder volumes.

At this point we cannot make nova use libgfapi for instance/ephemeral storage.

Comment 5 John Eckersberg 2014-03-28 16:53:00 UTC
Review for puppet-nova adding required support - https://review.openstack.org/83816

Comment 6 John Eckersberg 2014-03-28 18:17:40 UTC
The above patch was rejected.  There's a new nova::config class that is more appropriate for this, so I'll see what's involved in backporting it to Havana.

Comment 7 John Eckersberg 2014-03-28 18:48:45 UTC
New plan, instead of going through the hassle of backporting nova::config, we can just create nova_config resources directly in quickstack::compute_common when cinder_backend_gluster is set.
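A minimal sketch of what that could look like, assuming the cinder_backend_gluster flag from earlier in this bug; the option name shown is the Havana-era nova.conf setting commonly used to allow qemu to use the gluster protocol, and should be treated as an assumption here, not as the exact content of the eventual patch.

```puppet
# Hypothetical sketch: create nova_config resources directly in
# quickstack::compute_common when the gluster backend is enabled.
# 'DEFAULT/qemu_allowed_storage_drivers' is the Havana-era option
# that permits qemu to open gluster:// URLs via libgfapi.
if str2bool($cinder_backend_gluster) {
  nova_config { 'DEFAULT/qemu_allowed_storage_drivers':
    value => 'gluster',
  }
}
```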

Also, we'll need to make sure glusterfs-api is installed on the compute nodes so that libgfapi is available for qemu.  I've submitted a pull request for that bit here - https://github.com/redhat-openstack/puppet-openstack-storage/pull/5
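The package piece is straightforward; a hedged sketch of the kind of resource the linked pull request would add (the actual change may differ):

```puppet
# Sketch: ensure the libgfapi client library is present on compute
# nodes so qemu can speak the gluster protocol directly.
package { 'glusterfs-api':
  ensure => installed,
}
```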

Comment 8 John Eckersberg 2014-04-15 16:39:55 UTC
PR - https://github.com/redhat-openstack/astapor/pull/167

Comment 9 John Eckersberg 2014-04-16 20:30:05 UTC
If there is hope of this ever actually working end to end, these two bugs will need to be fixed:

https://bugzilla.redhat.com/show_bug.cgi?id=1088589
https://bugs.launchpad.net/qemu/+bug/1308542

Particularly the first one; if it is fixed, it becomes much harder to fall into the error case of the second bug.

