Bug 1064889 - Openstack-foreman doesn't create PoC cinder-volumes VG automatically
Summary: Openstack-foreman doesn't create PoC cinder-volumes VG automatically
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 5.0 (RHEL 7)
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Sub Component: Installer
Assignee: Jason Guiditta
QA Contact: Ami Jeain
URL:
Whiteboard: storage
Depends On: 1047652 1047656 1055179 1055492 1056055 1056058 1100459
Blocks:
TreeView+ depends on / blocked
 
Reported: 2014-02-13 13:35 UTC by Yogev Rabl
Modified: 2014-09-08 16:49 UTC
CC: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-08 16:49:19 UTC



Description Yogev Rabl 2014-02-13 13:35:49 UTC
Description of problem:
Unlike Packstack, Foreman cannot install and configure Cinder automatically. The installation requires manual post-installation configuration.

The following bugs are open:
1.  1055492 - Puppet doesn't register the Cinder in endpoints in the CC's database
2. 1047656 - openstack-foreman: The cinder driver isn't configured
3. 1047652 - openstack-foreman: foreman doesn't start tgt daemon automatically
4. 1055448 - Foreman glusterfs cinder_gluster_peers override doesn't work
5. 1056058 - [RFE] create cinder-volumes VG backed by iSCSI target
6. 1056055 - [RFE] create cinder-volumes VG backed by a loopback file
7. 1055179 - The LVM block storage host group name should be changed to Cinder Block storage

Version-Release number of selected component (if applicable):
foreman-proxy-1.3.0-3.el6sat.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install the LVM Block Storage host group in a semi-distributed topology (on a different server than the Cloud Controller).

Comment 1 Dafna Ron 2014-02-13 13:53:20 UTC
Reviewing all the storage-related bugs for Foreman, we realized that we cannot install the Cinder LVM backend.
There are several bugs opened with different system configurations, all amounting to one outcome: Foreman does not install and configure the LVM backend for Cinder the way that Packstack does today.
I asked Yogev to open this bug and link all related bugs to it.

Comment 2 jliberma@redhat.com 2014-02-13 17:44:41 UTC
I have installed and tested the LVM Block Storage group many times on a different server than the cloud controller. It works fine provided you create the cinder-volumes VG manually before adding the server to the host group. This is consistent with Packstack. I have tested both with a local loopback device and with an external iSCSI LUN.

Comment 3 Dafna Ron 2014-02-13 17:47:58 UTC
(In reply to jliberma@redhat.com from comment #2)
> I have installed and tested the LVM Block Storage group many times on a
> different server than the cloud controller. It works fine provided you
> create the cinder-volumes VG manually before adding the server to the host
> group. 

This is not consistent with Packstack, which creates the VG.
There is no manual user intervention at all with Packstack - you simply give Packstack the type of storage and the parameters for it, and it creates and configures it for you.

> This is consistent with Packstack. I have tested both with a local
> loopback device and with an external iSCSI LUN.

Comment 4 jliberma@redhat.com 2014-02-13 18:09:02 UTC
Is the missing VG the basis for this bug?

It would be trivial to add the logic to create a loopback device backed VG if no cinder-volumes group exists, but that is an RFE, not an urgent bug.

Comment 5 Dafna Ron 2014-02-13 18:20:25 UTC
1. Please see all the related bugs linked to this bug for LVM-type storage.
2. This is not an RFE but a bug: Packstack deploys and creates the VG, so if we are moving to deployment and configuration through Foreman, we need to make sure that all the functionality we had in Packstack remains consistent in Foreman.
An RFE, for example (which I have not opened yet but will), would be asking Foreman to deploy Cinder with a NetApp backend (something not done by Packstack today).
3. I'm glad that it's trivial to add the VG - does that mean this can be an easy fix and we can have this functionality merged soon?

Comment 6 jliberma@redhat.com 2014-02-13 20:28:37 UTC
I seem to recall that Packstack will use an existing cinder-volumes VG if it exists, although you can explicitly tell it to create a PoC VG on a loopback device. So in this way it is similar to Packstack already.

I have been told many times that while feature parity with packstack is a goal, it is not the most important concern. Packstack has several features (such as SSL) that we don't have in Foreman. That is what I was told by the Foreman developers when I asked for cinder-volume on the controller node and NFS-backed Cinder.

I was also told that if I needed the features before the developers could get to them then I should write them in myself and submit them to astapor. (Which I have been doing.) So feel free to add it and submit it upstream whenever you are ready.

Personally, I think it's better to ask the customers to create their own VG so long as this step is well documented. This method gives customers a lot of flexibility to choose their backend storage method -- whether it be an iSCSI LUN, a local disk, or a loopback device. This is also what I have heard from the field deployment engineers who interact with customers.

I am going to go off and test the LVM Block Storage group today to make sure it is still working. I will let you know if it's broken.

Comment 7 jliberma@redhat.com 2014-02-13 21:31:33 UTC
So I just tested the LVM Block Storage group and it all works fine. I'm happy to provide instructions if you need help getting it working. Thanks, Jacob

[root@rhos1 bz1064889(refarch_member)]# cinder list
+--------------------------------------+--------+----------------------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |           Display Name           | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+----------------------------------+------+-------------+----------+--------------------------------------+
| 358b0d5f-fe43-4e6e-b037-2d6448853220 | in-use | full1-cinder_volume-zlgx3ysbw7nw |  5   |     None    |  false   | 48ae8f01-b5e4-453d-8eb5-7775bb848b14 |
+--------------------------------------+--------+----------------------------------+------+-------------+----------+--------------------------------------+
[root@rhos1 bz1064889(refarch_member)]# source /root/keystonerc_admin 
[root@rhos1 bz1064889(admin)]# cinder service-list
+------------------+------------------------------------+------+---------+-------+----------------------------+
|      Binary      |                Host                | Zone |  Status | State |         Updated_at         |
+------------------+------------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | rhos1.cloud.lab.eng.bos.redhat.com | nova | enabled |   up  | 2014-02-13T21:27:53.000000 |
|  cinder-volume   | rhos7.cloud.lab.eng.bos.redhat.com | nova | enabled |   up  | 2014-02-13T21:27:52.000000 |
+------------------+------------------------------------+------+---------+-------+----------------------------+

[root@osp4-foreman bz1064889]# curl -s -u admin:changeme -k -H "Accept: application/json, version=2" -H "Content-Type: application/json" https://osp4-foreman/api/hostgroups?search=environment=production  | jgrep -s "hostgroup.id & hostgroup.label"
[
  {
    "hostgroup.id": 1,
    "hostgroup.label": "Controller (Nova Network)"
  },
  {
    "hostgroup.id": 2,
    "hostgroup.label": "Compute (Nova Network)"
  },
  {
    "hostgroup.id": 3,
    "hostgroup.label": "Controller (Neutron)"
  },
  {
    "hostgroup.id": 4,
    "hostgroup.label": "Compute (Neutron)"
  },
  {
    "hostgroup.id": 5,
    "hostgroup.label": "Neutron Networker"
  },
  {
    "hostgroup.id": 6,
    "hostgroup.label": "LVM Block Storage"
  },
  {
    "hostgroup.id": 7,
    "hostgroup.label": "Load Balancer"
  },
  {
    "hostgroup.id": 8,
    "hostgroup.label": "HA Mysql Node"
  },
  {
    "hostgroup.id": 9,
    "hostgroup.label": "Swift Storage Node"
  }
]

[root@osp4-foreman bz1064889]# curl -s -u admin:changeme -k -H "Accept: application/json, version=2" -H "Content-Type: application/json" https://osp4-foreman/api/hosts | jgrep -s "host.name & host.hostgroup_id"
[
  {
    "host.name": "osp4-foreman.cloud.lab.eng.bos.redhat.com"
  },
  {
    "host.hostgroup_id": 3,
    "host.name": "rhos1.cloud.lab.eng.bos.redhat.com"
  },
  {
    "host.hostgroup_id": 4,
    "host.name": "rhos4.cloud.lab.eng.bos.redhat.com"
  },
  {
    "host.hostgroup_id": 4,
    "host.name": "rhos5.cloud.lab.eng.bos.redhat.com"
  },
  {
    "host.hostgroup_id": 5,
    "host.name": "rhos6.cloud.lab.eng.bos.redhat.com"
  },
  {
    "host.hostgroup_id": 6,
    "host.name": "rhos7.cloud.lab.eng.bos.redhat.com"
  }
]

Comment 8 Jason Guiditta 2014-02-14 01:03:00 UTC
Jacob is right about the feature parity thing.  While it is reasonable as a goal in many (but definitely not all) cases, it is not our first priority.  The Foreman deployment for RHOS 4 is meant to be a reference architecture that a customer can choose to use, or they can roll their own host groups if they desire something else.  This will of course grow and become more and more flexible, but the initial goal of this release was to extend what was in the Foreman installer for RHOS 3.  Top goals were things like adding support for Neutron, starting to bring in SSL, and beginning to support an HA setup.  Packstack has a different goal and a different target audience.

In this particular case, it is the opinion of people in the field that customers would not mind setting up their VG for anything production, and in fact may rather control that versus having something magical happen under the covers, which is more appropriate for a testing/PoC kind of environment.

It is my opinion that _if_ this were agreed to by product management as a feature for the Foreman manifests to implement, we can consider it. Otherwise, this works as designed, though we perhaps need clearer steps (which Jacob has now provided) for how this is expected to be set up.

Comment 10 jliberma@redhat.com 2014-02-20 16:25:06 UTC
I hear what you are saying about helping novice users and it is a good point. 

The pilot customers and field engineers tell us they prefer to create their own volume group. They don't want an automated installer mucking with their storage. Of course, these are mostly large, experienced customers.

We already assume that customers can perform basic functions such as installing the OS, setting IP addresses, adding packages, and managing iptables. They must do those things to use Foreman. Volume group creation is another basic function we should expect novices to handle.

A well-documented procedure should help novice customers get started. If they cannot handle VG creation on their own, they should probably buy consulting services.

Here are steps to create a cinder-volumes VG on a local partition, an iSCSI target, or a loopback device. I hope this helps with your testing.

# local partition (requires an unused disk or partition)
        pvcreate -yv -ff /dev/sdb
        vgcreate cinder-volumes /dev/sdb

# iSCSI target (requires an initiator and target)
        iscsiadm -m discovery -t st -p 172.31.143.200
        iscsiadm -m node -l
        partprobe -s
        pvcreate -yv -ff /dev/sdb
        vgcreate cinder-volumes /dev/sdb

# loopback device (PoC only, requires free space)
        truncate --size 5G /root/cinder-volumes
        losetup -fv /root/cinder-volumes
        # losetup prints the loop device it attached, e.g. /dev/loop0
        pvcreate -yv -ff /dev/loop0
        vgcreate cinder-volumes /dev/loop0
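As a convenience, the pre-flight check these steps imply can be scripted. The helper below is a hypothetical sketch (not part of Foreman or Packstack) that reports whether the cinder-volumes VG already exists before the host is added to the LVM Block Storage group; the VGS_CMD override is only there so the logic can be exercised on a machine without LVM.

```shell
# Hypothetical pre-flight check: report whether the cinder-volumes VG
# exists. VGS_CMD defaults to the real vgs command but can be overridden
# for testing on hosts without LVM tools installed.
has_cinder_volumes() {
    # `vgs --noheadings -o vg_name` prints one VG name per line
    ${VGS_CMD:-vgs} --noheadings -o vg_name 2>/dev/null |
        awk '$1 == "cinder-volumes" { found = 1 } END { exit !found }'
}

if has_cinder_volumes; then
    echo "cinder-volumes VG present: host is ready for the host group"
else
    echo "cinder-volumes VG missing: create it first using one of the methods above"
fi
```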

Comment 11 Jason Guiditta 2014-02-25 22:35:12 UTC
I have added those examples to the rdo doc here:
http://openstack.redhat.com/Deploying_RDO_using_Foreman#Quick_volume_group_creation_for_testing

Comment 12 Jiri Stransky 2014-06-06 17:13:08 UTC
If I understand the problem correctly, the BZ is not about the LVM backend not working; the problem is that if the machine doesn't already contain a cinder-volumes VG, Foreman won't create a PoC/testing VG (a VG backed by a loopback file, like the one Packstack creates). I'll change the BZ title so that it's more descriptive.


By the way, for testing and PoC VG creation we have a script which does the same thing as Packstack does, so that people don't have to figure it out by themselves (I remember I pointed someone to it a while ago):

https://github.com/redhat-openstack/astapor/blob/master/bin/cinder-testing-volume.sh

You can wget the script to a machine and run it like:

bash cinder-testing-volume.sh 5G

to create a 5 gigabyte cinder-volumes VG for PoC/testing.
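The loopback technique such a script relies on can be sketched roughly as follows. This is a hypothetical illustration, not a copy of the astapor script: the backing-file path and the /dev/loop0 device name are assumptions, and DRY_RUN=1 merely prints the privileged commands so the flow can be followed without root.

```shell
# Rough sketch of creating a loopback-backed PoC cinder-volumes VG
# (hypothetical, not the actual cinder-testing-volume.sh). With
# DRY_RUN=1 the privileged commands are printed instead of executed.
create_poc_vg() {
    local size="${1:-5G}"
    local backing="${2:-/var/lib/cinder/cinder-volumes}"

    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

    # Sparse backing file of the requested size
    run truncate --size "$size" "$backing"
    # losetup --find --show attaches the file and prints the chosen
    # loop device, e.g. /dev/loop0 (assumed below)
    run losetup --find --show "$backing"
    run pvcreate /dev/loop0
    run vgcreate cinder-volumes /dev/loop0
}

# Dry run: show what would happen for a 5 GB PoC volume group
DRY_RUN=1 create_poc_vg 5G /tmp/cinder-volumes
```

Note that the VG does not survive a reboot unless the loop device is re-attached at boot, which is one more reason this setup is suitable only for testing/PoC.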

Comment 17 Mike Orazi 2014-09-08 16:49:19 UTC
This is covered in rhel-osp-installer.

