Bug 1056383 - Foreman Controller's swift proxy no longer runs per-storage node swift-ring-builder commands
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 4.0
Assignee: Crag Wolfe
QA Contact: nlevinki
URL:
Whiteboard:
Duplicates: 1064037 (view as bug list)
Depends On:
Blocks: 1064037
 
Reported: 2014-01-22 05:05 UTC by Crag Wolfe
Modified: 2015-06-01 16:29 UTC
CC List: 13 users

Fixed In Version: openstack-foreman-installer-1.0.4-1.el6ost
Doc Type: Bug Fix
Doc Text:
Previously, when using the OpenStack Foreman Installer, swift-proxy on the controller (OpenStack Networking or Compute networking) did not get configured correctly: it should point to the object, account, and container services on the Object Storage nodes, but did not. With this fix, swift is fully usable, and all uploaded objects are also viewable via the Horizon UI.
Clone Of:
Environment:
Last Closed: 2014-03-04 20:14:35 UTC


Attachments: (none)


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0213 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-03-05 01:11:55 UTC

Description Crag Wolfe 2014-01-22 05:05:53 UTC
Description of problem:

The swift proxy on Controller (Neutron) or Controller (Nova Network) does not get configured correctly.  It should "point" to the object, account and container services on the Swift Storage Nodes.

Version-Release number of selected component (if applicable):
openstack-foreman-installer-1.0.3-1.el6ost.noarch

How reproducible:
Every time.

Steps to Reproduce:
1.  Assign a couple of nodes to the Swift Storage host group and edit their parameters as appropriate.  Puppet agent should run on all hosts.
2.  Assign another host as a Controller.  Run the puppet agent.

Actual results:

Swift-proxy is installed, but has no storage node "backends" configured.

Expected results:

Swift-proxy is installed and is able to upload files into containers, etc.

Additional info:

Comment 2 Crag Wolfe 2014-01-22 05:17:15 UTC
To work around the issue, do something like the following on the controller (assuming two storage nodes with /srv/node/device1):

  swift-init proxy start
  cd /etc/swift
  swift-ring-builder account.builder add z1-192.168.200.30:6002/device1 100
  swift-ring-builder container.builder add z1-192.168.200.30:6001/device1 100
  swift-ring-builder object.builder add z1-192.168.200.30:6000/device1 100
  swift-ring-builder account.builder add z1-192.168.200.40:6002/device1 100
  swift-ring-builder container.builder add z1-192.168.200.40:6001/device1 100
  swift-ring-builder object.builder add z1-192.168.200.40:6000/device1 100
  swift-ring-builder account.builder rebalance
  swift-ring-builder container.builder rebalance
  swift-ring-builder object.builder rebalance

To test the workaround, source the openstack admin .rc file, then:

  swift upload newcontainer foo.txt  # assuming foo.txt is a file in local dir
  swift list newcontainer
  swift download newcontainer foo.txt

Try some more files, and verify that the hashed data files show up under
/srv/node/device1.
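
A quick way to check (a sketch; the object paths are hash-based, so just
confirm that new .data files appear after each upload):

  find /srv/node/device1/objects -type f -name '*.data'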

Comment 3 Pete Zaitcev 2014-01-22 23:00:15 UTC
In addition to running swift-ring-builder as Crag indicated in comment #2,
it is necessary to distribute *.ring.gz files to all nodes. I do not know
if this is feasible in Puppet (invoked by Foreman), but some kind of
component has to invoke rsync or scp after the swift-ring-builder run.
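
As a manual stopgap (illustrative only, assuming root ssh access and the two
storage-node IPs from comment #2), something like the following after the
rebalance would distribute the rings:

  cd /etc/swift
  for ip in 192.168.200.30 192.168.200.40; do
    scp account.ring.gz container.ring.gz object.ring.gz root@$ip:/etc/swift/
  done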

Comment 5 Jason Guiditta 2014-02-11 19:47:24 UTC
Crag posted here:
https://github.com/redhat-openstack/astapor/pull/114

Comment 6 Crag Wolfe 2014-02-11 23:14:02 UTC
How to Test: Background Info

The swift proxy lives on a controller (nova-network or neutron) node.
When the swift proxy is set up for the first time, the swift ring files
are built and an rsync server is set up that the swift storage nodes
(members of the Swift Storage Node hostgroup) access. Intra-swift
communication should take place on its own network, separate from the
openstack public, admin or internal networks. For testing you can
co-mingle the networks if you must. You must have at least three storage
nodes, however.

How to Test: Details

Step 0: The host group parameters.

On the Controller (Nova Network or Neutron) Host Group, the parameter
$swift_ringserver_ip is the IP address on the controller that provides
rsync access to the storage nodes. $swift_storage_ips is an array
parameter of the IPs of the storage nodes themselves.
$swift_storage_device is the device exposed by each storage node
(whatever lives under /srv/node). If you are using the Swift Storage
Node host group to manage the swift storage hosts,
$swift_storage_device should always be set on the controller host
group as "device1". For this setup, we have the constraint that this
device name is the same across all storage nodes, and that there is
only one device. Here are the values I used on the controller:

swift_ringserver_ip: 192.168.203.1
swift_storage_ips: ['192.168.203.2', '192.168.203.3', '192.168.203.4']
swift_storage_device: device1

The value for $swift_shared_secret doesn't matter, as long as it is
set to be the same in the Swift Storage Node host group.

On the Swift Storage Node, $swift_all_ips includes all the storage
node IPs and the swift ring server IP (whatever
$swift_ringserver_ip was in the controller). $swift_local_interface
is the interface that "owns" the swift storage ip on a given storage
node. $swift_ring_server is set to the same value as
$swift_ringserver_ip in the Controller. The value for
$swift_ext4_device does not matter if $swift_loopback is true.
Otherwise, it should be a device where an ext4 filesystem already exists.
Note that running the puppet agent results in $swift_ext4_device
being mounted under /srv/node/device1, just as in the loopback case. Finally,
make sure $swift_shared_secret is consistent with the value in the controller.
The only parameter I needed to override in my testing (in the
swift_loopback=true case) was:

swift_local_interface:  eth5
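
For reference, a complete set of storage-node values consistent with the
controller example above might look like this (illustrative, not from an
actual deployment; the shared secret is whatever you set on the controller):

  swift_all_ips: ['192.168.203.1', '192.168.203.2', '192.168.203.3', '192.168.203.4']
  swift_ring_server: 192.168.203.1
  swift_local_interface: eth5
  swift_shared_secret: <same value as on the controller>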

Step 1: Run 'puppet agent -tvd --trace' on a node assigned to the
controller group. This should be a 'fresh' host that has not already
been deployed as a controller.

Verify that the ring files were built on the controller:

# cd /etc/swift
# swift-ring-builder account.builder (and/or object.builder, container.builder)
account.builder, build version 3
262144 partitions, 3.000000 replicas, 1 regions, 3 zones, 3 devices, 0.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       0     1   192.168.203.2  6002   192.168.203.2              6002   device1 100.00     262144    0.00
             1       0     2   192.168.203.3  6002   192.168.203.3              6002   device1 100.00     262144    0.00
             2       0     3   192.168.203.4  6002   192.168.203.4              6002   device1 100.00     262144    0.00

Step 2: Run 'puppet agent -tvd --trace' on the swift storage nodes.

Verify that they rsync'ed the ring files:

# ls /etc/swift
account.ring.gz      container.ring.gz      object.ring.gz      swift.conf
account-server       container-server       object-server
account-server.conf  container-server.conf  object-server.conf

Step 3: Try out the swift client.

Get openstack api access -- i.e., download the admin-openrc.sh file
from horizon and source it.
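
For example (download path assumed):

  source ~/admin-openrc.sh
  env | grep OS_   # should show OS_AUTH_URL, OS_USERNAME, etc.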

You should be able to do things like:

[root@j3a1 swift]# swift stat
   Account: AUTH_9b66130293124cb3af4300e7c39bf538
Containers: 0
   Objects: 0
     Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1391901762.58906
X-Put-Timestamp: 1391901762.58906
[root@j3a1 swift]# swift post container1
[root@j3a1 swift]# swift post container2
[root@j3a1 swift]# swift upload container1 /etc/resolv.conf
etc/resolv.conf
[root@j3a1 swift]# swift upload container1 /etc/hosts
etc/hosts
[root@j3a1 swift]# swift list container1
etc/resolv.conf

########### NOTE THINGS ARE NOT NECESSARILY LISTED INSTANTANEOUSLY! #####

[root@j3a1 swift]# swift list container1
etc/hosts
etc/resolv.conf

[root@j3a1 tmp]# echo foo > foo.txt
[root@j3a1 tmp]# swift upload container2 foo.txt
[root@j3a1 tmp]# swift download container1 etc/resolv.conf
 etc/resolv.conf [headers 0.514s, total 0.514s, 0.000 MB/s]
[root@j3a1 tmp]# swift list
container1
container2
[root@j3a1 tmp]# swift list container1
etc/hosts
etc/resolv.conf
[root@j3a1 tmp]# swift list container2
foo.txt
[root@j3a1 tmp]# swift stat
    Account: AUTH_9b66130293124cb3af4300e7c39bf538
Containers: 2
   Objects: 3
     Bytes: 365
Accept-Ranges: bytes
X-Timestamp: 1391901769.79616
Content-Type: text/plain; charset=utf-8

Also, it would be a good idea to verify that files are getting created
under /srv/node/device1 on all the storage nodes.
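
For example (a sketch assuming root ssh access, using the storage-node IPs
from Step 0):

  for ip in 192.168.203.2 192.168.203.3 192.168.203.4; do
    ssh root@$ip 'ls /srv/node/device1/objects'
  done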

Comment 8 Mike Orazi 2014-02-11 23:37:57 UTC
*** Bug 1064037 has been marked as a duplicate of this bug. ***

Comment 9 Jason Guiditta 2014-02-12 15:32:25 UTC
Merged to master

Comment 11 Ami Jeain 2014-02-24 14:02:05 UTC
swift (storage) related

Comment 12 Jason Guiditta 2014-02-25 14:50:40 UTC
The doc text is incorrect: it describes the workaround from _before_ this BZ was fixed.  Comment #6 describes the correct usage with this fix.

Comment 14 Jason Guiditta 2014-02-26 23:55:01 UTC
Crag, can you update the doc text on this to be accurate based on what you changed?

Comment 15 nlevinki 2014-03-02 14:54:14 UTC
verified on image
[root@dhcp163-85 ~]# rpm -qa |grep openstack
openstack-nova-console-2013.2.2-2.el6ost.noarch
openstack-heat-api-cloudwatch-2013.2.2-1.el6ost.noarch
openstack-heat-api-2013.2.2-1.el6ost.noarch
openstack-ceilometer-central-2013.2.2-1.el6ost.noarch
openstack-swift-1.10.0-3.el6ost.noarch
openstack-ceilometer-api-2013.2.2-1.el6ost.noarch
openstack-ceilometer-collector-2013.2.2-1.el6ost.noarch
openstack-swift-plugin-swift3-1.0.0-0.20120711git.1.el6ost.noarch
openstack-nova-common-2013.2.2-2.el6ost.noarch
openstack-glance-2013.2.2-2.el6ost.noarch
openstack-heat-engine-2013.2.2-1.el6ost.noarch
openstack-ceilometer-common-2013.2.2-1.el6ost.noarch
openstack-heat-api-cfn-2013.2.2-1.el6ost.noarch
openstack-nova-conductor-2013.2.2-2.el6ost.noarch
openstack-nova-novncproxy-2013.2.2-2.el6ost.noarch
openstack-keystone-2013.2.2-1.el6ost.noarch
openstack-swift-proxy-1.10.0-3.el6ost.noarch
openstack-dashboard-theme-2013.2.2-1.el6ost.noarch
redhat-access-plugin-openstack-4.0.0-0.el6ost.noarch
openstack-cinder-2013.2.2-1.el6ost.noarch
openstack-nova-cert-2013.2.2-2.el6ost.noarch
openstack-utils-2013.2-3.el6ost.noarch
openstack-heat-common-2013.2.2-1.el6ost.noarch
openstack-nova-scheduler-2013.2.2-2.el6ost.noarch
python-django-openstack-auth-1.1.2-2.el6ost.noarch
openstack-dashboard-2013.2.2-1.el6ost.noarch
openstack-nova-api-2013.2.2-2.el6ost.noarch
[root@dhcp163-85 ~]#

[root@dhcp163-85 ~]# swift stat
   Account: AUTH_9d0d39a89b9d4481b6b49fb6092a8016
Containers: 2
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Timestamp: 1393771053.99637
Content-Type: text/plain; charset=utf-8
[root@dhcp163-85 ~]# swift list
container1
container2
[root@dhcp163-85 ~]# swift list container1
[root@dhcp163-85 ~]# swift list container2
python
yacht_book
[root@dhcp163-85 ~]# swift list container1
xtremio

Comment 17 errata-xmlrpc 2014-03-04 20:14:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0213.html

