Bug 986321 - [RHS-RHOS] Cinder volume-create fails during rebalance
Summary: [RHS-RHOS] Cinder volume-create fails during rebalance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: shishir gowda
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-19 12:08 UTC by Anush Shetty
Modified: 2013-12-09 01:36 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.4.0.20rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
virt rhos cinder integration
Last Closed: 2013-09-23 22:35:55 UTC
Target Upstream Version:



Description Anush Shetty 2013-07-19 12:08:25 UTC
Description of problem: Creating Cinder volumes fails while a rebalance is in progress on the backing RHS volume; volume-create returns an error.

The Cinder volumes are hosted on an RHS (GlusterFS) volume.

Version-Release number of selected component (if applicable):

Cinder:
# rpm -qa | grep cinder
python-cinder-2013.1.2-3.el6ost.noarch
openstack-cinder-2013.1.2-3.el6ost.noarch
python-cinderclient-1.0.4-1.el6ost.noarch

RHS:glusterfs-3.3.0.11rhs-1.el6rhs.x86_64


How reproducible: Consistent


Steps to Reproduce:
1. Create an RHS volume

2. Configure cinder for glusterfs
     # openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
   # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
   # openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

3. Create /etc/cinder/shares.conf 
   # cat /etc/cinder/shares.conf
      10.70.37.66:cinder-vol

4. Restart the openstack-cinder services, which will mount the RHS cinder volume (example commands for steps 4 and 5 are sketched after this list)

5. Add bricks to the RHS volume and start a rebalance

6. During the rebalance, try creating a cinder volume:
   nova volume-create --display-name vol7 10
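
For reference, a minimal sketch of the commands behind steps 4 and 5; the service names assume the RHOS packaging on RHEL 6, and the new brick paths below are illustrative, not taken from this setup:

   Restart the Cinder services so the share from /etc/cinder/shares.conf gets mounted:
   # service openstack-cinder-volume restart
   # service openstack-cinder-api restart
   # service openstack-cinder-scheduler restart

   Add a pair of bricks (the volume is replica 2, so bricks are added in pairs) and rebalance:
   # gluster volume add-brick cinder-vol 10.70.37.66:/brick7/s1 10.70.37.173:/brick7/s1
   # gluster volume rebalance cinder-vol start
   # gluster volume rebalance cinder-vol status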

Actual results:

Creating cinder volume failed

Expected results:

Cinder volume creation should succeed while a rebalance is in progress.

Additional info:

# df -h
Filesystem            Size  Used Avail Use% Mounted on

10.70.37.66:glance-vol
                      300G  6.3G  294G   3% /var/lib/glance/images
10.70.37.66:cinder-vol
                      100G   15G   86G  15% /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
10.70.37.66:cinder-vol
                      100G   15G   86G  15% /var/lib/nova/mnt/cf55327cba40506e44b37f45f55af5e7



[root@rhs ~]# gluster volume rebalance cinder-vol status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            5            0      completed
                             10.70.37.66                0            0            5            0      completed
                             10.70.37.71                0            0            5            0      completed
                            10.70.37.158                0            0            5            0      completed




[root@rhs ~]# gluster volume status cinder-vol
Status of volume: cinder-vol
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.66:/brick6/s1				24023	Y	17578
Brick 10.70.37.173:/brick6/s1				24024	Y	23960
Brick 10.70.37.66:/brick5/s1				24024	Y	18716
Brick 10.70.37.173:/brick5/s1				24025	Y	25240
NFS Server on localhost					38467	Y	25246
Self-heal Daemon on localhost				N/A	Y	25252
NFS Server on 10.70.37.71				38467	Y	12847
Self-heal Daemon on 10.70.37.71				N/A	Y	12853
NFS Server on 10.70.37.158				38467	Y	2849
Self-heal Daemon on 10.70.37.158			N/A	Y	2855
NFS Server on 10.70.37.66				38467	Y	18722
Self-heal Daemon on 10.70.37.66				N/A	Y	18728



[root@rhs ~]# gluster volume info cinder-vol
 
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 19f5abf1-5739-417a-bcff-e56d0a5baa74
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.66:/brick6/s1
Brick2: 10.70.37.173:/brick6/s1
Brick3: 10.70.37.66:/brick5/s1
Brick4: 10.70.37.173:/brick5/s1
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on


# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 0eb5dc97-aa33-48e1-9d92-6b00aacfb5fa | available |     vol3     |  10  |     None    |  false   |                                      |
| 48d0df07-ec35-4842-abea-a7b95f04a62a |   error   |     vol7     |  10  |     None    |  false   |                                      |
| 5cdc0cf1-6ed3-4a3f-8428-a4605a23a183 | available |     vol4     |  10  |     None    |  false   |                                      |
| 6e2a039d-003e-4a0f-a35d-58eed8650d58 | available |     vol1     |  10  |     None    |  false   |                                      |
| a0d74942-4957-40be-b34f-d83712feb90b | available |     vol2     |  10  |     None    |  false   |                                      |
| bc28a417-cf0f-4f87-b619-7a22449c5167 |   error   |     vol5     |  10  |     None    |  false   |                                      |
| c894392a-1df9-4b46-a71c-9ddaf017db7b |   error   |     vol6     |  10  |     None    |  false   |                                      |
| ed4e8700-fde5-4324-8300-b17942abe06e |   error   |     vol5     |  10  |     None    |  false   |                                      |
| f1ea0af9-cef1-4024-8e47-1f17d7de35e4 |   in-use  |      1       |  15  |     None    |  false   | dad3f2eb-2219-4a3b-b065-89dc21cf59a6 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
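
A failed create can be inspected by looking up one of the error entries above, for example:

   # cinder show 48d0df07-ec35-4842-abea-a7b95f04a62a

The detailed failure reason typically appears in /var/log/cinder/volume.log on the Cinder host.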

Comment 3 Anush Shetty 2013-08-19 12:23:30 UTC
Tried this case again with glusterfs-3.4.0.20rhs-2.el6rhs.x86_64. Didn't see this issue again.

Comment 4 Amar Tumballi 2013-08-19 12:33:16 UTC
Marking ON_QA as per comment #3

Comment 5 Anush Shetty 2013-08-19 12:42:59 UTC
Verified with glusterfs-3.4.0.20rhs-2.el6rhs.x86_64

Comment 6 Scott Haines 2013-09-23 22:35:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

