Bug 1360937 - [ceph-ansible] : purge cluster fails in task 'check for a device list' when osd is Directory
Summary: [ceph-ansible] : purge cluster fails in task 'check for a device list' when osd is Directory
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3
Assignee: Gregory Meno
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2016-07-27 22:51 UTC by Rachana Patel
Modified: 2017-03-03 17:13 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-03 17:08:45 UTC




Links:
Red Hat Bugzilla 1361228

Internal Links: 1361228

Description Rachana Patel 2016-07-27 22:51:47 UTC
Description of problem:
======================
The purge-cluster playbook fails in the task 'check for a device list' when the OSDs are backed by directories (osd_directory: true).


Version-Release number of selected component (if applicable):
==============================================================
10.2.2-29.el7cp.x86_64
ceph-ansible-1.0.5-31.el7scon.noarch


How reproducible:
================
always


Steps to Reproduce:
==================
1. Created a cluster with one MON and 3 OSD nodes.
Values set in the osds group_vars file:

crush_location: false
osd_crush_location: "'root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}'"
osd_directory: true
osd_directories:
  - /var/lib/ceph/osd/mydir1
  - /var/lib/ceph/osd/mydir2
2. Did some I/O using rados.
3. Purged the cluster using:
- ansible-playbook purge-cluster.yml -u c1 -i /etc/ansible/31 --verbose

Actual results:
===============
[root@magna044 ceph-ansible]#  ansible-playbook purge-cluster.yml -u c1 -i /etc/ansible/31 --verbose
Are you sure you want to purge the cluster? [no]: yes

PLAY [confirm whether user really meant to purge the cluster] ***************** 

GATHERING FACTS *************************************************************** 
ok: [localhost]

TASK: [exit playbook, if user did not mean to purge cluster] ****************** 
skipping: [localhost]

PLAY [stop ceph cluster] ****************************************************** 

GATHERING FACTS *************************************************************** 
ok: [magna051]
ok: [magna057]
ok: [magna078]

TASK: [check for a device list] *********************************************** 
fatal: [magna051] => error while evaluating conditional: osd_group_name in group_names and devices is not defined and osd_auto_discovery
fatal: [magna057] => error while evaluating conditional: osd_group_name in group_names and devices is not defined and osd_auto_discovery
fatal: [magna078] => error while evaluating conditional: osd_group_name in group_names and devices is not defined and osd_auto_discovery

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/purge-cluster.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=0   
magna051                   : ok=1    changed=0    unreachable=1    failed=0   
magna057                   : ok=1    changed=0    unreachable=1    failed=0   
magna078                   : ok=1    changed=0    unreachable=1    failed=0   


Expected results:
================
purge-cluster.yml purges the cluster successfully when the OSDs are backed by directories.

Additional info:
================
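The "error while evaluating conditional" failure suggests the when: expression on the 'check for a device list' task in purge-cluster.yml cannot be evaluated at all: with osd_directory: true, neither devices nor osd_auto_discovery is ever defined, so Jinja2 aborts instead of simply skipping the task. A minimal sketch of a guarded version of that conditional, assuming the task is a simple fail guard (the shipped task body may differ; only the task name and conditional text come from the error output above):

# Hypothetical reconstruction -- the module and message are assumptions.
- name: check for a device list
  fail:
    msg: "no device list given and OSD auto discovery is not enabled"
  when:
    - osd_group_name in group_names
    - devices is not defined
    # default(false) keeps the expression evaluable when the variable is
    # absent, as it is for directory-backed OSDs
    - osd_auto_discovery | default(false)

With the default() filter the task would be skipped on directory-backed OSD nodes instead of failing the whole play.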

Comment 4 Ken Dreyer (Red Hat) 2017-03-03 17:08:45 UTC
purge-cluster does not currently support OSD directories.

If purging OSD directories is a priority from product management, please re-open.
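For reference, a minimal sketch of the kind of task such support would need, reusing the osd_directories list from the deploy-time group_vars. This is an illustration only, not part of any shipped ceph-ansible release:

# Hypothetical purge task; assumes the ceph-osd daemons were already
# stopped earlier in the play.
- name: purge osd directories
  file:
    path: "{{ item }}"
    state: absent
  with_items: "{{ osd_directories }}"
  when: osd_directory | default(false)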

