
Bug 1359537

Summary: [ceph-ansible] : ubuntu - purge-cluster fails in task 'check for anything running ceph' with error - stderr: grep: write error:
Product: Red Hat Storage Console
Reporter: Rachana Patel <racpatel>
Component: ceph-ansible
Assignee: Gregory Meno <gmeno>
Status: CLOSED DUPLICATE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, nthomas, sankarshan
Target Milestone: ---
Target Release: 2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-07-25 14:16:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Rachana Patel 2016-07-24 19:43:06 UTC
Description of problem:
=======================
On an Ubuntu cluster, purge-cluster fails with the error - stderr: grep: write error:



Version-Release number of selected component (if applicable):
=============================================================



How reproducible:
================
always


Steps to Reproduce:
===================
1. Created an Ubuntu cluster with one MON and 3 OSD nodes
2. Did some I/O using rgw and rados
3. Purged the cluster using the below command:-

[root@magna044 ceph-ansible]#  ansible-playbook purge-cluster.yml -vv -i  /etc/ansible/ubuntu  --extra-vars '{"ceph_stable": true, "ceph_stable_rh_storage_cdn_install": true , "ceph_stable_rh_storage": true, "monitor_interface": "eth0", "journal_collocation": true, "devices": ["/dev/sdb", "/dev/sdc", "/dev/sdd"], "journal_size": 100, "public_network": "x.x.x.x/21","cephx": true, "fetch_directory": "~/ubu",  "calamari":true, "radosgw_civetweb_port": "8080" }' -u ubuntu

Actual results:
===============

TASK: [check for anything running ceph] *************************************** 
<magna028> REMOTE_MODULE command -  #USE_SHELL
failed: [magna028] => {"changed": true, "cmd": "ps awux | grep -v grep | grep -q -- ceph-", "delta": "0:00:00.024376", "end": "2016-07-16 14:41:32.131136", "failed": true, "failed_when_result": true, "rc": 0, "start": "2016-07-16 14:41:32.106760", "stdout_lines": [], "warnings": []}
stderr: grep: write error: 

FATAL: all hosts have already failed -- aborting
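The "grep: write error" message is a side effect of using `grep -q` in the middle of a pipeline, separate from the check itself failing (rc 0 here means a ceph- process was still found). A minimal sketch of the mechanism, assuming GNU grep and a POSIX shell; the `trap '' PIPE` is a stand-in for whatever leaves SIGPIPE ignored in the Ansible remote-execution environment:

```shell
# `grep -q` exits as soon as it sees the first match and closes its end of
# the pipe. The upstream `grep -v grep` then fails its next write with EPIPE.
# In a normal interactive shell it would die silently on SIGPIPE; with
# SIGPIPE ignored, GNU grep instead reports a write error on stderr, which
# is the "grep: write error:" seen in the task output.
( trap '' PIPE
  yes ceph-osd | grep -v grep | grep -q -- ceph- )
echo "pipeline exit status: $?"   # status of grep -q: 0, a match was found
```

The pipeline's exit status is that of the final `grep -q`, so the stderr noise does not change the result the task acts on.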



Additional info:
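A plausible workaround, offered as an assumption rather than the fix actually applied upstream for bug 1339576: drop `-q` from the final grep and discard its output instead, so it reads its whole input and the upstream grep never writes into a closed pipe.

```shell
# Hypothetical rewrite of the failing check: without -q, the final grep
# consumes all of its input, so `grep -v grep` cannot hit EPIPE mid-write.
ps awux | grep -v grep | grep -- ceph- > /dev/null
echo "rc=$?"   # 0 if any ceph- process is running, 1 otherwise
```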

Comment 3 Rachana Patel 2016-07-24 19:48:43 UTC


Version-Release number of selected component (if applicable):
=============================================================

ceph-ansible-1.0.5-24.el7scon.noarch

Comment 4 Alfredo Deza 2016-07-25 14:16:29 UTC

*** This bug has been marked as a duplicate of bug 1339576 ***