Bug 1512594 - [ceph-ansible] [ceph-container] : automatic prepare osd disk failing - non-zero return code
Summary: [ceph-ansible] [ceph-container] : automatic prepare osd disk failing - non-zero return code
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.0
Assignee: leseb
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-13 15:34 UTC by Vasishta
Modified: 2017-12-05 23:50 UTC
CC: 11 users

Fixed In Version: RHEL: ceph-ansible-3.0.12-1.el7cp Ubuntu: ceph-ansible_3.0.12-2redhat1 rhceph:ceph-3.0-rhel-7-docker-candidate-78841-20171115231319
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-05 23:50:29 UTC


Attachments
File contains contents of inventory file, all.yml, Failure message snippet, ansible-playbook log (deleted)
2017-11-13 15:34 UTC, Vasishta


Links:
- Red Hat Product Errata RHBA-2017:3387 (normal, SHIPPED_LIVE): Red Hat Ceph Storage 3.0 bug fix and enhancement update, last updated 2017-12-06 03:03:45 UTC
- GitHub ceph/ceph-ansible pull 2170, last updated 2017-11-15 15:17:50 UTC

Description Vasishta 2017-11-13 15:34:34 UTC
Created attachment 1351609
File contains contents of inventory file, all.yml, Failure message snippet, ansible-playbook log

Description of problem:
Containerized OSD disk preparation fails when osd_auto_discovery is set to true.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.10-2.el7cp.noarch
ansible-2.4.1.0-1.el7ae.noarch
ceph-3.0-rhel-7-docker-candidate-69814-20171110190200

How reproducible:
1/1

Steps to Reproduce:
1. Configure ceph-ansible to initialize a containerized cluster with at least one OSD node for which osd_auto_discovery is set to true (a reproduction sketch follows these steps).
2. Run ansible-playbook.
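
For reference, a minimal configuration sketch matching these steps, assuming the default ceph-ansible 3.0 file layout. The actual inventory and all.yml used here are in the attachment and may differ; osd_scenario: collocated is inferred from the failing task name below.

# group_vars/all.yml (assumed layout, not copied from the attachment)
containerized_deployment: true

# group_vars/osds.yml
osd_scenario: collocated
osd_auto_discovery: true

# run the containerized playbook shipped with ceph-ansible
$ ansible-playbook -i hosts site-docker.yml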


Actual results:

TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated]
failed: [magna012] (item=/dev/sdd) => {"changed": true, "cmd": "docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-magna012-sdd -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=a_1 -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/sdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=5120 brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.0-rhel-7-docker-candidate-69814-20171110190200", ... "failed": true, ...
 
(Please refer to the attachment for a larger log snippet.)

Expected results:
OSD disk preparation must succeed when osd_auto_discovery is set to true.

Additional info:
Please let me know if anything has been missed

Comment 7 leseb 2017-11-13 16:11:42 UTC
Please send me the output of "parted --script /dev/sdd print"

Thanks.

Comment 8 Vasishta 2017-11-13 16:19:33 UTC
$ sudo parted --script /dev/sdd print
Error: /dev/sdd: unrecognised disk label
Model: ATA Hitachi HUA72201 (scsi)
Disk /dev/sdd: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
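
For context, "unrecognised disk label" together with "Partition Table: unknown" means /dev/sdd is a blank disk with no partition table, and parted exits non-zero for such disks. A hypothetical shell check (not part of ceph-ansible) to spot which candidate devices are in that state, assuming the OSD disks are /dev/sdb through /dev/sdd:

# print every device whose label parted cannot read (non-zero exit status)
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    sudo parted --script "$dev" print > /dev/null 2>&1 || echo "$dev: no recognised disk label"
done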

Comment 13 leseb 2017-11-15 09:22:38 UTC
@Ken, there is a fix for both ceph-ansible AND ceph-container, so QE should also test with the latest container image.
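
For the retest, a sketch of pinning the fixed image via the standard ceph-ansible container variables; the registry and image name are taken from the docker run line in the log above, and the tag is the fixed build from this bug:

# group_vars/all.yml
ceph_docker_registry: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
ceph_docker_image: rhceph
ceph_docker_image_tag: ceph-3.0-rhel-7-docker-candidate-78841-20171115231319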

Comment 17 Vasishta 2017-11-16 17:13:46 UTC
Working fine with ceph-3.0-rhel-7-docker-candidate-78841-20171115231319 and ceph-ansible-3.0.12-1.el7cp.noarch.

Moving to VERIFIED state.

Comment 20 errata-xmlrpc 2017-12-05 23:50:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

