Bug 1696860 - containerized ceph-volume batch fails while waiting for records from udev [NEEDINFO]
Summary: containerized ceph-volume batch fails while waiting for records from udev
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Container
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 4.0
Assignee: Dimitri Savineau
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks: 1578730
 
Reported: 2019-04-05 19:02 UTC by John Fulton
Modified: 2019-04-16 14:09 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Flags: dsavinea: needinfo? (johfulto)


Attachments
ceph-volume.log (deleted) (2019-04-05 19:02 UTC, John Fulton)

Description John Fulton 2019-04-05 19:02:47 UTC
Created attachment 1552681 [details]
ceph-volume.log

Per a report from a consultant deploying OSP13/RHCS 3.2 with bluestore, ceph-volume batch, as executed in a container by ceph-ansible, hit the symptoms of bug 1676612. See the attached ceph-volume log (and the snippet in [2]).

The fixed-in version of bug 1676612, lvm2-2.02.184-1.el7, is not yet included in the latest rhceph/rhceph-3-rhel7 container image [1], currently tag 3-23.

Could a new rhceph/rhceph-3-rhel7 container be released containing lvm2-2.02.184-1.el7 or newer? 

  John

[1] https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/rhceph/rhceph-3-rhel7
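
For reference, a quick way to confirm which lvm2 build a given tag of [1] ships is to query the rpm database inside the image. A minimal sketch, assuming podman is available on the host (docker run with the same arguments should behave the same), using the current 3-23 tag:

  # Query the installed lvm2 package without starting any ceph daemon
  podman run --rm --entrypoint rpm \
      registry.access.redhat.com/rhceph/rhceph-3-rhel7:3-23 -q lvm2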

[2] 
[root@overcloud-computehci-0 heat-admin]# cat /var/log/ceph/ceph-volume.log 
[2019-04-04 19:18:33,932][ceph_volume.main][INFO  ] Running command: ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/nvme0n1 /dev/nvme1n1 --report --format=json
[2019-04-04 19:18:33,949][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2019-04-04 19:26:46,782][ceph_volume.process][INFO  ] stderr WARNING: Device /dev/nvme1n1 not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO  ] stderr WARNING: Device /dev/sda not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO  ] stderr WARNING: Device /dev/sdq not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO  ] stderr WARNING: Device /dev/nvme0n1 not initialized in udev database even after waiting 10000000 microseconds.
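
The udev wait itself can be reproduced by re-issuing the lvs query that ceph-volume logs above from inside the image. A rough sketch, assuming docker and that running the image privileged with /dev bind-mounted approximates how ceph-ansible invokes ceph-volume (the exact mounts ceph-ansible uses are not shown here):

  # Re-run the lvs query from the log and watch for the
  # "not initialized in udev database" warnings
  docker run --rm --privileged -v /dev:/dev --entrypoint /usr/sbin/lvs \
      registry.access.redhat.com/rhceph/rhceph-3-rhel7:3-23 \
      --noheadings --readonly --separator=";" \
      -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size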

Comment 1 John Fulton 2019-04-10 19:18:20 UTC
Though this bug and bug 1666822 are about different problems in ceph-volume, if you're using OpenStack you can use the workaround from bug 1666822 to work around this bug too.

See the following attachment from bug 1666822:

 https://bugzilla.redhat.com/attachment.cgi?id=1549277

