Bug 1520004 - per host CephAnsibleDisksConfig are ignored
Summary: per host CephAnsibleDisksConfig are ignored
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 12.0 (Pike)
Assignee: Giulio Fidente
QA Contact: Yogev Rabl
URL:
Whiteboard:
Duplicates: 1600856
Depends On:
Blocks:
 
Reported: 2017-12-01 21:59 UTC by Gonéri Le Bouder
Modified: 2018-07-16 15:56 UTC
CC: 20 users

Fixed In Version: openstack-tripleo-heat-templates-7.0.3-20.el7ost, openstack-tripleo-common-7.6.3-9.el7ost
Doc Type: Known Issue
Doc Text:
It is only possible to deploy Ceph storage servers if their disk devices are homogeneous.
Clone Of:
Environment:
Last Closed: 2018-01-30 21:24:32 UTC


Attachments: None


Links
System ID Priority Status Summary Last Updated
Launchpad 1736707 None None None 2017-12-06 11:01:14 UTC
Red Hat Product Errata RHBA-2018:0253 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 director Bug Fix Advisory 2018-02-16 03:41:33 UTC
OpenStack gerrit 528283 None stable/pike: MERGED tripleo-common: Add json_parse and yaml_parse mistral expression functions (I9970abae47ca355861e37cdb5db0ab24d564b57a) 2018-01-09 17:55:25 UTC
OpenStack gerrit 528755 None stable/pike: MERGED tripleo-common: Consume NodeDataLookup in ceph-ansible (Ia23825aea938f6f9bcf536e35cad562a1b96c93b) 2018-01-09 17:55:15 UTC
OpenStack gerrit 528757 None stable/pike: NEW tripleo-heat-templates: Passes NodeDataLookup to ceph-ansible workflow (Ie7a9f10f0c821b8c642494a4d3933b2901f39d40) 2018-01-09 17:55:06 UTC

Description Gonéri Le Bouder 2017-12-01 21:59:57 UTC
Description of problem:

I have the following template to map my disks:

resource_registry:                                                                                                    
  OS::TripleO::CephStorageExtraConfigPre: ./overcloud/puppet/extraconfig/pre_deploy/per_node.yaml                     
                                                                                                                      
parameter_defaults:                                                                                                   
  NodeDataLookup: >                                                                                                   
    {                                                                                                                 
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {                                                                       
        "CephAnsibleDisksConfig": {                                                                                   
          "dedicated_devices": [                                                                                      
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",                                        
(...)                                      
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"                                         
          ],
          "devices": [                                                                                                
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "CephAnsibleDisksConfig": {
          "dedicated_devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
          ],
          "devices": [
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
(...)
            "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0"
          ],
          "osd_scenario": "non-collocated"
        }
      }
    }



And the following extra template:

resource_registry:
    OS::TripleO::NodeUserData: ./first-boot.yaml

parameter_defaults:
  NovaEnableRbdBackend: true

  CephConfigOverrides:
    journal_size: 10000
    journal_collocation: false
    raw_multi_journal: true


If I log in to the first Ceph node, I can verify that the data is propagated properly with:

cat /etc/puppet/hieradata/4C4C4544-0047-3610-8053-C8C04F484B32.json |jq .

However, when I check /var/log/mistral/ceph-install-workflow.log, ceph-ansible only knows about /dev/vda and ignores my CephAnsibleDisksConfig key.
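A frequent cause of a silently ignored NodeDataLookup is a map that does not parse (for example the trailing comma in the first template above) or an inconsistent non-collocated layout. The sketch below is a hypothetical pre-deployment sanity check, not part of TripleO; the function name and the lenient handling of both layouts seen in this bug (with and without the CephAnsibleDisksConfig wrapper) are assumptions for illustration.

```python
import json
import uuid

def check_node_data_lookup(raw):
    """Sanity-check a NodeDataLookup JSON string before deploying.

    Returns a list of human-readable problems; an empty list means the
    document at least parses and is internally consistent.
    """
    problems = []
    try:
        # A trailing comma or stray brace makes the whole map unusable.
        data = json.loads(raw)
    except ValueError as exc:
        return ["NodeDataLookup is not valid JSON: %s" % exc]
    for key, cfg in data.items():
        try:
            uuid.UUID(key)  # keys must be each node's DMI system UUID
        except ValueError:
            problems.append("%s is not a valid system UUID" % key)
        # Both layouts appear in this bug report; accept either here.
        disks = cfg.get("CephAnsibleDisksConfig", cfg)
        if disks.get("osd_scenario") == "non-collocated":
            devices = disks.get("devices", [])
            dedicated = disks.get("dedicated_devices", [])
            # non-collocated pairs each OSD device with a journal device,
            # so the two lists must be the same length.
            if len(dedicated) != len(devices):
                problems.append(
                    "%s: %d devices but %d dedicated_devices"
                    % (key, len(devices), len(dedicated)))
    return problems
```

Running this against the file before `openstack overcloud deploy` would have flagged the trailing comma in the third node's devices list immediately.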


Version-Release number of selected component (if applicable):

Puddle:  RH7-RHOS-12.0 2017-11-28.3 
ceph-ansible-3.0.14-1.el7cp.noarch

Comment 5 Gonéri Le Bouder 2017-12-04 13:58:54 UTC
This setup comes with 3 Ceph nodes. Each of them has 3 SSDs and 12 SAS disks. Every time I do a deployment, at least one of the 3 nodes gets a disk renamed, so we won't be able to do any deployment with the director installer.

So, unless we go through a manual deployment, I don't see any other option to reliably get a working Ceph deployment. I think this will have quite a big impact and should be mentioned in the documentation.

Comment 12 Gonéri Le Bouder 2017-12-06 14:20:11 UTC
osd_auto_discovery would not help here because the problem happens after the first reboot and before the Ceph deployment.

Comment 15 Mike Orazi 2017-12-14 19:49:24 UTC
This is blocking partner integration activity. Accordingly, I would like to request a hotfix so testing can proceed.

Comment 18 Jon Schlueter 2018-01-09 18:43:44 UTC
Build openstack-tripleo-heat-templates-7.0.3-20.el7ost includes a patch from this bug; please update the BZ state accordingly.

Comment 19 Jon Schlueter 2018-01-09 18:45:21 UTC
Build openstack-tripleo-common-7.6.3-9.el7ost contains a patch from this bug; please update the BZ state accordingly.

Comment 31 Gonéri Le Bouder 2018-01-24 20:17:49 UTC
Hmm, there is something wrong with my setup. My configuration is still ignored, although it looks correct according to the docs [0]. ceph-ansible still tries to access /dev/vdb.

[0]: https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/node_specific_hieradata.html


This is my file:

parameter_defaults:
  NodeDataLookup: >
    {
      "4C4C4544-0047-3610-8031-C8C04F4A4B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635362-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633f32-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634632-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47635342-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3d4449fd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8053-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47633ee2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636412-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47634762-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a47636322-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5c9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5ca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5cb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b301afe5d3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      },
      "4C4C4544-0047-3610-8054-C8C04F484B32": {
        "dedicated_devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476349f2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476346d2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476351a2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x5002538a476360e2-lun-0"
        ],
        "devices": [
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc3-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc4-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc5-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc6-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc7-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc8-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdc9-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdca-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdcb-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd0-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd1-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd2-lun-0",
          "/dev/disk/by-path/pci-0000:3c:00.0-sas-0x500056b3a24ecdd3-lun-0"
        ],
        "osd_scenario": "non-collocated"
      }
    }

Comment 32 Gonéri Le Bouder 2018-01-24 20:20:01 UTC
Please ignore my previous comment; I was stuck on an old puddle (RH7-RHOS-12.0 2017-12-01.4).

Comment 35 Yogev Rabl 2018-01-29 23:42:27 UTC
The verification failed: ceph-ansible failed to deploy the Ceph cluster with the given OSD configuration.

Comment 38 Yogev Rabl 2018-01-30 14:14:03 UTC
Verified with the following configuration:

    NodeDataLookup: |
        {"4929BFB8-0ED4-48D7-B34F-9AD615E96112": {"devices": ["/dev/vdb", "/dev/vdc"], "osd_scenario": "collocated"},
        "9EFD920F-FC86-4AA4-BBD5-CBD075999C6D": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"},
        "5CCE1DF9-0905-4B7C-A0B9-FEDDB19191C8": {"devices": ["/dev/vdb"], "dedicated_devices": ["/dev/vdc"], "osd_scenario": "non-collocated"}}
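The verified configuration above shows the intended precedence: an entry keyed by a node's DMI system UUID overrides whatever role-wide disks config would otherwise apply. The sketch below illustrates that merge semantics only; it is not the actual tripleo-common/Mistral workflow code, and `DEFAULT_DISKS` and `disks_for_node` are hypothetical names chosen for the example.

```python
import json

# Hypothetical role-wide default that would otherwise apply to every
# Ceph node (the bug symptom: only this default was ever used).
DEFAULT_DISKS = {"devices": ["/dev/vda"], "osd_scenario": "collocated"}

def disks_for_node(node_data_lookup, system_uuid):
    """Pick the effective disks config for one node.

    Per-node entries in the NodeDataLookup JSON string, keyed by DMI
    system UUID, take precedence over the role-wide default.
    """
    per_node = json.loads(node_data_lookup).get(system_uuid, {})
    merged = dict(DEFAULT_DISKS)
    merged.update(per_node)  # per-node keys win over the default
    return merged
```

With the comment 38 data, a node listed in NodeDataLookup gets its own devices and osd_scenario, while an unlisted node falls back to the default, which matches the behavior the fix restored.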

Comment 41 errata-xmlrpc 2018-01-30 21:24:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0253

Comment 42 John Fulton 2018-07-13 12:35:35 UTC
*** Bug 1600856 has been marked as a duplicate of this bug. ***
