Bug 1361548 - Missing OSD number for some EC pools
Summary: Missing OSD number for some EC pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: UI
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Darshan
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On:
Blocks: Console-2-GA
 
Reported: 2016-07-29 11:54 UTC by Martin Kudlej
Modified: 2016-08-23 19:58 UTC
CC: 3 users

Fixed In Version: rhscon-ceph-0.0.39-1.el7scon.x86_64.rpm
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:58:27 UTC


Attachments
missing number of OSDs for some pools (deleted), 2016-08-04 13:27 UTC, Martin Kudlej


Links
System ID: Red Hat Product Errata RHEA-2016:1754 | Priority: normal | Status: SHIPPED_LIVE | Summary: New packages: Red Hat Storage Console 2.0 | Last Updated: 2017-04-18 19:09:06 UTC
System ID: Gerrithub.io 285741 | Priority: None | Status: None | Summary: None | Last Updated: 2016-08-01 10:27:51 UTC

Description Martin Kudlej 2016-07-29 11:54:58 UTC
Description of problem:
As you can see in the attached screenshot, some pools have no OSD count shown in the list. It is true that there are not enough OSDs for the 8+4 EC pool, but there are enough OSDs for the 6+3 EC pool.

Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.38-1.el7scon.x86_64
rhscon-core-0.0.38-1.el7scon.x86_64
rhscon-core-selinux-0.0.38-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch

How reproducible:
most probably 100%

Steps to Reproduce:
1. Create a cluster.
2. Create EC pools of all supported types (for example, as sketched below).
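
For step 2, a rough sketch of the equivalent ceph CLI commands; the report presumably creates the pools through the Console UI, and the profile names, pool names, and PG counts below are placeholders chosen for illustration:

$ # Hypothetical example: create a 6+3 and an 8+4 erasure-coded pool.
$ ceph -c /etc/ceph/cl1.conf osd erasure-code-profile set ec-6-3 k=6 m=3
$ ceph -c /etc/ceph/cl1.conf osd pool create ecpool-6-3 32 32 erasure ec-6-3
$ ceph -c /etc/ceph/cl1.conf osd erasure-code-profile set ec-8-4 k=8 m=4
$ ceph -c /etc/ceph/cl1.conf osd pool create ecpool-8-4 32 32 erasure ec-8-4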

Actual results:
The OSD count is missing for the 6+3 and 8+4 pools.

Expected results:
All pool types show their number of OSDs.

$ ceph -c /etc/ceph/cl1.conf osd crush tree
[
    {
        "id": -1,
        "name": "default",
        "type": "root",
        "type_id": 10,
        "items": [
            {
                "id": -2,
                "name": "mkudlej-usm1-node1",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 0,
                        "name": "osd.0",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -3,
                "name": "mkudlej-usm1-node2",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 1,
                        "name": "osd.1",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -4,
                "name": "mkudlej-usm1-node3",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 2,
                        "name": "osd.2",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -5,
                "name": "mkudlej-usm1-node4",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 3,
                        "name": "osd.3",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -6,
                "name": "mkudlej-usm2-node1",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 4,
                        "name": "osd.4",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -7,
                "name": "mkudlej-usm2-node2",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 5,
                        "name": "osd.5",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -8,
                "name": "mkudlej-usm2-node3",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 6,
                        "name": "osd.6",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -9,
                "name": "mkudlej-usm2-node4",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 7,
                        "name": "osd.7",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -10,
                "name": "mkudlej-usm2-node5",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 8,
                        "name": "osd.8",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            },
            {
                "id": -11,
                "name": "mkudlej-usm2-node6",
                "type": "host",
                "type_id": 1,
                "items": [
                    {
                        "id": 9,
                        "name": "osd.9",
                        "type": "osd",
                        "type_id": 0,
                        "crush_weight": 0.009995,
                        "depth": 2
                    }
                ]
            }
        ]
    }
]
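
For context (my reasoning, not part of the original report): an erasure-coded pool with profile k+m needs at least k+m OSDs to place all of its chunks. The crush tree above shows 10 OSDs spread over 10 hosts, so a 6+3 pool (9 chunks) can be satisfied while an 8+4 pool (12 chunks) cannot, yet the expected result above is that the OSD count column is populated for every pool. A minimal shell check along these lines, assuming the same cluster config path:

$ # Hypothetical check: compare the number of OSDs to k+m for each profile.
$ osds=$(ceph -c /etc/ceph/cl1.conf osd ls | wc -l)   # 10 in this cluster
$ k=6; m=3; [ "$osds" -ge $((k + m)) ] && echo "6+3 placeable" || echo "6+3 not placeable"
$ k=8; m=4; [ "$osds" -ge $((k + m)) ] && echo "8+4 placeable" || echo "8+4 not placeable"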

Comment 3 Martin Kudlej 2016-07-29 11:58:02 UTC
Created attachment 1185477 [details]
server logs

Comment 5 Martin Kudlej 2016-08-04 13:27:55 UTC
Created attachment 1187512 [details]
missing number of OSDs for some pools

Comment 6 Martin Kudlej 2016-08-04 13:29:24 UTC
Tested with
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and it works.

Comment 8 errata-xmlrpc 2016-08-23 19:58:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

