Bug 1597765 - Capsule status shows # workers increased by 2
Summary: Capsule status shows # workers increased by 2
Keywords:
Status: NEW
Alias: None
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Capsule - Content
Version: 6.3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-03 15:05 UTC by Pavel Moravec
Modified: 2019-03-31 03:20 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments: None


Links
Red Hat Knowledge Base (Solution) 3515781 (last updated 2018-07-03 15:22:27 UTC)

Description Pavel Moravec 2018-07-03 15:05:15 UTC
Description of problem:
Checking the Capsule status (Services tab), I see:


Pulp node
  Version             1.3.0
Pulp server version   2.13.4.10
Database connection   OK
Messaging connection  OK
Workers               6

However, I configured the Capsule with PULP_CONCURRENCY=4, and there really are only 4 pulp workers running.

The reason is that Katello also counts the resource_manager and the scheduler as "workers" among the pulp celery workers, since it fetches the data via this API request:

curl -k https://pmoravec-caps63.gsslab.brq2.redhat.com/pulp/api/v2/status/ | python -m json.tool
..
{
    "api_version": "2",
    "database_connection": {
        "connected": true
    },
    "known_workers": [
        {
            "_id": "reserved_resource_worker-2@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:16Z"
        },
        {
            "_id": "reserved_resource_worker-0@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:16Z"
        },
        {
            "_id": "reserved_resource_worker-1@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:18Z"
        },
        {
            "_id": "reserved_resource_worker-3@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:26Z"
        },
        {
            "_id": "resource_manager@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:21Z"
        },
        {
            "_id": "scheduler@pmoravec-caps63.gsslab.brq2.redhat.com",
            "_ns": "workers",
            "last_heartbeat": "2018-07-03T14:52:19Z"
        }
    ],
    "messaging_connection": {
        "connected": true
    },
    "versions": {
        "platform_version": "2.13.4.10"
    }
}

and prints @pulp_status['known_workers'].size. See

/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.4.5.74/app/views/smart_proxies/pulp_status.html.erb

for the code.

The number is correct from a categorization point of view (in the above output, both the scheduler and the resource_manager are marked as workers), but it is confusing for an end user who set pulp concurrency to 4 and therefore expects 4 workers. And from the pulp concurrency point of view, there really are only 4 workers.
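
For comparison, the number an end user would expect can be read from the same API output by counting only the reserved_resource_worker entries. A rough sketch (replace <capsule-fqdn> with the Capsule hostname):

curl -sk https://<capsule-fqdn>/pulp/api/v2/status/ \
  | python -c 'import json, sys; status = json.load(sys.stdin); print(sum(1 for w in status["known_workers"] if w["_id"].startswith("reserved_resource_worker")))'

On the Capsule above this prints 4, matching PULP_CONCURRENCY, while known_workers.size is 6.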


Version-Release number of selected component (if applicable):
tfm-rubygem-katello-3.4.5.74-1.el7sat.noarch


How reproducible:
100%


Steps to Reproduce:
1. Have an external Capsule and adjust PULP_CONCURRENCY in /etc/default/pulp_workers there (and restart the pulp services afterwards); see the example commands after step 2.

2. Check the Capsule services status in the WebUI and the number of pulp workers actually running.
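
A rough sketch of both steps, assuming a stock Pulp 2 Capsule with the pulp_workers systemd unit and an uncommented PULP_CONCURRENCY line already present in /etc/default/pulp_workers:

# step 1: set the worker count and restart the workers
sed -i 's/^PULP_CONCURRENCY=.*/PULP_CONCURRENCY=4/' /etc/default/pulp_workers
systemctl restart pulp_workers

# step 2: count the distinct pulp worker processes actually running
ps axww | grep -o 'reserved_resource_worker-[0-9]*' | sort -u | wc -l

Then compare that number with the Workers value on the Capsule's Services tab in the WebUI.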


Actual results:
2. The WebUI shows 2 more workers than PULP_CONCURRENCY.


Expected results:
2. The two numbers should match.


Additional info:

