Bug 1596494 - Change the detection of CDS for RHUI 3
Summary: Change the detection of CDS for RHUI 3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sos
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Pavel Moravec
QA Contact: Radek Bíba
URL:
Whiteboard:
Depends On: 1596296
Blocks: 1609081 1614954 1654309 1596496
 
Reported: 2018-06-29 06:55 UTC by Radek Bíba
Modified: 2019-03-07 15:49 UTC
CC List: 7 users

Fixed In Version: sos-3.6-3.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1596496 1614954
Environment:
Last Closed: 2018-10-30 10:33:42 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github sosreport sos pull 1375 None None None 2018-07-03 11:08:34 UTC
Red Hat Product Errata RHEA-2018:3144 None None None 2018-10-30 10:34:57 UTC

Description Radek Bíba 2018-06-29 06:55:15 UTC
Description of problem:
The RHUI plug-in for sos reads:

    rhui_debug_path = "/usr/share/rh-rhua/rhui-debug.py"
...
        if self.is_installed("pulp-cds"):
            cds = "--cds"
        else:
            cds = ""

        rhui_debug_dst_path = self.get_cmd_output_path()
        self.add_cmd_output(
            "python %s %s --dir %s"
            % (self.rhui_debug_path, cds, rhui_debug_dst_path),
            suggest_filename="rhui-debug")

The way the script decides whether it's running on a CDS node (as opposed to a RHUA node) worked on RHUI 2, where pulp-cds was indeed a valid package name, but on RHUI 3 there's no such package. Consequently, the rhui-debug script is executed without --cds and doesn't collect the right config and log files that are present on a CDS node. In fact, no useful data is collected, as the files collected in the "non-CDS" mode only exist on RHUA nodes.

RHUI 2 is no longer supported (see https://access.redhat.com/support/policy/updates/rhui for the schedule), so it should be safe to change the detection in a way that's not backward compatible with RHUI 2. As for how the detection could be done: there isn't any package specific to CDS nodes, but perhaps you could check whether the /etc/httpd/conf.d/03-crane.conf file exists, as it only exists on CDSes. So the implementation could be as follows:

import os
...
        if os.path.isfile("/etc/httpd/conf.d/03-crane.conf"):
            cds = "--cds"
        else:
            cds = ""

I've checked this and it works well.

Version-Release number of selected component (if applicable):
sos-3.5-9.el7_5

How reproducible:
Always

Steps to Reproduce:
1. install sos on a RHUI CDS node
2. until bug 1596296 is fixed, manually create /usr/share/rh-rhua/rhui-debug.py
3. run sosreport -o rhui

Actual results:
The tarball doesn't contain anything useful

Expected results:
The tarball contains CDS-specific files, for example:
etc/httpd/conf.d/25-cds.example.com.conf
var/log/httpd/cds.example.com_access_ssl.log

Comment 1 Bryn M. Reeves 2018-06-29 07:49:07 UTC
For upstream, it matters more whether anybody is still using RHUI 2 (e.g. CentOS, other rebuilds?) than whether or not Red Hat maintains commercial support for it - if anyone is, then we will need to handle both for the time being.

I'm assuming that this isn't actually a problem here given RHUI's intimate relationship with Red Hat products, but it would be useful to have confirmation of the fact from the folks who are involved before we make any changes.

Comment 3 Bryn M. Reeves 2018-06-29 07:51:47 UTC
Lastly, what package owns the file "/etc/httpd/conf.d/03-crane.conf"? We generally prefer to key off package names where possible for a number of reasons (for one thing they are less prone to tampering/accidental damage by the admin).

Comment 4 Radek Bíba 2018-06-29 08:27:20 UTC
The thing is, now that RHUI 2 has become unsupported, it shouldn't be necessary to collect logs from RHUI 2 for GSS purposes. Anyway, I don't see any issue with making the detection work on both RHUI 2 and 3. Technically, it could be a matter of:

        if self.is_installed("pulp-cds") or os.path.isfile("/etc/httpd/conf.d/03-crane.conf"):

Right?

No package owns the crane.conf file, it gets created remotely by Puppet when the CDS is added in RHUI.
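For illustration, the combined RHUI 2/3 check proposed in this comment could be sketched as a standalone function (a sketch only; `is_installed` stands in for sos's `Plugin.is_installed()` helper, and the function name is hypothetical):

```python
import os

def detect_cds(is_installed, crane_conf="/etc/httpd/conf.d/03-crane.conf"):
    """Return "--cds" on a CDS node, "" otherwise.

    pulp-cds marks a RHUI 2 CDS node; the crane vhost config file
    marks a RHUI 3 one, since no RHUI 3 package is CDS-specific.
    """
    if is_installed("pulp-cds") or os.path.isfile(crane_conf):
        return "--cds"
    return ""
```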

Comment 5 Radek Bíba 2018-06-29 10:01:06 UTC
(In reply to myself from comment #0)
> script is executed without --cds and doesn't collect the right config and
> log files that are present on a CDS node. In fact, no useful data is
> collected as the files that are collected in the "non-CDS" mode only exist
> on RHUA nodes.

Correction: some useful data _is_ collected -- the paths that exist on both RHUA and CDS nodes -- but many files are still missing, e.g. the Apache conf file from the example. I was wrong in comment 0 because I mistook the situation for another (worse) scenario. Sorry about that.

Comment 6 Bryn M. Reeves 2018-06-29 10:14:07 UTC
> The thing is, now that RHUI 2 has become unsupported, it shouldn't be necessary 
> to collect logs from RHUI 2 for GSS purposes.

Yes, I understand that, but sos today is a project used by a very large number of distributions and organisations - we need to be certain that nobody else is using this or shipping it in a distro that would be likely to update to a version of sos that contains this change.

Like I said: given the intimate relationship with Red Hat and our commercial services, I think this is probably fine, but as sos maintainers we do not have the insight into who is using RHUI to be able to make the final call - that's where we need your help with the decision, but it has to be wider than just CEE/GSS concerns.


>        if self.is_installed("pulp-cds") or os.path.isfile("/etc/httpd/conf.d/03-crane.conf"):

Sure: that's why I'm asking.

> No package owns the crane.conf file, it gets created remotely by Puppet when the CDS 

Then it's possibly not a great choice for this trigger - surely there's some other packaged file, or better yet package name (or even executable command), that can distinguish these hosts?

If there were a problem generating the file, or if it was moved or unlinked for some reason this would break collection - a package has the advantage that it's less prone to these accidents, and if it is removed that will be evident in the data collected by other plugins.
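The package-ownership check described here could be sketched with `rpm -qf`, which prints the owning package and exits non-zero for files no package owns (a hypothetical helper for illustration, not part of the sos plugin):

```python
import subprocess

def owning_package(path):
    """Return the name of the package that owns `path`, or None if unowned.

    `rpm -qf` exits non-zero for unowned files (such as the
    Puppet-generated 03-crane.conf), which is exactly why a
    package-based trigger is more robust than a bare file check.
    """
    res = subprocess.run(["rpm", "-qf", path],
                         capture_output=True, text=True)
    return res.stdout.strip() if res.returncode == 0 else None
```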

Comment 7 Radek Bíba 2018-06-29 11:00:22 UTC
You need a valid Red Hat subscription to be able to download the RHUI ISO or attach the RHUI SKU, and then you can get an entitlement certificate that will allow you to sync selected Red Hat repos. I'm not sure, then, whether RHUI is practically usable on other distributions.

Here are the packages that are installed on CDS nodes and not on RHUA nodes:

python-crane
python-flask
python-itsdangerous
python-pulp-rpm-common
python-werkzeug
rhui-mirrorlist
rhui-oid-validator

Can you choose one of them?

Comment 8 Bryn M. Reeves 2018-06-29 11:11:02 UTC
Thanks! If we are confident that nobody else is building or shipping this, which sounds to be the case, then I've no objection to removing the RHUI2 support upstream.

The package list is helpful and we can use that to write a better check for the CDS nodes.

Another option that may be worth considering is to move more of the collection into the sos plugin itself: more components have recently been dropping their own "*-debug" scripts and moving that logic upstream into sos (although it isn't a problem if RHUI doesn't want to do this now - we can continue running the debug script and collecting the result). The motivation is mainly that there is no in-tree script to maintain, plus the size limits, timeouts, and other convenience features that sos provides.

Comment 9 Radek Bíba 2018-06-29 11:21:50 UTC
You're welcome. I can't tell if no one else is rebuilding RHUI, but the source RPMs aren't normally available to the public, so we're _probably_ the only provider.

I agree that ideally there would be no external script, but for the foreseeable future I think it will be practical for us to keep a script outside sos, as we're going to change / enhance it a couple of times. At present, we can do that with any RHUI update (and we release updates quite often), whereas if the script were in sos we'd have to wait for new sos releases and bother you guys with all the changes. :)

Comment 10 Bryn M. Reeves 2018-06-29 11:31:17 UTC
Sure - that's no problem - I just thought I'd mention it as we're already making some changes here. There's no particular need to do this, it's just been a trend in other plugins lately.

I'll get the change made upstream - unfortunately it just missed 3.6, but it will be a trivial patch to apply to the package.

Comment 11 Radek Bíba 2018-06-29 11:36:25 UTC
Thanks!

And just to be clear: the package names I wrote in comment 7 are from RHUI 3 only. On RHUI 2 you should still check if you're on a CDS node by checking for the pulp-cds package. So, a universal check would be e.g.:

        if self.is_installed("pulp-cds") or self.is_installed("rhui-mirrorlist"):
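Folded back into the command construction quoted in the description, the universal check might look roughly like this (a sketch only; `is_installed` and the paths are passed in so the function stands alone, whereas the real plugin would use the sos Plugin helpers):

```python
def build_rhui_debug_cmd(is_installed, debug_path, dst_path):
    """Build the rhui-debug command line with universal CDS detection.

    pulp-cds covers RHUI 2 CDS nodes; rhui-mirrorlist covers RHUI 3
    ones. `is_installed` stands in for sos's Plugin.is_installed().
    """
    if is_installed("pulp-cds") or is_installed("rhui-mirrorlist"):
        cds = "--cds"
    else:
        cds = ""
    return "python %s %s --dir %s" % (debug_path, cds, dst_path)
```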

Comment 12 Pavel Moravec 2018-07-03 11:08:35 UTC
devel_ack-ing even into 7.6 if exception and qa_ack will be provided.

I will chase the exception+; Radek, can you please state that you will OtherQE this BZ, as pre-agreed in chat?

Comment 13 Radek Bíba 2018-07-03 11:22:49 UTC
Yes, I will test this. As a RHUI QE, I can easily create a test environment.

Comment 18 errata-xmlrpc 2018-10-30 10:33:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3144

