Bug 1358732 - Include RHGS NFS-Ganesha package related logs in sosreport
Summary: Include RHGS NFS-Ganesha package related logs in sosreport
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sos
Version: 6.8
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Filip Krska
QA Contact: BaseOS QE - Apps
URL: https://github.com/sosreport/sos/pull...
Whiteboard:
Depends On:
Blocks: 1373361 1358734
 
Reported: 2016-07-21 11:50 UTC by Soumya Koduri
Modified: 2017-12-06 11:24 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1358734
Environment:
Last Closed: 2017-12-06 11:24:49 UTC



Description Soumya Koduri 2016-07-21 11:50:03 UTC
Description of problem:

The log files below are installed and used by the RHGS nfs-ganesha* packages.

/var/log/ganesha.log
/var/log/ganesha-gfapi.log

These files need to be included in the sosreport.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Niels de Vos 2016-07-21 13:50:48 UTC
/var/log/ganesha.log is part of the nfs-ganesha RPM, and only needs to be included when that is installed, or the "nfs-ganesha" service is available.

Note that /var/log/ganesha-gfapi.log only needs to be included in case nfs-ganesha-gluster is installed. This package provides a dynamically loadable 'plugin' called /usr/lib64/ganesha/libfsalgluster.so.

Most likely additional logs for the Ceph/RadosGW integration are needed too. Matt should be able to point those out.
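
For illustration, a minimal plugin sketch along these lines could look like the following (it assumes the sos 3.x plugin API from sos.plugins; the plugin and class names are only examples, not a final implementation):

    from sos.plugins import Plugin, RedHatPlugin


    class NfsGanesha(Plugin, RedHatPlugin):
        """NFS-Ganesha server information (example plugin)"""

        plugin_name = 'nfsganesha'
        # Run only when one of these triggers matches on the host.
        packages = ('nfs-ganesha',)
        files = ('/etc/ganesha/ganesha.conf',)

        def setup(self):
            self.add_copy_spec("/var/log/ganesha.log")
            # Only the Gluster FSAL (nfs-ganesha-gluster, which ships
            # /usr/lib64/ganesha/libfsalgluster.so) writes this log.
            if self.is_installed("nfs-ganesha-gluster"):
                self.add_copy_spec("/var/log/ganesha-gfapi.log")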

Comment 3 Matt Benjamin (redhat) 2016-07-21 14:10:45 UTC
1. /var/log/ganesha.log -- I don't follow, we ARE installing nfs-ganesha
2. /var/log/ganesha-gfapi.log -- this should not be installed, since we are not shipping any of glusterfs 
3. That's an interesting point; the RGW FSAL will be logging in the same manner as a local radosgw. We have verified that the FSAL is usable on a node which also has an ordinary radosgw instance configured, and we rely on the ceph.conf information from that. In particular, as we are tech preview, I think there's nothing new needed here for now.

Comment 4 Pavel Moravec 2016-07-23 08:05:37 UTC
(In reply to Niels de Vos from comment #1)
> /var/log/ganesha.log is part of the nfs-ganesha RPM, and only needs to be
> included when that is installed, or the "nfs-ganesha" service is available.
> 
> Note that /var/log/ganesha-gfapi.log only needs to be included in case
> nfs-ganesha-gluster is installed. This package provides a dynamically
> loadable 'plugin' called /usr/lib64/ganesha/libfsalgluster.so.

It is fine to let sos collect logfiles (or configs) that are present only in some deployments - if a file isn't present, sos will silently skip it.

The more important point is to trigger collecting the files.

Is the presence of the nfs-ganesha package the common denominator for all the deployments? Or what is the (minimal) set of conditions covering all deployments?

Shouldn't some command outputs be collected as well?

Shouldn't some config files be collected as well?

Can't some collected data (logs, configs, cmd output) contain customer-sensitive information like passwords or SSL certs or keys that sos should obfuscate?
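
As a reference for the obfuscation point: if any collected file did carry secrets, the plugin could scrub them in its postproc() hook. This is a minimal sketch only - the "Password" option used here is hypothetical, not a real ganesha.conf keyword:

        def postproc(self):
            # Mask anything that looks like a password assignment in the
            # collected copy of the config file.
            self.do_file_sub(
                "/etc/ganesha/ganesha.conf",
                r"(Password\s*=\s*)\S+",
                r"\1********"
            )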

Comment 5 Soumya Koduri 2016-07-25 07:30:45 UTC
Based on the comments above, I have created a sample plugin, linked below, with the configuration files, log files and sample commands which we need. Please take a look:

https://github.com/soumyakoduri/sos/commit/7d0842d6ad307cfb7a2352bbb8b509c855713edb

Also, please note that in case the nfs-ganesha service is not running, 'showmount -e localhost' may take a while to exit and then throw an error. Is that acceptable? If not, kindly take it out.

Also, can we use a wildcard to collect all the files present in a given folder? In the downstream NFS-Ganesha package, we create certain configuration files in the '/etc/ganesha/exports' directory. If possible, can we check whether this folder is present and capture all the files in it?

Comment 6 Pavel Moravec 2016-07-25 08:07:42 UTC
(In reply to Soumya Koduri from comment #5)
> Based on the comment above, I have created a sample plugin pasted below with
> the configuration,log files and sample commands which we need. Please take a
> look
> 
> https://github.com/soumyakoduri/sos/commit/
> 7d0842d6ad307cfb7a2352bbb8b509c855713edb

Thanks for the plugin proposal. I commented on one issue there; plus, it would be great (after all comments are processed) to create a PR following the contribution guidelines [1] (most of it is OK, but it is worth running pep8, and the commit message / subject definitely needs changing). Anyway, I can raise the PR when desired.

[1] https://github.com/sosreport/sos/wiki/Contribution-Guidelines

> 
> Also please note that in case if nfs-ganesha service is not running
> 'showmount -e localhost' may take a while to exit and throw an error. Is it
> acceptable? If not, kindly take it out. 

How much time can it take? By default, a command is killed after 300 seconds, and the timeout is configurable per command:

        self.add_cmd_output("rpcinfo -p localhost")
        self.add_cmd_output("showmount -e localhost", timeout=10)

(use an arbitrary timeout value, 10 is just an example; the option can't be added to a list of commands, only to a single command)

Isn't there an easy way to detect whether the service is running? Maybe it's worth adding that as a prerequisite?
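
One cheap guard, as a sketch only (it assumes the daemon writes /var/run/ganesha.pid - adjust to whatever pidfile nfs-ganesha really uses), would be to issue the potentially slow query only when the daemon looks up, and keep the timeout as a safety net:

        import os  # module-level import in the real plugin

        def setup(self):
            self.add_cmd_output("rpcinfo -p localhost")
            # Assumption: a running nfs-ganesha daemon maintains this pidfile;
            # adjust the path if the package uses a different location.
            if os.path.exists("/var/run/ganesha.pid"):
                self.add_cmd_output("showmount -e localhost", timeout=10)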

> 
> Also can we use any special character to collect all the files present in
> any folder (if any). In downstream NFS-Ganesha package, we create certain
> configuration files in '/etc/ganesha/exports' directory. If possible, can we
> query if this folder is present and capture all the files present in that
> folder?

import glob

self.add_copy_spec(glob.glob("/etc/ganesha/exports/*"))


Not sure how glob works on nested directories, if ganesha creates them under the "exports" dir - in that case you might need to call something like:

# glob.glob() takes a single pattern, so combine two calls:
glob.glob("/etc/ganesha/exports/*") + glob.glob("/etc/ganesha/exports/*/*")

or so.

Comment 7 Soumya Koduri 2016-07-25 10:33:41 UTC
Thanks for your input. I have made the relevant changes. Please review the updated commit:

 https://github.com/soumyakoduri/sos/commit/d9041dc27db027fe46f035477e5b6bcd3ad5af88


Adding a timeout to the commands seemed better than checking for the process using "pgrep/ps". Thoughts?

Comment 8 Soumya Koduri 2016-07-25 11:17:52 UTC
The previous commit included unrelated changes. Here is the updated one:

https://github.com/soumyakoduri/sos/commit/90eed89438cc6670eb756f9024623968303a8456

Comment 9 Bryn M. Reeves 2016-07-25 11:41:50 UTC
> import glob
> 
> self.add_copy_spec(glob.glob("/etc/ganesha/exports/*"))

A "copy spec" is a glob string, or list of glob strings...

From the method docstring:

    def add_copy_spec(self, copyspecs):
        """Add a file specification (can be file, dir,or shell glob) to be
        copied into the sosreport by this module.
        """

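In other words, the directory contents can be requested directly, without importing glob; shell-style globs are expanded by sos itself (the export pattern below is illustrative):

        self.add_copy_spec([
            "/etc/ganesha/ganesha.conf",
            "/etc/ganesha/exports/*"
        ])
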
Comment 11 Soumya Koduri 2016-09-06 06:07:35 UTC
The changes suggested so far have been addressed in the pull request below:

https://github.com/sosreport/sos/pull/858/commits/97439bb4c35f9a4f9e20ea062fcdd8f2d8437059

Kindly re-review the changes.

Comment 13 Jan Kurik 2017-12-06 11:24:49 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

