Bug 1514142 - SELinux: Many AVC denials on storage server machines after cluster import
Summary: SELinux: Many AVC denials on storage server machines after cluster import
Keywords:
Status: NEW
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-selinux
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Nishanth Thomas
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1519119
Reported: 2017-11-16 17:46 UTC by Martin Bukatovic
Modified: 2019-01-08 17:16 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:




Links
Github Tendrl tendrl-selinux issue 2 (last updated 2017-11-16 17:46:20 UTC)

Description Martin Bukatovic 2017-11-16 17:46:20 UTC
Description of problem
======================

After a cluster is imported and tendrl starts to monitor it, there are
many AVC denials in the audit log on the machines of the monitored cluster.

Version-Release
===============

Packages on Tendrl Storage machine:

# rpm -qa | grep tendrl | sort
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-commons-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
tendrl-node-agent-1.5.4-2.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch

# rpm -qa | grep selinux | sort
libselinux-2.5-11.el7.x86_64
libselinux-python-2.5-11.el7.x86_64
libselinux-utils-2.5-11.el7.x86_64
selinux-policy-3.13.1-166.el7_4.5.noarch
selinux-policy-targeted-3.13.1-166.el7_4.5.noarch
tendrl-collectd-selinux-1.5.3-2.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch

# rpm -qa | grep gluster | sort
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-gluster-3.8.4-52.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-2.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

How reproducible
================

100 %

Steps to Reproduce
==================

1. Prepare machines with GlusterFS cluster, including gluster volume
2. Install RHGS WA via tendrl-ansible there
3. Import cluster via RHGS WA web ui
4. Open Grafana dashboard and wait about 30 minutes (so that RHGS WA
   has time to start gathering data for monitoring purposes).
5. Log into one of the storage server machines (aka tendrl node), and
   check for AVC error messages via ausearch -m avc.
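For step 5, a small triage helper can make each denial easier to read at a glance. This is a hedged sketch: `avc_summary` is a hypothetical helper, not part of any tendrl package, and the field layout it assumes matches the AVC records quoted in the Actual results section.

```shell
# Hypothetical triage helper: extract the denied permission, the command,
# and the source/target contexts from a single AVC record read on stdin.
# On a storage node you would feed it one record from: ausearch -m avc
avc_summary() {
  grep -o '{ [a-z_ ]* }\|comm="[^"]*"\|scontext=[^ ]*\|tcontext=[^ ]*\|tclass=[^ ]*' \
    | paste -sd' ' -
}

# Sample record taken from this report:
sample='type=AVC msg=audit(1510854206.416:21022): avc:  denied  { getattr } for  pid=12446 comm="lvm" path="/run/lock/lvm/V_vg_beta_arbiter_2:aux" dev="tmpfs" ino=1335234 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file'
printf '%s\n' "$sample" | avc_summary
```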

Actual results
==============

There are many AVC denials in the audit log, and a large part of them is
related to collectd:

```
# ausearch -m avc | grep collectd | wc -l
12202
```

```
# ausearch -m avc | grep collectd | tail
type=SYSCALL msg=audit(1510854206.416:21022): arch=c000003e syscall=4 success=yes exit=0 a0=7ffe9877ea30 a1=7ffe9877e8c0 a2=7ffe9877e8c0 a3=2 items=0 ppid=16169 pid=12446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1510854206.416:21022): avc:  denied  { getattr } for  pid=12446 comm="lvm" path="/run/lock/lvm/V_vg_beta_arbiter_2:aux" dev="tmpfs" ino=1335234 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file
type=SYSCALL msg=audit(1510854206.416:21023): arch=c000003e syscall=87 success=yes exit=0 a0=7ffe9877ea30 a1=7ffe9877e970 a2=7ffe9877e970 a3=2 items=0 ppid=16169 pid=12446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="lvm" exe="/usr/sbin/lvm" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1510854206.416:21023): avc:  denied  { unlink } for  pid=12446 comm="lvm" name="V_vg_beta_arbiter_2:aux" dev="tmpfs" ino=1335234 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file
type=SYSCALL msg=audit(1510854206.504:21024): arch=c000003e syscall=42 success=yes exit=0 a0=4 a1=7f8764d6de30 a2=10 a3=5 items=0 ppid=1 pid=16169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="reader#4" exe="/usr/sbin/collectd" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1510854206.504:21024): avc:  denied  { name_connect } for  pid=16169 comm="reader#4" dest=2003 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lmtp_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1510854216.394:21026): arch=c000003e syscall=42 success=no exit=-115 a0=d a1=7f8764d6c100 a2=10 a3=5a items=0 ppid=1 pid=16169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="reader#3" exe="/usr/sbin/collectd" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1510854216.394:21026): avc:  denied  { name_connect } for  pid=16169 comm="reader#3" dest=2379 scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1510854216.388:21025): arch=c000003e syscall=2 success=yes exit=3 a0=7fffdd3e3f7b a1=100 a2=1499480 a3=7fffdd3e1fb0 items=0 ppid=16169 pid=12750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="df" exe="/usr/bin/df" subj=system_u:system_r:collectd_t:s0 key=(null)
type=AVC msg=audit(1510854216.388:21025): avc:  denied  { read } for  pid=12750 comm="df" name="1" dev="dm-10" ino=515 scontext=system_u:system_r:collectd_t:s0 tcontext=unconfined_u:object_r:glusterd_brick_t:s0 tclass=dir
```
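The flood above can be attributed per source domain with a short pipeline. A hedged sketch follows: `tally_domains` is a hypothetical helper, and the sample input stubs what `ausearch -m avc` would print on a live node.

```shell
# Hypothetical helper: count AVC denials per SELinux source domain.
# Reads audit records on stdin; on a node: ausearch -m avc | tally_domains
tally_domains() {
  grep '^type=AVC' \
    | grep -o 'scontext=[^ ]*' \
    | cut -d= -f2 \
    | cut -d: -f3 \
    | sort | uniq -c | sort -rn
}

# Stub input built from two records quoted above:
sample='type=AVC msg=audit(1510854206.416:21022): avc:  denied  { getattr } for  pid=12446 comm="lvm" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file
type=AVC msg=audit(1510854216.394:21026): avc:  denied  { name_connect } for  pid=16169 comm="reader#3" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket'
printf '%s\n' "$sample" | tally_domains
```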

Expected results
================

There are no AVC messages related to collectd or any other RHGS WA monitoring
component.

Additional info
===============

See additional information in upstream issue:

https://github.com/Tendrl/tendrl-selinux/issues/2

Comment 3 Martin Bukatovic 2017-11-20 08:42:52 UTC
Moving this BZ into SELinux component.

Comment 4 Nishanth Thomas 2017-11-28 08:30:31 UTC
This bug no longer blocks the SELinux Tracker bug. What was agreed is to
provide SELinux policies which make the tendrl components run in
permissive mode, so that we can collect AVC denial log messages while
the whole system works in enforcing mode (this matches your description
above).

So it is expected to see AVCs in the logs. Moving this out of the current release and removing the blocks on 1514098.

Comment 7 Martin Bukatovic 2017-12-05 16:28:25 UTC
(In reply to Nishanth Thomas from comment #4)
> So it is expected to see AVC's in the logs. Moving this out of current
> release and removing the blocks on 1514098

Ack.

For 3.3.z I'm going to verify that all the AVC denial messages come from
the permissive domains (as you describe in comment 4) during verification of
BZ 1514098.
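That verification can be sketched as a check that each denial's source domain is on the permissive list. Assumptions in this sketch: on a real node the permissive list would come from semanage permissive -l; here it is stubbed with collectd_t purely for illustration, since that is the domain dominating the denials in this report.

```shell
# Sketch of the 3.3.z verification: does the denial come from a permissive domain?
# The permissive list is stubbed; on a real node it would come from:
#   semanage permissive -l
permissive='collectd_t'

# Sample record taken from this report:
line='type=AVC msg=audit(1510854206.416:21022): avc:  denied  { getattr } for  pid=12446 comm="lvm" scontext=system_u:system_r:collectd_t:s0 tcontext=system_u:object_r:lvm_lock_t:s0 tclass=file'

# Pull the source domain (third field of scontext) out of the record.
domain=$(printf '%s\n' "$line" | grep -o 'scontext=[^ ]*' | cut -d= -f2 | cut -d: -f3)

if printf '%s\n' "$permissive" | grep -qx "$domain"; then
  echo "OK: denial comes from permissive domain $domain"
else
  echo "PROBLEM: $domain is not permissive"
fi
```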

