Bug 1354488 - Multiple SELinux alerts
Summary: Multiple SELinux alerts
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Build
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 2.0
Assignee: Boris Ranto
QA Contact: ceph-qe-bugs
Depends On:
Reported: 2016-07-11 12:22 UTC by Emilien Macchi
Modified: 2016-07-13 13:50 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2016-07-12 16:04:01 UTC
Target Upstream Version:


Description Emilien Macchi 2016-07-11 12:22:30 UTC
Description of problem:
Found 26 alerts in /var/log/audit/audit.log when deploying Ceph and OpenStack.

Version-Release number of selected component (if applicable):

Logs are available here:

The complete list of AVCs:
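Denials like these can be pulled out of the audit log and grouped by permission and context. A minimal sketch (the sample entry below is hypothetical, standing in for the 26 reported denials; on a real host you would read /var/log/audit/audit.log directly or use ausearch -m avc):

```shell
# Sketch: extract AVC denials from an audit log and summarize them.
# The sample line is a hypothetical stand-in for real audit.log entries.
cat > /tmp/audit-sample.log <<'EOF'
type=AVC msg=audit(1468239750.123:42): avc:  denied  { write } for  pid=1234 comm="ceph-osd" name="data" dev="vda1" ino=5678 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=dir
EOF

# Count denials per (permission, source context -> target context) triple.
grep 'avc:  denied' /tmp/audit-sample.log |
  sed -n 's/.*denied  { \([a-z_ ]*\) }.*scontext=\([^ ]*\) tcontext=\([^ ]*\).*/\1 \2 -> \3/p' |
  sort | uniq -c
```

Grouping this way makes it obvious when every denial shares the same target type (here var_t), which is what points at a labelling problem rather than 26 separate issues.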

Comment 2 Ken Dreyer (Red Hat) 2016-07-11 14:57:34 UTC
"ceph-selinux-10.2.2-0.el7" looks like an upstream version number, not a Red Hat Ceph Storage version number...

Comment 3 Emilien Macchi 2016-07-11 15:08:51 UTC
Right, I deployed Jewel, provided in the CentOS Storage SIG repository.

Comment 4 Boris Ranto 2016-07-12 11:53:32 UTC
These are all var_t target contexts; they should probably be labelled with some ceph_<something>_t label. Can you paste the ceph.conf?

Also, I can see some /srv/data/... paths in the logs. These do not seem to be the default Ceph locations. What are these files?
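Relabelling along these lines would look roughly like the following. This is a sketch, not the fix adopted in this bug: the target type ceph_var_lib_t (the type the ceph policy uses for /var/lib/ceph) and the /srv/data path are illustrative, and the commands need root plus the policycoreutils Python utilities:

```shell
# Hypothetical sketch: record that files under /srv/data should carry
# ceph_var_lib_t instead of the generic var_t, then apply the new label.
semanage fcontext -a -t ceph_var_lib_t '/srv/data(/.*)?'
restorecon -Rv /srv/data
```

The alternative, taken later in this bug, is to move the data under a path the policy already labels, which avoids maintaining a local fcontext rule.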

Comment 5 Emilien Macchi 2016-07-12 11:55:35 UTC
We're using puppet-ceph to deploy Ceph.
The manifest is here, in this CI tools repository:

The manifest is something we can easily change; it's only used in CI.
The actual module is here:

Feel free to give any feedback on the way we deploy. Also, submit a patch to our CI if needed.

Comment 6 Ken Dreyer (Red Hat) 2016-07-12 16:04:01 UTC
Since this test is not using the ceph RPMs from the Red Hat Ceph Storage product, I'm going to close this BZ and request that you please file tickets with Ceph upstream for now:

In the Redmine ticket, it would be good to mention exactly where you got the ceph-10.2.2-0 RPMs (, not

Comment 7 Emilien Macchi 2016-07-12 16:22:09 UTC
I created an account on but I can't create any ticket. My account ID is "emacchi". I would be grateful if you could help me solve this.


Comment 8 Boris Ranto 2016-07-13 09:49:38 UTC
@Emilien: Hmm, line #42: '/srv/data' => {} seems quite suspicious. Any idea what it defines? Anyway, it would probably help if you stored the files elsewhere. Depending on the type of files it covers, this could be somewhere under /var/lib/ceph, /var/log/ceph, or even /var/run/ceph (or maybe even somewhere under /tmp?).
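The suggestion above amounts to keeping Ceph state under directories the policy already labels. A quick way to see what label each candidate location would get (a sketch assuming an SELinux-enabled host with the ceph policy loaded; matchpathcon ships in libselinux-utils):

```shell
# Hypothetical check: print the default SELinux context for each path.
# With the ceph policy installed, the /var/{lib,log,run}/ceph paths map
# to ceph_*_t types, while /srv/data falls back to the generic var_t
# that triggered the AVCs above.
matchpathcon /srv/data /var/lib/ceph /var/log/ceph /var/run/ceph
```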

Comment 9 Emilien Macchi 2016-07-13 12:12:56 UTC
OK, I tried to push a patch changing the dir to /var/lib/ceph/data. Let's see how it works now.

Comment 10 Ken Dreyer (Red Hat) 2016-07-13 13:30:11 UTC
Your account should be active in Redmine now, Emilien. If you have questions, please ask zackc on IRC (#sepia channel on OFTC).

Comment 11 Emilien Macchi 2016-07-13 13:44:24 UTC
Indeed, using /var/lib/ceph reduced the SELinux alerts to 1. I'll file a bug in the Ceph tracker.

Comment 12 Emilien Macchi 2016-07-13 13:50:07 UTC
And here's the upstream bug:
