Bug 1519139 - .all index doesn't show up in kibana UI
Summary: .all index doesn't show up in kibana UI
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.4.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-30 09:16 UTC by Nicolas Nosenzo
Modified: 2018-04-13 09:35 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-11 16:11:35 UTC


Attachments
Enabled ops cluster, kibana UI, no log entries under .all index (deleted)
2018-04-08 09:55 UTC, Junqi Zhao
Enabled ops cluster, kibana-ops UI, there are log entries under .all index (deleted)
2018-04-08 09:56 UTC, Junqi Zhao
logging dump output (deleted)
2018-04-08 09:57 UTC, Junqi Zhao


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1134 normal SHIPPED_LIVE OpenShift Container Platform 3.4 and 3.3 bug fix update 2018-04-18 11:01:11 UTC

Description Nicolas Nosenzo 2017-11-30 09:16:50 UTC
Description of problem:



The error in the kibana logs:

2017-11-29T14:47:36.881182145Z [2017-11-29 14:47:36,881][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata

The error in the kibana UI:
Discover: [security_exception] no permissions for indices:data/read/msearch

Facts:
- User has been granted the cluster-reader role directly, not through groups
- User can query the index pattern and it returns all the hits:
 
# curl -sv -H "X-Proxy-Remote-User: `oc whoami`" -H "Authorization: Bearer `oc whoami -t`" -k https://`oc get svc logging-kibana -o jsonpath='{.spec.clusterIP}'`/elasticsearch/.all/_search?sort=@timestamp:desc | python -mjson.tool

$ head es_all_20171129.txt
{
    "_shards": {
        "failed": 0,
        "successful": 656,
        "total": 656
.....
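The shard counts in the response excerpt above can be checked programmatically. A minimal sketch, using a hypothetical sample payload shaped like the excerpt (real data would come from the curl query against the logging-kibana service):

```python
import json

# Hypothetical sample mirroring the head of the .all _search response.
sample = """
{
    "_shards": {
        "failed": 0,
        "successful": 656,
        "total": 656
    }
}
"""

resp = json.loads(sample)
shards = resp["_shards"]

# A healthy query touches every shard and fails on none.
assert shards["failed"] == 0
assert shards["successful"] == shards["total"]
print("all", shards["total"], "shards answered")
```

This is just a quick sanity check on the response; it confirms Elasticsearch itself answered the query cleanly even while the Kibana UI reported a security_exception.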


Version-Release number of selected component (if applicable):

It seems to also be hitting EFK clusters with the below components:

/root/buildinfo/Dockerfile-openshift3-logging-curator-v3.4.1.44.26-4
/root/buildinfo/Dockerfile-openshift3-logging-elasticsearch-3.4.1-45
/root/buildinfo/Dockerfile-openshift3-logging-fluentd-3.4.1-30
/root/buildinfo/Dockerfile-openshift3-logging-kibana-3.4.1-36
/root/buildinfo/Dockerfile-openshift3-logging-auth-proxy-3.4.0-7

How reproducible:

100% in customer env.

Steps to Reproduce:
1. Update to latest 3.4 logging image



Actual results:
Kibana UI fails

Expected results:
Kibana UI shows the .all index to a cluster-admin/reader user

Additional info:
This has been reported on https://bugzilla.redhat.com/show_bug.cgi?id=1499762 for 3.6

Comment 1 Nicolas Nosenzo 2017-11-30 09:17:17 UTC
I'm now collecting the list of indices associated with the alias ".all":
 
QUERY=/_alias/.all?pretty es_util

Comment 2 Nicolas Nosenzo 2017-11-30 12:24:06 UTC
The amount of indices under the .all alias:

$ wc -l 20171130_es_util_aliases.log 
3112 20171130_es_util_aliases.log
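The membership count can also be pulled from the alias JSON directly rather than from a raw line count of the dump. A minimal sketch with a hypothetical two-index fragment of the `GET /_alias/.all` response (the real dump, 20171130_es_util_aliases.log, was far larger):

```python
import json

# Hypothetical fragment of a GET /_alias/.all response; each top-level
# key is an index, and its "aliases" map lists the aliases it carries.
sample = """
{
    "project.logging.2018.04.08": {"aliases": {".all": {}}},
    "project.test.2018.04.07":    {"aliases": {".all": {}}}
}
"""

aliases = json.loads(sample)

# Keep only the indices that actually carry the .all alias.
members = [idx for idx, body in aliases.items()
           if ".all" in body.get("aliases", {})]
print(len(members), "indices under .all")
```

Counting keys this way avoids over-counting, since `wc -l` on pretty-printed JSON counts formatting lines, not indices.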

I've also confirmed that user used for testing has the cluster-reader role.

Comment 3 Jeff Cantrill 2017-12-04 22:19:28 UTC
Nicolas,

Can you tell me if this cluster was deployed with the ops cluster enabled?  Are you possibly seeing behavior as described here: https://bugzilla.redhat.com/show_bug.cgi?id=1519705

Comment 4 Nicolas Nosenzo 2017-12-05 12:07:42 UTC
(In reply to Jeff Cantrill from comment #3)
> Nicolas,
> 
> Can you tell me if this cluster was deployed with the ops cluster enabled? 
> Are you possibly seeing behavior as described here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1519705

Jeff, the cluster was deployed with ops disabled. 

Anyhow, I don't see the "IndexNotFoundException[no such index]" error message you mention on that bugzilla.

Comment 5 Nicolas Nosenzo 2018-01-19 09:26:29 UTC
@Jeff, Is this a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1499762 ?

Comment 6 Jeff Cantrill 2018-01-19 19:38:52 UTC
They are different OCP versions for which we backported the same fix.  We are unlikely to fix in 3.4 so if this is resolved in later releases please close this issue.

Comment 7 Jeff Cantrill 2018-03-14 15:52:03 UTC
This may be resolved by v3.4.1.44.38 of the elasticsearch image, which includes a fix to the openshift elasticsearch plugin.

Comment 9 Junqi Zhao 2018-04-08 09:54:37 UTC
For a non-ops cluster the issue is fixed: the .all index shows up in Kibana, there is no error in the Kibana UI, and project logs are displayed in the Kibana UI.


But for a cluster with ops enabled, there are no project logs under the .all index or the separate project indices; see the attached pictures.

rsh'ed into the es pods: _all and project.** do not exist in cluster metadata
# cat /elasticsearch/logging-es/logs/logging-es.log
[2018-04-08 09:13:10,648][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,648][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,649][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,649][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,760][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
[2018-04-08 09:13:10,760][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
[2018-04-08 09:13:11,780][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
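The PrivilegesEvaluator warnings above can be grepped for the index names they complain about. A minimal Python sketch over two of the log lines (not part of the original report; the trailing stray quote in the project lines is handled by the regex):

```python
import re

# Two warning lines copied from logging-es.log above.
log_lines = [
    '[2018-04-08 09:13:10,648][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata',
    '[2018-04-08 09:13:10,760][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata',
]

# Extract the index or pattern name each warning refers to;
# the optional quote absorbs the stray " seen in the project lines.
pattern = re.compile(
    r'PrivilegesEvaluator\] (\S+?)"? does not exist in cluster metadata')

missing = sorted({m.group(1)
                  for line in log_lines
                  for m in [pattern.search(line)] if m})
print(missing)
# → ['_all', 'project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08']
```

Deduplicating this way makes it easy to see which aliases and per-project indices the plugin believes are absent from cluster metadata.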

Comment 10 Junqi Zhao 2018-04-08 09:55:31 UTC
Created attachment 1418836 [details]
Enabled ops cluster, kibana UI, no log entries under .all index

Comment 11 Junqi Zhao 2018-04-08 09:56:06 UTC
Created attachment 1418837 [details]
Enabled ops cluster, kibana-ops UI, there are log entries under .all index

Comment 12 Junqi Zhao 2018-04-08 09:56:54 UTC
# openshift version
openshift v3.4.1.44.52
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0


Images
logging-deployer/images/v3.4.1.44.52-2
logging-curator/images/v3.4.1.44.52-2
logging-fluentd/images/v3.4.1.44.38-11
logging-elasticsearch/images/v3.4.1.44.38-12
logging-kibana/images/v3.4.1.44.38-10
logging-auth-proxy/images/v3.4.1.44.38-10

Comment 13 Junqi Zhao 2018-04-08 09:57:21 UTC
Created attachment 1418838 [details]
logging dump output

