Bug 1511925 - Can not login Kibana, Kibana error:[security_exception] no permissions for indices:data/read/mget
Summary: Can not login Kibana, Kibana error:[security_exception] no permissions for indices:data/read/mget
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.5.z
Assignee: Rich Megginson
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-10 12:45 UTC by Junqi Zhao
Modified: 2017-12-14 21:02 UTC
CC List: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
I don't think a doc update is required because the broken code was never shipped to customers.
Clone Of:
Environment:
Last Closed: 2017-12-14 21:02:32 UTC


Attachments
Kibana UI (deleted), 2017-11-10 12:45 UTC, Junqi Zhao
logging dump output (deleted), 2017-11-10 12:47 UTC, Junqi Zhao
journal log (deleted), 2017-11-13 05:44 UTC, Junqi Zhao


Links
System ID | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHBA-2017:3438 | normal | SHIPPED_LIVE | OpenShift Container Platform 3.6 and 3.5 bug fix and enhancement update | 2017-12-15 01:58:11 UTC
GitHub openshift/origin-aggregated-logging pull 784 | None | None | None | 2017-11-13 23:58:47 UTC

Description Junqi Zhao 2017-11-10 12:45:34 UTC
Created attachment 1350459 [details]
Kibana UI

Description of problem:
Deployed logging 3.5 and tried to log in to Kibana, but it failed; it seems to be related to authentication. Error in Kibana:
********************************************************************************
Fatal Error 
Courier Fetch Error: unhandled courier request error: [security_exception] no permissions for indices:data/read/mget

Version: 4.6.4
Build: 10229

Error: unhandled courier request error: [security_exception] no permissions for indices:data/read/mget
handleError@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:98251:23
AbstractReqProvider/AbstractReq.prototype.handleFailure@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:98171:15
callClient/</<@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:98065:14
callClient/<@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:98063:10
processQueue@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:42452:29
scheduleProcessQueue/<@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:42468:28
$eval@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:43696:17
$digest@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:43507:16
$apply@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:43804:14
done@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:38253:37
completeRequest@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:38451:8
requestLoaded@https://kibana.apps.1110-c39.qe.rhcloud.com/bundles/commons.bundle.js?v=10229:38392:10
********************************************************************************

For more details, please see the attached logging environment dump file.
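
For anyone triaging the same error, a minimal diagnostic sketch (assuming the default "logging" project, the component=es pod label, and the standard admin cert paths inside the Elasticsearch container; <es-pod-name> is a placeholder for the pod returned by the first command):

# oc get pods -n logging -l component=es
# oc exec -n logging <es-pod-name> -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://localhost:9200/_cat/indices?v

Listing the indices with the admin client cert bypasses the per-user ACLs, which helps separate "index is missing" from "user has no permission on the index".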

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.5.5.31.47
kubernetes v1.5.2+43a9be4
etcd 3.1.0

images:
logging-curator/images/v3.5.5.31.47-1
logging-elasticsearch/images/3.5.0-48
logging-kibana/images/3.5.0-44
logging-fluentd/images/3.5.0-39
logging-auth-proxy/images/3.5.0-38

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging 3.5 and log in to Kibana (see the route lookup sketch after this list)
2.
3.
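
A sketch of the login step itself, assuming the default route name logging-kibana in the logging project:

# oc get route logging-kibana -n logging -o jsonpath='{.spec.host}'

Open https://<that hostname>/ in a browser and authenticate as an OpenShift user that can view at least one project.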

Actual results:
Failed to log in to Kibana.

Expected results:
Can log in to Kibana.

Additional info:

Comment 1 Junqi Zhao 2017-11-10 12:47:10 UTC
Created attachment 1350461 [details]
logging dump output

Comment 3 Noriko Hosoi 2017-11-10 17:23:57 UTC
Hi @Junqi,

Found these warnings in logging-20171110_072210/project:
29m        29m         1         logging-kibana-1-wzgr5         Pod                                                              Warning   FailedMount         {kubelet host-8-241-5.host.centralci.eng.rdu2.redhat.com}    MountVolume.SetUp failed for volume "kubernetes.io/secret/c4d78e40-c60d-11e7-a33d-fa163ef17798-kibana" (spec.Name: "kibana") pod "c4d78e40-c60d-11e7-a33d-fa163ef17798" (UID: "c4d78e40-c60d-11e7-a33d-fa163ef17798") with: secrets "logging-kibana" not found

29m        29m         8         logging-kibana-1               ReplicationController                                            Warning   FailedCreate        {replication-controller }                                    Error creating: pods "logging-kibana-1-" is forbidden: service account logging/aggregated-logging-kibana was not found, retry after the service account is created

28m        28m         1         logging-kibana                 DeploymentConfig                                                 Warning   FailedCreate        {logging-kibana-1-deploy }                                   Error creating: pods "logging-kibana-1-" is forbidden: service account logging/aggregated-logging-kibana was not found, retry after the service account is created

Could you check for any OpenShift-related errors in the system log?

If you search for the string [1] with Google, you'll find that quite a number of people ran into this problem, and some of them reported it was a resource issue, e.g., more memory was needed. I'm hoping the system log contains a clue.

[1] - mountvolume setup failed for volume kubernetes io secret
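
A few checks that should confirm whether the objects named in those warnings eventually appeared (a sketch; all commands assume the "logging" project and the object names quoted in the events above):

# oc get secret logging-kibana -n logging
# oc get sa aggregated-logging-kibana -n logging
# oc get events -n logging | grep -iE 'failedmount|failedcreate'
# oc describe pod logging-kibana-1-wzgr5 -n logging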

Comment 4 Junqi Zhao 2017-11-13 05:43:52 UTC
> Could you check for any OpenShift-related errors in the system log?
> 
> If you search for the string [1] with Google, you'll find that quite a
> number of people ran into this problem, and some of them reported it was a
> resource issue, e.g., more memory was needed. I'm hoping the system log
> contains a clue.
> 
> [1] - mountvolume setup failed for volume kubernetes io secret

I don't think it is related to resources; if it were a resource issue, the Kibana pod could not be in Running status. And we did not have this issue before.

Error creating: pods "logging-kibana-1-" is forbidden: service account logging/aggregated-logging-kibana was not found, retry after the service account is created

I think the deployment was trying to find the service account logging/aggregated-logging-kibana before it was ready; after a while the service account was created, so this warning is no longer thrown.
# oc get sa -n logging
NAME                               SECRETS   AGE
aggregated-logging-curator         2         3h
aggregated-logging-elasticsearch   2         3h
aggregated-logging-fluentd         2         3h
aggregated-logging-kibana          2         3h
builder                            2         3h
default                            2         3h
deployer                           2         3h

# journalctl | grep -i error | grep -i mount | grep -i "kibana"
found "invalid container name" info, such as
Nov 13 00:42:28 host-8-241-68.host.centralci.eng.rdu2.redhat.com atomic-openshift-node[26871]: I1113 00:42:28.546780   26871 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-origin-openshift.local.volumes-pods-d6c8c186\x2dc815\x2d11e7\x2da6f0\x2dfa163e365aa0-volumes-kubernetes.io\x7esecret-aggregated\x2dlogging\x2dkibana\x2dtoken\x2dpzx0r.mount: invalid container name
Nov 13 00:42:28 host-8-241-68.host.centralci.eng.rdu2.redhat.com atomic-openshift-node[26871]: I1113 00:42:28.547482   26871 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-origin-openshift.local.volumes-pods-d6c8c186\x2dc815\x2d11e7\x2da6f0\x2dfa163e365aa0-volumes-kubernetes.io\x7esecret-kibana.mount: invalid container name
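
A minimal sketch for confirming that the secrets did get mounted into the running Kibana pod (the pod name is the one from the events above and will differ in other environments):

# oc get pod logging-kibana-1-wzgr5 -n logging -o jsonpath='{.spec.volumes[*].secret.secretName}'
# oc describe sa aggregated-logging-kibana -n logging

The first command prints the secret names backing the pod's volumes; the second shows the token and dockercfg secrets attached to the service account.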

Comment 5 Junqi Zhao 2017-11-13 05:44:28 UTC
Created attachment 1351388 [details]
journal log

Comment 7 Rich Megginson 2017-11-14 23:09:39 UTC
https://github.com/openshift/origin-aggregated-logging/pull/784

koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=625144
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.5-rhel-7-docker-candidate-37544-20171114225315
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-56
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.5
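
One way to try the candidate build in an existing 3.5 environment, sketched under the assumption that the Elasticsearch container in the logging DeploymentConfig is named "elasticsearch" and that no image trigger overrides the change; "logging-es-xyz123" is a hypothetical DC name, so look up the real one first:

# oc get dc -n logging
# oc set image dc/logging-es-xyz123 elasticsearch=brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-56 -n logging
# oc rollout latest dc/logging-es-xyz123 -n logging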

Comment 8 Junqi Zhao 2017-11-15 05:31:10 UTC
The issue is fixed; the Kibana error "[security_exception] no permissions for indices:data/read/mget" no longer appears. However, a regression bug was found: https://bugzilla.redhat.com/show_bug.cgi?id=1513284

images:
logging-elasticsearch/images/3.5.0-56
logging-kibana/images/3.5.0-51
logging-fluentd/images/3.5.0-46
logging-auth-proxy/images/3.5.0-45
logging-curator/images/v3.5.5.31.47-8

# openshift version
openshift v3.5.5.31.47
kubernetes v1.5.2+43a9be4
etcd 3.1.0
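
For reference, one way to confirm which image builds the running pods picked up (a sketch using a jsonpath template; it prints one "pod <TAB> image" pair per line):

# oc get pods -n logging -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'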

Comment 11 errata-xmlrpc 2017-12-14 21:02:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3438

