Bug 1510519 - Removal of cloud-provider credentials from openshift-node
Summary: Removal of cloud-provider credentials from openshift-node
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assignee: Kenny Woodson
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-07 15:26 UTC by Kenny Woodson
Modified: 2018-08-03 07:13 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-02 12:20:51 UTC



Description Kenny Woodson 2017-11-07 15:26:50 UTC
Description of problem:
Currently, cloud-provider credentials are being laid down on OpenShift nodes when using AWS.

There are a couple of issues here.
1. This is a security risk: if a node is compromised, a user could potentially gain access to those credentials. They are somewhat limited, but would still grant access to persistent volumes (EBS), tagging, and querying (see the sketch below).
2. This is a challenge when moving to golden images. Authentication-specific credentials should be kept to a minimum and moved into the product, since there are limited methods for landing those credentials on hosts in scale groups.
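To make the exposure concrete, here is a minimal sketch (mine, not from this report) of the kind of calls a user on a compromised node could make with the leaked keys, assuming the AWS CLI is installed; the region and volume ID below are placeholders.

# Hypothetical illustration of the exposure; region and volume ID are placeholders.
export AWS_ACCESS_KEY_ID=<redacted>
export AWS_SECRET_ACCESS_KEY=<redacted>
aws ec2 describe-volumes --region us-east-1        # query EBS volumes
aws ec2 create-tags --region us-east-1 \
    --resources vol-0123456789abcdef0 \
    --tags Key=owner,Value=attacker                # tag resources the node policy allows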

Version-Release number of selected component (if applicable):
atomic-openshift-master-3.7.0-0.196.0.git.0.8448632.el7.x86_64

How reproducible:
Inside openshift-ansible, the openshift_node role lays down the cloud-provider credentials on the node. This is a problem when using AWS.

Steps to Reproduce:
1. Install openshift into AWS using openshift-ansible.
2. Verify credentials are in the following file:
cat /etc/sysconfig/atomic-openshift-node
...
AWS_ACCESS_KEY_ID=<redacted>
AWS_SECRET_ACCESS_KEY=<redacted>
...
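To check every node at once rather than one host at a time, a hedged variant of the check above (my own sketch, assuming the standard openshift-ansible inventory with a "nodes" group):

# Hypothetical fleet-wide check via an Ansible ad-hoc command against the nodes group.
ansible nodes -i hosts -m shell \
  -a "grep -E '^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY)=' /etc/sysconfig/atomic-openshift-node || echo 'no credentials found'"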

Actual results:
Credentials exist on the nodes.

Expected results:
The credentials should only be required on the masters, since the cloud-provider calls come from the masters.

Additional info:
Let's tighten up security by removing the credentials from the nodes and verifying that they are not used anywhere but the master nodes.

I have submitted a PR with the code to remove the AWS credentials from the openshift_node role.  This would solve our problem.

https://github.com/openshift/openshift-ansible/pull/5981

If QE could run through a test suite with this PR, it would give us more confidence that the nodes work correctly without the credentials.

- Tests should include persistent volumes (EBS).
- Move pods to other hosts to ensure PV attach/detach is working (a rough outline follows below).
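A rough outline of such a test, assuming hypothetical object names (pvc-ebs.yaml, deployment-using-pvc.yaml, mypvc) that are not from this report:

# Hypothetical smoke test for EBS-backed PVs after removing the node credentials.
oc create -f pvc-ebs.yaml                  # PVC backed by the EBS storage class
oc create -f deployment-using-pvc.yaml     # deployment whose pod mounts the PVC
oc get pods -o wide                        # note which node the pod landed on
oc adm drain <node> --ignore-daemonsets    # evict the pod so it reschedules on another node
oc get pods -o wide                        # pod should come back Running elsewhere
oc get pvc mypvc                           # PVC should still be Bound, i.e. the volume re-attached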

Anything else I missed?

Comment 2 Kenny Woodson 2017-11-12 15:36:29 UTC
Comment from https://github.com/openshift/openshift-ansible/pull/5981#issuecomment-343356353 states that the credentials are required because they are used to get information regarding other instances.

Comments by github user fraenkel:
The nodes need either creds or a profile because the kubelet retrieves certain bits of info on start, e.g.
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go#L205
which is populated via
https://github.com/kubernetes/kubernetes/blob/ff82be09e603b3e0e33d238290635d5a26a8d276/pkg/cloudprovider/providers/aws/aws.go#L1610
which states that we might as well get all our information from the instance returned by the EC2 API.
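For context, a rough shell approximation (my sketch, not the linked Go code) of the self-lookup the kubelet does on start: it reads its own instance ID from the EC2 metadata service and then describes that instance through the EC2 API, and that second call is what needs either static keys or an instance profile.

# Hypothetical approximation of the kubelet's startup lookup on an AWS node.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-instances --instance-ids "$INSTANCE_ID"   # requires credentials or an instance profile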

Comment 3 Kenny Woodson 2017-11-12 20:35:27 UTC
Instance profile workaround until we can officially remove the credentials from the node:
https://github.com/openshift/openshift-ansible/pull/6095
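One way to sanity-check that workaround (my sketch, not from the PR): confirm the node can obtain temporary credentials from the metadata service, which removes the need for static keys in /etc/sysconfig/atomic-openshift-node.

# Hypothetical check: with an IAM instance profile attached, this lists the role name,
# and querying that role name returns temporary credentials.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/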

Comment 8 ge liu 2018-08-03 07:13:33 UTC
@sdodson, I found that /etc/origin/master/master.env also has AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY entries with the keys displayed in clear text, so this issue won't be fixed either, right?

