Summary: Pod QoS tiers are different between OpenShift 3.2 and 3.3
Product: OpenShift Container Platform
Reporter: Weihua Meng <wmeng>
Component: Pod
Assignee: Derek Carr <decarr>
Status: CLOSED ERRATA
QA Contact: Weihua Meng <wmeng>
Version: 3.2.1
CC: agoldste, aos-bugs, jokerman, mmccomas, tdawson
Doc Type: Bug Fix
Doc Text: Previously, pods that had a resource request of 0 and specified limits were classified as BestEffort for that resource when they should have been Burstable. This bug has been corrected so that such pods are now correctly classified as Burstable.
Last Closed: 2016-09-12 17:35:45 UTC
Type: Bug
Description Weihua Meng 2016-07-18 10:07:01 UTC
Description of problem:
For the same pod, it is considered QoS BestEffort in OpenShift 3.2 but Burstable in OpenShift 3.3.

Version-Release number of selected component (if applicable):

OpenShift 3.2:
openshift v3.2.x-1-g2265530
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

OpenShift 3.3:
openshift v3.3.x
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

How reproducible:
Always

Steps to Reproduce:
1. Create a BestEffort quota:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/quota/quota-besteffort.yaml

Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     2

2. Create a pod that OpenShift 3.2 classifies as BestEffort:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/quota/pod-besteffort.yaml

3. oc describe quota

4. oc describe pods

Actual results:

On OpenShift 3.2:

3.
Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      1     2

4.
QoS Tier:
  cpu:     BestEffort
  memory:  BestEffort
Limits:
  cpu:     500m
  memory:  256Mi
Requests:
  memory:  0
  cpu:     0

On OpenShift 3.3:

3.
Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     2

4.
Limits:
  cpu:     500m
  memory:  256Mi
Requests:
  cpu:     0
  memory:  0
QoS Tier:  Burstable

Expected results:
QoS classification should be consistent between the two versions.

Additional info:
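For context on why the quota's Used count differs: a BestEffort-scoped quota only counts pods whose every resource is classified BestEffort, so one Burstable resource excludes the pod from the scope. A minimal Go sketch of that scope check, assuming per-resource QoS as in Kubernetes 1.2/1.3 (the function name and map shape are illustrative, not the actual Kubernetes source):

package main

import "fmt"

// podMatchesBestEffortScope is an illustrative stand-in for the quota
// BestEffort scope: the pod counts against the quota only when every
// resource in the pod is classified BestEffort.
func podMatchesBestEffortScope(resourceTiers map[string]string) bool {
	for _, tier := range resourceTiers {
		if tier != "BestEffort" {
			return false
		}
	}
	return true
}

func main() {
	// The reproducer pod under the buggy 3.2 rule: both resources BestEffort.
	fmt.Println(podMatchesBestEffortScope(map[string]string{
		"cpu": "BestEffort", "memory": "BestEffort",
	})) // true -> quota shows pods Used = 1

	// The same pod under the 3.3 rule: Burstable, so it no longer matches.
	fmt.Println(podMatchesBestEffortScope(map[string]string{
		"cpu": "Burstable", "memory": "Burstable",
	})) // false -> quota shows pods Used = 0
}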
Comment 1 Derek Carr 2016-07-18 15:01:28 UTC
This looks like a bug in OpenShift 3.2; will investigate.
Comment 2 Derek Carr 2016-07-18 15:24:51 UTC
Kubernetes 1.2 has a bug in how it evaluates QoS when a request is 0 and a limit is specified:

* In 1.2, a resource was BestEffort if its request was unspecified or 0.
* The proper behavior is that a resource is BestEffort only if it has no limit specified and its request is unspecified or 0.
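A minimal Go sketch contrasting the two per-resource rules (illustrative pseudologic, not the actual Kubernetes source; the Guaranteed tier is omitted since only the BestEffort/Burstable distinction is at issue here):

package main

import "fmt"

// qosTier12 mirrors the buggy Kubernetes 1.2 rule: a resource is
// BestEffort whenever its request is unspecified or 0, even if a
// limit is set.
func qosTier12(requestZeroOrUnset, limitSet bool) string {
	if requestZeroOrUnset {
		return "BestEffort"
	}
	return "Burstable"
}

// qosTierFixed mirrors the corrected rule: a resource is BestEffort
// only when no limit is set AND the request is unspecified or 0.
func qosTierFixed(requestZeroOrUnset, limitSet bool) string {
	if !limitSet && requestZeroOrUnset {
		return "BestEffort"
	}
	return "Burstable"
}

func main() {
	// The reproducer pod: request 0 with a limit set.
	fmt.Println(qosTier12(true, true))    // BestEffort (3.2 behavior, wrong)
	fmt.Println(qosTierFixed(true, true)) // Burstable  (3.3 behavior, correct)
}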
Comment 3 Derek Carr 2016-07-18 17:40:15 UTC
Fix for the edge case is in the Origin PR: https://github.com/openshift/ose/pull/308

The behavior described for OpenShift 3.3 (Burstable) is correct moving forward.
Comment 4 Troy Dawson 2016-08-01 22:06:09 UTC
The pull request has not been merged. I'm marking this back to assigned. Please move it to Modified when the pull request has been merged. I'm also moving the target to 3.3.0, per the conversation in the pull request.
Comment 5 Andy Goldstein 2016-08-08 20:06:42 UTC
This is a code fix for 3.2.x. Correcting version & target release.
Comment 6 Derek Carr 2016-08-12 16:42:51 UTC
Merged into 3.2.x stream.
Comment 7 Weihua Meng 2016-08-16 03:03:42 UTC
Not in the latest 3.2 puddle; waiting for a new puddle.
Comment 9 Weihua Meng 2016-08-25 09:10:06 UTC
Fixed.

openshift v3.2.1.x
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5

Now the QoS tier in OpenShift 3.2.1 is consistent with OpenShift 3.3.

For OpenShift 3.2.1:

Scopes: BestEffort
 * Matches all pods that have best effort quality of service.
Resource  Used  Hard
--------  ----  ----
pods      0     2

QoS Tier:
  cpu:     Burstable
  memory:  Burstable
Limits:
  cpu:     1
  memory:  1Gi
Requests:
  memory:  0
  cpu:     0
Comment 11 errata-xmlrpc 2016-09-12 17:35:45 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2016:1853