Bug 1685355 - Even after setting the desired count of a machineset to 0, the machine still exists.
Summary: Even after setting the desired count of a machineset to 0, the machine still exists.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Alberto
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-05 02:55 UTC by jooho lee
Modified: 2019-03-12 16:10 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-12 16:10:40 UTC
Target Upstream Version:


Attachments
machine controller log (deleted)
2019-03-06 21:47 UTC, jooho lee

Description jooho lee 2019-03-05 02:55:58 UTC
Description of problem:
When I scale a machineset down to 0, the machine does not go away.

~~~
$ oc get machineset
NAME                           DESIRED   CURRENT   READY     AVAILABLE   AGE
ocp4-qt4k4-worker-us-east-2a   1         1         1         1           3h31m
ocp4-qt4k4-worker-us-east-2b   1         1         1         1           3h31m
ocp4-qt4k4-worker-us-east-2c   0         0                               3h31m

$ oc get machine
ocp4-qt4k4-worker-us-east-2a-pkmc8   i-0401e185909dbc7d9   running   m4.large    us-east-2   us-east-2a   3h34m
ocp4-qt4k4-worker-us-east-2b-bb6mj   i-0be14d31f5185139c   running   m4.large    us-east-2   us-east-2b   3h34m
ocp4-qt4k4-worker-us-east-2c-58p8n   i-065b278e7db68b254   running   m4.large    us-east-2   us-east-2c   3h34m

~~~



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. oc scale machineset ocp4-qt4k4-worker-us-east-2b --replicas=0

2. oc get machine
3.

Actual results:
1 worker node always stays alive

Expected results:
All worker nodes that are linked to the machineset should be gone.

Additional info:

Comment 1 jooho lee 2019-03-05 03:05:44 UTC
Sorry, the reproduction steps were wrong.

1. oc delete node 2b-xxxx
2. oc scale machineset ocp4-qt4k4-worker-us-east-2b --replicas=0
3. oc get machine

Then the machine is not deleted properly.

After that, if I scale the machineset back out to 2, there are 3 machines because the previous machine was not deleted.

Comment 2 Alberto 2019-03-06 12:30:27 UTC
Can you share the machine controller logs?

Comment 3 Alberto 2019-03-06 12:53:16 UTC
This is likely happening because the machine deletion is failing to drain the node, since the node object does exist.
To force deletion of the machine, the machine.openshift.io/exclude-node-draining annotation would need to be set.

Comment 4 Alberto 2019-03-06 12:54:35 UTC
(In reply to Alberto from comment #3)
> This is likely happening because the machine deletion is failing to drain the
> node, since the node object does exist.
> To force deletion of the machine, the machine.openshift.io/exclude-node-draining
> annotation would need to be set.

This is likely happening because the machine deletion is failing to drain the node, since the node object does NOT exist.
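
For reference, a quick way to confirm this state is to check that the machine still exists while its node object does not. A minimal sketch, using the machine name from the output above; the node name is hypothetical, since node names are not listed in this report:

~~~
# the stuck machine is still present in the openshift-machine-api namespace
$ oc -n openshift-machine-api get machine ocp4-qt4k4-worker-us-east-2b-bb6mj

# but the node it maps to was already deleted manually (hypothetical node name)
$ oc get node ip-10-0-150-20.us-east-2.compute.internal
Error from server (NotFound): nodes "ip-10-0-150-20.us-east-2.compute.internal" not found
~~~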

Comment 5 jooho lee 2019-03-06 21:47:41 UTC
Created attachment 1541612 [details]
machine controller log

Comment 6 jooho lee 2019-03-06 21:48:03 UTC
I uploaded the machine controller log.

Comment 7 Alberto 2019-03-07 08:49:38 UTC
This behaviour is intended.
When deleting a machine, the associated node will be drained by default unless the machine.openshift.io/exclude-node-draining annotation is set.
If you accidentally deleted the node manually, deleting the backing machine will fail to drain the node and therefore fail to delete the machine.
In that case, permanently deleting the machine will require manual intervention to set the mentioned annotation, which seems safe.
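
A minimal sketch of that manual intervention, assuming the machine name from this report and the openshift-machine-api namespace; the empty annotation value is an assumption, since the comments above only say the annotation needs to be set:

~~~
# mark the machine so the controller skips node draining (empty value assumed)
$ oc -n openshift-machine-api annotate machine ocp4-qt4k4-worker-us-east-2b-bb6mj \
    machine.openshift.io/exclude-node-draining=

# then scale the owning machineset down again (or delete the machine directly)
$ oc -n openshift-machine-api scale machineset ocp4-qt4k4-worker-us-east-2b --replicas=0
~~~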

