Bug 1684368 - `oc adm prune deployments` could not delete the deployer pod
Summary: `oc adm prune deployments` could not delete the deployer pod
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Command Line Interface
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.1.0
Assignee: Maciej Szulik
QA Contact: zhou ying
Depends On:
Reported: 2019-03-01 05:57 UTC by zhou ying
Modified: 2019-04-02 02:11 UTC (History)
3 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: With the new GC deletion mechanism, the propagation policy was not being set properly by the `oc adm prune deployments` command. Consequence: The deployer pod was not removed. Fix: Set the appropriate deletion options. Result: `oc adm prune deployments` correctly removes all of a deployment's dependents.
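The Doc Text above says the pruner was not setting a propagation policy, so the garbage collector could leave dependents such as the deployer pod behind. A minimal sketch of the pattern, using simplified stand-ins for `metav1.DeletionPropagation` and `metav1.DeleteOptions` (the real types live in `k8s.io/apimachinery`; the choice of `Background` propagation here is an assumption, since the Doc Text only says "appropriate deletion options"):

```go
package main

import "fmt"

// DeletionPropagation is a simplified stand-in for metav1.DeletionPropagation.
type DeletionPropagation string

const (
	// Orphan leaves dependents (e.g. the deployer pod) in place.
	DeletePropagationOrphan DeletionPropagation = "Orphan"
	// Background lets the garbage collector delete dependents after the owner.
	DeletePropagationBackground DeletionPropagation = "Background"
)

// DeleteOptions is a simplified stand-in for metav1.DeleteOptions.
type DeleteOptions struct {
	PropagationPolicy *DeletionPropagation
}

// pruneDeleteOptions builds the options the pruner should pass when deleting
// an RC: without an explicit (non-orphaning) policy, dependents may survive.
func pruneDeleteOptions() DeleteOptions {
	policy := DeletePropagationBackground
	return DeleteOptions{PropagationPolicy: &policy}
}

func main() {
	opts := pruneDeleteOptions()
	fmt.Println(*opts.PropagationPolicy)
}
```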
Clone Of:
Last Closed:
Target Upstream Version:

Attachments (Terms of Use)

Description zhou ying 2019-03-01 05:57:31 UTC
Description of problem:
Running `oc adm prune deployments` only deletes the RC; the related deployer pods are not deleted.

Version-Release number of selected component (if applicable):
Client Version: v4.0.6
Server Version: v1.12.4+0cbcfc5afe

How reproducible:

Steps to Reproduce:
1. Create a project, and create an app;
   `oc new-app centos/ruby-25-centos7~`
2. Roll out more than 5 deployments:
   `oc rollout latest dc/ruby-ex`
3. Use the command: 
   `oc adm prune deployments --keep-complete=1 --keep-younger-than=10m  --loglevel=6 --confirm`

Actual results:
3. Only the RCs are deleted; the deployer pods still exist:
[root@preserve-master-yinzhou ~]# oc adm prune deployments --keep-complete=1 --keep-younger-than=10m  --loglevel=6 --confirm
I0301 00:21:12.723659   16584 loader.go:359] Config loaded from file /root/0221/zhouy/auth/kubeconfig
I0301 00:21:12.934121   16584 round_trippers.go:405] GET 200 OK in 209 milliseconds
I0301 00:21:12.969963   16584 round_trippers.go:405] GET 200 OK in 28 milliseconds
I0301 00:21:12.997500   16584 prune.go:54] Creating deployment pruner with keepYoungerThan=10m0s, orphans=false, keepComplete=1, keepFailed=1
I0301 00:21:12.997593   16584 prune.go:113] Deleting deployment "ruby-ex-9"
I0301 00:21:13.030478   16584 round_trippers.go:405] DELETE 200 OK in 32 milliseconds
I0301 00:21:13.030880   16584 prune.go:113] Deleting deployment "ruby-ex-7"
I0301 00:21:13.063955   16584 round_trippers.go:405] DELETE 200 OK in 33 milliseconds
zhouy       ruby-ex-9
zhouy       ruby-ex-7

[root@dhcp-140-138 ~]# oc get po 
NAME                READY   STATUS             RESTARTS   AGE
ruby-ex-1-build     0/1     Completed          0          146m
ruby-ex-10-deploy   0/1     DeadlineExceeded   0          120m
ruby-ex-11-deploy   0/1     Completed          0          29m
ruby-ex-12-4w9sb    1/1     Running            0          25m
ruby-ex-12-deploy   0/1     Completed          0          26m
ruby-ex-3-deploy    0/1     Completed          0          138m
ruby-ex-4-deploy    0/1     Completed          0          136m
ruby-ex-5-deploy    0/1     Completed          0          129m
ruby-ex-6-deploy    0/1     Completed          0          122m
ruby-ex-7-deploy    0/1     Completed          0          122m
ruby-ex-9-deploy    0/1     Completed          0          121m
[root@dhcp-140-138 ~]# oc get rc
NAME         DESIRED   CURRENT   READY   AGE
ruby-ex-10   0         0         0       121m
ruby-ex-11   0         0         0       29m
ruby-ex-12   1         1         1       26m

Expected results:
3. The RC and related deployer pod should be deleted at the same time.

Additional info:
When using `oc adm prune builds`, the build and its builder pod are deleted at the same time.

Comment 1 Maciej Szulik 2019-03-01 20:57:41 UTC
Fix in

Comment 2 zhou ying 2019-04-02 02:11:56 UTC
Confirmed with the latest OCP; the issue has been fixed:
[zhouying@dhcp-140-138 extended]$ oc version --short
Client Version: v4.0.22
Server Version: v1.12.4+87e98f4
Payload: 4.0.0-0.nightly-2019-03-28-030453

[zhouying@dhcp-140-138 extended]$ oc adm prune deployments --keep-complete=1 --keep-younger-than=1m   --confirm
zhouyt      ruby-ex-5
zhouyt      ruby-ex-4
zhouyt      ruby-ex-3
zhouyt      ruby-ex-2
zhouyt      ruby-ex-1
[zhouying@dhcp-140-138 extended]$ oc get po 
NAME               READY   STATUS      RESTARTS   AGE
ruby-ex-3-build    0/1     Completed   0          25m
ruby-ex-4-build    0/1     Completed   0          23m
ruby-ex-5-build    0/1     Completed   0          19m
ruby-ex-6-build    0/1     Completed   0          18m
ruby-ex-6-deploy   0/1     Completed   0          16m
ruby-ex-7-build    0/1     Completed   0          14m
ruby-ex-7-deploy   0/1     Completed   0          13m
ruby-ex-7-ggrdx    1/1     Running     0          13m
[zhouying@dhcp-140-138 extended]$ oc get rc
NAME        DESIRED   CURRENT   READY   AGE
ruby-ex-6   0         0         0       16m
ruby-ex-7   1         1         1       13m
