Bug 1365582 - Potential additional guidance for deleting a deployment in Web Console
Summary: Potential additional guidance for deleting a deployment in Web Console
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 3.2.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Samuel Padgett
QA Contact: Yadan Pei
Depends On:
Reported: 2016-08-09 15:20 UTC by Justin Pierce
Modified: 2017-03-08 18:26 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The web console prevented users from deleting replication controllers with active pods, to avoid orphaning them. Consequence: The "Delete" menu item was disabled for replication controllers with active replicas, but it wasn't obvious why. Fix: The web console now provides help text explaining the restriction, as well as example commands for deleting from the CLI (which scales the replication controller down automatically). Result:
Clone Of:
Last Closed: 2016-09-27 09:43:10 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1933 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.3 Release Advisory 2016-09-27 13:24:36 UTC

Description Justin Pierce 2016-08-09 15:20:27 UTC
Description of problem:
Created a JBoss EAP deployment using a template in the web console, then attempted to destroy all resources associated with that deployment by deleting the deployment config.
Deletion of the DC did not cascade, leaving behind a Deployment, RC, pods, etc. There appeared to be no way to clean up these artifacts through the GUI, since "Delete" was grayed out for the deployment. It was not intuitive that the GUI required scaling the pods to 0 before deletion would be allowed.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Add to project: jboss-eap64-openshift 1.2
2. Use the "Try it" quickstart
3. Allow deploy to complete
4. Delete DC or Service without scaling down pods

Actual results:
Deployment/RC/Pods remain and seemingly cannot be deleted. The GUI warns that the DC associated with the Deployment is gone, but "Delete" is grayed out for the deployment #.

Expected results:
The layout of the GUI overview page communicates two things visually.
(1) The Deployment is the primary artifact of the action I took when I created the application (based on visual size / detail / ability to delve into pod info/etc).
(2) The Service groups everything together (based on its box visually encompassing the deployment).

As a new user of the system, I would anticipate that deleting either the DC or the Service would delete all application resources (deployment/rc/pods/routes/service). At a minimum, I would expect the GUI to inform me why Delete was grayed out for my deployment # so that I would know to scale down my pods.
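For reference, the workaround the console implicitly required can be sketched from the CLI. The deployment config name `eap-app` is a placeholder; the EAP quickstart's actual DC name may differ:

```shell
# Scale the deployment config's active replicas down to 0 first,
# since the console disables Delete while replicas are running.
oc scale dc/eap-app --replicas=0

# Then delete the deployment config itself.
oc delete dc/eap-app
```

These commands assume an active `oc` login against the cluster and the current project containing the deployment.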

Additional info:

Comment 1 Jessica Forrester 2016-08-10 12:44:00 UTC
As far as cascading deletion goes, we are already tracking that separately.

The console requiring someone to scale down before they can delete is temporary until we can do cascading deletion properly on the backend. Agreed, it's not clear that this is why Delete is grayed out.

Comment 4 Troy Dawson 2016-08-19 21:39:43 UTC
This has been merged into ose and is in OSE v3.3.0.23 or newer.

Comment 6 Yadan Pei 2016-08-22 02:09:21 UTC
Now when deleting a DC/BC, you get a warning dialog explaining that the web console can't cascade the delete, and suggesting a CLI command to delete the deployment config/build config along with all of its deployments/builds.
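The exact command text in the dialog isn't quoted in this bug, but a cascading delete from the CLI looks along these lines; `eap-app` and `eap-app-build` are placeholder names:

```shell
# Deleting a deployment config from the CLI also removes its
# deployments (replication controllers) and their pods.
oc delete dc/eap-app

# Likewise, deleting a build config removes its builds.
oc delete bc/eap-app-build
```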

Comment 8 errata-xmlrpc 2016-09-27 09:43:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.
