Bug 1510306 - cfme is using dynamic pv mistakenly
Summary: cfme is using dynamic pv mistakenly
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.7.z
Assignee: Tim Bielawa
QA Contact: Gaoyun Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-07 07:42 UTC by Gaoyun Pei
Modified: 2018-10-18 12:57 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-18 12:57:01 UTC


Attachments: none

Description Gaoyun Pei 2017-11-07 07:42:14 UTC
Description of problem:
When installing CFME on an OCP 3.7 cluster with a cloud provider enabled, the deployment still used dynamic PVs from the cloud provider even though openshift_management_storage_class=nfs_external was set.


Version-Release number of the following components:
openshift-ansible-3.7.0-0.196.0.git.0.27cd7ec.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Prepare an OCP 3.7 cluster with a cloud provider enabled, so that it has a default StorageClass:
[root@host-192-168-2-140 ~]# oc get storageclass
NAME                 TYPE
standard (default)   kubernetes.io/cinder   


2. With the following parameters added to the Ansible inventory file, run the CFME deployment playbook:

[OSEv3:vars]
...
openshift_management_install_beta=true
openshift_management_app_template=miq-template
openshift_management_storage_class=nfs_external
openshift_management_storage_nfs_external_hostname=openshift-x.x.com
openshift_management_storage_nfs_base_dir=/x


#ansible-playbook -i host /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-management/config.yml


3. After the playbook finishes, check the pod, PV and PVC status in the openshift-management project:
[root@host-192-168-2-140 ~]# oc get pod
NAME                 READY     STATUS    RESTARTS   AGE
httpd-1-ddb8w        1/1       Running   0          33m
manageiq-0           1/1       Running   0          34m
memcached-1-2dckc    1/1       Running   0          33m
postgresql-1-l8xjw   1/1       Running   0          33m

[root@host-192-168-2-140 ~]# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                                             STORAGECLASS   REASON    AGE
miq-app                                    5Gi        RWO           Retain          Available                                                                              34m
miq-db                                     15Gi       RWO           Retain          Available                                                                              34m
pvc-b5bb6e9c-c388-11e7-aafd-fa163ee88be0   15Gi       RWO           Delete          Bound       openshift-management/manageiq-postgresql          standard                 34m
pvc-b63c131f-c388-11e7-aafd-fa163ee88be0   5Gi        RWO           Delete          Bound       openshift-management/manageiq-server-manageiq-0   standard                 34m

[root@host-192-168-2-140 ~]# oc describe pv miq-app
Name:		miq-app
Labels:		template=manageiq-app-pv
Annotations:	<none>
StorageClass:	
Status:		Available
Claim:		
Reclaim Policy:	Retain
Access Modes:	RWO
Capacity:	5Gi
Message:	
Source:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	openshift-x.x.com
    Path:	/x
    ReadOnly:	false
Events:		<none>

[root@host-192-168-2-140 ~]# oc describe pv pvc-b63c131f-c388-11e7-aafd-fa163ee88be0
Name:		pvc-b63c131f-c388-11e7-aafd-fa163ee88be0
Labels:		failure-domain.beta.kubernetes.io/zone=nova
Annotations:	kubernetes.io/createdby=cinder-dynamic-provisioner
		pv.kubernetes.io/bound-by-controller=yes
		pv.kubernetes.io/provisioned-by=kubernetes.io/cinder
StorageClass:	standard
Status:		Bound
Claim:		openshift-management/manageiq-server-manageiq-0
Reclaim Policy:	Delete
Access Modes:	RWO
Capacity:	5Gi
Message:	
Source:
    Type:	Cinder (a Persistent Disk resource in OpenStack)
    VolumeID:	fc8be736-5bab-42bf-939b-86645e94424a
    FSType:	xfs
    ReadOnly:	false
Events:		<none>


[root@host-192-168-2-140 ~]# oc get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
manageiq-postgresql          Bound     pvc-b5bb6e9c-c388-11e7-aafd-fa163ee88be0   15Gi       RWO           standard       34m
manageiq-server-manageiq-0   Bound     pvc-b63c131f-c388-11e7-aafd-fa163ee88be0   5Gi        RWO           standard       34m


[root@host-192-168-2-140 ~]# oc describe pvc manageiq-server-manageiq-0
Name:		manageiq-server-manageiq-0
Namespace:	openshift-management
StorageClass:	standard
Status:		Bound
Volume:		pvc-b63c131f-c388-11e7-aafd-fa163ee88be0
Labels:		name=manageiq
Annotations:	pv.kubernetes.io/bind-completed=yes
		pv.kubernetes.io/bound-by-controller=yes
		volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/cinder
Capacity:	5Gi
Access Modes:	RWO
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  34m		34m		1	persistentvolume-controller			Normal		ProvisioningSucceeded	Successfully provisioned volume pvc-b63c131f-c388-11e7-aafd-fa163ee88be0 using kubernetes.io/cinder
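
The event above confirms the claim was satisfied by the dynamic Cinder provisioner rather than by the pre-created NFS PVs. One quick way to see why is to look for the default-class annotation on the standard StorageClass; a minimal sketch, assuming the class name shown in step 1 (the exact annotation key depends on the release, e.g. storageclass.beta.kubernetes.io/is-default-class on 3.x):

# Illustrative check only; the annotation key may differ by release.
oc get storageclass standard -o yaml | grep -i is-default-class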



Actual results:
CFME is using a dynamic Cinder PV provisioned by OpenStack.

Expected results:
The CFME deployment should use the specified nfs_external storage.

Additional info:

Comment 1 Tim Bielawa 2017-11-07 18:28:42 UTC
This is a bug, but not a critical blocker bug IMHO. We can raise a validation error and note that mixing the Management storage class and the OCP storage class is not supported. Best I can tell from some discussion, this is what happens:


* OCP is configured to use cloud provider storage
* Management is configured using anything other than cloud provider storage class
* Management role notices you want to use NFS external for PV storage, so it creates PV templates and then processes them into actual PVs
* OCP understands that it is configured to use cloud provider storage for PV Claim requests
* When we process the MIQ/CFME template, OCP eventually has to process the request for PV claims and *even though* there are two nfs_external-backed PVs available, it elects to create new ones using cloud provider storage instead.

So, the product, afaict, is behaving correctly, but it is not congruent with what you expect to get as a customer. I think this will be solved with a documentation refresh (possibly?) and an additional validation check before the CFME installation runs.
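
A rough sketch of what such a pre-flight check could look like, run before the CFME playbook (illustrative only, not the actual openshift-ansible validation):

# Abort if a default StorageClass exists while NFS external storage was requested;
# the error message wording is hypothetical.
if oc get storageclass | grep -q '(default)'; then
  echo "ERROR: a default StorageClass is set; openshift_management_storage_class=nfs_external would be ignored by dynamic provisioning" >&2
  exit 1
fi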

Comment 2 Tim Bielawa 2017-11-13 15:55:19 UTC
After asking around on the SME list, it appears this is internal Kube behavior, described here:

https://docs.openshift.com/container-platform/3.4/install_config/persistent_storage/dynamically_provisioning_pvs.html#change-default-storage-class

At most I think we might need to make a documentation update. Put simply, don't expect to get an NFS PV if you have dynamic cloud provider storage configured. It just doesn't work that way. When the app is created, OpenShift is going to use the cloud provider no matter what, even if you already created appropriately sized NFS PVs.
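
For reference, the linked page describes removing the default flag from the dynamic StorageClass so that claims created without an explicit class can bind to pre-created PVs. A minimal sketch against the cluster above, assuming the class is named standard (the annotation key shown is the 3.x beta key; newer releases use storageclass.kubernetes.io/is-default-class):

# Illustrative only; verify the annotation key for your release first.
oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "false"}}}'

With no default class set, the manageiq claims should then be able to bind to the Available miq-app and miq-db NFS PVs.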

