Bug 1600077 - cns-deploy tool fails to deploy CNS and errors out with an error that could not find gluster pods even though created
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: cns-deploy-tool
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: CNS 3.10
Assignee: Saravanakumar
QA Contact: Ashmitha Ambastha
Blocks: 1444734 1568862
 
Reported: 2018-07-11 11:31 UTC by Ashmitha Ambastha
Modified: 2018-12-06 19:55 UTC
CC: 10 users

Last Closed: 2018-09-12 09:27:30 UTC


Attachments


Links
Red Hat Product Errata RHEA-2018:2685 (last updated 2018-09-12 09:28:07 UTC)

Description Ashmitha Ambastha 2018-07-11 11:31:04 UTC
Description of problem:
-----------------------
On a setup with OCP deployed from the latest builds (oc v3.10.15),
cns-deploy fails with an error that it could not find the gluster pods, even though the gluster pods have been created and are in the Ready (1/1) and Running state. I have also tried pre-pulling the latest CNS images before running the cns-deploy tool; it still fails.

----------- Running cns-deploy --------------------

# cns-deploy -n storage-project -g topology_cns.json -l cns_sde.log  -v
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 111   - rpcbind (for glusterblock)
 * 2222  - sshd (if running GlusterFS in a pod)
 * 3260  - iSCSI targets (for glusterblock)
 * 24006 - glusterblockd
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool
 * dm_multipath
 * target_core_user

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to
   remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.

Checking status of namespace matching 'storage-project':
Flag --show-all has been deprecated, will be removed in an upcoming release
storage-project   Active    3h
Using namespace "storage-project".
Checking for pre-existing resources...
  GlusterFS pods ... 
Checking status of pods matching '--selector=glusterfs=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
  deploy-heketi pod ... 
Checking status of pods matching '--selector=deploy-heketi=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
No resources found.
Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
  heketi pod ... 
Checking status of pods matching '--selector=heketi=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
No resources found.
Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
  glusterblock-provisioner pod ... 
Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'.
not found.
  gluster-s3 pod ... 
Checking status of pods matching '--selector=glusterfs=s3-pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
No resources found.
Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1
template.template.openshift.io "deploy-heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1
serviceaccount "heketi-service-account" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1
template.template.openshift.io "heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1
template.template.openshift.io "glusterfs" created
/usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
/usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:storage-project:heketi-service-account"]
OK
Marking 'dhcp47-75.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-75.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-75.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-58.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-58.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-58.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-39.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-39.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-39.lab.eng.blr.redhat.com" labeled
Deploying GlusterFS pods.
/usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1
daemonset.extensions "glusterfs" created
Waiting for GlusterFS pods to start ... 
Checking status of pods matching '--selector=glusterfs=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-5cnlp   1/1       Running   0         5m
glusterfs-gq8d8   1/1       Running   0         5m
glusterfs-r6dfd   1/1       Running   0         5m
Timed out waiting for pods matching '--selector=glusterfs=pod'.
pods not found.

---------- End ------------------------------

---------- Result of oc get pods ------------

# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-5cnlp   1/1       Running   0          6m
glusterfs-gq8d8   1/1       Running   0          6m
glusterfs-r6dfd   1/1       Running   0          6m

---------- End --------------------------------

---------- Result of oc describe pod <gluster_pod> ------

# oc describe pod glusterfs-gq8d8
Name:           glusterfs-gq8d8
Namespace:      storage-project
Node:           dhcp47-58.lab.eng.blr.redhat.com/10.70.47.58
Start Time:     Wed, 11 Jul 2018 16:34:36 +0530
Labels:         controller-revision-hash=3896903553
                glusterfs=pod
                glusterfs-node=pod
                pod-template-generation=1
Annotations:    openshift.io/scc=privileged
Status:         Running
IP:             10.70.47.58
Controlled By:  DaemonSet/glusterfs
Containers:
  glusterfs:
    Container ID:   docker://6dbb04dbf2f7c95e9b153bc9586ad6133c83ecabe3539e4aa7d2e8996887fd1b
    Image:          rhgs3/rhgs-server-rhel7:v3.10.0
    Image ID:       docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7@sha256:ee6f38132f588d236d75960005da0ea1862d04a3955f2a85760c93e25cb14b03
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 11 Jul 2018 16:34:54 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   100Mi
    Liveness:   exec [/bin/bash -c systemctl status glusterd.service] delay=40s timeout=3s period=25s #success=1 #failure=50
    Readiness:  exec [/bin/bash -c systemctl status glusterd.service] delay=40s timeout=3s period=25s #success=1 #failure=50
    Environment:
      GB_GLFS_LRU_COUNT:  15
      TCMU_LOGDIR:        /var/log/glusterfs/gluster-block
      GB_LOGDIR:          /var/log/glusterfs/gluster-block
    Mounts:
      /dev from glusterfs-dev (rw)
      /etc/glusterfs from glusterfs-etc (rw)
      /etc/ssl from glusterfs-ssl (ro)
      /etc/target from glusterfs-block (rw)
      /run from glusterfs-run (rw)
      /run/lvm from glusterfs-lvm (rw)
      /sys/fs/cgroup from glusterfs-cgroup (ro)
      /var/lib/glusterd from glusterfs-config (rw)
      /var/lib/heketi from glusterfs-heketi (rw)
      /var/lib/misc/glusterfsd from glusterfs-misc (rw)
      /var/log/glusterfs from glusterfs-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nkl4z (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  glusterfs-block:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/target
    HostPathType:  
  glusterfs-heketi:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/heketi
    HostPathType:  
  glusterfs-run:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  glusterfs-lvm:
    Type:          HostPath (bare host directory volume)
    Path:          /run/lvm
    HostPathType:  
  glusterfs-etc:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/glusterfs
    HostPathType:  
  glusterfs-logs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/glusterfs
    HostPathType:  
  glusterfs-config:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/glusterd
    HostPathType:  
  glusterfs-dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  glusterfs-misc:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/misc/glusterfsd
    HostPathType:  
  glusterfs-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  glusterfs-ssl:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl
    HostPathType:  
  default-token-nkl4z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nkl4z
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/compute=true
                 storagenode=glusterfs
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason   Age   From                                       Message
  ----    ------   ----  ----                                       -------
  Normal  Pulled   14m   kubelet, dhcp47-58.lab.eng.blr.redhat.com  Container image "rhgs3/rhgs-server-rhel7:v3.10.0" already present on machine
  Normal  Created  14m   kubelet, dhcp47-58.lab.eng.blr.redhat.com  Created container
  Normal  Started  14m   kubelet, dhcp47-58.lab.eng.blr.redhat.com  Started container


---------- End of oc describe pod <gluster_pod> output ----------

I'll attach the log file (cns_sde.log) and the topology file (topology_cns.json) as well.

How reproducible: Consistently; I've tried it three times.

Steps to Reproduce:
1. Deploy OCP on a setup with 1 master and 4 worker nodes.
2. Complete the prerequisite steps for cns-deploy.
3. Create the topology.json file and run the cns-deploy tool.

Actual results: The cns-deploy tool fails.

Expected results: The cns-deploy tool should succeed and deploy CNS.

Additional information: 
-----------------------
# oc version
oc v3.10.15
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://dhcp47-98.lab.eng.blr.redhat.com:8443
openshift v3.10.15
kubernetes v1.10.0+b81c8f8

# docker images
REPOSITORY                                                                                 TAG                 IMAGE ID            CREATED             SIZE
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-node                   v3.10               27eb901dbab7        2 days ago          1.21 GB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-control-plane          v3.10               8be6d1e02823        2 days ago          635 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-pod                    v3.10               580b79833b7a        2 days ago          214 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7               3.3.1-22            231bcbbf4041        5 days ago          287 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-gluster-block-prov-rhel7   3.3.1-18            d749465add1a        8 days ago          248 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-volmanager-rhel7           3.3.1-19            a4586d58ab9f        10 days ago         282 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhel7/etcd                            3.2.15              4f35b6516d22        3 months ago        256 MB

Comment 8 Jose A. Rivera 2018-07-16 14:53:43 UTC
Hmm... I can't see what might be wrong off-hand. In the cns-deploy script, try replacing the following lines (around line 320):

           status=$(echo "${line}" | awk '{print $2}')
           if [[ "${status}" != "1/1" ]]; then
             rc=1
           fi

With this:

           status=$(echo "${line}" | awk '{print $2}')
           if [[ "${status}" != "1/1" ]]; then
             echo "${status}"
             rc=1
           fi

And then see what it reports.
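For context on what that extra `echo` can surface: the second whitespace-separated field of the deprecation warning that oc prints is the literal string `--show-all`, so if that warning line ever reaches this parser, that is the string compared against `1/1`. A quick stand-alone check (the warning text is quoted verbatim from the log above):

```shell
# The wait loop treats field 2 of every line as the READY column.
# Field 2 of the oc deprecation warning is "--show-all", not a READY
# count, so such a line can never match "1/1".
warning='Flag --show-all has been deprecated, will be removed in an upcoming release'
echo "${warning}" | awk '{print $2}'   # prints "--show-all"
```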

Comment 9 Ashmitha Ambastha 2018-07-18 06:06:00 UTC
I made the suggested change; here is the output from the second run:

------------- running cns-deploy command -------------------------------
# cns-deploy -n storage-project -g topology_cns.json -l log_after_change -v
[Welcome banner and prerequisite checklist identical to the first run; omitted.]

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.

Checking status of namespace matching 'storage-project':
Flag --show-all has been deprecated, will be removed in an upcoming release
storage-project   Active    6d
Using namespace "storage-project".
Checking for pre-existing resources...
  GlusterFS pods ... 
Checking status of pods matching '--selector=glusterfs=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
No resources found.
resources
Timed out waiting for pods matching '--selector=glusterfs=pod'.
not found.
  deploy-heketi pod ... 
Checking status of pods matching '--selector=deploy-heketi=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
No resources found.
resources
Timed out waiting for pods matching '--selector=deploy-heketi=pod'.
not found.
  heketi pod ... 
Checking status of pods matching '--selector=heketi=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
No resources found.
resources
Timed out waiting for pods matching '--selector=heketi=pod'.
not found.
  glusterblock-provisioner pod ... 
Checking status of pods matching '--selector=glusterfs=block-provisioner-pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
No resources found.
resources
Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'.
not found.
  gluster-s3 pod ... 
Checking status of pods matching '--selector=glusterfs=s3-pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
No resources found.
resources
Timed out waiting for pods matching '--selector=glusterfs=s3-pod'.
not found.
Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1
template.template.openshift.io "deploy-heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1
serviceaccount "heketi-service-account" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1
template.template.openshift.io "heketi" created
/usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1
template.template.openshift.io "glusterfs" created
/usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
/usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:storage-project:heketi-service-account"]
OK
Marking 'dhcp47-75.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-75.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-75.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-58.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-58.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-58.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-39.lab.eng.blr.redhat.com' as a GlusterFS node.
/usr/bin/oc -n storage-project label nodes dhcp47-39.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1
node "dhcp47-39.lab.eng.blr.redhat.com" labeled
Deploying GlusterFS pods.
/usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1
daemonset.extensions "glusterfs" created
Waiting for GlusterFS pods to start ... 
Checking status of pods matching '--selector=glusterfs=pod':
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         2s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         4s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         6s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         8s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         11s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         14s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         16s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       ContainerCreating   0         18s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running             0         20s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         22s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         25s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         28s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         30s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         32s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         34s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         37s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         39s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         41s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         43s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         45s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         48s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         50s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         52s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         54s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         56s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         59s
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         1m
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         1m
0/1
Flag --show-all has been deprecated, will be removed in an upcoming release
--show-all
glusterfs-7x6lj   0/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-ksbb6   0/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-zzlk7   1/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-zzlk7   1/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-zzlk7   1/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-zzlk7   1/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-zzlk7   1/1       Running   0         1m
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
Flag --show-all has been deprecated, will be removed in an upcoming release
glusterfs-7x6lj   1/1       Running   0         5m
glusterfs-ksbb6   1/1       Running   0         5m
glusterfs-zzlk7   1/1       Running   0         5m
Timed out waiting for pods matching '--selector=glusterfs=pod'.
pods not found.

----------------------- End ------------------------------------------------

Comment 12 Jose A. Rivera 2018-07-20 13:11:32 UTC
Try removing the "--show-all" flag from the command at around line 301.
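For context, a minimal sketch of why the flag breaks the check (this is hypothetical illustration, not the actual cns-deploy code): if the script captures both stdout and stderr (`2>&1`) when polling pods, the `--show-all` deprecation warning gets mixed into the text being parsed as a pod list, so a line-by-line match against pod status fails. The `fake_oc_get_pods` function below only simulates `oc get pod --show-all` on oc v3.10.

```shell
#!/bin/sh
# Hypothetical reproduction: simulate `oc get pod --show-all ...` on oc v3.10,
# which prints a deprecation warning (here sent to stderr) alongside pod rows.
fake_oc_get_pods() {
    echo "Flag --show-all has been deprecated, will be removed in an upcoming release" >&2
    echo "glusterfs-7x6lj   1/1   Running   0   5m"
}

# If the deploy script merges stderr into the captured output, the warning
# becomes a bogus "pod" row and the status check no longer sees "Running"
# on every line.
pods=$(fake_oc_get_pods 2>&1)
echo "$pods" | while read -r name ready status rest; do
    echo "parsed status: $status"
done
```

Parsing the merged output yields `parsed status: has` for the warning line (its third whitespace-separated field) before the real `parsed status: Running`, so any check requiring every row to be `Running` times out. Dropping `--show-all` removes the warning and keeps the parsed output clean.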

Comment 13 Ashmitha Ambastha 2018-07-25 08:29:05 UTC
Hi Jose, 

Removing "--show-all" flag from the script has worked. Gluster pods were successfully deployed and it moved forward to deploying the heketi pod.

Comment 14 Humble Chirammal 2018-07-25 09:21:09 UTC
(In reply to Ashmitha Ambastha from comment #13)
> Hi Jose, 
> 
> Removing "--show-all" flag from the script has worked. Gluster pods were
> successfully deployed and it moved forward to deploying the heketi pod.

Thanks Ashmitha for confirming the fix and Saravana for the PR.

Upstream PR to fix this issue: https://github.com/gluster/gluster-kubernetes/pull/502

I am moving the status of this bugzilla to POST.

Comment 19 errata-xmlrpc 2018-09-12 09:27:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2685

Comment 20 vinutha 2018-12-06 19:55:08 UTC
Marking qe-test-coverage as "-" since the preferred mode of deployment is using Ansible.

