Bug 1599033 - [Tracker -LVM ] Cores are getting generated in gluster pod because of "lvremove -f vg_***/tp_***" [NEEDINFO]
Summary: [Tracker -LVM ] Cores are getting generated in gluster pod because of "lvremove -f vg_***/tp_***"
Keywords:
Status: NEW
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhgs-server-container
Version: cns-3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Saravanakumar
QA Contact: Nitin Goyal
URL:
Whiteboard:
Depends On: 1599293
Blocks:
 
Reported: 2018-07-08 04:02 UTC by Nitin Goyal
Modified: 2019-04-11 08:18 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1599293
Environment:
Last Closed:
Target Upstream Version:
knarra: needinfo? (hchiramm)



Description Nitin Goyal 2018-07-08 04:02:42 UTC
Description of problem: Core dumps are getting generated in one of the gluster pods while performing the heketi device resync operation.


Version-Release number of selected component (if applicable):
6.0.0-7.4.el7rhgs.x86_64


How reproducible:
1/1

Steps to Reproduce:
1. Create and delete PVCs in a while loop:
while true
do
    for i in {1..50}
    do
        ./pvc_create.sh c$i 2
        sleep 10
    done

    sleep 20

    for i in {1..50}
    do
        oc delete pvc c$i;
        sleep 10
    done
done

2. After some time, when there is a mismatch between the gluster volume list and the heketi volume list, perform the heketi-cli device resync operation:
# heketi-cli device resync device_id
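
A minimal sketch for spotting such a mismatch, assuming heketi-cli is already pointed at the heketi service and that GLUSTER_POD is a placeholder for any running gluster pod; the Name: field extraction assumes the usual "Id:... Cluster:... Name:vol_..." heketi-cli output:

# volume names as heketi sees them
heketi-cli volume list | grep -o 'Name:[^ ]*' | cut -d: -f2 | sort > /tmp/heketi_vols
# volume names as gluster sees them
oc exec $GLUSTER_POD -- gluster volume list | sort > /tmp/gluster_vols
# any diff output indicates the two views have drifted apart
diff /tmp/heketi_vols /tmp/gluster_vols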

Actual results:


Expected results:


Additional info:

---------pvc_create.sh---------------------------------------------------------

echo "kind: PersistentVolumeClaim" > glusterfs-pvc-claim.yaml
echo "apiVersion: v1" >> glusterfs-pvc-claim.yaml
echo "metadata:" >> glusterfs-pvc-claim.yaml
echo "  name: "$1 >> glusterfs-pvc-claim.yaml
echo "  annotations:" >> glusterfs-pvc-claim.yaml
echo "    volume.beta.kubernetes.io/storage-class: container" >> glusterfs-pvc-claim.yaml
echo "spec:" >> glusterfs-pvc-claim.yaml
echo "  accessModes:" >> glusterfs-pvc-claim.yaml
echo "   - ReadWriteOnce" >> glusterfs-pvc-claim.yaml
echo "  resources:" >> glusterfs-pvc-claim.yaml
echo "    requests:" >> glusterfs-pvc-claim.yaml
echo "      storage: "$2"Gi" >> glusterfs-pvc-claim.yaml

oc create -f glusterfs-pvc-claim.yaml

-----------------END-----------------------------------------------------------
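
For reference, the reproduction loop above invokes the script as, for example:

./pvc_create.sh c1 2    # creates a 2Gi PVC named c1 against the "container" storage class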

Comment 2 Nitin Goyal 2018-07-08 04:06:32 UTC
gluster pod version ->

rhgs-server-rhel7:rhgs-3.3.z-rhel-7-containers-candidate-18984-20180704085304

heketi version ->

heketi-client-6.0.0-7.4.el7rhgs.x86_64

Comment 6 Yaniv Kaul 2018-07-08 07:13:21 UTC
Can you add the relevant debuginfo (see the command above: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/4c/4a6304d7353346654aa06cb9a0918f635b0851)? Ensure the debuginfo packages are in place when printing the gdb output.
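
For reference, a rough sketch of that workflow inside the gluster pod; the core file path (/core.12345) is a placeholder, and /usr/sbin/lvm is assumed as the crashing binary since lvremove is a symlink to it on RHEL 7:

# install the debuginfo matching the build-id recorded in the core
yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/4c/4a6304d7353346654aa06cb9a0918f635b0851
# load the core against the lvm binary and capture a full backtrace non-interactively
gdb /usr/sbin/lvm /core.12345 --batch -ex 'bt full' > lvremove-backtrace.txt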

Comment 8 Nitin Goyal 2018-07-31 05:05:20 UTC
Yaniv,

I will make sure that I have those packages installed next time.

Comment 10 RamaKasturi 2019-02-25 06:19:25 UTC
Hello Nikhil,

   We still do not officially support this, and we will get back to it after this release.

Thanks
kasturi

Comment 12 RamaKasturi 2019-02-25 06:43:21 UTC
Humble,

   Would you be able to help with the request above?

Thanks
kasturi

