Bug 1511010 - [GSS] [Tracking] gluster-block multipath device not being fully cleaned up after pod removal
Summary: [GSS] [Tracking] gluster-block multipath device not being fully cleaned up after pod removal
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-block
Version: cns-3.6
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Prasanna Kumar Kalever
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 1585581
Blocks: 1573420 1622458
 
Reported: 2017-11-08 14:11 UTC by Matthew Robson
Modified: 2019-04-13 02:29 UTC
CC List: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:


Attachments
log from pod shutdown with block PV (deleted)
2017-11-08 14:11 UTC, Matthew Robson


Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1585581 None NEW multipath device couldn't be fully cleaned up after the iscsi devices been logged out due to systemd-udevd process still... 2019-04-10 10:25:20 UTC

Internal Links: 1585581

Description Matthew Robson 2017-11-08 14:11:27 UTC
Created attachment 1349460 [details]
log from pod shutdown with block PV

Description of problem:

Bring up a gluster-block backed pod and you see the multipath device with three paths:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32 LIO-ORG ,TCMU device
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 5:0:0:0 sda 8:0  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 7:0:0:0 sdc 8:32 active ready running
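For context, the three paths (sda/sdb/sdc) are iSCSI sessions to the gluster-block (LIO/TCMU) targets. A quick way to confirm that on the node is with standard iscsiadm and lsblk commands; these are not taken from the attached log, just the usual checks:

sudo iscsiadm -m session                 # one session per gluster-block target portal
sudo lsblk /dev/sda /dev/sdb /dev/sdc    # the three path devices behind mpathb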

When the pod is moved / scaled down / deleted, the multipath device remains:

[cloud-user@osenode4 ~]$ sudo multipath -ll
mpathb (360014050b2a1e10336a4600ae4c2eec5) dm-32
size=2.0G features='1 queue_if_no_path' hwhandler='0' wp=rw

After the fact, a manual flush fails on the first attempt but a second flush removes it:

[cloud-user@osenode4 ~]$ sudo multipath -f mpathb
Nov 08 09:04:15 | mpathb: map in use
Nov 08 09:04:15 | failed to remove multipath map mpathb

[cloud-user@osenode4 ~]$ sudo multipath -f mpathb
[cloud-user@osenode4 ~]$ sudo multipath -ll

Log Attached.

Shutdown At: Nov 8 08:58:00

First Multipath -f At (failed): Nov 8 09:04:15

Second Multipath -f At (successful): Nov 8 09:04:38
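For reference, a minimal manual cleanup along the lines of the transcript above could look like the following. This is only a workaround sketch, not the product fix; the map name mpathb is taken from this reproducer and will differ per node:

# Workaround sketch: retry the flush, since the first attempt can fail
# with "map in use" while udev/systemd still holds a reference.
MAP=mpathb
for attempt in 1 2 3; do
    sudo multipath -f "$MAP"
    # Stop once the map no longer appears in the topology listing.
    sudo multipath -ll "$MAP" | grep -q "$MAP" || break
    sleep 10
done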

Version-Release number of selected component (if applicable):

CNS 3.6

How reproducible:

Always

Steps to Reproduce:
1. Deploy pod with block PV
2. Scale down to 0
3. Check multipath on the node (see the reproducer sketch below)
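For context, the reproducer on the OpenShift side is roughly the following (assuming OCP 3.x oc commands; blockapp and the YAML file names are placeholders, not names from this report):

# 1. Deploy a pod with a gluster-block backed PV.
oc create -f blockapp-pvc.yaml           # PVC bound to the gluster-block StorageClass
oc create -f blockapp-deployment.yaml    # DeploymentConfig whose pod mounts that PVC

# 2. Scale down to 0 so the pod is removed from the node.
oc scale dc/blockapp --replicas=0

# 3. On the node that ran the pod, check for a leftover multipath map.
sudo multipath -ll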

Actual results:

Leftover multipath device

Expected results:

Everything gets cleaned up

Additional info:

Comment 40 Amar Tumballi 2018-11-19 08:45:08 UTC
How do we go about 'resolving' the bug? It has been open for more than a year. gluster-block in general has become much more stable in OCS releases, but I see the customer issue is still open, and we need a decision on how to proceed.

And as mentioned above, since this is blocked on a systemd bug that is not targeted for any time in the near future, we should consider letting the customer know and taking the appropriate action (CLOSED/WONTFIX?) on this bug.

