
Bug 1509874

Summary: Cleanup of bundle resource is incomplete [rhel-7.4.z]
Product: Red Hat Enterprise Linux 7
Reporter: Oneata Mircea Teodor <toneata>
Component: pacemaker
Assignee: Ken Gaillot <kgaillot>
Status: CLOSED ERRATA
QA Contact: Marian Krcmarik <mkrcmari>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 7.4
CC: abeekhof, aherr, chjones, cluster-maint, dciabrin, kgaillot, mjuricek, mkrcmari, ushkalim
Target Milestone: rc
Keywords: Triaged, ZStream
Target Release: 7.4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: pacemaker-1.1.16-12.el7_4.5
Doc Type: No Doc Update
Doc Text:
Previously, the "pcs resource cleanup" command ignored stopped child clone resources of a bundle. Consequently, it was not possible to erase the state of the resources. With this update, Pacemaker now recognizes stopped clone resources. As a result, the pcs tool now works correctly with bundles when cleaning up.
Story Points: ---
Clone Of: 1499217
Environment:
Last Closed: 2017-11-30 16:10:18 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1499217, 1514520
Bug Blocks:

Description Oneata Mircea Teodor 2017-11-06 09:16:29 UTC
This bug has been copied from bug #1499217 and has been proposed to be backported to 7.4 z-stream (EUS).

Comment 2 Ken Gaillot 2017-11-07 21:42:19 UTC
Per discussion on related Bug 1499217:

> As noted in https://bugzilla.redhat.com/show_bug.cgi?id=1505909, comment #7,
> I tested a scratch build with the provided patch and I can now clean errors
> by doing "pcs resource cleanup galera-bundle". I can also reprobe the state
> of unmanaged resource.
> 
> However, I now face another issue, in that when I "pcs resource manage
> galera-bundle" after the cleanup, a restart operation is triggered, which is
> unexpected and breaks the idiomatic way of "reprobing the current state of a
> resource before giving back control to pacemaker".

Based on the described actions:

    pcs resource unmanage galera       
    pcs resource update galera cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
    pcs resource cleanup galera

A restart after galera becomes managed again is expected, due to the resource definition having changed. I would expect that unmanage + cleanup + manage would not trigger a restart.
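To sketch the mechanism behind this: Pacemaker records a digest of the parameters a resource was last started with, and schedules a restart when the current definition no longer matches; a cleanup erases that recorded history, so no mismatch remains to detect. The snippet below is a conceptual illustration only, with stand-in values, not real CIB data:

```shell
# Conceptual illustration: compare a digest of the recorded resource
# definition against the current one, the way Pacemaker detects a change.
# The parameter strings below are illustrative stand-ins, not CIB output.
recorded_def='cluster_host_map=ra1:ra1;ra2:ra2;ra3:ra3'
current_def='cluster_host_map=ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'

recorded_digest=$(printf '%s' "$recorded_def" | md5sum | cut -d' ' -f1)
current_digest=$(printf '%s' "$current_def" | md5sum | cut -d' ' -f1)

if [ "$recorded_digest" != "$current_digest" ]; then
    echo "definition changed: a restart would be scheduled"
fi
```

A cleanup after the update removes the recorded digest, which is why no restart is scheduled in that case.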

Comment 3 Ken Gaillot 2017-11-13 23:49:54 UTC
(In reply to Ken Gaillot from comment #2)
> Based on the described actions:
> 
>     pcs resource unmanage galera       
>     pcs resource update galera
> cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
>     pcs resource cleanup galera
> 
> A restart after galera becomes managed again is expected, due to the
> resource definition having changed. I would expect that unmanage + cleanup +
> manage would not trigger a restart.

My mistake, the cleanup after the update should prevent the restart.

In addition to the commits listed in Bug 1499217 Comment 8, we also needed a small part of upstream commit e3b825a.

Comment 4 Damien Ciabrini 2017-11-15 12:57:53 UTC
I've just tested the scratch build and confirm that all the cleanup tests are working.

I also confirm that I no longer see any spurious restart action once I "pcs resource cleanup" an unmanaged resource and then "pcs resource manage" it.

Thanks!

Comment 6 Damien Ciabrini 2017-11-16 11:33:19 UTC
Instructions for verifying the fix:
Let ra1, ra2, and ra3 be the names of your controller nodes.

The tests consist of making sure that the cleanup:
  . correctly reprobes the state of resources (even when unmanaged),
  . doesn't cause any stop or restart action when unnecessary.


#1. Ensure that the cleanup works, as mentioned by Ken in comment #2

    pcs resource unmanage galera       
    pcs resource update galera cluster_host_map='ra1:ra1;ra2:ra2;ra3:ra3;foo=foo'
    pcs resource cleanup galera

The state of the galera resource should read Master (previously, it failed to report state and remained in Slave).

#2. Give back control to pacemaker, and ensure no restart is triggered

    pcs resource manage galera

No galera replica should be stopped or restarted. Check in the logs that no such operation is scheduled by pacemaker.

#3. Ensure that the cleanup works when one reprobes the state of the bundle

   pcs resource unmanage galera-bundle
   pcs resource cleanup galera-bundle

This will unmanage the resource _and_ the container and pacemaker-remote connection that manage it. A cleanup should successfully reprobe the state of the galera resource as in test #1.

#4. Give back control to pacemaker, and ensure that neither the galera server nor the galera docker container is restarted.

   pcs resource manage galera-bundle

Like test #2, no restart should happen. The pid of the galera server should stay unchanged, and "docker ps" should show that the galera docker container is still up.
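The pid-stability part of this check can be scripted. A minimal sketch, assuming the galera server process is named "mysqld"; the pcs call is commented out because it only makes sense on a live cluster node:

```shell
# Sketch of the test #4 check: capture the galera server pid before giving
# control back to pacemaker, then verify it is unchanged afterwards.
# "mysqld" is an assumed process name; uncomment the pcs line on a real node.
get_pid() { pgrep -o "$1" 2>/dev/null || echo "none"; }

pid_before=$(get_pid mysqld)
# pcs resource manage galera-bundle   # enable on a real cluster
pid_after=$(get_pid mysqld)

if [ "$pid_before" = "$pid_after" ]; then
    echo "galera pid unchanged: no restart"
else
    echo "galera was restarted (pid $pid_before -> $pid_after)"
fi
```

On a real node one would also confirm via "docker ps" that the container's uptime is unchanged.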

Comment 7 Udi Shkalim 2017-11-16 12:36:39 UTC
Verified on: pacemaker-1.1.16-12.el7_4.5.x86_64

Followed the steps in comment #6 and did not notice any restart of galera containers or any of the issues mentioned above.

Comment 10 errata-xmlrpc 2017-11-30 16:10:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3328