Bug 1519379 - Pacemaker does not immediately recover from failed demote actions
Summary: Pacemaker does not immediately recover from failed demote actions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.6
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-30 16:26 UTC by Ondrej Faměra
Modified: 2018-10-30 07:59 UTC
CC List: 8 users

Fixed In Version: pacemaker-1.1.18-12.el7
Doc Type: Bug Fix
Doc Text:
Cause: If a demote action failed, Pacemaker would always bring the resource to a full stop, even if the configuration specified a restart. Consequence: Failed demote actions would lead to the resource being stopped until the next natural recalculation (external event or cluster-recheck-interval). Fix: Pacemaker follows the configured policy for recovery from failed demote actions. Result: Failed demote actions lead to immediate recovery as specified by the configuration.
Clone Of:
Environment:
Last Closed: 2018-10-30 07:57:39 UTC


Attachments
Collected data mentioned in BZ description (deleted)
2017-11-30 16:26 UTC, Ondrej Faměra


Links
Red Hat Knowledge Base (Solution) 3321711 - last updated 2018-01-16 12:19:44 UTC
Red Hat Product Errata RHBA-2018:3055 - last updated 2018-10-30 07:59:13 UTC

Description Ondrej Faměra 2017-11-30 16:26:20 UTC
Created attachment 1360990 [details]
Collected data mentioned in BZ description

=== Description of problem:
pacemaker seems to wait for the 'cluster-recheck-interval' time (by default 15 minutes)
before continuing operations that should happen immediately.
While nothing is happening, 'crm_simulate -LR' shows outstanding actions
that pacemaker plans to do, but they do not happen until "some action" occurs in the cluster.
"some action" can be the 'cluster-recheck-interval' being reached, or the creation/deletion of cluster resources.
Then the outstanding actions continue.
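
For reference, the outstanding actions and the currently effective recheck interval can be
inspected roughly like this (illustrative commands, assuming a pcs-managed cluster; they are
not taken from the attached logs):
  # crm_simulate -LR
  # pcs property list --all | grep cluster-recheck-interval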

=== Version-Release number of selected component (if applicable):
pacemaker-1.1.16-12.el7_4.4

=== How reproducible:
Always, with a DB2 Master/Slave resource, after a failure of the Master instance.

=== Steps to Reproduce:
1. Have a cluster running the DB2 Master/Slave resource from bz1516180 (so it can be promoted to Master after being killed).
2. 'kill -9 21448' the processes run by the DB2 Master instance
  (on the node running the Master instance, grep for the 'db2wdog' process; a one-liner alternative is sketched after these steps)
  # ps aux|grep db2wdog
  root     21448  0.0  5.4 1191508 48220 ?       Sl   16:47   0:00 db2wdog 0 [db2inst1]
3. Observe the actions in the cluster.
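
(A hypothetical one-liner for step 2, matching the 'db2wdog' watchdog process on its full
command line with 'pgrep -f'; it stands in for the manual ps/grep/kill above:)
  # kill -9 $(pgrep -f db2wdog)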

=== Actual results:
Master DB2 instance:
- fails the 'monitor' operation
- fails the 'demote' operation
- does the 'stop' operation
- [!!] waits for 15 minutes (cluster-recheck-interval)
- does the 'start' operation
- does the 'promote' operation if possible

=== Expected results:
Master DB2 instance:
- fails the 'monitor' operation
- fails the 'demote' operation
- does the 'stop' operation
- no unnecessary waiting
- does the 'start' operation
- does the 'promote' operation if possible

=== Additional info:
The cluster behaves as expected when 'record-pending=true' is set on all resources:
  # pcs resource op defaults record-pending=true
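(For reference, the resulting operation defaults can be listed with:)
  # pcs resource op defaults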

This behaviour reproduces very consistently with the DB2 resource agent.

Decreasing 'cluster-recheck-interval' shortens the wait before the actions continue,
so this setting is suspected to be relevant here.
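
(For example, an illustrative way to shorten the wait while testing, assuming pcs manages
the cluster properties:)
  # pcs property set cluster-recheck-interval=60s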

Attached to this bugzilla (attachment.tar.gz) are logs and tests that were run in 2 scenarios:
- with_long_delay - the original 15-minute issue
- with_record_pending - the improvement with 'record-pending=true'
(included are: crm_report, corosync.log, test-* files with steps and timings; PCMK_DEBUG=yes was set in all tests)

If needed, I can set up dedicated systems running the above configuration for testing.

Comment 2 Ken Gaillot 2017-12-15 22:17:46 UTC
Looking at the logs, at 13:37:36 everything is fine -- fastvm-rhel-7-4-96 is running the master, and fastvm-rhel-7-4-95 is running the slave (pe-input-233). Node 95 cannot be the master, according to the agent:

Nov 30 13:37:36  db2(DB2_HADR)[21237]:    WARNING: DB2 database db2inst1(0)/sample in status STANDBY/DISCONNECTED_PEER/DISCONNECTED can never be promoted


At 13:37:49 (presumably when the DB2 processes were killed), the monitor fails, and the first thing that happens is that the resource agent takes away node 96's ability to be master:

Nov 30 13:37:49 [10170] fastvm-rhel-7-4-96      attrd:     info: attrd_peer_update:     Setting master-DB2_HADR[fastvm-rhel-7-4-96]: 10000 -> (null) from fastvm-rhel-7-4-96

The cluster responds appropriately, by wanting to demote the instance (pe-input-234). Before it can try, the resource's fail count gets updated, and the failed operation is recorded. The cluster responds appropriately again, by wanting to demote, stop, and start the instance as a slave (pe-input-235 and pe-input-236).

Now it tries the demote, and the resource agent returns failure. As a side note, the agent returns 1 (generic error), not 7 (not running). 7 may not have been appropriate in this case (it should be returned only if the resource is cleanly stopped), but if it had returned 7, pacemaker would have done the right thing from this point.
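
For reference, a hypothetical sketch of that distinction in resource-agent terms; the OCF
return-code constants are standard, but the function and helper names below are illustrative
and not taken from the actual DB2 agent:

  db2_demote() {
      if ! instance_is_running; then        # illustrative helper, not the agent's real check
          return $OCF_NOT_RUNNING           # 7: the resource is already cleanly stopped
      fi
      do_demote || return $OCF_ERR_GENERIC  # 1: demote was attempted but failed
      return $OCF_SUCCESS                   # 0: demoted successfully
  }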

The failed operation is recorded, and the fail count is updated again. Now, pacemaker makes a mistake: it decides to stop the instance, without a subsequent start (pe-input-237). Currently, pacemaker responds to all failed demotes with a full stop. But in cases such as this, we should continue with a start.
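
For context, a hedged illustration of the kind of configured recovery policy involved here,
using the DB2_HADR resource name from this bug (on-fail is the standard operation option for
requesting restart on failure; the exact pcs invocation is illustrative):

  # pcs resource update DB2_HADR op demote on-fail=restart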

At the next decision point, Pacemaker correctly determines that the start is needed. This is why it works with record-pending=true or at the cluster-recheck-interval -- both of those trigger a new decision.

I will investigate a fix.

Comment 3 Ken Gaillot 2017-12-19 23:38:19 UTC
Bumping to 7.6 due to time constraints. We can consider a 7.5.z if necessary.

Comment 4 Ken Gaillot 2018-01-15 17:59:33 UTC
Fixed upstream as of commit a962eb7

Comment 12 michal novacek 2018-04-17 12:47:02 UTC
qa_ack+: Ondrej is able to test it in the original environment

Comment 17 michal novacek 2018-08-27 11:16:12 UTC
The reporter confirms the problem is fixed in version pacemaker-cli-1.1.18-11.el7_5.3.x86_64.

Comment 19 errata-xmlrpc 2018-10-30 07:57:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3055

