Bug 228420 - GFS Filesystem Failure
Summary: GFS Filesystem Failure
Alias: None
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: GFS-kernel
Version: 3.0
Hardware: i686
OS: Linux
Target Milestone: ---
Assignee: Kiersten (Kerri) Anderson
QA Contact: Dean Jansa
Depends On:
Reported: 2007-02-13 00:33 UTC by Phillip Short
Modified: 2010-01-12 03:22 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2007-10-19 18:38:44 UTC
Target Upstream Version:


Description Phillip Short 2007-02-13 00:33:58 UTC
Description of problem: The cluster consists of one lock server and two GFS
nodes. One of the nodes and the lock server appeared to have communication
issues, although no problems existed on the network and the other node had no
trouble communicating with the lock server. The lock server attempted to fence
the troublesome node out of the cluster, but this failed.

Version-Release number of selected component (if applicable): 

How reproducible: Has only occurred once

Steps to Reproduce:
Actual results:

Expected results:

Additional info: Output from the server logs.
The lock server reported these errors:
au04qws060apor2 lock_gulmd_core[13528]: au04qdb020apor2 missed a heartbeat
(time:1168621522519197 mb:1)
au04qws060apor2 lock_gulmd_core[13528]: au04qdb020apor2 missed a heartbeat
(time:1168621537557378 mb:2)
au04qws060apor2 lock_gulmd_core[13528]: au04qdb020apor2 missed a heartbeat
(time:1168621552595480 mb:3)
au04qws060apor2 lock_gulmd_core[13528]: Client (au04qdb020apor2) expired
au04qws060apor2 lock_gulmd_core[13528]: Forked [32478] fence_node
au04qdb020apor2 with a 0 pause.
au04qws060apor2 lock_gulmd_core[32478]: Gonna exec fence_node au04qdb020apor2
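The heartbeat timestamps above appear to be in microseconds; a quick check (a sketch, assuming that unit) shows the three misses arrived roughly 15 seconds apart before the client was expired and fencing was forked:

```python
# Heartbeat-miss timestamps from the lock server log (assumed to be microseconds).
timestamps = [1168621522519197, 1168621537557378, 1168621552595480]

# Interval between consecutive misses, in seconds.
intervals = [(b - a) / 1e6 for a, b in zip(timestamps, timestamps[1:])]
print(intervals)  # both intervals are just over 15 seconds
```

This is consistent with a roughly 15-second heartbeat period and expiry after the third consecutive miss.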

The node reported these errors:
 au04qdb020apor2 lock_gulmd_core[9713]: EOF on xdr
( idx:1 fd:5)
 au04qdb020apor2 lock_gulmd_core[9713]: In core_io.c:425 ( death by:
Lost connection to SLM Master (,
stopping. node reset required to re-activate cluster operations.
 au04qdb020apor2 lock_gulmd_LTPX[9720]: ERROR [ltpx_io.c:613] XDR error
-32:Broken pipe sending to lt000
 au04qdb020apor2 lock_gulmd_LTPX[9720]: ERROR [ltpx_io.c:1005] Got a -32:Broken
pipe trying to send packet to Master 0 on
 au04qdb020apor2 lock_gulmd_LTPX[9720]: EOF on xdr (_ core _: idx:1 fd:5)
 au04qdb020apor2 lock_gulmd_LTPX[9720]: In ltpx_io.c:332 ( death by:
Lost connection to core, cannot continue. node reset required to re-activate
cluster operations.
 au04qdb020apor2 kernel: lock_gulm: ERROR Got an error in gulm_res_recvd err: -71
 au04qdb020apor2 kernel: lock_gulm: ERROR gulm_LT_recver err -104
 au04qdb020apor2 kernel: lock_gulm: ERROR Got a -111 trying to login to
lock_gulmd.  Is it running?
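The negative codes in the kernel messages look like negated standard Linux errno values (the log's own "-32:Broken pipe" already follows that convention); a small sketch decoding the ones that appear above:

```python
import errno
import os

# Negated error codes seen in the node's lock_gulm log excerpts.
for code in (32, 71, 104, 111):
    print(f"-{code}: {errno.errorcode[code]} ({os.strerror(code)})")
```

Read this way, -71 (EPROTO) in gulm_res_recvd and -104 (ECONNRESET) in gulm_LT_recver indicate the connection to the master dropped, and -111 (ECONNREFUSED) on login matches the "Is it running?" message: the local lock_gulmd had already stopped.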

The latest GFS change log shows two bugs that have been fixed, but I cannot
access the information for these bugs in Bugzilla.

Comment 1 RHEL Product and Program Management 2007-10-19 18:38:44 UTC
This bug is filed against RHEL 3, which is in maintenance phase.
During the maintenance phase, only security errata and select mission
critical bug fixes will be released for enterprise products. Since
this bug does not meet that criteria, it is now being closed.
For more information on the RHEL errata support policy, please visit:
If you feel this bug is indeed mission critical, please contact your
support representative. You may be asked to provide detailed
information on how this bug is affecting you.
