Bug 455330 - fence_scsi: service scsi_reserve restart not present.
Summary: fence_scsi: service scsi_reserve restart not present.
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: cman
Version: 5.2
Hardware: All
OS: Linux
Target Milestone: rc
Assignee: Ryan O'Hara
QA Contact: Cluster QE
Depends On: 409381
Reported: 2008-07-14 20:24 UTC by Ryan O'Hara
Modified: 2009-04-16 23:00 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2009-01-20 21:52:56 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2009:0189 normal SHIPPED_LIVE cman bug-fix and enhancement update 2009-01-20 16:05:55 UTC

Description Ryan O'Hara 2008-07-14 20:24:26 UTC
+++ This bug was initially created as a clone of Bug #409381 +++

Description of problem:

I attempted to "unfence" a node by restarting the scsi_reserve service;
however, running 'service scsi_reserve restart' did nothing. I had to
run 'service scsi_reserve stop && service scsi_reserve start' instead.

Should restart be implemented in the init script? If we don't support restart,
there should be some output saying as much when a user attempts a restart.
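The "say so explicitly" option suggested above could look like the following sketch. The function name and the start/stop messages are hypothetical stand-ins, not the actual init script; the point is the explicit error and a distinct exit code for the unsupported action:

```shell
# Sketch: report "restart" as unsupported instead of silently ignoring it.
# scsi_reserve_dispatch and its messages are illustrative placeholders.
scsi_reserve_dispatch() {
    case "$1" in
        start)  echo "registering keys with cluster devices" ;;
        stop)   echo "removing keys from cluster devices" ;;
        restart)
            echo "restart not supported; use stop then start" >&2
            return 3   # LSB init-script code for "unimplemented feature"
            ;;
        *)
            echo "Usage: scsi_reserve {start|stop}" >&2
            return 2   # LSB code for invalid arguments
            ;;
    esac
}
```

With this shape, 'service scsi_reserve restart' would at least print a message and fail visibly rather than doing nothing.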

Version-Release number of selected component (if applicable):

-- Additional comment from on 2007-12-04 00:16 EST --
You can simply run 'scsi_reserve start', which will register the node with all
relevant devices. If the reservation already exists, it does nothing.

I could implement a 'restart' command if it seems like the right thing to do.
Keep in mind that unlike other services, scsi_reserve is not a long-running
process (daemon).

So would restart actually unregister and re-register the node with all the
devices? Or simply re-run the registration (start)?

Comment 1 RHEL Product and Program Management 2008-07-14 21:00:58 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update release.

Comment 2 Ryan O'Hara 2008-09-05 21:29:27 UTC
I have a fix for this. It is not truly a restart, but I don't think a true restart is what we want for this particular script.

In my opinion, a *true* restart would be to remove (unregister) our key from all devices and then re-register our key with all devices. We want to avoid removing the key. As soon as the key is removed, the node has no write access to the disk(s). For fencing via SCSI-3 reservations, this is as good as being fenced. It's best that we avoid removing keys from the devices.

The way I have "restart" implemented now is to do almost exactly what "start" does. If the script is called with the "restart" option, we simply get a list of all devices within cluster volumes and register our key with those devices. It does not matter if our node/key is already registered because of the way we create the registration. Then we check to see if a reservation exists for each device, and if not, we create the reservation.
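The restart-as-start flow described above can be sketched roughly as follows. All four helper names are hypothetical stand-ins for the real script's sg_persist-based logic; the stub bodies only simulate the behavior so the control flow is visible:

```shell
# Hedged sketch of the idempotent "restart" path: register keys, then
# create the reservation only where one does not already exist.
# get_devices, register_key, reservation_exists, and make_reservation
# are illustrative stubs, not the actual cman/scsi_reserve helpers.

get_devices()        { echo "/dev/sda /dev/sdb"; }           # devices in cluster volumes
register_key()       { echo "registered key on $1"; }        # safe if already registered
reservation_exists() { return 1; }                           # stub: no reservation yet
make_reservation()   { echo "created reservation on $1"; }

do_restart() {
    for dev in $(get_devices); do
        register_key "$dev"                 # idempotent registration
        if ! reservation_exists "$dev"; then
            make_reservation "$dev"         # create reservation only if missing
        fi
    done
}

do_restart
```

Because registration is idempotent and keys are never removed, this "restart" can run on a live node without risking the loss of write access that a true unregister/re-register cycle would cause.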

Comment 3 Ryan O'Hara 2008-09-05 21:35:32 UTC
Fixed in RHEL5.

Comment 6 errata-xmlrpc 2009-01-20 21:52:56 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.
