Bug 233324 - resync option to cmirrors needs to error out if volume is active
Summary: resync option to cmirrors needs to error out if volume is active
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: lvm2-cluster
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-03-21 16:27 UTC by Corey Marthaler
Modified: 2010-01-12 04:05 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-07-24 10:41:14 UTC



Description Corey Marthaler 2007-03-21 16:27:29 UTC
Description of problem:
The lvchange command shouldn't ask whether you want to deactivate a cmirror in
order to resync it if it can't grab the lock anyway.

  test             vg  mwi-a-  2.00G  test_mlog  32.42  test_mimage_0(0),test_mimage_1(0)
  [test_mimage_0]  vg  iwi-ao  2.00G                    /dev/sdb1(768)
  [test_mimage_1]  vg  iwi-ao  2.00G                    /dev/sdc1(768)
  [test_mlog]      vg  lwi-ao  4.00M                    /dev/sda1(1536)
[root@link-08 ~]# lvchange --resync /dev/vg/test
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
Do you really want to deactivate logical volume test to resync it? [y/n]: y
  Error locking on node link-08: Volume is busy on another node
  Can't get exclusive access to clustered volume test


Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.21-3.el4

How reproducible:
every time
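
A rough reproduction sketch (not verified; the VG/LV names and sizes are taken
from the report above, and it assumes the mirror is left active on a second
cluster node, which is what the "Volume is busy on another node" error
suggests):

  # node A: create a 2-leg mirror in the clustered VG "vg"
  lvcreate -m 1 -L 2G -n test vg

  # node B: activate the mirror so it is busy there
  lvchange -ay /dev/vg/test

  # node A: request a resync while the volume is still active elsewhere
  lvchange --resync /dev/vg/test
  # -> prompts to deactivate, then fails with
  #    "Error locking on node ...: Volume is busy on another node"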

Comment 1 Alasdair Kergon 2007-03-21 17:12:34 UTC
I'm not sure that it's worth changing that.

Comment 2 Jun'ichi NOMURA 2007-03-21 18:40:55 UTC
Since normal deactivation ('lvchange -an') seems to work without needing
exclusive access, I'm afraid the relationship between the messages
"Do you really want to deactivate..?" and "Can't get exclusive access.."
is not clear to the user.

How about skipping the exclusive activation, or changing the message to
encourage running 'lvchange -an' manually?

Current code sequence is this:
  -----------------------------------
  if (active) {
     ask "Do you really want to deactivate?"
     if (not yes) {
        return failed
     }
  }
  if (cluster) {
     if (!activate_lv_excl) {
        return failed
     }
  }
  deactivate_lv
  -----------------------------------

What if we do this?
  -----------------------------------
  if (active) {
     ask "Do you really want to deactivate?"
     if (not yes) {
        return failed
     }
  } else if (cluster) {
     if (!activate_lv_excl) {
        return failed
     }
  }
  deactivate_lv
  -----------------------------------

I think activate_lv_excl is necessary here to make the activeness test and
deactivate_lv atomic (to prevent another node from activating the LV
between them).

If that is the only reason for activate_lv_excl, can we just deactivate the
volume if the user said they really want to?

The same discussion might apply to lvremove.
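
As a rough illustration of the second option above (changing the message to
point the user at manual deactivation), something like the following sequence
could be suggested; this is only a sketch, assuming the volume can be
deactivated on every node first and using the names from this report:

  # on every node where the mirror is active:
  lvchange -an /dev/vg/test

  # then, from any one node, resync and reactivate:
  lvchange --resync /dev/vg/test
  lvchange -ay /dev/vg/test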


Comment 3 Jonathan Earl Brassow 2007-04-16 20:30:34 UTC
Is the volume mounted on a remote node?  If so, I think it's doing the right
thing.  It's saying "you've asked to deactivate this volume, but it's in use on
another node!  Too bad for you."

I'm closing this NOTABUG.  Feel free to reopen if the volume was not mounted on
another node (or if you feel strongly about what error messages should be
printed).  I think it's doing the right thing.
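
For what it's worth, a quick way to check whether the volume really was active
or mounted elsewhere (a sketch only; run these on each cluster node):

  lvscan | grep '/dev/vg/test'   # ACTIVE vs. inactive on this node
  mount | grep 'vg-test'         # whether it is mounted on this node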


