Bug 235465 - lock_nolock results in /sbin/mount.gfs: error 19 mounting
Summary: lock_nolock results in /sbin/mount.gfs: error 19 mounting
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-utils
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Robert Peterson
QA Contact: Dean Jansa
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-04-05 22:17 UTC by Axel Thimm
Modified: 2010-01-12 03:32 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-06-19 15:57:26 UTC
Target Upstream Version:


Attachments: none

Description Axel Thimm 2007-04-05 22:17:31 UTC
Description of problem:
Trying to locally mount a gfs filesystem results in

# mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt
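
Aside: error 19 is errno 19, ENODEV ("No such device"), which usually means the
kernel has no module loaded for the requested filesystem or lock protocol. A
quick way to decode such error numbers, assuming a Python interpreter is on the
box:

# python -c 'import errno, os; print errno.errorcode[19], os.strerror(19)'
ENODEV No such device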

Version-Release number of selected component (if applicable):
gfs-utils-0.1.11-1.el5
gfs2-utils-0.1.25-1.el5
kernel-xen-2.6.18-8.1.1.el5

How reproducible:
always

Steps to Reproduce:
1. gfs_mkfs -p lock_dlm -t test:data -j4 /dev/mapper/test-data
2. mount -o lockproto=lock_nolock /dev/mapper/test-data /mnt
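
A sketch for checking which lock protocol the superblock actually records
before step 2, assuming gfs_tool from gfs-utils and an unmounted device:

# gfs_tool sb /dev/mapper/test-data proto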
  
Actual results:
/sbin/mount.gfs: error 19 mounting /dev/mapper/test-data on /mnt

Expected results:
Should locally mount the filesystem

Additional info:

Comment 1 Nate Straz 2007-04-13 18:25:56 UTC
Was the lock_nolock module loaded when you tried to mount?
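
For reference, a quick way to check for the module and load it if it is
missing (a sketch, using the module name from the question):

# lsmod | grep lock_nolock
# modprobe lock_nolock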

Comment 2 Axel Thimm 2007-04-13 19:32:21 UTC
Yes, I checked that it was loaded.

I also continued the cluster setup and found out I was using GFS1. I must have
used gfs_mkfs instead of mkfs.gfs2. In fact, the above error also hints at GFS1
instead of gfs2.

I nuked the setup, recreated the volumes and gfs2 filesystems on a proper
cluster, and that worked fine. If I umount these filesystems and mount them
back with lock_nolock, it works. So it may be just GFS1 that doesn't mount
with lock_nolock.
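
For reference, a quick way to tell whether a device holds a GFS1 or GFS2
filesystem is to probe it with blkid (a sketch, assuming a blkid recent enough
to know both signatures):

# blkid /dev/mapper/test-data
(TYPE="gfs" indicates GFS1; TYPE="gfs2" indicates GFS2)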

I'm therefore moving this to gfs-utils, where it belongs. I have no intention
of using GFS1 filesystems, so I can't do further testing on GFS1. Should it pop
up again in a gfs2 context, I'll revisit this bug.

Comment 4 Robert Peterson 2007-05-10 15:18:20 UTC
This doesn't happen for me when using code built from the latest cvs 
tree.  I'll see if I can isolate which code fix made it work and make 
sure it got into the stuff for 5.1.


Comment 5 Robert Peterson 2007-06-19 15:57:26 UTC
I cannot recreate this problem at any fix level of RHEL5.  I scratch
built a clean RHEL5 system and performed the following steps without 
error:

[root@tank-04 ~]# modprobe gfs
[root@tank-04 ~]# modprobe gfs2
[root@tank-04 ~]# modprobe lock_nolock
[root@tank-04 ~]# uname -a
Linux tank-04 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:21 EST 2007 i686 i686 i386
GNU/Linux
[root@tank-04 ~]# pvcreate /dev/sda
  Physical volume "/dev/sda" successfully created
[root@tank-04 ~]# vgcreate bob_vg /dev/sda
  Volume group "bob_vg" successfully created
[root@tank-04 ~]# lvcreate -L 50G bob_vg -n bobs_lv
  Logical volume "bobs_lv" created
[root@tank-04 ~]# gfs_mkfs -p lock_dlm -t test:data -j4  /dev/mapper/bob_vg-bobs_lv 
This will destroy any data on /dev/mapper/bob_vg-bobs_lv.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/mapper/bob_vg-bobs_lv
Blocksize:                 4096
Filesystem Size:           12974628
Journals:                  4
Resource Groups:           198
Locking Protocol:          lock_dlm
Lock Table:                test:data

Syncing...
All Done
[root@tank-04 ~]# mount -o lockproto=lock_nolock /dev/mapper/bob_vg-bobs_lv /mnt
[root@tank-04 ~]# ls /mnt
[root@tank-04 ~]# umount /mnt
[root@tank-04 ~]# 

Note that this symptom does appear if one of the kernel modules is not
loaded at mount time.
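
When mount.gfs fails this way, the kernel log usually carries a more specific
message; a sketch for checking it right after the failed mount:

# dmesg | tail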

I'm closing this as WORKSFORME.  If this is still a problem, please 
reopen the bug record with information on how to recreate it.


