
Bug 1057666

Summary: When you mount an MD RAID volume via lmi mount create, the mount succeeds but does not show up in mount queries.
Product: Red Hat Enterprise Linux 7
Reporter: Barry Donahue <bdonahue>
Component: openlmi-storage
Assignee: Jan Safranek <jsafrane>
Status: CLOSED CURRENTRELEASE
QA Contact: Storage QE <storage-qe>
Severity: high
Priority: high
Version: 7.0
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-06-13 12:37:35 UTC
Bug Depends On: 1058299

Description Barry Donahue 2014-01-24 15:16:55 UTC
Description of problem: I create a RAID set and a filesystem on that RAID set via the lmi storage scripts, then mount it. lmi mount list does not show that mount even though the mount was successful.

Version-Release number of selected component (if applicable):

How reproducible: Every time

Steps to Reproduce:
lmi raid create --name=raid0 0 /dev/sdb /dev/sdc
lmi fs create --label=raid0 ext4 raid0
mkdir /mnt/raid
lmi mount create /dev/md/raid0 /mnt/raid -t ext4 
lmi mount list

Actual results: lmi mount list does not show the mount, but df does.

# lmi mount list
FileSystemSpec     FileSystemType MountPointPath Options                                         OtherOptions
/dev/mapper/VG-lv1 xfs            /mnt/lv        AllowWrite:True, UpdateRelativeAccessTimes:True attr2, inode64, noquota, seclabel
/dev/sda1          xfs            /boot          AllowWrite:True, UpdateRelativeAccessTimes:True attr2, inode64, noquota, seclabel
/dev/sda3          xfs            /              AllowWrite:True, UpdateRelativeAccessTimes:True attr2, inode64, noquota, seclabel

[root@storageqe-07 ~]# df
Filesystem         1K-blocks    Used Available Use% Mounted on
/dev/sda3          138224700 1252740 136971960   1% /
devtmpfs             1946160       0   1946160   0% /dev
tmpfs                1951648      12   1951636   1% /dev/shm
tmpfs                1951648    8808   1942840   1% /run
tmpfs                1951648       0   1951648   0% /sys/fs/cgroup
/dev/sda1             187740   74700    113040  40% /boot
/dev/md127         280548296   60444 266213756   1% /mnt/raid
/dev/mapper/VG-lv1 209612800   32928 209579872   1% /mnt/lv

Comment 2 Jan Safranek 2014-01-27 13:31:55 UTC
Indeed, that's a bug. Actually, there are two bugs mixed together:

- blivet does not report to OpenLMI that /dev/md/raid0 is in fact /dev/md127; I reported it as bug #1058299.

- OpenLMI does not check all the names of /dev/md/raid0 to see whether it is mounted as /dev/md127. I'll fix this once the blivet bug is fixed.
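To illustrate the second bug: /dev/md/raid0 is a udev-created symlink to the kernel device node /dev/md127, so a naive string comparison against the device names in the mount table never matches. A minimal sketch of the kind of check needed, using a temporary file and symlink as stand-ins for the real device nodes (an actual array is not required to see the effect; the names here are illustrative, not from the fix itself):

```python
import os
import tempfile

def is_mounted(device, mounted_devices):
    """Check whether `device` is mounted, resolving symlinks so that
    an alias like /dev/md/raid0 matches its kernel name /dev/md127."""
    real = os.path.realpath(device)
    return any(os.path.realpath(m) == real for m in mounted_devices)

# Stand-in for the real devices: alias symlink -> kernel-style name.
with tempfile.TemporaryDirectory() as d:
    kernel_name = os.path.join(d, "md127")
    open(kernel_name, "w").close()
    alias = os.path.join(d, "raid0")
    os.symlink(kernel_name, alias)

    mounted = [kernel_name]             # what the mount table reports
    print(alias in mounted)             # False: naive comparison misses it
    print(is_mounted(alias, mounted))   # True: canonicalized names match
```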

Comment 3 Jan Safranek 2014-01-28 08:47:30 UTC
I have a workaround for bug #1058299; I'm implementing it on the OpenLMI side for now.
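The comment does not say what the workaround looks like. One plausible shape for it (purely a sketch, not the actual patch) is to enumerate the symlinks under /dev/md/ and build a map from each kernel name back to its friendly aliases, again demonstrated with a temporary directory standing in for /dev:

```python
import os
import tempfile

def alias_map(md_dir):
    """Map each symlink target (kernel device name) in md_dir
    to the list of alias paths that point at it."""
    aliases = {}
    for name in os.listdir(md_dir):
        path = os.path.join(md_dir, name)
        if os.path.islink(path):
            target = os.path.realpath(path)
            aliases.setdefault(target, []).append(path)
    return aliases

with tempfile.TemporaryDirectory() as d:
    # Stand-in kernel node and a /dev/md-style alias directory.
    kernel = os.path.join(d, "md127")
    open(kernel, "w").close()
    md_dir = os.path.join(d, "md")
    os.mkdir(md_dir)
    os.symlink(kernel, os.path.join(md_dir, "raid0"))

    amap = alias_map(md_dir)
    print(amap[os.path.realpath(kernel)])  # the raid0 alias path
```

With such a map, a mount-table entry for the kernel name can be reported under every alias the user might query.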

Comment 6 Ludek Smid 2014-06-13 12:37:35 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

If you have further questions about the request, contact your manager or support representative.