Bug 593785 - libvirt: 'disk' pool startup error after creating volume
Summary: libvirt: 'disk' pool startup error after creating volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 6.0
Assignee: Dave Allan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-19 17:15 UTC by Justin Clift
Modified: 2016-04-26 16:47 UTC
CC: 8 users

Fixed In Version: libvirt-0.8.1-14.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-11 14:49:22 UTC
Target Upstream Version:


Attachments (Terms of Use)
- Screenshot 1 (deleted), 2010-05-19 17:15 UTC, Justin Clift
- Screenshot 2 (deleted), 2010-05-19 17:15 UTC, Justin Clift
- Screenshot 3 (deleted), 2010-05-19 17:16 UTC, Justin Clift
- Screenshot 4 (deleted), 2010-05-19 17:17 UTC, Justin Clift
- Screenshot 5 (deleted), 2010-05-19 17:17 UTC, Justin Clift
- Screenshot 6 (deleted), 2010-05-19 17:17 UTC, Justin Clift
- Screenshot 7 (deleted), 2010-05-19 17:18 UTC, Justin Clift
- virsh pool-list from before the pool was created. (deleted), 2010-05-20 16:05 UTC, Justin Clift
- virsh pool-list from after the pool was created. (deleted), 2010-05-20 16:05 UTC, Justin Clift
- virsh pool-dumpxml output (deleted), 2010-05-20 16:07 UTC, Justin Clift
- virt-manager --debug log (deleted), 2010-05-20 16:07 UTC, Justin Clift
- virt-manager --debug stderr (deleted), 2010-05-20 16:08 UTC, Justin Clift
- virsh pool-list from after the pool was created. (deleted), 2010-05-20 16:10 UTC, Justin Clift
- 2nd part of patch series (deleted), 2010-06-14 22:18 UTC, Eric Blake

Description Justin Clift 2010-05-19 17:15:10 UTC
Created attachment 415198 [details]
Screenshot 1

Description of problem:

On RHEL 6 (beta) x86_64, multipath disk volumes cannot be created and used with Virt-Manager.  After setting up a multipath disk storage pool with virt-manager, subsequent creation of "New Volumes" inside that storage pool doesn't work.  The volumes are created, but Virt-Manager doesn't see them.

This happens because Virt-Manager blindly assumes what the created volume's name will be, gets it wrong, and offers no way to override the assumption.


Version-Release number of selected component (if applicable):

virt-manager-0.8.2-3.el6.noarch


How reproducible:

Every time.


Steps to Reproduce:
1. Create a new multipath block device under /dev/mapper.

     $ ls -la /dev/mapper/vmlun70*
     brw-rw---- 1 root disk 253,  85 May 20 02:56 /dev/mapper/vmlun70
     $


2. Using parted, create a disk label on this device (i.e. mklabel)

     $ sudo parted /dev/mapper/vmlun70 
     GNU Parted 2.1
     Using /dev/mapper/vmlun70
     Welcome to GNU Parted! Type 'help' to view a list of commands.
     (parted) mklabel                                                          
     New disk label type? msdos                                                
     (parted) q                                                                
     Information: You may need to update /etc/fstab.                           

     $


3. In Virt-Manager, create a new "Physical Disk Device" storage pool using this device.  Make sure the "Format" field in the 2nd dialog screen corresponds to the type of disk label used in parted.

     See attached screenshots 1, 2 and 3.


4. With the new storage pool selected in Virt-Manager, click the "New Volume" button.  Fill in any name and size desired, then click the Finish button.

     See attached screenshot 4.


5. The storage pool will be added to the list of storage pools.  However, after a few seconds the storage pool state changes to "inactive" (due to this bug).

     See attached screenshot 5.


6. Click on the "Start Pool" button for this storage pool.  An error dialog will appear stating that Virt-Manager isn't able to find the new volume.

     See attached screenshots 6 and 7.

   In reality, the volume was created and appears in the file system, but it has a different name than Virt-Manager expected.
   Multipath volumes have a "p" (for "partition") in the volume name:

     $ ls -la /dev/mapper/vmlun70*
     brw-rw---- 1 root disk 253,  85 May 20 02:56 /dev/mapper/vmlun70
     brw-rw---- 1 root disk 253, 148 May 20 02:56 /dev/mapper/vmlun70p1
     $

   Whereas the error message indicates Virt-Manager was looking for (in this instance) /dev/mapper/vmlun701.

   There is no workaround in Virt-Manager to get these storage pools working.

  
Actual results:

Multipath disk volumes cannot be created and used with Virt-Manager.


Expected results:

Virt-Manager should create and use multipath disk volumes.


Additional info:

Comment 1 Justin Clift 2010-05-19 17:15:52 UTC
Created attachment 415199 [details]
Screenshot 2

Comment 2 Justin Clift 2010-05-19 17:16:28 UTC
Created attachment 415200 [details]
Screenshot 3

Comment 3 Justin Clift 2010-05-19 17:17:04 UTC
Created attachment 415201 [details]
Screenshot 4

Comment 4 Justin Clift 2010-05-19 17:17:26 UTC
Created attachment 415202 [details]
Screenshot 5

Comment 5 Justin Clift 2010-05-19 17:17:53 UTC
Created attachment 415203 [details]
Screenshot 6

Comment 6 Justin Clift 2010-05-19 17:18:22 UTC
Created attachment 415204 [details]
Screenshot 7

Comment 8 Cole Robinson 2010-05-20 14:23:01 UTC
Can you provide:

virsh pool-list --all
virsh pool-dumpxml $poolname for the affected pool
virsh vol-list $poolname for the affected pool
virt-manager --debug output when reproducing this issue

The pool startup error is actually coming from libvirt, so it sounds like libvirt is messing up here. Reassigning.

Comment 9 Justin Clift 2010-05-20 16:03:50 UTC
Thanks Cole, it does look like it's virsh getting it wrong.

Attaching the requested logs, except for the virsh vol-list one, as that gave an error:

****************************************************************************

# virsh vol-list examplelun
error: Failed to list active vols
error: internal error storage pool is not active

#

****************************************************************************

This shows the name of the volume actually created on the system (vmlun70p1):

# ls -la /dev/mapper/vmlun70*
brw-rw---- 1 root disk 253, 126 May 21 01:43 /dev/mapper/vmlun70
brw-rw---- 1 root disk 253, 146 May 21 01:43 /dev/mapper/vmlun70p1
#

Comment 10 Justin Clift 2010-05-20 16:05:09 UTC
Created attachment 415456 [details]
virsh pool-list from before the pool was created.

Comment 11 Justin Clift 2010-05-20 16:05:57 UTC
Created attachment 415457 [details]
virsh pool-list from after the pool was created.

Comment 12 Justin Clift 2010-05-20 16:07:00 UTC
Created attachment 415458 [details]
virsh pool-dumpxml output

Comment 13 Justin Clift 2010-05-20 16:07:46 UTC
Created attachment 415459 [details]
virt-manager --debug log

Comment 14 Justin Clift 2010-05-20 16:08:16 UTC
Created attachment 415460 [details]
virt-manager --debug stderr

Comment 15 Justin Clift 2010-05-20 16:10:28 UTC
Created attachment 415463 [details]
virsh pool-list from after the pool was created.

Previous version of this attachment was incorrect, as I'd selected the wrong one. ;)

Comment 16 RHEL Product and Program Management 2010-06-07 15:59:58 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release.  Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release.  This request is not yet committed for inclusion.

Comment 17 Daniel Berrange 2010-06-08 17:33:51 UTC
From the fdisk source code:

        /* Heuristic: we list partition 3 of /dev/foo as /dev/foo3,
           but if the device name ends in a digit, say /dev/foo1,
           then the partition is called /dev/foo1p3. */

so we just need to copy that heuristic:

https://www.redhat.com/archives/libvir-list/2010-June/msg00181.html
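For illustration only, the heuristic quoted above can be sketched as a small shell function; this is a hypothetical sketch, not the actual libvirt or fdisk code:

```shell
# Hypothetical sketch of fdisk's partition-naming heuristic: if the
# device name ends in a digit, insert a "p" before the partition number.
partition_name() {
    dev=$1
    num=$2
    case "$dev" in
        *[0-9]) printf '%sp%s\n' "$dev" "$num" ;;  # /dev/foo1 -> /dev/foo1p3
        *)      printf '%s%s\n'  "$dev" "$num" ;;  # /dev/foo  -> /dev/foo3
    esac
}

partition_name /dev/foo 3    # prints /dev/foo3
partition_name /dev/foo1 3   # prints /dev/foo1p3
```

As Comment 24 later shows, this digit-based heuristic alone is not sufficient for device-mapper devices.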

Comment 18 Dave Allan 2010-06-10 21:30:41 UTC
libvirt-0.8.1-8.el6 has been built in RHEL-6-candidate with the fix.

Dave

Comment 19 Eric Blake 2010-06-14 21:14:46 UTC
The patch committed as part of 0.8.1-8.el6 is incomplete, per upstream mail traffic: https://www.redhat.com/archives/libvir-list/2010-June/msg00347.html

Comment 20 Eric Blake 2010-06-14 22:18:37 UTC
Created attachment 423985 [details]
2nd part of patch series

Comment 21 Eric Blake 2010-06-17 20:32:25 UTC
Moving back to POST, to incorporate second patch:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2010-June/msg00398.html

Comment 22 Dave Allan 2010-06-29 02:56:51 UTC
libvirt-0.8.1-11.el6 has been built in RHEL-6-candidate with the fix.

Dave

Comment 24 Johnny Liu 2010-07-05 09:40:11 UTC
Re-tested this bug with libvirt-0.8.1-13.el6.x86_64, but it FAILED.

This bug is only partially fixed.

Regarding Comment 17: when creating a volume on a device-mapper device, the name of the new volume always has pN appended, even if the device name ends in a non-digit, such as:

# ls -la /dev/mapper/mymapper*
lrwxrwxrwx. 1 root root 7 Jul  5 13:32 /dev/mapper/mymapper -> ../dm-0
lrwxrwxrwx. 1 root root 7 Jul  5 13:32 /dev/mapper/mymapperp1 -> ../dm-1


So this means the patch only fixes the scenario where the device name ends in a digit, e.g.:
# virsh vol-list xx
Name                 Path                                    
-----------------------------------------
vmlun70p1            /dev/mapper/vmlun70p1                   
vmlun70p2            /dev/mapper/vmlun70p2 

If the mapper device ends in a non-digit, the following error is seen:
# virsh pool-dumpxml xx
<pool type='disk'>
  <name>xx</name>
  <uuid>08a0a104-cbc3-0b96-bd6a-b1f8d00d9474</uuid>
  <capacity>0</capacity>
  <allocation>0</allocation>
  <available>0</available>
  <source>
    <device path='/dev/mapper/mymapper'/>
    <format type='dos'/>
  </source>
  <target>
    <path>/dev</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

# virsh pool-list --all
Name                 State      Autostart 
-----------------------------------------
xx                   inactive   yes      

# ls -la /dev/mapper/mymapper*
lrwxrwxrwx. 1 root root 7 Jul  5 13:32 /dev/mapper/mymapper -> ../dm-0
lrwxrwxrwx. 1 root root 7 Jul  5 13:32 /dev/mapper/mymapperp1 -> ../dm-1

# virsh pool-start xx
error: Failed to start pool xx
error: cannot open volume '/dev/mapper/mymapper1': No such file or directory

libvirt is trying to find a non-existent volume.
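Based on the behaviour verified later in this report (device-mapper volumes always get a "p" separator, whether or not the device name ends in a digit), the corrected rule can be sketched roughly as follows. The /dev/mapper path-prefix test here is a simplification standing in for libvirt's real device-mapper detection:

```shell
# Rough sketch of the corrected naming rule, inferred from the volume
# names observed in this report; the /dev/mapper prefix test is a
# stand-in for libvirt's actual device-mapper check.
partition_name() {
    dev=$1
    num=$2
    case "$dev" in
        /dev/mapper/*) printf '%sp%s\n' "$dev" "$num" ;;  # dm devices always use "p"
        *[0-9])        printf '%sp%s\n' "$dev" "$num" ;;  # fdisk digit heuristic
        *)             printf '%s%s\n'  "$dev" "$num" ;;
    esac
}

partition_name /dev/mapper/mymapper 1   # prints /dev/mapper/mymapperp1
partition_name /dev/sda 1               # prints /dev/sda1
```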

Comment 25 Dave Allan 2010-07-07 14:12:34 UTC
*** Bug 611443 has been marked as a duplicate of this bug. ***

Comment 27 Daniel Berrange 2010-07-13 14:19:47 UTC
Dave posted an improved solution upstream:

https://www.redhat.com/archives/libvir-list/2010-July/msg00185.html

Comment 29 Dave Allan 2010-07-14 06:20:44 UTC
libvirt-0.8.1-14.el6 has been built in RHEL-6-candidate with the fix.

Dave

Comment 30 Johnny Liu 2010-07-15 09:27:29 UTC
Verified this bug with libvirt-0.8.1-15.el6.x86_64, and it PASSED.

Whether the mapper device name ends in a digit or a non-digit, the volumes can be listed successfully.


When the mapper device name ends in a non-digit:
# ls /dev/mapper/mpatha* -l
lrwxrwxrwx. 1 root root 7 Jul 15 13:06 /dev/mapper/mpatha -> ../dm-0
lrwxrwxrwx. 1 root root 7 Jul 15 13:06 /dev/mapper/mpathap1 -> ../dm-1
# virsh pool-dumpxml xx
<pool type='disk'>
  <name>xx</name>
  <uuid>ec1942d9-0bf3-7793-f46b-83cc839772fd</uuid>
  <capacity>42944186880</capacity>
  <allocation>1052803584</allocation>
  <available>41891351040</available>
  <source>
    <device path='/dev/mapper/mpatha'>
    <freeExtent start='1052835840' end='42944186880'/>
    </device>
    <format type='dos'/>
  </source>
  <target>
    <path>/dev</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

# virsh vol-list xx
Name                 Path                                    
-----------------------------------------
mpathap1             /dev/mapper/mpathap1


When the mapper device name ends in a digit:
# ls /dev/mapper/red70* -l
lrwxrwxrwx. 1 root root 7 Jul 15 13:13 /dev/mapper/red70 -> ../dm-0
lrwxrwxrwx. 1 root root 7 Jul 15 13:13 /dev/mapper/red70p1 -> ../dm-1

# virsh pool-dumpxml xx
<pool type='disk'>
  <name>xx</name>
  <uuid>baaefec7-7fdc-1e12-035a-b3b4f51abeff</uuid>
  <capacity>42944186880</capacity>
  <allocation>1052803584</allocation>
  <available>41891351040</available>
  <source>
    <device path='/dev/mapper/red70'>
    <freeExtent start='1052835840' end='42944186880'/>
    </device>
    <format type='dos'/>
  </source>
  <target>
    <path>/dev</path>
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
# virsh vol-list xx
Name                 Path                                    
-----------------------------------------
red70p1              /dev/mapper/red70p1

Comment 31 releng-rhel@redhat.com 2010-11-11 14:49:22 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

