Bug 1688488 - Migration failed for VM with ConfigMap and Secret (re-sorted SATA)
Summary: Migration failed for VM with ConfigMap and Secret (re-sorted SATA)
Keywords:
Status: POST
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 2.0
Assignee: Vladik Romanovsky
QA Contact: zhe peng
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-13 20:35 UTC by Denys Shchedrivyi
Modified: 2019-04-16 13:11 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments
vm yaml (deleted)
2019-03-13 20:35 UTC, Denys Shchedrivyi

Description Denys Shchedrivyi 2019-03-13 20:35:43 UTC
Created attachment 1543726 [details]
vm yaml

Description of problem:
 A VM with ConfigMap and Secret disks can't be migrated. The source virt-launcher pod logs the following error:

{"component":"virt-launcher","kind":"","level":"error","msg":"Failed to migrate vmi","name":"vm-cmap-secret","namespace":"default","pos":"server.go:77","reason":"virError(Code=67, Domain=20, Message='unsupported configuration: Found duplicate drive address for disk with target name 'sda' controller='0' bus='0' target='0' unit='0'')","timestamp":"2019-03-13T20:27:08.436949Z","uid":"c503b278-45cd-11e9-a8c8-fa163e51538d"}



Version-Release number of selected component (if applicable):
2.0

How reproducible:


Steps to Reproduce:
1. Create a VM with a ConfigMap and a Secret disk (a minimal example spec is sketched below)
2. Run a migration of the VMI
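
The original vm yaml attachment was deleted, but a minimal spec of roughly this shape reproduces the same disk layout (a sketch only: the names, the container image, and the kubevirt.io/v1alpha3 API group are assumptions, not taken from the attachment). A VirtualMachineInstanceMigration object then triggers step 2:

  apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    name: vm-cmap-secret
  spec:
    running: true
    template:
      spec:
        domain:
          devices:
            disks:
            - name: disk0               # containerDisk boot volume on virtio
              disk:
                bus: virtio
            - name: configmap-disk      # backed by a ConfigMap, exposed on the SATA bus
              disk:
                bus: sata
            - name: secret-disk         # backed by a Secret, exposed on the SATA bus
              disk:
                bus: sata
          resources:
            requests:
              memory: 64M
        volumes:
        - name: disk0
          containerDisk:
            image: kubevirt/cirros-container-disk-demo   # placeholder image
        - name: configmap-disk
          configMap:
            name: my-configmap                           # placeholder ConfigMap
        - name: secret-disk
          secret:
            secretName: my-secret                        # placeholder Secret
  ---
  apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachineInstanceMigration
  metadata:
    name: migration-vm-cmap-secret
  spec:
    vmiName: vm-cmap-secret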

Actual results:
 Migration failed

Expected results:
 Migration should succeed


Additional info:

 An excerpt from the QEMU domain XML:

   <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/run/kubevirt-ephemeral-disks/container-disk-data/default/vm-cmap-secret/disk_disk0/disk-image.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <alias name='ua-disk0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/run/kubevirt-ephemeral-disks/cloud-init-data/default/vm-cmap-secret/noCloud.iso'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='ua-cloudinitdisk'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/run/kubevirt-private/config-map-disks/configmap-disk.iso'/>
      <target dev='sda' bus='sata'/>
      <serial>D23YZ9W6WA5DJ487</serial>
      <alias name='ua-configmap-disk'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/run/kubevirt-private/secret-disks/secret-disk.iso'/>
      <target dev='sdb' bus='sata'/>
      <serial>D23YZ9W6WA5DJ489</serial>
      <alias name='ua-secret-disk'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

Comment 1 Fabian Deutsch 2019-04-02 13:23:22 UTC
The issue in this bug is that SATA disks are reordered on the destination side. It is currently unclear why this is happening.

This is SATA-specific.

Martin, do you know who could help us here?

Comment 2 Fabian Deutsch 2019-04-02 13:29:24 UTC
Also CCing Dan.

Comment 3 Daniel Berrange 2019-04-02 13:39:01 UTC
There's not enough info to understand the problem here. We'll need to see the current running guest XML on the source QEMU. We'll also need the libvirtd logs to capture the API call parameters and other relevant info.
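
One possible way to capture that XML from the source side (a sketch; it assumes virsh is available inside the virt-launcher container and that the libvirt domain is named <namespace>_<vmi-name>, and the pod name suffix is a placeholder):

   # dump the running domain XML from the source virt-launcher pod
   kubectl exec -n default virt-launcher-vm-cmap-secret-xxxxx -- virsh dumpxml default_vm-cmap-secret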


As a general rule when debugging libvirt problems with the QEMU driver, we want the following log settings in /etc/libvirt/libvirtd.conf:

  log_filters="1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

This collects all the key information for QEMU while ensuring the irrelevant stuff is dropped. libvirtd must be restarted after applying this change.

Alternatively, you can change it on a running libvirtd using the virt-admin tool:

   virt-admin daemon-log-filters "1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util"
   virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirtd.log"

Comment 4 Vladik Romanovsky 2019-04-15 03:27:15 UTC
The address field of the disk XML element was parsed incorrectly, causing all non-PCI disks to share the same address.
I've posted a PR: https://github.com/kubevirt/kubevirt/pull/2185
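
For illustration only (reconstructed from the error message in the description, not taken from the destination domain XML): with the unit value lost in parsing, both SATA disks end up with the same drive address on the destination, which libvirt rejects as a duplicate:

    <!-- illustrative reconstruction: ua-configmap-disk and ua-secret-disk collide on the same address -->
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>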

