Bug 1060462 - RFE: Improve migration support
Summary: RFE: Improve migration support
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Duplicates: 733388 881092
Depends On:
Blocks:
 
Reported: 2014-02-01 23:26 UTC by Cole Robinson
Modified: 2015-04-21 21:50 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-21 21:50:19 UTC



Description Cole Robinson 2014-02-01 23:26:47 UTC
Our migration support in virt-manager needs some work. We are never going to be able to do this perfectly, since migration depends on external setup, and really isn't the focus of virt-manager. But there are a few easy-ish things we can do better:

- Promote the 'tunnelled' option to above the 'Advanced options' fold. When properly configured, tunnelled migration can be easier for users, since it doesn't require opening any extra firewall ports. Then again, most virt-manager users use ssh for remote connections, and ssh doesn't play well with tunnelled migration. This all needs more thought.

- If the user selects tunnelled migration and the remote connection uses the ssh transport, show a small warning label explaining that the user libvirt runs as needs ssh keys configured for migration to work.

- Warn up front if libvirt is going to complain about CPU compatibility. Maybe point at the unsafe flag.

- True OFFLINE support: allow the migration dialog to launch when VM is offline, use the OFFLINE flag.

- When the VM is migrated, if we were connected to the console/details view of the source VM, we should auto open the console/details of the destination VM.

- We need to make sure the source VM is undefined by default. I don't think we even need a UI option to _not_ do this, since I'm not sure why anyone would want to avoid permanently migrating a VM, and if we support OFFLINE migration they can easily move it back anyway.

- Warn if we know that networking will fail, and how it will fail. For example, a VM on the 'default' network will lose all active network connections.
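The flag handling the items above describe could be sketched with libvirt's migration flags. This is an illustrative sketch, not virt-manager's actual code: the numeric values below mirror libvirt's VIR_MIGRATE_* constants, and the helper function is hypothetical.

```python
# Sketch: assembling libvirt migration flags for the cases above.
# Values mirror libvirt's VIR_MIGRATE_* constants; the helper is hypothetical.
VIR_MIGRATE_LIVE = 1 << 0             # keep the guest running during migration
VIR_MIGRATE_PEER2PEER = 1 << 1        # source libvirtd drives the migration
VIR_MIGRATE_TUNNELLED = 1 << 2        # tunnel data over the libvirtd connection
VIR_MIGRATE_PERSIST_DEST = 1 << 3     # define the VM on the destination
VIR_MIGRATE_UNDEFINE_SOURCE = 1 << 4  # remove the config from the source
VIR_MIGRATE_UNSAFE = 1 << 9           # skip libvirt's safety checks
VIR_MIGRATE_OFFLINE = 1 << 10         # migrate only the config of a shut-off VM


def build_migrate_flags(tunnelled=False, offline=False, unsafe=False):
    """Return a flag word for a 'permanently move the VM' style migration."""
    flags = VIR_MIGRATE_PERSIST_DEST | VIR_MIGRATE_UNDEFINE_SOURCE
    if offline:
        flags |= VIR_MIGRATE_OFFLINE  # no live guest state to transfer
    else:
        flags |= VIR_MIGRATE_LIVE
    if tunnelled:
        # libvirt requires peer2peer mode for tunnelled migration
        flags |= VIR_MIGRATE_PEER2PEER | VIR_MIGRATE_TUNNELLED
    if unsafe:
        flags |= VIR_MIGRATE_UNSAFE
    return flags
```

The resulting word would be passed as the `flags` argument to a `virDomainMigrate`-family call.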

Comment 1 Cole Robinson 2014-02-01 23:28:34 UTC
*** Bug 733388 has been marked as a duplicate of this bug. ***

Comment 2 Cole Robinson 2014-02-01 23:29:22 UTC
*** Bug 881092 has been marked as a duplicate of this bug. ***

Comment 3 Cole Robinson 2014-02-09 00:39:22 UTC
Another thing to do would be to make sure that the storage is _actually_ available on both hosts. diskbackend.py:manage_path will auto setup a pool for a given path, so that could be used to introspect the remote host.
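A minimal sketch of the kind of introspection this suggests: check whether a disk path falls under any storage pool target on a host. `listAllStoragePools` and `XMLDesc` are real libvirt-python APIs, but the helpers themselves are made up for illustration.

```python
# Sketch (not virt-manager's diskbackend.py): check whether a disk path
# is covered by some storage pool target directory on a host.
import os
import xml.etree.ElementTree as ET


def path_in_pool_targets(disk_path, pool_targets):
    """Pure check: is disk_path inside any of the given pool target dirs?"""
    disk_path = os.path.normpath(disk_path)
    for target in pool_targets:
        target = os.path.normpath(target)
        if disk_path == target or disk_path.startswith(target + os.sep):
            return True
    return False


def remote_pool_targets(conn):
    """Collect <target><path> of every pool on an open libvirt connection."""
    targets = []
    for pool in conn.listAllStoragePools():
        root = ET.fromstring(pool.XMLDesc(0))
        node = root.find("./target/path")
        if node is not None and node.text:
            targets.append(node.text)
    return targets
```

Running `remote_pool_targets` against a connection to the destination host, then `path_in_pool_targets` for each disk, would flag storage that is missing remotely before the migration starts.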

Comment 4 Cole Robinson 2014-02-09 16:55:53 UTC
Also, I filed a separate bug to track wiring up storage migration:

https://bugzilla.redhat.com/show_bug.cgi?id=1063027

Comment 5 Daniel 2014-11-13 10:26:21 UTC
Please, add two tick boxes:

- One to keep the VM definition on the source. When using virtlockd or sanlock, it's safe to have the VM defined on several hosts, and it makes it easier to restart the VM if one host crashes
- One to add --persistent (so the VM will be defined on the destination host, not just transient)

Actually, I need both options, so I cannot use migration from the GUI and must use virsh instead.

It would be even better to have a way to set defaults for all migrations.
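The two requested tick boxes map onto independent libvirt flags. A hypothetical sketch of that mapping, with the flag values mirroring libvirt's constants (the rough virsh equivalents are noted in comments):

```python
# Hypothetical mapping of the two requested tick boxes to libvirt flags.
VIR_MIGRATE_PERSIST_DEST = 1 << 3     # virsh migrate --persistent
VIR_MIGRATE_UNDEFINE_SOURCE = 1 << 4  # virsh migrate --undefinesource


def flags_for(keep_on_source, persistent):
    """Translate the two tick boxes into a migration flag word."""
    flags = 0
    if persistent:
        flags |= VIR_MIGRATE_PERSIST_DEST
    if not keep_on_source:
        # leaving this box unticked removes the definition from the source
        flags |= VIR_MIGRATE_UNDEFINE_SOURCE
    return flags
```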

Comment 6 Cole Robinson 2015-04-21 21:50:19 UTC
(In reply to Cole Robinson from comment #0)
> Our migration support in virt-manager needs some work. We are never going to
> be able to do this perfectly, since migration depends on external setup, and
> really isn't the focus of virt-manager. But there are a few easy-ish things
> we can do better:
> 
> - Promote the 'tunnelled' option to above the 'Advanced options' fold. When
> properly configured, tunnelled migration can be easier for users, since it
> doesn't require opening any extra firewall ports. Then again, most
> virt-manager users use ssh for remote connections, and ssh doesn't play well
> with tunnelled migration. This all needs more thought.
> 
> - If user selects tunnelled migration, and the remote connection is using
> ssh transport, show a small warning label explaining that the user libvirt
> is running as needs ssh keys configured for it to work.
> 

Upstream has a lot of warnings and suggestions in this area now, trying to detect when we know things will fail.

> - Warn up front if libvirt is going to complain about CPU compat. Maybe
> point at the unsafe flag
> 

I didn't add this. It could be useful, but I think libvirt's error here will be obvious enough that people can figure out they need to change their CPU model.

> - True OFFLINE support: allow the migration dialog to launch when VM is
> offline, use the OFFLINE flag.
> 

Filed a separate bug to track this: bug 1214056

> - When the VM is migrated, if we were connected to the console/details view
> of the source VM, we should auto open the console/details of the destination
> VM.
> 

Filed a separate bug to track this: bug 1214082

> - We need to make sure the source VM is undefined by default. I don't think
> we even need to give a UI option to _not_ do this, since I'm not sure why
> anyone doesn't want to permanently migrate a VM, and anyways if we support
> OFFLINE migrate they can just move it back easy enough.
> 

The new dialog defaults to using --undefine-source and --persist equivalents. There's now an advanced option, 'temporary', which turns off those two flags. I think that covers the needed cases, but OFFLINE support might fill in the remaining gaps.
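The described default and the 'temporary' override could be expressed as follows. This is an assumption about how the dialog maps to flags, not the actual virt-manager code; the values mirror libvirt's constants.

```python
# Sketch of the new dialog's flag behaviour: permanent move by default,
# 'temporary' drops both flags. Values mirror libvirt's VIR_MIGRATE_* constants.
VIR_MIGRATE_PERSIST_DEST = 1 << 3
VIR_MIGRATE_UNDEFINE_SOURCE = 1 << 4


def dialog_flags(temporary=False):
    """Default: define on the destination and undefine on the source."""
    if temporary:
        # guest stays defined on the source and is only transient
        # on the destination
        return 0
    return VIR_MIGRATE_PERSIST_DEST | VIR_MIGRATE_UNDEFINE_SOURCE
```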

> - Warn if we know that networking will fail, and how it will fail. So,
> 'default' network will lose all active network connections for example.

If people are using the 'default' network for their VM, they aren't hosting any public services and therefore don't really need a consistent VM network connection, so I didn't add an explicit warning here.

Closing this bug. If there are additional issues, we should track them individually.

