Bug 1066594 - [RFE] Prevent RHEV from immediately re-using MAC addresses freed by destroyed guests into newly created guests
Summary: [RFE] Prevent RHEV from immediately re-using MAC addresses freed by destroyed...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: 3.2.0
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.0.0-alpha
Target Release: 4.0.0
Assignee: Martin Mucha
QA Contact: Meni Yakove
URL:
Whiteboard:
Depends On: 1269301
Blocks:
 
Reported: 2014-02-18 16:56 UTC by Julio Entrena Perez
Modified: 2016-08-23 20:20 UTC
CC List: 18 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
With this update, the order in which MAC addresses are obtained from the MAC address pool has changed. Previously, the pool always returned the leftmost available MAC address. In certain environments this caused problems: a MAC address returned to the pool could immediately be handed out again to another process, which confused some devices on the network because a device now held a MAC address that had recently been used by a different device. Now, the pool remembers the last MAC address it returned for each of its MAC address ranges and hands out the first available address following the most recently returned one. If no further addresses are left in the range, the search wraps around to the beginning of the range. If multiple ranges have available MAC addresses, they take turns serving incoming requests, with each range selecting addresses in the same way. (A sketch of this allocation order follows the Links table below.)
Clone Of:
Environment:
Last Closed: 2016-08-23 20:20:34 UTC
oVirt Team: Network
nyechiel: Triaged+
gklein: testing_plan_complete-


Attachments: none


Links:
System | ID | Priority | Status | Summary | Last Updated
oVirt gerrit | 49502 | master | MERGED | core: Do not acquire in left-most-available order | 2016-02-03 08:44:43 UTC
oVirt gerrit | 53081 | None | None | None | 2016-02-04 11:14:44 UTC
Red Hat Knowledge Base (Solution) | 727463 | None | None | None | Never
Red Hat Product Errata | RHEA-2016:1743 | normal | SHIPPED_LIVE | Red Hat Virtualization Manager 4.0 GA Enhancement (ovirt-engine) | 2016-09-02 21:54:01 UTC
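
The Doc Text above summarizes the new allocation order: a cursor per range plus ranges taking turns. Below is a minimal, self-contained sketch of that behavior for illustration only. It is not the ovirt-engine code; the class names (MacPoolSketch, MacRange), the use of long values for MAC addresses, and the simple round-robin bookkeeping are assumptions made for the example.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MacPoolSketch {

    /** One contiguous MAC range that remembers the last address it handed out. */
    static class MacRange {
        final long from;
        final long to;                      // inclusive upper bound
        final Set<Long> used = new HashSet<>();
        long lastReturned;                  // cursor: the scan starts after this address

        MacRange(long from, long to) {
            this.from = from;
            this.to = to;
            this.lastReturned = to;         // so the very first scan starts at 'from'
        }

        boolean hasFree() {
            return used.size() < (to - from + 1);
        }

        /** Return the first free address after the most recently returned one, wrapping around. */
        Long allocate() {
            if (!hasFree()) {
                return null;
            }
            long size = to - from + 1;
            for (long step = 1; step <= size; step++) {
                long candidate = from + ((lastReturned - from + step) % size);
                if (!used.contains(candidate)) {
                    used.add(candidate);
                    lastReturned = candidate;
                    return candidate;
                }
            }
            return null;                    // unreachable when hasFree() is true
        }

        void release(long mac) {
            used.remove(mac);               // releasing does NOT move the cursor back
        }
    }

    /** Ranges with free addresses take turns serving requests (simple round-robin). */
    static class MacPool {
        final List<MacRange> ranges = new ArrayList<>();
        int nextRange = 0;

        MacPool addRange(long from, long to) {
            ranges.add(new MacRange(from, to));
            return this;
        }

        Long allocate() {
            for (int i = 0; i < ranges.size(); i++) {
                MacRange range = ranges.get((nextRange + i) % ranges.size());
                if (range.hasFree()) {
                    nextRange = (nextRange + i + 1) % ranges.size();
                    return range.allocate();
                }
            }
            return null;                    // pool exhausted
        }
    }

    public static void main(String[] args) {
        // Single range of 4 addresses: 00:1A:4A:00:00:00 .. 00:1A:4A:00:00:03 (hypothetical).
        MacPool pool = new MacPool().addRange(0x001A4A000000L, 0x001A4A000003L);
        long a = pool.allocate();           // ...:00
        long b = pool.allocate();           // ...:01
        pool.ranges.get(0).release(a);      // destroy the first guest's NIC
        long c = pool.allocate();           // ...:02, not the just-freed ...:00
        System.out.printf("a=%012X b=%012X c=%012X%n", a, b, c);
    }
}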

Description Julio Entrena Perez 2014-02-18 16:56:10 UTC
Description of problem:
The customer uses a third-party solution for DNS and DHCP management of their networks and to facilitate provisioning of new RHEV guests via PXE boot and DHCP.

Users create and delete environments quickly, which results in MAC addresses from destroyed guests being immediately re-used by newly created guests.
After a guest is provisioned, its MAC address, allocated IP address and hostname are pushed to the third-party network management solution.

Immediately re-using MAC addresses that were in use seconds earlier by already destroyed guests causes problems: the QIP DHCP server is not refreshed in time, and the new guests are unreachable until the former ARP entries expire.

Version-Release number of selected component (if applicable):
rhevm-backend-3.2.5-0.49.el6ev

How reproducible:
Always.

Steps to Reproduce:
1. Create a NIC.
2. Destroy the NIC.
3. Immediately afterwards create a new NIC.

Actual results:
The NIC created in step 3 has the same MAC address as the NIC created in step 1. (A scripted reproduction is sketched below, after this description.)

Expected results:
MAC addresses are not immediately re-used to prevent clashes and ARP issues.

Additional info:
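
For reference, the three reproduction steps can also be driven against the Manager's REST API. The sketch below is illustrative only: the /api/vms/{id}/nics endpoint and XML shapes follow the RHEV 3.x REST API, but the base URL, credentials, VM id and network name are placeholders to verify against your own installation, XML handling uses crude regexes for brevity, and TLS trust setup for the Manager certificate is omitted.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MacReuseRepro {
    private static final String BASE = "https://rhevm.example.com/api"; // placeholder
    private static final String AUTH = "Basic " + Base64.getEncoder()
            .encodeToString("admin@internal:password".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws IOException {
        String vmId = "REPLACE-WITH-VM-UUID";
        String nicXml = "<nic><name>nic_repro</name><network><name>rhevm</name></network></nic>";

        // 1. Create a NIC and note its MAC address and id.
        String created = request("POST", "/vms/" + vmId + "/nics", nicXml);
        String firstMac = extract(created, "<mac address=\"([^\"]+)\"");
        String nicId = extract(created, "<nic[^>]*id=\"([^\"]+)\"");

        // 2. Destroy the NIC.
        request("DELETE", "/vms/" + vmId + "/nics/" + nicId, null);

        // 3. Immediately create a new NIC and compare MAC addresses.
        String recreated = request("POST", "/vms/" + vmId + "/nics", nicXml);
        String secondMac = extract(recreated, "<mac address=\"([^\"]+)\"");

        System.out.println("first MAC:  " + firstMac);
        System.out.println("second MAC: " + secondMac);
        System.out.println(firstMac.equals(secondMac) ? "MAC was immediately re-used" : "no immediate re-use");
    }

    private static String request(String method, String path, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(BASE + path).openConnection();
        conn.setRequestMethod(method);
        conn.setRequestProperty("Authorization", AUTH);
        conn.setRequestProperty("Content-Type", "application/xml");
        if (body != null) {
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
        }
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            return scanner.hasNext() ? scanner.next() : "";
        }
    }

    private static String extract(String xml, String regex) {
        Matcher m = Pattern.compile(regex).matcher(xml);
        return m.find() ? m.group(1) : "";
    }
}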

Comment 2 lpeer 2014-02-23 08:46:34 UTC
As a temporary workaround, if the customer wants to allocate MAC addresses to the VMs in their own way, they can use the custom MAC address field on the vNIC.
By using this field the customer can guarantee that MACs are not re-used until QIP has been updated.
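
For illustration, the same custom MAC can also be supplied through the REST API by including a mac element in the vNIC definition. The snippet below only prints the request body that would be POSTed to the hypothetical /api/vms/{id}/nics endpoint used in the sketch earlier on this page; the address shown is a placeholder from a customer-managed allocation scheme.

public class CustomMacNicBody {
    public static void main(String[] args) {
        String body =
                "<nic>"
              + "<name>nic1</name>"
              + "<network><name>rhevm</name></network>"
              + "<mac address=\"00:1a:4a:aa:bb:01\"/>"  // customer-managed MAC, placeholder value
              + "</nic>";
        System.out.println(body);
    }
}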

Comment 11 Yaniv Kaul 2015-11-15 20:11:15 UTC
Michal - this doesn't look like a very hard task to achieve (random allocation from the pool) - can I mark it as a StudentProject and target 4.0?

Comment 12 Michal Skrivanek 2015-11-16 07:55:34 UTC
There were some considerations/complications regarding the MAC pool; I'm not sure it is that simple (keeping a list of recently used MACs).
Random allocation would perhaps work well enough, though ultimately this is the network group's decision.

Comment 13 Dan Kenigsberg 2015-12-06 11:05:17 UTC
Per mmucha, randomization is indeed simple.

Comment 14 Martin Mucha 2016-01-06 14:00:28 UTC
(In reply to Dan Kenigsberg from comment #13)
> Per mmucha, randomization is indeed simple.

No, randomization would be ineffective and would not produce 'stable' behavior. A better approach was used, and the required behavior is already implemented. We are waiting for code review and a possible merge. See the following for more information:

https://gerrit.ovirt.org/#/c/49502/

Comment 17 Yaniv Lavi 2016-02-04 14:08:50 UTC
Why was this moved to 3.6.z?

Comment 18 Dan Kenigsberg 2016-02-04 16:05:43 UTC
(In reply to Yaniv Dary from comment #17)
> Why was this moved to 3.6.z?

My product manager told me that this bug annoys many customers, and should be fixed early if it is not too risky.

Comment 19 Yaniv Lavi 2016-02-07 11:12:54 UTC
Can you ack/nack this for 3.6.5?

Comment 20 Yaniv Kaul 2016-02-07 11:44:32 UTC
Is the solution working well in the case of a single range, which I believe many customers have?

Comment 21 Dan Kenigsberg 2016-02-07 11:57:08 UTC
(In reply to Yaniv Kaul from comment #20)
> Is the solution working well in the case of a single range, which I believe
> many customers have?

Yes, each range maintains its own startingLocationWhenSearchingForUnusedMac to make sure that an address is re-used only after all of its range peers have been re-used (there is obviously early re-use if one of the ranges is almost full, though).
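
A small self-contained illustration of that caveat (plain Java, not oVirt code; MACs are reduced to small integers for brevity): when a range is almost fully allocated, the wrap-around scan finds nothing else free and hands the just-released address straight back out.

import java.util.HashSet;
import java.util.Set;

public class AlmostFullRange {
    public static void main(String[] args) {
        int from = 0, to = 2;                    // a tiny range of 3 "MACs": 0, 1, 2
        Set<Integer> used = new HashSet<>();
        used.add(1);                             // 1 and 2 are held by long-lived guests
        used.add(2);
        int lastReturned = to;                   // cursor, as in the per-range scan

        // Allocate the only free address (0), release it, then allocate again immediately.
        int first = next(from, to, used, lastReturned);
        used.add(first);
        lastReturned = first;
        used.remove(first);                      // guest destroyed: 0 goes back to the pool
        int second = next(from, to, used, lastReturned);

        System.out.println("first=" + first + " second=" + second); // first=0 second=0: early re-use
    }

    /** First free value after 'last', wrapping around the range; -1 if the range is exhausted. */
    static int next(int from, int to, Set<Integer> used, int last) {
        int size = to - from + 1;
        for (int step = 1; step <= size; step++) {
            int candidate = from + ((last - from + step) % size);
            if (!used.contains(candidate)) {
                return candidate;
            }
        }
        return -1;
    }
}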

Comment 22 Michael Burman 2016-04-05 10:38:59 UTC
Verified on - 3.6.5.1-0.1.el6

Comment 23 Dan Kenigsberg 2016-04-07 10:25:35 UTC
No need to clone this bug, as it was already fixed upstream and verified in comment 22.

Comment 26 errata-xmlrpc 2016-08-23 20:20:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html

