Bug 714421 - [vdsm] [Scalability] When host is loaded with networks - addNetwork and getVdsCaps takes a lot of time to return.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.5.0
Assignee: Antoni Segura Puimedon
QA Contact: Meni Yakove
URL:
Whiteboard: network
Depends On:
Blocks: 1053114
 
Reported: 2011-06-19 08:47 UTC by David Naori
Modified: 2016-02-10 19:36 UTC
CC List: 10 users

Fixed In Version: 3.5
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1053114 (view as bug list)
Environment:
Last Closed: 2014-04-02 10:23:12 UTC
oVirt Team: Network


Attachments (Terms of Use)
create networks script. (deleted)
2011-06-19 10:49 UTC, David Naori
no flags Details
vdsm and libvirtd logs (deleted)
2012-11-20 15:19 UTC, Rami Vaknin
no flags Details


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 23518 None None None Never
oVirt gerrit 23575 None None None Never
oVirt gerrit 23576 None None None Never
oVirt gerrit 23577 None None None Never
oVirt gerrit 23578 None None None Never
oVirt gerrit 24158 None None None Never
oVirt gerrit 24603 None None None Never

Description David Naori 2011-06-19 08:47:03 UTC
Description of problem:
Scalability problem with addNetwork: when the host is loaded with networks, addNetwork and getVdsCaps take a long time to return.

For example:

with 500 networks:
addNetwork takes ~20 seconds
getVdsCaps takes ~12 seconds

with 1000 networks:
addNetwork takes ~01:26 minutes
getVdsCaps takes ~32 seconds

with 1500 networks:
addNetwork takes ~04:17 minutes
getVdsCaps takes ~01:05 minutes

RHEV-M executes both of these commands in order to add a network; in this case RHEV-M always fails on a timeout.

Version-Release number of selected component (if applicable):
vdsm-4.9-75.el6.x86_64


Comment 2 David Naori 2011-06-19 10:49:08 UTC
Created attachment 505443 [details]
create networks script.

In order to reproduce this issue you may use the attached script; just change the following variables to match your environment:

DATA_CENTER= datacenter uuid
CLUSTER= cluster uuid
HOST= host uuid
NIC= nic uuid
RHEVM_ADDRESS= address + port

and run the script with the desired number of networks.
For example, to create 2000 VLANs:
./create_network.sh 2000


(the script adds VLAN networks to the data center, attaches them to a cluster, and assigns them to the host's interface)

Comment 3 Erez Shinan 2011-06-21 13:52:38 UTC
After long consideration, we decided that we won't fix this for the old networking API. If performance is vital, we recommend that you use the new API (setupNetworks), which lets you add many networks/vlans/etc in one function call, and drastically eases the pain of optimization on our part.

Comment 4 Erez Shinan 2011-06-22 11:15:15 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
The cause of this performance issue is the long execution time of 'ifup' and 'ifconfig', which grows roughly in proportion to the number of networks on the host.

Fixing this would require changes on two fronts. The first would be to use faster (and honestly, more correct) methods to list and enable networks. Simple linear optimization is sufficient to support a much larger number of networks, and ideally, enabling a single network should be O(1). The second front would be to provide an API to change many networks at once. That would allow for a minimal listing of networks, and possibly allow parallelizing the enabling of networks. It is possible that using both approaches we can transform the O(n²) process into O(n).

The difficulties in doing so are two-fold:
1) Re-implementing ifup and ifconfig is a long and error-prone process, which should be reserved for the development period.
2) RHEV-M would have to use a new API (which in fact already exists, but is not well tested), and that carries the same risk.
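The O(n²)-to-O(n) argument above can be illustrated with a toy cost model (hypothetical helper names, not the real vdsm API): if every per-network add re-enumerates all existing networks, n additions cost a quadratic number of listing operations, whereas a batched setupNetworks-style call enumerates once.

```python
# Toy cost model for the note above (illustrative only, not vdsm code).

def add_networks_one_by_one(n):
    """Old-style API: each addNetwork re-lists every existing network."""
    listings = 0
    for i in range(n):
        listings += i  # re-enumerate the i networks that already exist
    return listings    # 0 + 1 + ... + (n-1) = n*(n-1)/2 -> O(n^2)

def add_networks_batched(n):
    """setupNetworks-style API: list once, then apply all n changes."""
    return n           # one enumeration covering all networks -> O(n)

# At 1000 networks the difference is ~500x in listing work alone.
assert add_networks_one_by_one(1000) == 1000 * 999 // 2
assert add_networks_batched(1000) == 1000
```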

Appendix: Here is the dump of my preliminary profiling at 300 networks.

        1    0.001    0.001    8.740    8.740 <stdin>:1(add)
        1    0.001    0.001    8.739    8.739 configNetwork.py:390(addNetwork)
        6    0.018    0.003    8.428    1.405 /usr/lib64/python2.6/subprocess.py:680(communicate)
        6    0.000    0.000    8.411    1.402 /usr/lib64/python2.6/subprocess.py:1293(_communicate)
        6    8.410    1.402    8.410    1.402 /usr/lib64/python2.6/subprocess.py:1326(_communicate_with_poll)
        3    0.000    0.000    4.364    1.455 configNetwork.py:61(ifup)
        2    0.000    0.000    4.291    2.145 netinfo.py:221(__init__)
        2    0.004    0.002    4.291    2.145 netinfo.py:161(get)
        2    0.009    0.005    4.003    2.001 netinfo.py:77(ifconfig)
        1    0.000    0.000    2.141    2.141 configNetwork.py:227(addNic)
       20    0.001    0.000    0.114    0.006 /usr/lib64/python2.6/glob.py:10(glob)
6324/1284    0.012    0.000    0.114    0.000 /usr/lib64/python2.6/glob.py:18(iglob)
      632    0.022    0.000    0.104    0.000 netinfo.py:134(getIfaceCfg)
        1    0.000    0.000    0.083    0.083 configNetwork.py:49(ifdown)
     2480    0.023    0.000    0.082    0.000 /usr/lib64/python2.6/shlex.py:274(split)
     5032    0.013    0.000    0.080    0.000 /usr/lib64/python2.6/glob.py:63(glob0)
     5032    0.059    0.000    0.059    0.000 /usr/lib64/python2.6/posixpath.py:139(lexists)
        4    0.000    0.000    0.057    0.014 netinfo.py:32(bondings)
     4960    0.007    0.000    0.046    0.000 /usr/lib64/python2.6/shlex.py:268(next)
        4    0.040    0.010    0.040    0.010 netinfo.py:61(speed)

Comment 6 Yaniv Kaul 2011-06-22 11:18:01 UTC
Deleted Technical Notes Contents.


Comment 7 Dan Kenigsberg 2011-07-13 13:34:24 UTC
getVdsCaps's slowness may be solved by dropping ifconfig. Would `ip` be a reasonable replacement?

Comment 8 Rami Vaknin 2012-10-28 09:02:25 UTC
(In reply to comment #7)
> getVdsCaps's slowness may be solved by dumping ifconfig. would `ip` be a
> reasonable replacement?

Yes, 'ip' output is much shorter and faster.
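The appeal of `ip -o` here is that it emits one line per device, which is far cheaper to parse than multi-line ifconfig output. A minimal sketch of such parsing (illustrative only; per comment 9, vdsm ultimately went with ethtool ioctls instead):

```python
import re

def parse_ip_o_link(output):
    """Parse `ip -o link` output into {ifname: [flags]}.

    Each device is a single line like:
    '2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...'
    VLAN devices appear as 'eth1.699@eth1'; we keep the part before '@'.
    """
    devices = {}
    for line in output.splitlines():
        m = re.match(r'\d+:\s+([^:@]+)(?:@\S+)?:\s+<([^>]*)>', line)
        if m:
            devices[m.group(1)] = m.group(2).split(',')
    return devices

sample = (
    "2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP\n"
    "3: eth1.699@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP\n"
)
devices = parse_ip_o_link(sample)
assert 'eth1.699' in devices and 'UP' in devices['eth1.699']
```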

Comment 9 Dan Kenigsberg 2012-11-19 12:39:41 UTC
In the 1+ year between comment 7 and comment 8, we've dropped ifconfig from getVdsCaps (replaced it with ethtool ioctls). Is there any improvement?

Comment 10 Rami Vaknin 2012-11-20 12:06:28 UTC
It looks like there is a severe degradation. I've measured the time it takes to attach VLAN networks to the host's NIC using the python-sdk, and the time it takes vdsClient to return.

Attachment of network #1 to the host's NIC took 3.30945611 seconds
Attachment of network #50 to the host's NIC took 12.0781021118 seconds
Attachment of network #500 to the host's NIC took 131.516748905 seconds
Attachment of network #650 to the host's NIC took 189.009440899 seconds
Attachment of network #700 to the host's NIC took 208.530132055 seconds

getVdsCaps with 705 VLAN networks assigned to the host's NIC took 44.339 seconds.

Versions:
vdsm-4.9.6-43.0.el6_3.x86_64
libvirt-0.9.10-21.el6_3.6.x86_64
kernel-2.6.32-279.14.1.el6.x86_64

Comment 11 Rami Vaknin 2012-11-20 15:15:26 UTC
BTW, according to the vdsm log, adding the 700th network took about two minutes (126 seconds). The two vdsm steps that took most of the time:

1.
Thread-2287::DEBUG::2012-11-20 11:31:41,174::BindingXMLRPC::894::vds::(wrapper) client [10.35.160.91]::call addNetwork with ('net699', '699', '', ['eth1'], {'STP': 'no', 'bridged': 'true'}) {} flowID [1694c5da]

2.
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:23,260::configNetwork::248::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/ifcfg-eth1.699 (until next 'set safe config')


The whole thread from vdsm log:

Thread-2287::DEBUG::2012-11-20 11:31:41,174::BindingXMLRPC::894::vds::(wrapper) client [10.35.160.91]::call addNetwork with ('net699', '699', '', ['eth1'], {'STP': 'no', 'bridged': 'true'}) {} flowID [1694c5da]
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:22,912::configNetwork::882::root::(addNetwork) validating network...
MainProcess|Thread-2287::INFO::2012-11-20 11:32:22,919::configNetwork::891::root::(addNetwork) Adding network net699 with vlan=699, bonding=None, nics=['eth1'], bondingOptions=None, mtu=None, bridged=True, options={'STP': 'no'}
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:22,922::configNetwork::336::root::(_persistentBackup) backing up ifcfg-net699: # original file did not exist
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:22,924::configNetwork::248::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/ifcfg-net699 (until next 'set safe config')
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:23,057::__init__::1164::Storage.Misc.excCmd::(_log) '/sbin/ifdown net699' (cwd None)
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:23,252::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:23,255::configNetwork::336::root::(_persistentBackup) backing up ifcfg-eth1.699: # original file did not exist
MainProcess|Thread-2287::DEBUG::2012-11-20 11:32:23,260::configNetwork::248::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/ifcfg-eth1.699 (until next 'set safe config')
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:46,722::configNetwork::304::root::(_atomicBackup) Backed up /etc/sysconfig/network-scripts/ifcfg-eth1
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:46,725::configNetwork::336::root::(_persistentBackup) backing up ifcfg-eth1: DEVICE=eth1
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:46,753::__init__::1164::Storage.Misc.excCmd::(_log) '/sbin/ifup eth1' (cwd None)
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:46,941::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = 'RTNETLINK answers: File exists\n'; <rc> = 0
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:46,945::__init__::1164::Storage.Misc.excCmd::(_log) '/sbin/ifup eth1.699' (cwd None)
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,087::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,091::__init__::1164::Storage.Misc.excCmd::(_log) '/sbin/ifup net699' (cwd None)
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,290::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,311::configNetwork::258::root::(_atomicNetworkBackup) Backed up net699
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,331::configNetwork::267::root::(_persistentNetworkBackup) backing up network net699: # original file did not exist
MainProcess|Thread-2287::DEBUG::2012-11-20 11:33:47,335::configNetwork::248::root::(writeBackupFile) Persistently backed up /var/lib/vdsm/netconfback/logicalnetworks/net699 (until next 'set safe config')
Thread-2287::DEBUG::2012-11-20 11:33:47,440::BindingXMLRPC::900::vds::(wrapper) return addNetwork with {'status': {'message': 'Done', 'code': 0}}
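The slow step in a trace like this (here, the ~83 s gap between the writeBackupFile line at 11:32:23,260 and the _atomicBackup line at 11:33:46,722) can be found mechanically by diffing consecutive log timestamps. A small helper sketch, assuming the vdsm log timestamp format shown above:

```python
import re
from datetime import datetime

# Timestamps in vdsm logs look like '::2012-11-20 11:32:23,260::'.
TS = re.compile(r'::(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})::')

def slowest_steps(lines):
    """Return (delta_seconds, line) pairs, slowest step first.

    Each delta is the time between a log line and the one before it,
    so the top entry points at the line whose step took longest.
    """
    stamped = []
    for line in lines:
        m = TS.search(line)
        if m:
            ts = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S,%f')
            stamped.append((ts, line))
    deltas = [((b[0] - a[0]).total_seconds(), b[1])
              for a, b in zip(stamped, stamped[1:])]
    return sorted(deltas, reverse=True)
```

Feeding it the thread's lines above ranks the _atomicBackup line first, with a delta of about 83.46 seconds.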

Comment 12 Rami Vaknin 2012-11-20 15:19:06 UTC
Created attachment 648597 [details]
vdsm and libvirtd logs

Comment 13 Rami Vaknin 2012-11-20 15:38:52 UTC
According to danken, the NetInfo() call to libvirt takes 29.3890450001 seconds (done with: "t0=time.time(); n=netinfo.NetInfo(); print time.time()-t0")

Comment 14 Antoni Segura Puimedon 2012-11-21 00:14:58 UTC
This is consistent with my observations. For future scalability, we might consider keeping a live netinfo object that is updated by net events or, at least, cached somehow.
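The caching idea can be sketched as follows (illustrative only, not the actual vdsm implementation): keep one netinfo snapshot alive and invalidate it when a link event arrives, instead of repeating the expensive libvirt round trips on every getVdsCaps call.

```python
import threading

class CachedNetInfo(object):
    """Serve a cached snapshot; rebuild only after a link event."""

    def __init__(self, fetch):
        self._fetch = fetch      # the expensive full query
        self._lock = threading.Lock()
        self._cache = None

    def get(self):
        with self._lock:
            if self._cache is None:
                self._cache = self._fetch()
            return self._cache

    def on_link_event(self, event=None):
        """Called from a netlink monitor; drops the stale snapshot."""
        with self._lock:
            self._cache = None

# The expensive fetch runs once per invalidation, not once per caller.
calls = []
info = CachedNetInfo(lambda: calls.append(1) or {'networks': len(calls)})
info.get(); info.get()          # second call is served from the cache
assert len(calls) == 1
info.on_link_event()
info.get()                      # rebuilt only after the event
assert len(calls) == 2
```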

Comment 15 Antoni Segura Puimedon 2013-01-17 15:37:25 UTC
With this patch, its dependencies, and some further changes, I expect to significantly reduce the time for adding new networks. The goal is still to keep an object alive and consume events later on.

Comment 16 Antoni Segura Puimedon 2013-01-17 16:27:49 UTC
Forgot to add the patch link: http://gerrit.ovirt.org/#/c/11084/

Comment 17 Itamar Heim 2013-12-01 09:33:27 UTC
thoughts/status?

Comment 18 Antoni Segura Puimedon 2013-12-01 09:42:05 UTC
This is a moving target. With the move to the new networking code, some parts have been sped up. I removed some of the libvirt round trips (which are still the netinfo critical path), but the linkinfo cost increased somewhat when I introduced the changes two weeks ago that improve the accuracy of device detection.

There are two patches that I have pending to push that keep a link cache updated only on link events from netlink, which will very significantly reduce the cost of instantiating netinfo objects. (The patches are follow-ups to http://gerrit.ovirt.org/#/c/21430/ .)

I'll consider this bug solved once we use the device cache for sampling and for netinfo/caps.

Comment 20 Itamar Heim 2014-01-12 08:41:31 UTC
setting target release to current version for consideration and review. please do not push non-RFE bugs to an undefined target release to make sure bugs are reviewed for relevancy, fix, closure, etc.

Comment 21 Sandro Bonazzola 2014-03-04 09:24:35 UTC
This is an automated message.
Re-targeting all non-blocker bugs still open on 3.4.0 to 3.4.1.

