Bug 453392 - virt-manager.py is taking all the memory!
Summary: virt-manager.py is taking all the memory!
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: virt-manager
Version: 5.2
Hardware: i386
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Cole Robinson
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-06-30 10:52 UTC by jean-sebastien Hubert
Modified: 2009-12-14 21:18 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-01-16 14:41:20 UTC
Target Upstream Version:


Attachments
Add support for bonding and vlan devices (also plugs memory leak) (deleted)
2008-09-02 14:08 UTC, Cole Robinson

Description jean-sebastien Hubert 2008-06-30 10:52:17 UTC
Description of problem:
Run virt-manager.py for a day, or even a couple of hours (connected to a host):
it takes more and more memory and may overload the system.

Version-Release number of selected component (if applicable):
virt-manager-0.5.3-8.el5

How reproducible:
Always

Steps to Reproduce:
1. Launch virt-manager
2. Connect it to a host
3. Let it run for a couple of hours
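To confirm the growth in step 3 without waiting passively, the resident set size of the virt-manager process can be sampled from /proc. This helper is an illustrative, Linux-only sketch (not part of virt-manager itself):

```python
import os

def rss_kib(pid):
    """Return the resident set size of `pid` in KiB, parsed from
    /proc/<pid>/status (Linux only)."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # The line looks like: "VmRSS:     557352 kB"
                return int(line.split()[1])
    raise ValueError("no VmRSS entry for pid %d" % pid)
```

Sampling this once a minute for the virt-manager PID and comparing successive values makes a steady leak obvious long before the host starts swapping.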

  
Actual results:
[root@xenhost1 ~]# date
lun jun 30 14:22:19 RET 2008
[root@xenhost1 ~]# ps -auxw --sort=rss | grep virt-manager
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.7/FAQ
root     10770  0.0  0.1   3952   732 pts/2    S+   14:22   0:00 grep virt-manager
root      3493  1.1 77.0 668692 557352 ?       Ss   Jun29  18:36 python
/usr/share/virt-manager/virt-manager.py
It may crash the host server
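As an aside, the procps warning in the output above comes from mixing the BSD-style option letters (`aux`) with a leading dash; dropping the dash avoids it. A sketch, assuming a procps-based `ps`:

```shell
# BSD-style "aux" takes no leading dash; the GNU long option --sort still applies.
# Sort by resident set size, largest first, and show the top consumers:
ps aux --sort=-rss | head -n 5
```

Filtering with `grep '[v]irt-manager'` instead of `head` gives the same per-process line; the bracket trick keeps the grep process itself out of the results.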

Expected results:
virt-manager should not leak memory.

Additional info:

Comment 1 Cole Robinson 2008-07-09 18:37:26 UTC
Hi, this is a previously reported issue but we are tracking progress in a
private bug. I think we have a working patch though, so I'll keep you informed.

Comment 2 Binbin Wang 2008-08-04 08:59:21 UTC
A customer in China also has the same problem!

The output of ps command.
root     16256  5.8 75.4 6768480 6042652 ?     Ss   Jul23 1017:25 python /usr/share/virt-manager/virt-manager.py

Is there any patch or workaround?

Comment 3 Jonathan Kamens 2008-08-07 02:07:32 UTC
Is this the bug that causes me to see ridiculous %CPU values from procps with fedora rawhide (procps-3.2.7-20.fc9.i386), or is that a different bug?

For example:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 3168 root      20   0 96852  20m 7748 S 424.4  1.1 363:21.90 Xorg              
 3769 jik       20   0 25976 8888 7532 S 284.5  0.4  37:36.81 multiload-apple   
 3830 jik       27   7 74204  30m  12m S 141.9  1.6  60:44.16 beagled           
 2804 haldaemo  20   0  7128 4600 3932 S 41.2  0.2   5:24.25 hald               
 4839 jik       20   0  2560 1092  828 R  1.3  0.1   0:00.10 top

Comment 4 Jonathan Kamens 2008-08-07 02:07:58 UTC
Oh, Damn, never mind, I put that comment in the wrong ticket.

Comment 5 Daniel Senie 2008-09-02 13:48:08 UTC
We see this as well when we leave Virtual Machine Manager open on a machine for an extended time. The machine runs out of memory, and problems start cropping up. Interestingly, it's another virtualization item that then complains via email (due to a cron job failing). Only the subject line of the email and the contents are included here:

Subject: Cron <root@briar04> python /usr/share/rhn/virtualization/poller.py

Traceback (most recent call last):
  File "/usr/share/rhn/virtualization/poller.py", line 213, in ?
    debug = options and options.debug)
  File "/usr/share/rhn/virtualization/poller_state_cache.py", line 50, in __init__
    self._load_state()
  File "/usr/share/rhn/virtualization/poller_state_cache.py", line 123, in _load_state
    except PickleError, pe:
NameError: global name 'PickleError' is not defined



I've had to reboot servers that have gotten into this state. Haven't figured out what services to kick to avoid reboot.
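The NameError in the traceback above is a separate bug in the RHN poller: the `except PickleError` clause references a name that was never imported. A minimal sketch of the pattern the cache loader appears to intend (a hypothetical reconstruction, not the actual RHN code):

```python
import pickle

def load_state(path):
    """Load a pickled state cache; fall back to an empty dict on failure.

    The original code raises NameError because it catches PickleError
    without importing it from the pickle module.
    """
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (OSError, EOFError, pickle.PickleError):
        # A missing, truncated, or corrupt cache file is treated as empty.
        return {}
```

With the exception properly imported, a corrupt cache degrades gracefully instead of crashing the cron job.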

Comment 6 Cole Robinson 2008-09-02 14:08:29 UTC
Created attachment 315545 [details]
Add support for bonding and vlan devices (also plugs memory leak)

The attached patch fixes the memory leak. It is being tracked by a private bug to add support for bonding and vlan devices for bridges, and also happens to fix the leak :)

This will be in 5.3, but here is the patch in the interim.

Comment 7 Cole Robinson 2008-09-16 23:51:33 UTC
FYI, this fix has been committed and built. I'm just going to move this bug to ASSIGNED and leave it open until 5.3 is out, at which point I'll close it. For anyone with the proper access, the private bug we are using to track this is 443604.

Comment 8 Cole Robinson 2008-09-16 23:52:18 UTC
Ah sorry, the actual bug is 443680.

Comment 9 Cole Robinson 2009-01-16 14:41:20 UTC
Okay, fix is built and pending release for 5.3, so I am closing this bug.

Comment 10 Dave Oksner 2009-07-30 23:58:20 UTC
So, am I missing something, or was this left out of RH EL 5.3?  I have rhn-virtualization-host-1.0.1-55 installed.  It appears that this is the latest version and that it is from January 2008, before this bug was opened.

And, we're seeing the exact same messages as Daniel Senie reported in comment #5, or I wouldn't have come looking for an answer. :-)

Comment 11 Chris Lalancette 2009-07-31 12:32:52 UTC
So, the patch in the private BZ was committed to RHEL-5.3, so this issue should be fixed.  What version of virt-manager do you have installed, exactly?

Cole, do you have anything else to add here?

Chris Lalancette

Comment 12 Cole Robinson 2009-07-31 13:46:00 UTC
The traceback in comment #5 looks like an RHN bug, so Dave (Comment #10) should file a bug with them. I can pretty much guarantee that the original bug (memory leak) was fixed in RHEL5.3.

Comment 13 Dave Oksner 2009-08-04 16:22:08 UTC
Okay, thanks.  I'll try to track down what went wrong and where.

