Bug 76797 - top idle time incorrect
Summary: top idle time incorrect
Keywords:
Status: CLOSED DUPLICATE of bug 71237
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: procps
Version: 7.2
Hardware: i686
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Alexander Larsson
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2002-10-26 22:02 UTC by Need Real Name
Modified: 2007-04-18 16:47 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2002-10-31 16:09:40 UTC


Attachments: none

Description Need Real Name 2002-10-26 22:02:10 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; T312461)

Description of problem:
Idle time incorrect in top (procps-2.0.7-11)


Version-Release number of selected component (if applicable):


How reproducible:
Sometimes

Steps to Reproduce:
1. Run top.
2. Wait less than 10 seconds on a machine with a load average over 1.
3. Watch the idle time alternate between 0% and 8xxxxx.x%.
	

Actual Results:  128 processes: 122 sleeping, 6 running, 0 zombie, 0 stopped
CPU states:  0.7% user,  2.1% system, 97.2% nice, 843804.7% idle
Mem:   255744K av,  250980K used,    4764K free,       0K shrd,   55260K buff
Swap:  554184K av,   26080K used,  528104K free                  108344K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND


Expected Results:  0% idle

Additional info:

Linux 2.4.18-17.7.xcustom (root@malacandra) 
procinfo -f seems to display correct information
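
For anyone puzzled by how a percentage can exceed 100% at all: top derives
these figures from the difference between successive reads of the "cpu" line
in /proc/stat, so a single corrupt jump in the kernel's idle counter between
two samples produces an enormous delta. Below is a rough sketch of that kind
of delta arithmetic; it is not the procps 2.0.7 code, and HZ=100, a single
CPU, and scaling by elapsed ticks are assumptions on my part.

/* Rough sketch, not the procps source: how a top-like tool turns two reads
 * of /proc/stat into CPU-state percentages.  Assumptions: the 2.4-era
 * four-field "cpu" line, HZ=100, a single CPU, and scaling each delta by
 * the elapsed ticks.  With that scaling, one bogus jump in the kernel's
 * idle counter between samples shows up directly as an idle percentage far
 * above 100%, which is the symptom reported here. */
#include <stdio.h>
#include <unistd.h>

static void read_cpu(unsigned long v[4])
{
    FILE *f = fopen("/proc/stat", "r");
    if (f) {
        fscanf(f, "cpu %lu %lu %lu %lu", &v[0], &v[1], &v[2], &v[3]);
        fclose(f);
    }
}

int main(void)
{
    static const char *name[] = { "user", "nice", "system", "idle" };
    const int interval = 5;        /* sampling delay in seconds, like top's default */
    const double hz = 100.0;       /* jiffies per second on 2.4/x86 */
    unsigned long before[4] = {0}, after[4] = {0};
    int i;

    read_cpu(before);
    sleep(interval);
    read_cpu(after);

    for (i = 0; i < 4; i++) {
        unsigned long delta = after[i] - before[i];   /* jiffies spent in this state */
        printf("%6s %10.1f%%\n", name[i], 100.0 * delta / (interval * hz));
    }
    return 0;
}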

Comment 1 Marco De la Cruz 2002-10-31 16:09:34 UTC
I am running Red Hat 7.3. This is my experience with this bug:

I've recently upgraded to kernel-2.4.18-17.7.x.athlon.rpm (from
kernel-2.4.18-10.athlon.rpm) and I've noticed a problem with the
CPU idle %. Here is an example of what happens when running
"top":



 10:37am  up 8 days, 22:02,  9 users,  load average: 1.00, 1.01, 1.00
89 processes: 81 sleeping, 7 running, 1 zombie, 0 stopped
CPU states:  0.1% user,  0.1% system, 99.8% nice, 857278.7% idle
Mem:   514156K av,  485724K used,   28432K free,  0K shrd,   57956K buff
Swap:  522072K av,   44048K used,  478024K free             288448K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
18840 mersenne  39  19 14996  14M   696 R N  99.9  2.9  7046m mprime 
14091 marco     15   0  1084 1084   848 R     0.1  0.2   0:00 top
    1 root      15   0   472  444   420 S     0.0  0.0   0:04 init
    2 root      15   0     0    0     0 SW    0.0  0.0   0:01 keventd
    3 root      15   0     0    0     0 SW    0.0  0.0   0:00 kapmd  
    
    
    
As you can see, the idle value is absurd (it blows up about one update
in five; the other updates are reasonable, e.g. 0.2% idle). When running
"top" the value jumps to about 800000%, roughly 20% of the time.
If I run "top -d1" so that the updates take place every second, the
value jumps to about 4200000%, again about 20% of the time.
Running "top -d2" makes it jump to around 2000000%. The idle value
behaves approximately as follows:

       4200000
idle = -------- %  once every five updates.
       interval
       
If I stop "mprime" so that the system load becomes negligible, the
idle value does not spike. If I run "mprime" or any other CPU-consuming
task (e.g. yes > /dev/null), the spikes commence. The niceness of
the process does not seem to matter.
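
Taking that relation at face value (my own arithmetic, not something
measured): if every spike comes from a single corrupt idle delta of roughly
constant size, then with HZ=100 that size works out to about 4,200,000
jiffies, and dividing it by the sampling interval reproduces the figures
above. A small check, with the 4,200,000-jiffy delta as an assumption
inferred from the -d1 number:

/* Check that one fixed bogus idle delta explains the 1/interval scaling.
 * Assumptions: HZ=100 and a corrupt delta of ~4,200,000 jiffies, inferred
 * from the "top -d1" spike of ~4200000%. */
#include <stdio.h>

int main(void)
{
    const double hz = 100.0;
    const double bogus_jiffies = 4200000.0;   /* hypothetical corrupt idle delta */
    int intervals[] = { 1, 2, 5 };            /* -d1, -d2, and top's default delay */
    int i;

    for (i = 0; i < 3; i++) {
        double pct = 100.0 * bogus_jiffies / (intervals[i] * hz);
        printf("top -d%d  ->  idle spike of about %.0f%%\n", intervals[i], pct);
    }
    return 0;
}

This prints roughly 4200000%, 2100000%, and 840000%, which is in line with
the spikes observed at each update interval.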

My hardware is an Athlon 2100+ running on an ASUS A7V333.

> uname -a
Linux reimeika.math.toronto.edu 2.4.18-17.7.x #1
Tue Oct 8 11:49:30 EDT 2002 i686 unknown

> top --version
top (procps version 2.0.7)

> rpm -q procps
procps-2.0.7-12


Comment 2 Alexander Larsson 2002-11-05 10:01:17 UTC

*** This bug has been marked as a duplicate of 71237 ***

