
Bug 113403

Summary: iostat reports abnormally large avgqu-sz
Product: [Retired] Red Hat Linux
Reporter: Jim Laverty <jim.laverty>
Component: kernel
Assignee: Arjan van de Ven <arjanv>
Status: CLOSED WONTFIX
QA Contact: Brian Brock <bbrock>
Severity: low
Priority: low
Version: 9
CC: jim.laverty, nphilipp, ppokorny
Target Milestone: ---
Target Release: ---
Hardware: i386
OS: Linux
Doc Type: Bug Fix
Last Closed: 2004-09-30 15:41:46 UTC
Attachments:
- Patch to fix negative stats in /proc/partitions
- patch to solve incorrect data reported from iostat under heavy merging

Description Jim Laverty 2004-01-13 16:37:18 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.6b)
Gecko/20031208

Description of problem:
The iostat utility reports an abnormally large avgqu-sz on IDE-drive-based
systems (refer to old Bugzilla bug ID 78749).

This issue still exists in Red Hat 9 using sysstat-4.0.7-3 (kernel
2.4.20-20.9smp). After further testing on three (3) IDE-based systems and
four (4) different SCSI-based systems, the problem seems to appear only on
IDE-drive-based systems.

The crazy inode sizes in sar ('sar -v'), which are referred to in bug ID
78749, seem to have stopped with this release, however.



Version-Release number of selected component (if applicable):
sysstat-4.0.7-3

How reproducible:
Always

Steps to Reproduce:
1. Run 'iostat -x 1 100'

Actual Results:
Very large avgqu-sz (42949652.96) on a mostly idle server:

avg-cpu:  %user   %nice    %sys   %idle
           0.50    0.00    0.25   99.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00  17.00  0.00  5.00    0.00  176.00     0.00    88.00
   35.20 42949652.96    0.00 200.00 100.00


Expected Results:  A realistic avgqu-sz for performance metrics.

Additional info:

iostat executed on an IDE-based system shows:
--------------------------------------------


avg-cpu:  %user   %nice    %sys   %idle
           0.50    0.00    0.25   99.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00  17.00  0.00  5.00    0.00  176.00     0.00    88.00
   35.20 42949652.96    0.00 200.00 100.00
/dev/hda1    0.00   6.00  0.00  2.00    0.00   64.00     0.00    32.00
   32.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00  11.00  0.00  3.00    0.00  112.00     0.00    56.00
   37.33     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           0.25    0.00    0.25   99.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

iostat executed on a SCSI-based system shows:
----------------------------------------------

avg-cpu:  %user   %nice    %sys   %idle
           0.00    0.00    0.00  100.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           0.00    0.00    0.00  100.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

Comment 1 Jim Laverty 2004-01-13 18:48:23 UTC
Correction for the SCSI stats (which had the IDE results posted in them):

Linux 2.4.20-20.9smp (stout)   01/13/2004

avg-cpu:  %user   %nice    %sys   %idle
           1.72    0.00    0.72   97.56

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/sda     0.09  57.39  0.04 12.54    0.81  559.53     0.40   279.76
   44.55     0.52    4.14   0.39   0.49
/dev/sda1    0.01   0.87  0.01  0.29    0.13    9.33     0.07     4.66
   31.48     0.03   10.82  10.75   0.32
/dev/sda2    0.02   0.21  0.03  0.12    0.33    2.67     0.17     1.34
   20.22     0.03   21.69  21.42   0.32
/dev/sda3    0.07   0.32  0.01  0.39    0.31    5.71     0.15     2.85
   15.20     0.04    9.53   9.19   0.36
/dev/sda5    0.00   0.01  0.00  0.00    0.00    0.06     0.00     0.03
   66.66     0.00    1.31   0.90   0.00
/dev/sda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
   14.49     0.00  147.03 147.03   0.00
/dev/sda7    0.00  55.98  0.00 11.73    0.03  541.76     0.02   270.88
   46.18     0.42    3.57   0.33   0.38

avg-cpu:  %user   %nice    %sys   %idle
           5.50    0.00    2.25   92.25

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/sda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/sda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00


Comment 2 Nils Philippsen 2004-01-14 12:35:07 UTC
Is there only one bogus value in the sar output, only for the first
sar run after (re)booting?

Comment 3 Jim Laverty 2004-01-29 15:34:07 UTC
The bogus output is only showing up in 'iostat'.  It does not show up
in 'sar' with this newer kernel.  Prior to 2.4.20 it showed up in
sar as well.

[root@dontbuyscostock root]# iostat -x 1  100
Linux 2.4.20-20.9smp (dontbuyscostock)       01/29/2004

avg-cpu:  %user   %nice    %sys   %idle
          10.54    6.50    0.61   82.36

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     3.10  75.87  0.81 38.22   31.22  912.96    15.61   456.48
   24.19     0.03    0.19   0.81   3.15
/dev/hda1    0.54   0.28  0.08  0.16    4.93    3.54     2.47     1.77
   34.76     0.01    5.88   1.84   0.04
/dev/hda2    1.80   0.43  0.45  0.16   17.99    4.71     8.99     2.35
   37.60     0.09   15.42   1.94   0.12
/dev/hda3    0.74  74.85  0.16 37.63    7.28  900.02     3.64   450.01
   24.01     0.27    0.71   0.54   2.02
/dev/hda5    0.01   0.24  0.00  0.02    0.13    2.08     0.06     1.04
   94.06     0.08  351.01   4.18   0.01
/dev/hda6    0.00   0.07  0.11  0.25    0.90    2.61     0.45     1.31
    9.63     0.28   76.92  30.07   1.10
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
   12.47     0.00  267.50  18.23   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.25    0.00    0.25   98.50

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.75    0.00    0.25   98.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice    %sys   %idle
           1.25    0.00    0.00   98.75

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s
avgrq-sz avgqu-sz   await  svctm  %util
/dev/hda     0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00 42949652.96    0.00   0.00 100.00
/dev/hda1    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda2    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda3    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda5    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda6    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00
/dev/hda7    0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00
    0.00     0.00    0.00   0.00   0.00

Comment 4 Jim Laverty 2004-01-29 16:17:33 UTC
The 2.4.20-28.9smp kernel produces the same results, using the
sysstat-4.0.7-3 rpm.

Comment 5 Philip Pokorny 2004-02-17 01:05:13 UTC
If you check, you will probably find negative values in /proc/partitions.

Zlatko Calusic at http://linux.inet.hr/ reports that a diskstats patch
from Rick Lindsley (http://linux.inet.hr/diskstats-2.4.patch) fixes this.
I'll copy that small patch here.
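As a side note, the magnitude of the bogus figure is consistent with exactly this: a small negative counter read back as an unsigned 32-bit value and scaled down by HZ=100. A minimal Python sketch (the -2000 jiffies delta is a made-up illustration, not a value taken from this bug):

```python
# Hypothetical illustration: if a per-disk queue-time counter delta goes
# slightly negative (here, -2000 jiffies) and is reinterpreted as an
# unsigned 32-bit value, dividing by HZ=100 to convert jiffies to seconds
# reproduces the bogus avgqu-sz seen in the reports above.
delta = -2000                  # made-up negative counter delta, in jiffies
as_u32 = delta & 0xFFFFFFFF    # reinterpret as unsigned 32-bit
avgqu_sz = as_u32 / 100        # jiffies -> seconds at HZ=100
print(avgqu_sz)                # 42949652.96
```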

Comment 6 Philip Pokorny 2004-02-17 01:06:34 UTC
Created attachment 97719 [details]
Patch to fix negative stats in /proc/partitions

Here is the diskstats patch from http://linux.inet.hr/diskstats-2.4.patch

Comment 7 Jeremy McNicoll 2004-02-23 17:33:23 UTC
I have created a patch which includes Rick's work and adds a fix
for incorrect data reporting from iostat.  This fix addresses the
problem of incorrect values reported under high amounts of merges.
It now adheres to Little's Law
(http://www.mcnicoll.ca/iostat/theory.html).

The patch is here:
http://www.mcnicoll.ca/iostat/patch_diskstats_24_23 .  There is a
series of tests and changes I did in order to confirm the validity of
the numbers (http://www.mcnicoll.ca/iostat/results.html).
Everything seems correct after a large amount of rigorous testing.
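For reference, Little's Law relates the iostat columns involved: the mean queue length L equals the arrival rate λ times the mean wait W. A rough sanity check one could run against any iostat interval (the helper name and sample numbers below are made-up for illustration, not from this bug's output):

```python
# Little's Law: mean queue length L = arrival rate (lambda) * mean wait (W).
# Mapped onto iostat columns: lambda ~ (r/s + w/s) requests per second,
# W ~ await in milliseconds, so L should approximate avgqu-sz.
def expected_avgqu_sz(r_per_s, w_per_s, await_ms):
    """Queue length implied by Little's Law (hypothetical helper)."""
    return (r_per_s + w_per_s) * (await_ms / 1000.0)

# Made-up sample: 50 writes/s, each waiting ~4 ms on average, implies an
# average queue length of about 0.2 -- nowhere near 42949652.96.
print(expected_avgqu_sz(0.0, 50.0, 4.0))
```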



Comment 8 Jeremy McNicoll 2004-02-23 17:38:04 UTC
Created attachment 97954 [details]
patch to solve incorrect data reported from iostat under heavy merging

Comment 9 Nils Philippsen 2004-02-23 17:40:45 UTC
Apparently this isn't a sysstat bug then -> transferring to the kernel
component and reassigning.

Comment 10 Bugzilla owner 2004-09-30 15:41:46 UTC
Thanks for the bug report. However, Red Hat no longer maintains this version of
the product. Please upgrade to the latest version and open a new bug if the problem
persists.

The Fedora Legacy project (http://fedoralegacy.org/) maintains some older releases, 
and if you believe this bug is interesting to them, please report the problem in
the bug tracker at: http://bugzilla.fedora.us/