Bug 1065215 - Stressing bash leads to code corruption (possible Buffer overflow?)
Summary: Stressing bash leads to code corruption (possible Buffer overflow?)
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: bash
Version: 5.10
Hardware: x86_64
OS: Linux
Target Milestone: rc
Assignee: Ondrej Oprala
QA Contact: BaseOS QE - Apps
Depends On:
Reported: 2014-02-14 06:31 UTC by Bob Hepple
Modified: 2016-02-01 02:09 UTC (History)
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-06-02 13:19:00 UTC

Attachments (Terms of Use)

Description Bob Hepple 2014-02-14 06:31:39 UTC
Description of problem:

Putting bash under a moderate amount of environment variable pressure causes code corruption.

Version-Release number of selected component (if applicable):
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)

How reproducible:

Steps to Reproduce:
1. Put this into a script:
cat > stress-bash << 'EOF'
# (the snippet in the original report was truncated; the loop closures and
#  the foobar_* variable names are reconstructed from comment 4)
size=$1 number=$2
while (( $( echo "$text" | wc -c ) < size )); do
    text+=$( dmesg )
done
text=$( echo "$text" | head -c $size )
for (( N=0; N<number; N++ )); do
    export foobar_$N="$text"
    date || exit $?
done
EOF

2. Run with parameters:
bash ./stress-bash 7000 400

Actual results:
bash ./stress-bash 7000 400
./stress-bash: line 15: /bin/date: Argument list too long

Expected results:
bash ./stress-bash 6000 400
Fri Feb 14 06:27:32 UTC 2014

Additional info:
No matter what else happens, 'date' should run.

It looks as if, when bash is under moderate memory pressure from environment variables, something overflows and corrupts the code. If that is correct, then we may have a buffer-overflow exploit.

Comment 1 Bob Hepple 2014-02-16 09:23:28 UTC
strace and valgrind report nothing interesting.

NB I had to re-compile bash with "./configure --with-bash-malloc=no" otherwise brk() failed as if it ran out of heap.

Comment 2 Bob Hepple 2014-02-16 10:35:59 UTC
I've traced it down to the execve() call in execute_cmd.c:shell_execve() - it's running into the ARG_MAX limitation, defined in /usr/include/linux/limits.h as 

#define ARG_MAX       131072    /* # bytes of args + environ for exec() */
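The header constant and the kernel's real budget can disagree, so it is worth querying the effective value at runtime; both of these tools are standard (getconf is POSIX, --show-limits is GNU findutils):

```shell
# Query the exec-time limit instead of trusting limits.h:
getconf ARG_MAX                  # POSIX interface to the limit
xargs --show-limits </dev/null   # GNU findutils: the budget xargs assumes
```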

The mystery is why it doesn't fail with a slightly smaller number of variables, e.g. 300. That would still create an environment of 6000*400 bytes, also well over the 131072-byte limit.

Other Unixes don't have this arbitrary limit. Grrrr. Grumble.
Really, 131072 bytes on a 64-bit system? Come on!!

Oh well, one workaround for this is to 'declare' instead of 'export' the variables, but I would be interested in other thoughts and also why it doesn't fail earlier.
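The workaround above can be sketched like this (the variable names are illustrative, not from the report): only exported variables are copied into the child environment that execve() measures against ARG_MAX, so a plain assignment stays out of the budget.

```shell
payload=$(head -c 100000 /dev/zero | tr '\0' x)
export exported_copy="$payload"     # counts against the exec budget
shell_only="$payload"               # plain assignment: shell-local only
env | grep -c '^exported_copy='     # prints 1: in child environments
env | grep -c '^shell_only='        # prints 0: never reaches execve()
```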

Comment 3 Ondrej Oprala 2014-02-17 09:35:09 UTC
As for the seemingly arbitrary value of ARG_MAX: I've come across a thorough analysis of this issue, including a rationale for ARG_MAX's value and possible workarounds and/or fixes for the more sandbox-happy of Linux users. Please see it for more info.

Comment 4 Bob Hepple 2014-02-18 00:37:37 UTC
getconf ARG_MAX on RHEL5 reports 131072, yet I can use over 2.6 MB of environment before execve() bombs. Fedora 19 reports 2097152, and that does appear to be the limit there.

# (loop closures reconstructed; the snippet was truncated in the original)
size=$1 number=$2
while (( $( echo "$text" | wc -c ) < size )); do
    text+=$( dmesg )
done
text=$( echo "$text" | head -c $size )
for (( N=0; N<number; N++ )); do
    export foobar_$N="$text"
    date >/dev/null || exit $?
    echo "N=$N foobar_* takes up $(( N * size )); env|wc -c = $(env|wc -c); getconf = $(getconf ARG_MAX)"
done
echo success

RHEL5 result:
N=433 foobar_* takes up 2598000; env|wc -c = 2610328; getconf = 131072
N=434 foobar_* takes up 2604000; env|wc -c = 2616340; getconf = 131072
./stress-bash: line 11: /bin/date: Argument list too long

RHEL6 result:
N=432 foobar_* takes up 2592000; env|wc -c = 2605489; getconf = 2621440
N=433 foobar_* takes up 2598000; env|wc -c = 2611501; getconf = 2621440
./stress-bash: line 11: /bin/date: Argument list too long

fedora-19 result:
N=345 foobar_* takes up 2070000; env|wc -c = 2085475; getconf = 2097152
N=346 foobar_* takes up 2076000; env|wc -c = 2091487; getconf = 2097152
./stress-bash: line 11: /bin/date: Argument list too long

... so in practice the limit is about 2 MB.
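The ~2 MB figures above follow from a kernel change: since Linux 2.6.23 the exec budget is no longer a fixed 128 KB but roughly a quarter of the soft stack rlimit (with the old 131072 bytes as a floor), which is why the RHEL6 and Fedora boxes report ~2.5 MB and ~2 MB while RHEL5's getconf still says 131072. A quick check, assuming the soft stack limit is not "unlimited":

```shell
stack_kb=$(ulimit -s)                         # soft stack limit, in KB
echo "stack/4  = $(( stack_kb * 1024 / 4 ))"  # the kernel's rough budget
echo "ARG_MAX  = $(getconf ARG_MAX)"          # usually the same number
```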

Comment 5 RHEL Product and Program Management 2014-03-07 12:49:10 UTC
This bug/component is not included in scope for RHEL-5.11.0, which is the last RHEL5 minor release. This Bugzilla will soon be CLOSED as WONTFIX, at the end of the RHEL 5.11 development phase (Apr 22, 2014). Please contact your account manager or support representative in case you need to escalate this bug.

Comment 6 RHEL Product and Program Management 2014-06-02 13:19:00 UTC
Thank you for submitting this request for inclusion in Red Hat Enterprise Linux 5. We've carefully evaluated the request, but are unable to include it in the RHEL5 stream. If the issue is critical for your business, please provide additional business justification through the appropriate support channels.

Comment 7 Bob Hepple 2014-06-02 23:36:06 UTC
For posterity - this was 'my bad' - I was 'export'ing far too much and overflowing the system limit ARG_MAX in execve().

The solution in this case was to use 'export -n', which keeps the variables available in the shell without exporting them to child processes.
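A short illustration of that fix (the variable name is illustrative): `export -n` strips the export attribute, so the variable remains usable in the shell but is no longer copied into child environments and stops counting against the execve() budget.

```shell
export huge=$(head -c 100000 /dev/zero | tr '\0' x)
env | grep -c '^huge='    # prints 1: in the child environment
export -n huge
env | grep -c '^huge='    # prints 0: dropped from the exec budget
echo "${#huge}"           # prints 100000: still set in the shell
```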
