Bug 991271 - HAProxy/WSGI (python-2.7) Broken Pipes and scaling issues
Summary: HAProxy/WSGI (python-2.7) Broken Pipes and scaling issues
Keywords:
Status: CLOSED DUPLICATE of bug 912605
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 1.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Mrunal Patel
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-02 03:55 UTC by Arun Babu Neelicattu
Modified: 2015-02-15 21:52 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-02 06:09:06 UTC
Target Upstream Version:



Description Arun Babu Neelicattu 2013-08-02 03:55:15 UTC
This problem occurs when deploying a WSGI app on medium-sized gears. Part of it looks like a regression of [bug 923611].

A sample of the relevant log from appserver.log is available at https://github.com/victims/victims-production/issues/1

The application was created with the following command:
> rhc app create -l ${RHC_LOGIN} ${APP_NAME} mongodb-2.2 python-2.7 --scaling --gear-size medium --from-code git://github.com/victims/victims-server-openshift.git

A test instance is deployed at [1].

Once this is done, scaling the app leads to the spawned gears being down and never coming online. The command used is:
> rhc scale -l ${RHC_LOGIN} ${APP_NAME} -c python-2.7 --min 3
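
A way to confirm the state of the spawned gears, assuming the rhc client from this period accepts the same flags as the reload command used elsewhere in this report, is:
> rhc cartridge status -l ${RHC_LOGIN} -c python-2.7 -a ${APP_NAME}  # flag layout assumed to mirror "rhc cartridge reload"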

I suspect this has something to do with the HAProxy issue: once the workaround (below) is applied, I can still see the same error in the log every time a gear goes down.

Workaround:
The initial cluttering of the logs (caused by heartbeat checks) can be worked around by:
a. Manually adding the fix at [2] to ~/haproxy/conf/haproxy.cfg and ~/haproxy/versions/1.4/configuration/haproxy.cfg
b. Restarting the cartridge:
> rhc cartridge reload -l ${RHC_LOGIN} -c haproxy-1.4 -a ${APP_NAME}
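
For reference, a minimal sketch of applying the workaround by hand, assuming an rhc client with the ssh subcommand (the editor is a placeholder; the actual change is the one-line fix from [2]):
> rhc ssh -l ${RHC_LOGIN} ${APP_NAME}
> vi ~/haproxy/conf/haproxy.cfg ~/haproxy/versions/1.4/configuration/haproxy.cfg  # apply the change from [2]
> exit
> rhc cartridge reload -l ${RHC_LOGIN} -c haproxy-1.4 -a ${APP_NAME}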

[1] http://stage-victims.rhcloud.com/
[2] https://github.com/openshift/origin-server/commit/7a338aeb518966d17193104a3d1b6acda5c101a1

Comment 1 Xiaoli Tian 2013-08-02 06:09:06 UTC
Can you try whether a git push works around your issue? According to comment 5 in https://bugzilla.redhat.com/show_bug.cgi?id=912605#c5, this will be fixed in the SCL version of the python-2.7 cartridge later.
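
If there are no local changes to push, an empty commit is a generic git way to force a push-triggered rebuild (not specific to OpenShift):
> git commit --allow-empty -m "Trigger rebuild"  # --allow-empty records a commit with no changes, so the push still fires the build hooks
> git push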

*** This bug has been marked as a duplicate of bug 912605 ***

Comment 2 Arun Babu Neelicattu 2013-08-02 07:24:40 UTC
Agreed this is a duplicate.

However, note that applying the fix mentioned in bug 912605 (as noted in comment 0 and bug 912605 comment 5) only prevents the heartbeat checks from logging the error. The errors are still logged at longer intervals; I am not sure if the fix is incomplete or if this is expected.
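
A rough way to see how often the errors are still being logged is to inspect the timestamps of recent occurrences on the gear (the log path and the exact "Broken pipe" wording are assumptions based on the log sample linked in comment 0):
> grep 'Broken pipe' ~/python/logs/appserver.log | tail -n 5  # log path is an assumption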

