Bug 1600557 - Error reported from kuberuntime_logs.go: Failed with err maximum write when writing log for log file
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.6.z
Assignee: Urvashi Mohnani
QA Contact: weiwei jiang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-12 13:39 UTC by Victor Hernando
Modified: 2018-11-29 21:28 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-29 21:28:48 UTC
Target Upstream Version:



Comment 2 Urvashi Mohnani 2018-08-10 18:08:31 UTC
@Victor, can you give us steps to reproduce this? You mention that it is not reproducible at all, and if that is the case we don't have a way of debugging the issue.

Comment 3 Victor Hernando 2018-08-16 09:50:30 UTC
(In reply to Urvashi Mohnani from comment #2)
> @Victor, can you give us steps to reproduce this? You mention that it is
> not reproducible at all, and if that is the case we don't have a way of
> debugging the issue.

@Urvashi, I'm not able to reproduce the issue. This error message seems to arise randomly (or I haven't found the race condition that triggers it), and going through the code it seems to be related to some buffer limit being exceeded at some point, since errMaximumWrite is returned. As per the code comments, "If there are no more bytes left then return errMaximumWrite". Any clue as to why the customer might be facing this error?

Comment 4 Urvashi Mohnani 2018-08-30 19:02:30 UTC
@Victor, yeah, according to the code, if the container log size exceeds the maximum allowed in the buffer, it will throw this error. However, there is a check before it for this case, and if it happens the log is broken down into smaller writes. So I am not sure why the customer is seeing this. Is this still happening?

Comment 5 Victor Hernando 2018-08-31 07:33:48 UTC
@Urvashi, yes, it is still happening. It is something we cannot forecast, and it occurs intermittently across different projects.
About the maximum allowed in the buffer: that would mean it is related either to a high frequency of log writes (too many log lines written in a short time) or to a single very large message being written to the buffer. I think the second possibility can be discarded, since this error is reported with messages of varying lengths.
Any advice on how to diagnose this issue?
Thanks in advance!

