Bug 1692694 - log throttling with read_lines_limit setting doesn't take effect [NEEDINFO]
Summary: log throttling with read_lines_limit setting doesn't take effect
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-26 09:16 UTC by Alberto Gonzalez de Dios
Modified: 2019-03-29 15:22 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-29 15:22:47 UTC
Target Upstream Version:
rmeggins: needinfo? (algonzal)
jcantril: needinfo? (algonzal)



Description Alberto Gonzalez de Dios 2019-03-26 09:16:28 UTC
Description of problem:
Log throttling with the read_lines_limit setting in fluentd doesn't take effect. The rate limit is configured to 10 for ".operations" (the default, openshift, and openshift-infra projects); nonetheless, fluentd is forwarding thousands of Hawkular pod log lines per second.

throttle-config.yaml:
.operations:
  read_lines_limit: 10

sh-4.2# cat /etc/fluent/configs.d/dynamic/input-docker-.operations-20190325.conf
<source>
  @type tail
  @id .operations-input
  @label @INGRESS
  path /var/log/containers/*_default_*.log,/var/log/containers/*_openshift_*.log,/var/log/containers/*_openshift-infra_*.log,/var/log/containers/*_kube-system_*.log
  pos_file /var/log/es-container-openshift-operations.log.pos
  read_lines_limit 10
  time_format %Y-%m-%dT%H:%M:%S.%N%Z
  tag kubernetes.*
  format json
  keep_time_key true
  read_from_head "true"
</source>
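
For context, fluentd is expected to render the read_lines_limit above from the throttle-config.yaml key of the logging-fluentd ConfigMap referenced in the steps below. A minimal sketch of that key, assuming the standard ConfigMap layout (other data keys omitted):

# sketch of the relevant logging-fluentd ConfigMap data key (other keys omitted)
apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-fluentd
data:
  throttle-config.yaml: |
    .operations:
      read_lines_limit: 10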

Version-Release number of selected component (if applicable):
OCP 3.11

How reproducible:
Change read_lines_limit and restart the fluentd daemonset to make it take effect.

Steps to Reproduce:
1. Edit the logging-fluentd configmap and set read_lines_limit to 10 for .operations:
$ oc edit configmap logging-fluentd -o yaml

2. Delete all fluentd pods
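
For example (the component=fluentd selector is shown later in this bug; the openshift-logging namespace is an assumption):

# delete the fluentd pods so the daemonset recreates them with the new config
$ oc delete pods -l component=fluentd -n openshift-logging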

3. In a fluentd pod, check /etc/fluent/configs.d/user/throttle-config.yaml and make sure the throttling settings exist:

throttle-config.yaml:
.operations:
  read_lines_limit: 10

sh-4.2# cat /etc/fluent/configs.d/dynamic/input-docker-.operations-20190325.conf
<source>
  @type tail
  @id .operations-input
  @label @INGRESS
  path /var/log/containers/*_default_*.log,/var/log/containers/*_openshift_*.log,/var/log/containers/*_openshift-infra_*.log,/var/log/containers/*_kube-system_*.log
  pos_file /var/log/es-container-openshift-operations.log.pos
  read_lines_limit 10
  time_format %Y-%m-%dT%H:%M:%S.%N%Z
  tag kubernetes.*
  format json
  keep_time_key true
  read_from_head "true"
</source>
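
The same files can also be checked from outside the pod (the pod name is a placeholder):

$ oc exec $name_of_a_fluentd_pod -- cat /etc/fluent/configs.d/user/throttle-config.yaml
$ oc exec $name_of_a_fluentd_pod -- cat /etc/fluent/configs.d/dynamic/input-docker-.operations-20190325.conf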


Actual results:
The log throttling settings do not take effect; fluentd is forwarding thousands of Hawkular log lines per second.


Expected results:
A forwarding limit of 10 logs per second.
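
A rough way to observe the actual read rate from inside a fluentd pod is to compare the pos file over an interval (the pos file records byte offsets, so this only approximates a line rate):

$ oc exec $name_of_a_fluentd_pod -- cat /var/log/es-container-openshift-operations.log.pos
$ sleep 10
$ oc exec $name_of_a_fluentd_pod -- cat /var/log/es-container-openshift-operations.log.pos
# compare the hex offsets per file to see how much was read in 10 seconds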

Comment 3 Jeff Cantrill 2019-03-26 12:51:57 UTC
In what namespace is hawkular installed?

Comment 4 Alberto Gonzalez de Dios 2019-03-26 13:26:47 UTC
In the default one, openshift-infra

Comment 5 Rich Megginson 2019-03-26 13:51:41 UTC
(In reply to Alberto Gonzalez de Dios from comment #4)
> In the default one, openshift-infra

Please provide your fluentd configuration files.

oc get pods -l component=fluentd

then

oc exec $name_of_a_fluentd_pod -- ls -al /etc/fluent/configs.d/dynamic

then for each file in /etc/fluent/configs.d/dynamic

oc exec $name_of_a_fluentd_pod -- cat /etc/fluent/configs.d/dynamic/$name_of_file
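
To gather all of them at once, a small loop over the same exec/cat pattern might look like this (pod name placeholder as above):

for f in $(oc exec $name_of_a_fluentd_pod -- ls /etc/fluent/configs.d/dynamic); do
  echo "== $f =="
  oc exec $name_of_a_fluentd_pod -- cat /etc/fluent/configs.d/dynamic/$f
done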

