
Bug 1517997

Summary: Better plan for limits.conf file
Product: Red Hat Satellite 6
Component: Docs Install Guide
Version: 6.2.12
Status: NEW
Severity: medium
Priority: unspecified
Reporter: mharris <mharris>
Assignee: satellite-doc-list
QA Contact: satellite-doc-list
Docs Contact:
CC: ktordeur, mharris, swadeley
Target Milestone: Unspecified
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description mharris 2017-11-27 21:14:03 UTC
Document URL:

Section Number and Name: 

Appendix A. Large Deployment Considerations

Describe the issue: 

The document isn't entirely clear here. The first topic covered in this appendix is "Increasing the Maximum Number of File Descriptors for Apache". The document doesn't make it clear that the file descriptor limit it recommends for large deployments (65536) isn't actually sufficient for most large deployments.

I know that some older versions of the Satellite documentation, probably in the 6.0.x timeframe, had separate sections depending on the number of hosts you planned to attach to the Satellite Server. For instance, it provided advice for 800+ servers, 800-2000 servers, and 3000+ servers attached.

The problem is that the documentation makes it seem like 65536 is the maximum value accepted here, and that this value will cover large deployments. I'm not sure exactly how many hosts 65536 file descriptors would comfortably cover, but it is probably around 800.

For instance, I have a client with 2300 systems connecting, and 65536 doesn't come close to meeting httpd's file descriptor needs.

I haven't done the exact calculations on the number of file descriptors required, but I know it is far more than 65536, and probably much closer to 1,000,000 for 2300 hosts.
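As a quick sanity check (a sketch, not from the Satellite docs; it assumes a Linux host, and on a Satellite server you would substitute an httpd PID for the shell's own PID), the effective limit and current file descriptor usage of a process can be read from /proc:

```shell
# Use this shell's own PID so the snippet is self-contained;
# on a Satellite server you would use e.g. pid=$(pgrep -o httpd)
pid=$$

# Soft and hard "Max open files" limits in effect for the process
awk '/Max open files/ {print "soft:", $4, "hard:", $5}' "/proc/${pid}/limits"

# Number of file descriptors the process currently has open
ls "/proc/${pid}/fd" | wc -l
```

Comparing the second number against the soft limit across all httpd workers would show how close a given deployment is to exhaustion.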

Suggestions for improvement: 

Personally, I liked the way this was done in the 6.0.x documentation, which provided an easier-to-follow list, as mentioned above. Something like: for 800+ hosts, do this; for 2000+ hosts, do this; for 3000+ hosts, do this.
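To illustrate what one entry in such a tiered list might look like (a sketch only; the 65536 figure and the drop-in path follow the pattern the current appendix already uses, and the per-tier values would need to come from performance testing): on RHEL 7, limits for a systemd-managed service such as httpd are set with a drop-in unit file rather than /etc/security/limits.conf:

```
# /etc/systemd/system/httpd.service.d/limits.conf
# Example for the smallest tier; larger tiers would raise LimitNOFILE
[Service]
LimitNOFILE=65536
```

followed by `systemctl daemon-reload` and `systemctl restart httpd`. A tiered table could then simply vary the LimitNOFILE value per host-count band.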

Additional information:

Comment 1 Stephen Wadeley 2018-05-07 08:08:24 UTC
Hello Michael

Thank you for raising this bug.

I see the section in the 6.3 guide[1] is unchanged except for the removal of the RHEL 6 commands.

I see the new Tuning Guide has a section "5.13.1. Max open files limit"[2]

I see the table you mention in the 6.1 guide, section "1.4.7. Considerations for Large Deployments"[3]. Do you know if the information in that table is still valid for 6.3 and 6.4? Alternatively, we can ask one of the SMEs who did performance testing for the Performance Brief[4] and the Tuning Guide.

We could add that table back in the Installation Guide appendix and then review the whole section for a possible move to the Tuning Guide. That is a Content Strategy question, so I will ask Steve Bream for an opinion on that.

Thank you




[4] 2016 - Performance & Scale Tuning of Satellite 6.2 and Capsules - Red Hat Customer Portal