Bug 1066585 - no differences in cgroup settings in medium and large gears
Summary: no differences in cgroup settings in medium and large gears
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jhon Honce
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-18 16:42 UTC by Aleksandar Kostadinov
Modified: 2015-05-14 23:34 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-19 02:19:39 UTC
Target Upstream Version:


Attachments: none

Description Aleksandar Kostadinov 2014-02-18 16:42:57 UTC
Excuse me if I set the wrong component or I'm missing something obvious. Testing with devenv_4390, I created medium and large gear instances. `cgsnapshot -s` on the instance shows no meaningful difference between the gear sizes:

group openshift/53038b5f6971378aa3000238 {
	perm {
		admin {
			uid = root;
			gid = root;
		}
		task {
			uid = 53038b5f6971378aa3000238;
			gid = 53038b5f6971378aa3000238;
		}
	}
	cpu {
		cpu.rt_period_us="1000000";
		cpu.rt_runtime_us="0";
		cpu.cfs_period_us="100000";
		cpu.cfs_quota_us="100000";
		cpu.shares="128";
	}
	cpuacct {
		cpuacct.usage="1952883553";
	}
	memory {
		memory.memsw.failcnt="0";
		memory.memsw.limit_in_bytes="641728512";
		memory.memsw.max_usage_in_bytes="14315520";
		memory.move_charge_at_immigrate="1";
		memory.swappiness="60";
		memory.use_hierarchy="0";
		memory.failcnt="0";
		memory.soft_limit_in_bytes="9223372036854775807";
		memory.limit_in_bytes="536870912";
		memory.max_usage_in_bytes="14315520";
	}
	freezer {
		freezer.state="THAWED";
	}
	net_cls {
		net_cls.classid="66537";
	}
}

group openshift/530388e66971378aa3000212 {
	perm {
		admin {
			uid = root;
			gid = root;
		}
		task {
			uid = 530388e66971378aa3000212;
			gid = 530388e66971378aa3000212;
		}
	}
	cpu {
		cpu.rt_period_us="1000000";
		cpu.rt_runtime_us="0";
		cpu.cfs_period_us="100000";
		cpu.cfs_quota_us="100000";
		cpu.shares="128";
	}
	cpuacct {
		cpuacct.usage="2673417291";
	}
	memory {
		memory.memsw.failcnt="0";
		memory.memsw.limit_in_bytes="641728512";
		memory.memsw.max_usage_in_bytes="14336000";
		memory.move_charge_at_immigrate="1";
		memory.swappiness="60";
		memory.use_hierarchy="0";
		memory.failcnt="0";
		memory.soft_limit_in_bytes="9223372036854775807";
		memory.limit_in_bytes="536870912";
		memory.max_usage_in_bytes="14336000";
	}
	freezer {
		freezer.state="THAWED";
	}
	net_cls {
		net_cls.classid="66536";
	}
}
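
Both groups above report the same memory.limit_in_bytes (536870912, i.e. 512 MiB, which matches the usual small-gear default). For a quicker check than eyeballing full cgsnapshot output, the limits can be read straight from the cgroup filesystem. A minimal sketch, assuming the default RHEL 6 cgroup mount under /cgroup (adjust the path if cgroups are mounted elsewhere):

# Read the memory limit of each gear directly from the memory cgroup
# (gear UUIDs taken from the cgsnapshot output above)
for uuid in 53038b5f6971378aa3000238 530388e66971378aa3000212; do
    printf '%s: ' "$uuid"
    cat "/cgroup/memory/openshift/$uuid/memory.limit_in_bytes"
done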

Comment 1 Jhon Honce 2014-02-18 22:34:23 UTC
Did you create the gears on two different nodes and change the node_profile in node/conf/resource_limits.conf?
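
For context, each node serves a single gear profile, selected by node_profile in resource_limits.conf, so two gears landing on the same node get identical cgroup limits regardless of the size requested. A hedged sketch of the check (on an installed node the file typically lives at /etc/openshift/resource_limits.conf; the path and the example value are assumptions):

# On each node, confirm which gear profile it is configured to serve
grep '^node_profile' /etc/openshift/resource_limits.conf
# example output: node_profile=medium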

Comment 2 Meng Bo 2014-02-19 02:19:39 UTC
This should be a misconfiguration.

Following are my results; the medium gear and the large gear have different memory limits:

group openshift/530413162d3ad3bec0000056 {
	perm {
		admin {
			uid = root;
			gid = root;
		}
		task {
			uid = 530413162d3ad3bec0000056;
			gid = 530413162d3ad3bec0000056;
		}
	}
	cpu {
		cpu.rt_period_us="1000000";
		cpu.rt_runtime_us="0";
		cpu.cfs_period_us="100000";
		cpu.cfs_quota_us="100000";
		cpu.shares="128";
	}
	cpuacct {
		cpuacct.usage="2376836163";
	}
	memory {
		memory.memsw.failcnt="0";
		memory.memsw.limit_in_bytes="2252341248";
		memory.memsw.max_usage_in_bytes="24637440";
		memory.move_charge_at_immigrate="1";
		memory.swappiness="60";
		memory.use_hierarchy="0";
		memory.failcnt="0";
		memory.soft_limit_in_bytes="9223372036854775807";
		memory.limit_in_bytes="2147483648";
		memory.max_usage_in_bytes="24637440";
	}
	freezer {
		freezer.state="THAWED";
	}
	net_cls {
		net_cls.classid="66537";
	}
}

group openshift/530410722d3ad3bec0000002 {
	perm {
		admin {
			uid = root;
			gid = root;
		}
		task {
			uid = 530410722d3ad3bec0000002;
			gid = 530410722d3ad3bec0000002;
		}
	}
	cpu {
		cpu.rt_period_us="1000000";
		cpu.rt_runtime_us="0";
		cpu.cfs_period_us="100000";
		cpu.cfs_quota_us="100000";
		cpu.shares="128";
	}
	cpuacct {
		cpuacct.usage="3784336641";
	}
	memory {
		memory.memsw.failcnt="0";
		memory.memsw.limit_in_bytes="1178599424";
		memory.memsw.max_usage_in_bytes="41746432";
		memory.move_charge_at_immigrate="1";
		memory.swappiness="60";
		memory.use_hierarchy="0";
		memory.failcnt="0";
		memory.soft_limit_in_bytes="9223372036854775807";
		memory.limit_in_bytes="1073741824";
		memory.max_usage_in_bytes="41746432";
	}
	freezer {
		freezer.state="THAWED";
	}
	net_cls {
		net_cls.classid="66536";
	}
}
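
The two limits above match the expected gear sizes; a quick arithmetic check (values copied from the memory.limit_in_bytes lines in the output):

# Convert the observed limits to GiB
echo $((2147483648 / 1024 / 1024 / 1024))   # -> 2 (2 GiB, large gear)
echo $((1073741824 / 1024 / 1024 / 1024))   # -> 1 (1 GiB, medium gear)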

