Memory Resource Controller

NOTE: The Memory Resource Controller has generically been referred to as the
memory controller in this document. Do not confuse the memory controller used
here with the memory controller that is used in hardware.

Salient features

a. Enable control of both RSS (mapped) and Page Cache (unmapped) pages
b. The infrastructure allows easy addition of other types of memory to control
c. Provides *zero overhead* for non memory controller users
d. Provides a double LRU: global memory pressure causes reclaim from the
   global LRU; a cgroup, on hitting a limit, reclaims from the per-cgroup
   LRU

NOTE: Swap Cache (unmapped) is not accounted now.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications. Memory-hungry
   applications can be isolated and limited to a smaller amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one, or use the controller just
   for fun (to learn and hack on the VM subsystem).

1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the res_counter. The res_counter
tracks the current memory usage and limit of the group of processes associated
with the controller. Each cgroup has a memory controller specific data
structure (mem_cgroup) associated with it.

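To make the counter's role concrete, here is a minimal, self-contained
userspace sketch of a res_counter-like structure. It only models the idea
described above (usage charged against a limit, with failed charges counted);
the field names, the byte-sized values and the absence of locking are
simplifications for illustration, not the kernel's actual res_counter
interface.

/*
 * Minimal userspace model of a res_counter-style charge counter.
 * Illustrative only -- not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

struct res_counter {
        unsigned long long usage;       /* current usage, in bytes        */
        unsigned long long limit;       /* configured limit, in bytes     */
        unsigned long long failcnt;     /* how many charges hit the limit */
};

/* Charge 'val' bytes; refuse (and count the failure) if over the limit. */
static bool res_counter_charge(struct res_counter *cnt, unsigned long long val)
{
        if (cnt->usage + val > cnt->limit) {
                cnt->failcnt++;
                return false;
        }
        cnt->usage += val;
        return true;
}

static void res_counter_uncharge(struct res_counter *cnt, unsigned long long val)
{
        cnt->usage = (cnt->usage > val) ? cnt->usage - val : 0;
}

int main(void)
{
        /* One such counter is embedded in each cgroup's mem_cgroup. */
        struct res_counter memcg_counter = { .usage = 0, .limit = 4 << 20 };

        res_counter_charge(&memcg_counter, 4096);       /* charge one page */
        res_counter_uncharge(&memcg_counter, 4096);     /* and uncharge it */
        printf("usage=%llu failcnt=%llu\n",
               memcg_counter.usage, memcg_counter.failcnt);
        return 0;
}

The memory.usage_in_bytes, memory.limit_in_bytes and memory.failcnt files
described in section 3 are essentially user-visible views of such a counter.
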
2.2. Accounting

                +--------------------+
                |     mem_cgroup     |
                |    (res_counter)   |
                +--------------------+
                 /         ^         \
                /          |          \
   +---------------+       |       +---------------+
   |  mm_struct    |       |....   |  mm_struct    |
   |               |       |       |               |
   +---------------+       |       +---------------+
                           |
                           + -------------+
                                          |
   +---------------+               +------+--------+
   |  page         +-------------->  page_cgroup   |
   |               |               |               |
   +---------------+               +---------------+

             (Figure 1: Hierarchy of Accounting)

Figure 1 shows the important aspects of the controller

1. Accounting happens per cgroup
2. Each mm_struct knows about which cgroup it belongs to
3. Each page has a pointer to the page_cgroup, which in turn knows the
   cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data-structure called page_cgroup is
allocated and associated with the page. This routine also adds the page to
the per cgroup LRU.

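The charge path can be sketched in a few lines of C. The sketch below is a
simplified userspace model of the sequence just described (check the limit,
reclaim if needed, attach a page_cgroup, add it to the per-cgroup LRU); the
reclaim stub, the singly linked LRU and the error handling are assumptions
made for illustration, not the kernel's actual code.

/* Simplified model of the charge path described above (illustration only). */
#include <stdbool.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

struct page_cgroup;

struct mem_cgroup {
        unsigned long usage;                    /* bytes charged so far       */
        unsigned long limit;                    /* allowed bytes              */
        struct page_cgroup *lru;                /* per-cgroup LRU (list head) */
};

struct page {
        struct page_cgroup *page_cgroup;        /* see Figure 1               */
};

struct page_cgroup {
        struct page *page;
        struct mem_cgroup *mem_cgroup;
        struct page_cgroup *lru_next;           /* linkage on the cgroup LRU  */
};

/* Stand-in for per-cgroup reclaim; returns true if it freed something. */
static bool reclaim_from_cgroup(struct mem_cgroup *memcg)
{
        return false;
}

static int mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg)
{
        struct page_cgroup *pc;

        /* Over the limit?  Try to reclaim from this cgroup first. */
        while (memcg->usage + PAGE_SIZE > memcg->limit) {
                if (!reclaim_from_cgroup(memcg))
                        return -1;              /* see the reclaim section    */
        }

        /* Set up the per-page metadata and put the page on the cgroup LRU. */
        pc = malloc(sizeof(*pc));
        if (!pc)
                return -1;
        pc->page = page;
        pc->mem_cgroup = memcg;
        pc->lru_next = memcg->lru;
        memcg->lru = pc;
        page->page_cgroup = pc;

        memcg->usage += PAGE_SIZE;
        return 0;
}

int main(void)
{
        struct mem_cgroup memcg = { .usage = 0, .limit = 4UL << 20, .lru = NULL };
        struct page pg = { .page_cgroup = NULL };

        return mem_cgroup_charge(&pg, &memcg) == 0 ? 0 : 1;
}
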
2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
(Some pages which can never be reclaimed and will not be on the global LRU
are not accounted. We only account pages under usual VM management.)

RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
inserted into the inode (radix-tree). While it's mapped into the page tables
of processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from the radix-tree.

At page migration, accounting information is kept.

Note: we only account pages on the LRU, because our purpose is to control the
amount of used pages; pages that are not on the LRU tend to be out of the VM's
control.

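As a small illustration of the "account once, uncharge when fully unmapped"
rule for RSS pages, the sketch below keeps a map count per page, charges on
the first mapping only, and uncharges on the last unmapping. The map-count
field and the charge()/uncharge() helpers are illustrative stand-ins, not
kernel interfaces.

/* Illustration of the rule above: charge an RSS page once, on its first
 * mapping, and uncharge it only when it becomes fully unmapped. */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long charged_bytes;             /* stands in for the counter */

static void charge(unsigned long bytes)   { charged_bytes += bytes; }
static void uncharge(unsigned long bytes) { charged_bytes -= bytes; }

struct rss_page {
        int mapcount;                           /* how many page tables map it */
};

static void map_page(struct rss_page *p)
{
        if (p->mapcount++ == 0)                 /* first mapping: account it   */
                charge(PAGE_SIZE);
}

static void unmap_page(struct rss_page *p)
{
        if (--p->mapcount == 0)                 /* fully unmapped: unaccount   */
                uncharge(PAGE_SIZE);
}

int main(void)
{
        struct rss_page p = { .mapcount = 0 };

        map_page(&p);           /* charged             */
        map_page(&p);           /* no duplicate charge */
        unmap_page(&p);         /* still mapped once   */
        unmap_page(&p);         /* now uncharged       */
        printf("charged_bytes=%lu\n", charged_bytes);
        return 0;
}
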
2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

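A first-touch policy can be illustrated with a few lines of C: a shared page
carries a pointer to the cgroup currently paying for it, only the first
toucher is charged, and once the page is uncharged (for example under memory
pressure) the next toucher picks up the bill. The owner pointer and the helper
names below are illustrative, not the kernel's data structures.

/* Illustration of first-touch accounting for a shared page. */
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

struct mem_cgroup {
        unsigned long usage;
};

struct shared_page {
        struct mem_cgroup *owner;       /* cgroup currently charged for it */
};

static void touch_page(struct shared_page *pg, struct mem_cgroup *toucher)
{
        if (pg->owner == NULL) {        /* only the first toucher pays */
                pg->owner = toucher;
                toucher->usage += PAGE_SIZE;
        }
}

/* Called when the owning cgroup drops the charge, e.g. under pressure. */
static void uncharge_page(struct shared_page *pg)
{
        if (pg->owner) {
                pg->owner->usage -= PAGE_SIZE;
                pg->owner = NULL;       /* the next toucher will be charged */
        }
}

int main(void)
{
        struct mem_cgroup a = { 0 }, b = { 0 };
        struct shared_page pg = { NULL };

        touch_page(&pg, &a);            /* cgroup a pays                   */
        touch_page(&pg, &b);            /* b uses it for free              */
        uncharge_page(&pg);             /* a is uncharged under pressure   */
        touch_page(&pg, &b);            /* the aggressive user b now pays  */
        printf("a=%lu b=%lu\n", a.usage, b.usage);
        return 0;
}
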
2.4 Reclaim

Each cgroup maintains a per cgroup LRU that consists of an active
and inactive list. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup.

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per cgroup LRU
list.

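The control flow described above (reclaim from the cgroup's own LRU first,
fall back to a per-cgroup OOM kill only if that fails) can be sketched as
follows. The retry bound and the helper names are illustrative assumptions;
only the overall decision mirrors the text.

/* Sketch of the over-limit handling described above (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

#define NR_RETRIES 5    /* illustrative bound, not a kernel constant */

static bool over_limit(void)
{
        return true;    /* model input: the cgroup is over its limit */
}

static void reclaim_from_cgroup_lru(void)
{
        /* Would take pages from the per-cgroup LRU list. */
}

static void cgroup_out_of_memory(void)
{
        /* Select and kill the bulkiest task in this cgroup. */
        printf("OOM: killing the bulkiest task in the cgroup\n");
}

static void handle_over_limit(void)
{
        int retries = NR_RETRIES;

        /* Reclaim from the per-cgroup LRU while still over the limit. */
        while (over_limit() && retries-- > 0)
                reclaim_from_cgroup_lru();

        /* If reclaim could not bring usage below the limit, fall back to OOM. */
        if (over_limit())
                cgroup_out_of_memory();
}

int main(void)
{
        handle_over_limit();
        return 0;
}
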
2.5 Locking

The memory controller uses the following hierarchy

1. zone->lru_lock is used for selecting pages to be isolated
2. mem->per_zone->lru_lock protects the per cgroup LRU (per zone)
3. lock_page_cgroup() is used to protect page->page_cgroup

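For illustration only, the snippet below expresses that hierarchy as a nesting
order, with plain pthread mutexes standing in for the three locks named above.
The assumption that the list is an outer-to-inner ordering, and all of the
names, belong to this sketch and not to the kernel's locking code.

/* Illustrative nesting of the documented lock hierarchy (assumption: the
 * list above is ordered outermost to innermost).  Not kernel code. */
#include <pthread.h>

static pthread_mutex_t zone_lru_lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t memcg_zone_lru_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_cgroup_lock    = PTHREAD_MUTEX_INITIALIZER;

static void isolate_and_update_page(void)
{
        pthread_mutex_lock(&zone_lru_lock);         /* 1. select/isolate page */
        pthread_mutex_lock(&memcg_zone_lru_lock);   /* 2. per-cgroup LRU      */
        pthread_mutex_lock(&page_cgroup_lock);      /* 3. page->page_cgroup   */

        /* ... move the page between LRUs, update its page_cgroup ... */

        pthread_mutex_unlock(&page_cgroup_lock);
        pthread_mutex_unlock(&memcg_zone_lru_lock);
        pthread_mutex_unlock(&zone_lru_lock);
}

int main(void)
{
        isolate_and_update_page();
        return 0;
}
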
3. User Interface

0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_RES_CTLR

1. Prepare the cgroups
# mkdir -p /cgroups
# mount -t cgroup none /cgroups -o memory

2. Make the new group and move bash into it
# mkdir /cgroups/0
# echo $$ > /cgroups/0/tasks

Since we're now in the 0 cgroup, we can alter the memory limit:
# echo 4M > /cgroups/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes.

# cat /cgroups/0/memory.limit_in_bytes
4194304

NOTE: The interface has now changed to display values in bytes
instead of pages.

We can check the usage:
# cat /cgroups/0/memory.usage_in_bytes
1216512

A successful write to this file does not guarantee a successful setting of
this limit to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to see the value committed by the kernel.

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Currently, it shows the
number of cache and RSS pages and the number of active and inactive pages.

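These control files are ordinary files, so they can be read from a program
just as easily as with cat. Below is a small C sketch that reads
memory.usage_in_bytes, memory.limit_in_bytes and memory.failcnt for the
/cgroups/0 group set up earlier in this section; the path simply follows that
example and is not a fixed location.

/* Read the usage, limit and failure count of the cgroup created above.
 * The /cgroups/0 path matches the earlier mount/mkdir example. */
#include <stdio.h>

static unsigned long long read_value(const char *path)
{
        unsigned long long val = 0;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%llu", &val) != 1)
                        val = 0;
                fclose(f);
        }
        return val;
}

int main(void)
{
        printf("usage   = %llu bytes\n",
               read_value("/cgroups/0/memory.usage_in_bytes"));
        printf("limit   = %llu bytes\n",
               read_value("/cgroups/0/memory.limit_in_bytes"));
        printf("failcnt = %llu\n",
               read_value("/cgroups/0/memory.failcnt"));
        return 0;
}
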
The memory.force_empty file gives an interface to drop *all* charges by force.

# echo 1 > memory.force_empty

will drop all charges in the cgroup. Currently, this is maintained for testing.

4. Testing

Balbir posted lmbench, AIM9, LTP and vmmstress results [10] and [11].
Apart from that, v6 has been tested with several applications and regular
daily use. The controller has also been tested on the PPC64, x86_64 and
UML platforms.

4.1 Troubleshooting

Sometimes a user might find that the application under a cgroup is
terminated. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward. The pages allocated from the original cgroup still
remain charged to it; the charge is dropped when the page is freed or
reclaimed.

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. Such charges are automatically dropped at
rmdir() if there are no tasks.

5. TODO

1. Add support for accounting huge pages (as a separate controller)
2. Make the per-cgroup scanner reclaim not-shared pages first
3. Teach the controller to account for shared pages
4. Start reclamation in the background when the limit is
   not yet hit but the usage is getting closer

Summary

Overall, the memory controller has been a stable controller and has been
commented and discussed quite extensively in the community.

References

1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
2. Singh, Balbir. Memory Controller (RSS Control),
   http://lwn.net/Articles/222762/
3. Emelianov, Pavel. Resource controllers based on process cgroups,
   http://lkml.org/lkml/2007/3/6/198
4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
   http://lkml.org/lkml/2007/4/9/78
5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
   http://lkml.org/lkml/2007/5/30/244
6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
   subsystem (v3), http://lwn.net/Articles/235534/
8. Singh, Balbir. RSS controller v2 test results (lmbench),
   http://lkml.org/lkml/2007/5/17/232
9. Singh, Balbir. RSS controller v2 AIM9 results,
   http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/