docs: cgroup-v1: add it to the admin-guide book
Those files belong to the admin guide, so add them.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
parent 83bbf6e103
commit da82c92f11

@@ -3,7 +3,7 @@ Control Groups
 ==============
 
 Written by Paul Menage <menage@google.com> based on
-Documentation/cgroup-v1/cpusets.rst
+Documentation/admin-guide/cgroup-v1/cpusets.rst
 
 Original copyright statements from cpusets.txt:
 

@@ -76,7 +76,7 @@ On their own, the only use for cgroups is for simple job
 tracking. The intention is that other subsystems hook into the generic
 cgroup support to provide new attributes for cgroups, such as
 accounting/limiting the resources which processes in a cgroup can
-access. For example, cpusets (see Documentation/cgroup-v1/cpusets.rst) allow
+access. For example, cpusets (see Documentation/admin-guide/cgroup-v1/cpusets.rst) allow
 you to associate a set of CPUs and a set of memory nodes with the
 tasks in each cgroup.
 

@@ -49,7 +49,7 @@ hooks, beyond what is already present, required to manage dynamic
 job placement on large systems.
 
 Cpusets use the generic cgroup subsystem described in
-Documentation/cgroup-v1/cgroups.rst.
+Documentation/admin-guide/cgroup-v1/cgroups.rst.
 
 Requests by a task, using the sched_setaffinity(2) system call to
 include CPUs in its CPU affinity mask, and using the mbind(2) and

@@ -1,5 +1,3 @@
-:orphan:
-
 ========================
 Control Groups version 1
 ========================

@@ -10,7 +10,7 @@ Because VM is getting complex (one of reasons is memcg...), memcg's behavior
 is complex. This is a document for memcg's internal behavior.
 Please note that implementation details can be changed.
 
-(*) Topics on API should be in Documentation/cgroup-v1/memory.rst)
+(*) Topics on API should be in Documentation/admin-guide/cgroup-v1/memory.rst)
 
 0. How to record usage ?
 ========================

@@ -327,7 +327,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
 You can see charges have been moved by reading ``*.usage_in_bytes`` or
 memory.stat of both A and B.
 
-See 8.2 of Documentation/cgroup-v1/memory.rst to see what value should
+See 8.2 of Documentation/admin-guide/cgroup-v1/memory.rst to see what value should
 be written to move_charge_at_immigrate.
 
 9.10 Memory thresholds

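The memcg_test excerpt above points at memory.rst for the values accepted by move_charge_at_immigrate. A minimal sketch of that user-visible knob, assuming a v1 memory controller mounted at /sys/fs/cgroup/memory and a made-up destination group name::

    from pathlib import Path

    MEMCG_ROOT = Path("/sys/fs/cgroup/memory")   # assumed v1 memory controller mount point

    def move_task_with_charges(dst_group: str, pid: int) -> None:
        """Enable charge moving on the destination group, then migrate the task.

        Bit 0 moves anonymous pages, bit 1 moves file pages (per memory.rst
        section 8.2), so writing 3 moves both kinds of charges.
        """
        dst = MEMCG_ROOT / dst_group
        dst.mkdir(exist_ok=True)                  # creating the directory creates the cgroup
        (dst / "memory.move_charge_at_immigrate").write_text("3")
        (dst / "tasks").write_text(str(pid))      # charges follow the task into dst

    # Hypothetical usage, mirroring the A -> B move described above:
    # move_task_with_charges("B", 12345)
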
@@ -9,7 +9,7 @@ This is the authoritative documentation on the design, interface and
 conventions of cgroup v2. It describes all userland-visible aspects
 of cgroup including core and specific controller behaviors. All
 future changes must be reflected in this document. Documentation for
-v1 is available under Documentation/cgroup-v1/.
+v1 is available under Documentation/admin-guide/cgroup-v1/.
 
 .. CONTENTS
 

@@ -59,6 +59,7 @@ configure specific aspects of kernel behavior to your liking.
 
 initrd
 cgroup-v2
+cgroup-v1/index
 serial-console
 braille-console
 parport

@@ -4089,7 +4089,7 @@
 
 relax_domain_level=
 [KNL, SMP] Set scheduler's default relax_domain_level.
-See Documentation/cgroup-v1/cpusets.rst.
+See Documentation/admin-guide/cgroup-v1/cpusets.rst.
 
 reserve= [KNL,BUGS] Force kernel to ignore I/O ports or memory
 Format: <base1>,<size1>[,<base2>,<size2>,...]

@@ -4599,7 +4599,7 @@
 swapaccount=[0|1]
 [KNL] Enable accounting of swap in memory resource
 controller if no parameter or 1 is given or disable
-it if 0 is given (See Documentation/cgroup-v1/memory.rst)
+it if 0 is given (See Documentation/admin-guide/cgroup-v1/memory.rst)
 
 swiotlb= [ARM,IA-64,PPC,MIPS,X86]
 Format: { <int> | force | noforce }

@@ -15,7 +15,7 @@ document attempts to describe the concepts and APIs of the 2.6 memory policy
 support.
 
 Memory policies should not be confused with cpusets
-(``Documentation/cgroup-v1/cpusets.rst``)
+(``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
 which is an administrative mechanism for restricting the nodes from which
 memory may be allocated by a set of processes. Memory policies are a
 programming interface that a NUMA-aware application can take advantage of. When

@@ -547,7 +547,7 @@ As for cgroups-v1 (blkio controller), the exact set of stat files
 created, and kept up-to-date by bfq, depends on whether
 CONFIG_BFQ_CGROUP_DEBUG is set. If it is set, then bfq creates all
 the stat files documented in
-Documentation/cgroup-v1/blkio-controller.rst. If, instead,
+Documentation/admin-guide/cgroup-v1/blkio-controller.rst. If, instead,
 CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files::
 
 blkio.bfq.io_service_bytes

@@ -98,7 +98,7 @@ A memory policy with a valid NodeList will be saved, as specified, for
 use at file creation time. When a task allocates a file in the file
 system, the mount option memory policy will be applied with a NodeList,
 if any, modified by the calling task's cpuset constraints
-[See Documentation/cgroup-v1/cpusets.rst] and any optional flags, listed
+[See Documentation/admin-guide/cgroup-v1/cpusets.rst] and any optional flags, listed
 below. If the resulting NodeLists is the empty set, the effective memory
 policy for the file will revert to "default" policy.
 

@@ -12,7 +12,7 @@ References
 
 - Documentation/IRQ-affinity.txt: Binding interrupts to sets of CPUs.
 
-- Documentation/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
+- Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
 
 - man taskset: Using the taskset command to bind tasks to sets
 of CPUs.

@@ -669,7 +669,7 @@ Deadline Task Scheduling
 
 -deadline tasks cannot have an affinity mask smaller that the entire
 root_domain they are created on. However, affinities can be specified
-through the cpuset facility (Documentation/cgroup-v1/cpusets.rst).
+through the cpuset facility (Documentation/admin-guide/cgroup-v1/cpusets.rst).
 
 5.1 SCHED_DEADLINE and cpusets HOWTO
 ------------------------------------

@@ -222,7 +222,7 @@ SCHED_BATCH) tasks.
 
 These options need CONFIG_CGROUPS to be defined, and let the administrator
 create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
-Documentation/cgroup-v1/cgroups.rst for more information about this filesystem.
+Documentation/admin-guide/cgroup-v1/cgroups.rst for more information about this filesystem.
 
 When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
 group created using the pseudo filesystem. See example steps below to create

@@ -133,7 +133,7 @@ This uses the cgroup virtual file system and "<cgroup>/cpu.rt_runtime_us"
 to control the CPU time reserved for each control group.
 
 For more information on working with control groups, you should read
-Documentation/cgroup-v1/cgroups.rst as well.
+Documentation/admin-guide/cgroup-v1/cgroups.rst as well.
 
 Group settings are checked against the following limits in order to keep the
 configuration schedulable:

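The sched-rt-group excerpt above describes reserving CPU time for a group through <cgroup>/cpu.rt_runtime_us. A minimal sketch of driving that interface, assuming the v1 cpu controller is mounted at /sys/fs/cgroup/cpu and using a made-up group name::

    from pathlib import Path

    CPU_ROOT = Path("/sys/fs/cgroup/cpu")   # assumed v1 "cpu" controller mount point

    def reserve_rt_bandwidth(group: str, runtime_us: int, period_us: int = 1000000) -> None:
        """Create a cpu cgroup and grant it runtime_us of RT time every period_us."""
        grp = CPU_ROOT / group
        grp.mkdir(exist_ok=True)                       # creating the directory creates the group
        (grp / "cpu.rt_period_us").write_text(str(period_us))
        (grp / "cpu.rt_runtime_us").write_text(str(runtime_us))

    # Hypothetical usage (needs root and CONFIG_RT_GROUP_SCHED):
    # reserve_rt_bandwidth("rtgroup", runtime_us=100000)
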
@@ -67,7 +67,7 @@ nodes. Each emulated node will manage a fraction of the underlying cells'
 physical memory. NUMA emluation is useful for testing NUMA kernel and
 application features on non-NUMA platforms, and as a sort of memory resource
 management mechanism when used together with cpusets.
-[see Documentation/cgroup-v1/cpusets.rst]
+[see Documentation/admin-guide/cgroup-v1/cpusets.rst]
 
 For each node with memory, Linux constructs an independent memory management
 subsystem, complete with its own free page lists, in-use page lists, usage

@@ -114,7 +114,7 @@ allocation behavior using Linux NUMA memory policy. [see
 
 System administrators can restrict the CPUs and nodes' memories that a non-
 privileged user can specify in the scheduling or NUMA commands and functions
-using control groups and CPUsets. [see Documentation/cgroup-v1/cpusets.rst]
+using control groups and CPUsets. [see Documentation/admin-guide/cgroup-v1/cpusets.rst]
 
 On architectures that do not hide memoryless nodes, Linux will include only
 zones [nodes] with memory in the zonelists. This means that for a memoryless

@@ -41,7 +41,7 @@ locations.
 Larger installations usually partition the system using cpusets into
 sections of nodes. Paul Jackson has equipped cpusets with the ability to
 move pages when a task is moved to another cpuset (See
-Documentation/cgroup-v1/cpusets.rst).
+Documentation/admin-guide/cgroup-v1/cpusets.rst).
 Cpusets allows the automation of process locality. If a task is moved to
 a new cpuset then also all its pages are moved with it so that the
 performance of the process does not sink dramatically. Also the pages

@@ -98,7 +98,7 @@ Memory Control Group Interaction
 --------------------------------
 
 The unevictable LRU facility interacts with the memory control group [aka
-memory controller; see Documentation/cgroup-v1/memory.rst] by extending the
+memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by extending the
 lru_list enum.
 
 The memory controller data structure automatically gets a per-zone unevictable

@@ -15,7 +15,7 @@ assign them to cpusets and their attached tasks. This is a way of limiting the
 amount of system memory that are available to a certain class of tasks.
 
 For more information on the features of cpusets, see
-Documentation/cgroup-v1/cpusets.rst.
+Documentation/admin-guide/cgroup-v1/cpusets.rst.
 There are a number of different configurations you can use for your needs. For
 more information on the numa=fake command line option and its various ways of
 configuring fake nodes, see Documentation/x86/x86_64/boot-options.rst.

@@ -40,7 +40,7 @@ A machine may be split as follows with "numa=fake=4*512," as reported by dmesg::
 On node 3 totalpages: 131072
 
 Now following the instructions for mounting the cpusets filesystem from
-Documentation/cgroup-v1/cpusets.rst, you can assign fake nodes (i.e. contiguous memory
+Documentation/admin-guide/cgroup-v1/cpusets.rst, you can assign fake nodes (i.e. contiguous memory
 address spaces) to individual cpusets::
 
 [root@xroads /]# mkdir exampleset

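The fake-numa excerpt above goes on to bind a cpuset to the emulated nodes; a rough Python equivalent of those shell steps, assuming a v1 cpuset hierarchy mounted at /sys/fs/cgroup/cpuset and reusing the documentation's "exampleset" name::

    from pathlib import Path

    CPUSET_ROOT = Path("/sys/fs/cgroup/cpuset")   # assumed v1 cpuset mount point

    def bind_to_fake_nodes(group: str, mems: str, cpus: str, pid: int) -> None:
        """Confine a task's memory to selected (possibly fake) NUMA nodes."""
        grp = CPUSET_ROOT / group
        grp.mkdir(exist_ok=True)
        (grp / "cpuset.mems").write_text(mems)    # e.g. "0-1" = the first two fake nodes
        (grp / "cpuset.cpus").write_text(cpus)    # CPUs the group's tasks may run on
        (grp / "tasks").write_text(str(pid))      # move the task into the cpuset

    # Hypothetical usage, mirroring the "mkdir exampleset" step above:
    # bind_to_fake_nodes("exampleset", mems="0-1", cpus="0-3", pid=12345)
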
@@ -4158,7 +4158,7 @@ L: cgroups@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
 S: Maintained
 F: Documentation/admin-guide/cgroup-v2.rst
-F: Documentation/cgroup-v1/
+F: Documentation/admin-guide/cgroup-v1/
 F: include/linux/cgroup*
 F: kernel/cgroup/
 

@@ -4169,7 +4169,7 @@ W: http://www.bullopensource.org/cpuset/
 W: http://oss.sgi.com/projects/cpusets/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
 S: Maintained
-F: Documentation/cgroup-v1/cpusets.rst
+F: Documentation/admin-guide/cgroup-v1/cpusets.rst
 F: include/linux/cpuset.h
 F: kernel/cgroup/cpuset.c
 

@@ -89,7 +89,7 @@ config BLK_DEV_THROTTLING
 one needs to mount and use blkio cgroup controller for creating
 cgroups and specifying per device IO rate policies.
 
-See Documentation/cgroup-v1/blkio-controller.rst for more information.
+See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
 
 config BLK_DEV_THROTTLING_LOW
 bool "Block throttling .low limit interface support (EXPERIMENTAL)"

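The Kconfig help text above says per-device I/O rate policies are configured through the blkio controller; a small sketch of one such policy, assuming a v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio and an illustrative 8:0 device::

    from pathlib import Path

    BLKIO_ROOT = Path("/sys/fs/cgroup/blkio")   # assumed v1 blkio controller mount point

    def throttle_read_bps(group: str, dev: str, bps: int) -> None:
        """Cap read bandwidth for <dev> (major:minor) for tasks in <group>.

        The "<major>:<minor> <bytes_per_second>" format is the one described
        in the blkio-controller.rst file referenced by this help text.
        """
        grp = BLKIO_ROOT / group
        grp.mkdir(exist_ok=True)
        (grp / "blkio.throttle.read_bps_device").write_text(f"{dev} {bps}")

    # Hypothetical usage: limit reads from device 8:0 (often /dev/sda) to 1 MiB/s.
    # throttle_read_bps("slowreaders", "8:0", 1024 * 1024)
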
@@ -624,7 +624,7 @@ struct cftype {
 
 /*
 * Control Group subsystem type.
-* See Documentation/cgroup-v1/cgroups.rst for details
+* See Documentation/admin-guide/cgroup-v1/cgroups.rst for details
 */
 struct cgroup_subsys {
 struct cgroup_subsys_state *(*css_alloc)(struct cgroup_subsys_state *parent_css);

@@ -806,7 +806,7 @@ union bpf_attr {
 * based on a user-provided identifier for all traffic coming from
 * the tasks belonging to the related cgroup. See also the related
 * kernel documentation, available from the Linux sources in file
-* *Documentation/cgroup-v1/net_cls.rst*.
+* *Documentation/admin-guide/cgroup-v1/net_cls.rst*.
 *
 * The Linux kernel has two versions for cgroups: there are
 * cgroups v1 and cgroups v2. Both are available to users, who can

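The helper description above refers to the net_cls controller, which tags traffic from a cgroup's tasks with a user-chosen classid that tc filters (and bpf_get_cgroup_classid()) can match. A minimal sketch of setting such a classid, assuming a v1 net_cls hierarchy mounted at /sys/fs/cgroup/net_cls and a made-up 0x10:1 handle::

    from pathlib import Path

    NET_CLS_ROOT = Path("/sys/fs/cgroup/net_cls")   # assumed v1 net_cls mount point

    def set_classid(group: str, major: int, minor: int) -> None:
        """Tag all traffic from tasks in <group> with a tc classid.

        net_cls.classid stores the handle as 0xAAAABBBB, i.e.
        (major << 16) | minor, as described in net_cls.rst referenced above.
        """
        grp = NET_CLS_ROOT / group
        grp.mkdir(exist_ok=True)
        (grp / "net_cls.classid").write_text(str((major << 16) | minor))

    # Hypothetical usage: handle 0x10:1 -> classid 0x00100001.
    # set_classid("netgroup", major=0x10, minor=0x1)
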
@@ -821,7 +821,7 @@ menuconfig CGROUPS
 controls or device isolation.
 See
 - Documentation/scheduler/sched-design-CFS.rst (CFS)
-- Documentation/cgroup-v1/ (features for grouping, isolation
+- Documentation/admin-guide/cgroup-v1/ (features for grouping, isolation
 and resource control)
 
 Say N if unsure.

@@ -883,7 +883,7 @@ config BLK_CGROUP
 CONFIG_CFQ_GROUP_IOSCHED=y; for enabling throttling policy, set
 CONFIG_BLK_DEV_THROTTLING=y.
 
-See Documentation/cgroup-v1/blkio-controller.rst for more information.
+See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
 
 config CGROUP_WRITEBACK
 bool

@@ -729,7 +729,7 @@ static inline int nr_cpusets(void)
 * load balancing domains (sched domains) as specified by that partial
 * partition.
 *
-* See "What is sched_load_balance" in Documentation/cgroup-v1/cpusets.rst
+* See "What is sched_load_balance" in Documentation/admin-guide/cgroup-v1/cpusets.rst
 * for a background explanation of this.
 *
 * Does not return errors, on the theory that the callers of this

@@ -509,7 +509,7 @@ static inline int may_allow_all(struct dev_cgroup *parent)
 * This is one of the three key functions for hierarchy implementation.
 * This function is responsible for re-evaluating all the cgroup's active
 * exceptions due to a parent's exception change.
-* Refer to Documentation/cgroup-v1/devices.rst for more details.
+* Refer to Documentation/admin-guide/cgroup-v1/devices.rst for more details.
 */
 static void revalidate_active_exceptions(struct dev_cgroup *devcg)
 {

@@ -806,7 +806,7 @@ union bpf_attr {
 * based on a user-provided identifier for all traffic coming from
 * the tasks belonging to the related cgroup. See also the related
 * kernel documentation, available from the Linux sources in file
-* *Documentation/cgroup-v1/net_cls.rst*.
+* *Documentation/admin-guide/cgroup-v1/net_cls.rst*.
 *
 * The Linux kernel has two versions for cgroups: there are
 * cgroups v1 and cgroups v2. Both are available to users, who can