===================
this_cpu operations
===================

:Author: Christoph Lameter, August 4th, 2014
:Author: Pranith Kumar, Aug 2nd, 2014

this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register
where the cpu permanently stores the beginning of the per cpu area for
a specific processor).

this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.

This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.

Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
without the typical synchronization overhead, but still provide some
sort of relaxed atomicity guarantees. The x86, for example, can execute
RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.

Access to the variable without the lock prefix is not synchronized but
synchronization is not necessary since we are dealing with per cpu
data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.

Please note that accesses by remote processors to a per cpu area are
exceptional situations and may impact performance and/or correctness
(remote write operations) of local RMW operations via this_cpu_*.

The main use of the this_cpu operations has been to optimize counter
operations.

The following this_cpu() operations with implied preemption protection
are defined. These operations can be used without worrying about
preemption and interrupts::

    this_cpu_read(pcp)
    this_cpu_write(pcp, val)
    this_cpu_add(pcp, val)
    this_cpu_and(pcp, val)
    this_cpu_or(pcp, val)
    this_cpu_add_return(pcp, val)
    this_cpu_xchg(pcp, nval)
    this_cpu_cmpxchg(pcp, oval, nval)
    this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
    this_cpu_sub(pcp, val)
    this_cpu_inc(pcp)
    this_cpu_dec(pcp)
    this_cpu_sub_return(pcp, val)
    this_cpu_inc_return(pcp)
    this_cpu_dec_return(pcp)

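As a quick sketch of the counter use case (the variable and function
names here are illustrative, not part of the original text), an event
counter can be updated from any context without further protection::

    /* One instance of the counter per processor. */
    DEFINE_PER_CPU(unsigned long, nr_events);

    static void count_event(void)
    {
            /*
             * Applied to the current processor's copy; no explicit
             * preempt or interrupt disabling is needed.
             */
            this_cpu_inc(nr_events);
    }
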
Inner working of this_cpu operations
------------------------------------

On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. So the relocation to the per cpu base is encoded in the
instruction via a segment register prefix.

For example::

    DEFINE_PER_CPU(int, x);
    int z;

    z = this_cpu_read(x);

results in a single instruction::

    mov ax, gs:[x]

instead of a sequence of calculation of the address and then a fetch
from that address which occurs with the per cpu operations. Before
this_cpu_ops such a sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.

Consider the following this_cpu operation::

    this_cpu_inc(x)

The above results in the following single instruction (no lock prefix!)::

    inc gs:[x]

instead of the following operations required if there is no segment
register::

    int *y;
    int cpu;

    cpu = get_cpu();
    y = per_cpu_ptr(&x, cpu);
    (*y)++;
    put_cpu();

Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the values of the individual counters for each processor are
meaningless. The sum of all the per cpu counters is the only value
that is of interest.

Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
counters when the value of a counter is needed.

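Such a summation might look like the following sketch (building on the
hypothetical nr_events counter above; for_each_possible_cpu() and
per_cpu() are the standard helpers for walking the per cpu instances)::

    static unsigned long total_events(void)
    {
            unsigned long sum = 0;
            int cpu;

            /* Remote reads of each processor's copy are safe. */
            for_each_possible_cpu(cpu)
                    sum += per_cpu(nr_events, cpu);

            return sum;
    }
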
Special operations
------------------

::

    y = this_cpu_ptr(&x)

Takes the offset of a per cpu variable (&x !) and returns the address
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
available. Instead, the offset of the local per cpu area is simply
added to the per cpu offset.

Note that this operation is usually used in a code segment when
preemption has been disabled. The pointer is then used to
access local per cpu data in a critical section. When preemption
is re-enabled this pointer is usually no longer useful since it may
no longer point to per cpu data of the current processor.

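A minimal sketch of that pattern, continuing the y = this_cpu_ptr(&x)
example above (the get_cpu_ptr()/put_cpu_ptr() helpers combine the
preemption handling with the pointer calculation)::

    int *y;

    preempt_disable();
    y = this_cpu_ptr(&x);
    /* y is only trustworthy while preemption stays disabled. */
    (*y)++;
    preempt_enable();
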
Per cpu variables and offsets
-----------------------------

Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
added to a base pointer of a per cpu area of a processor in order to
form a valid address.

Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.

::

    DEFINE_PER_CPU(int, x);

In the context of per cpu operations the above implies that x is a per
cpu variable. Most this_cpu operations take a cpu variable.

::

    int __percpu *p = &x;

&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
takes the offset of a per cpu variable which makes this look a bit
strange.

Operations on a field of a per cpu structure
--------------------------------------------

Let's say we have a percpu structure::

    struct s {
            int n,m;
    };

    DEFINE_PER_CPU(struct s, p);

Operations on these fields are straightforward::

    this_cpu_inc(p.m)

    z = this_cpu_cmpxchg(p.m, 0, 1);

If we have an offset to struct s::

    struct s __percpu *ps = &p;

    this_cpu_dec(ps->m);

    z = this_cpu_inc_return(ps->n);

The calculation of the pointer may require the use of this_cpu_ptr()
if we do not make use of this_cpu ops later to manipulate fields::

    struct s *pp;

    pp = this_cpu_ptr(&p);

    pp->m--;

    z = pp->n++;

Variants of this_cpu ops
------------------------

this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
that are guaranteed to be atomic and then re-enables interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
disable interrupts. For that purpose the following __this_cpu operations
are provided.

These operations have no guarantee against concurrent interrupts or
preemption. If a per cpu variable is not used in an interrupt context
and the scheduler cannot preempt, then they are safe. If any interrupts
still occur while an operation is in progress and if the interrupt too
modifies the variable, then RMW actions can not be guaranteed to be
safe::

    __this_cpu_read(pcp)
    __this_cpu_write(pcp, val)
    __this_cpu_add(pcp, val)
    __this_cpu_and(pcp, val)
    __this_cpu_or(pcp, val)
    __this_cpu_add_return(pcp, val)
    __this_cpu_xchg(pcp, nval)
    __this_cpu_cmpxchg(pcp, oval, nval)
    __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
    __this_cpu_sub(pcp, val)
    __this_cpu_inc(pcp)
    __this_cpu_dec(pcp)
    __this_cpu_sub_return(pcp, val)
    __this_cpu_inc_return(pcp)
    __this_cpu_dec_return(pcp)

__this_cpu_inc(), for example, will increment x and will not fall back
to code that disables interrupts on platforms that cannot accomplish
atomicity through address relocation and a Read-Modify-Write operation
in the same instruction.

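A minimal sketch of safe usage (the counter and function names are
hypothetical): the surrounding code, not the operation itself, provides
the protection::

    DEFINE_PER_CPU(unsigned long, fastpath_count);

    static void count_fastpath(void)
    {
            preempt_disable();
            /*
             * Cheap RMW: preemption is off so the cpu cannot change,
             * but an interrupt handler that also modifies
             * fastpath_count could still corrupt this update.
             */
            __this_cpu_inc(fastpath_count);
            preempt_enable();
    }
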
&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------

The first operation takes the offset and forms an address and then
adds the offset of the n field. This may result in two add
instructions emitted by the compiler.

The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and has an easier time
with (). The second form also is consistent with the way
this_cpu_read() and friends are used.

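As a sketch, reusing the ps offset from the struct s example above,
the two forms differ only in where the field offset is added::

    int *q1, *q2;

    /* Relocate ps to an address first, then add the offset of n. */
    q1 = &this_cpu_ptr(ps)->n;

    /* Add the field offset to the per cpu offset, then relocate. */
    q2 = this_cpu_ptr(&ps->n);
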
Remote access to per cpu data
------------------------------

Per cpu data structures are designed to be used by one cpu exclusively.
If you use the variables as intended, this_cpu_ops() are guaranteed to
be "atomic" as no other CPU has access to these data structures.

There are special cases where you might need to access per cpu data
structures remotely. It is usually safe to do a remote read access
and that is frequently done to summarize counters. Remote write access
is something which could be problematic because this_cpu ops do not
have lock semantics. A remote write may interfere with a this_cpu
RMW operation.

Remote write accesses to percpu data structures are highly discouraged
unless absolutely necessary. Please consider using an IPI to wake up
the remote CPU and perform the update to its per cpu area.

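One way to do that, sketched here with the hypothetical nr_events
counter from above, is smp_call_function_single(), which runs a
function on the target cpu so the update becomes a local one::

    static void reset_events(void *info)
    {
            /* Runs on the target cpu, so this is a local write. */
            this_cpu_write(nr_events, 0);
    }

    /* Ask cpu (obtained elsewhere) to reset its copy; wait for it. */
    smp_call_function_single(cpu, reset_events, NULL, 1);
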
To access a per-cpu data structure remotely, typically the per_cpu_ptr()
function is used::

    DEFINE_PER_CPU(struct data, datap);

    struct data *p = per_cpu_ptr(&datap, cpu);

This makes it explicit that we are getting ready to access a percpu
area remotely.

You can also do the following to convert the datap offset to an address::

    struct data *p = this_cpu_ptr(&datap);

but, passing of pointers calculated via this_cpu_ptr to other cpus is
unusual and should be avoided.

Remote accesses are typically only for reading the status of another
cpu's per cpu data. Write accesses can cause unique problems due to the
relaxed synchronization requirements for this_cpu operations.

One example that illustrates some concerns with write operations is
the following scenario that occurs because two per cpu variables
share a cache-line but the relaxed synchronization is applied to
only one process updating the cache-line.

Consider the following example::

    struct test {
            atomic_t a;
            int b;
    };

    DEFINE_PER_CPU(struct test, onecacheline);

There is some concern about what would happen if the field 'a' is updated
remotely from one processor and the local processor would use this_cpu ops
to update field b. Care should be taken that such simultaneous accesses to
data within the same cache line are avoided. Also costly synchronization
may be necessary. IPIs are generally recommended in such scenarios instead
of a remote write to the per cpu area of another processor.

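One possible mitigation, offered here as an assumption rather than a
rule from the original text, is a variant of struct test that keeps the
remotely-written field on its own cache line via the standard alignment
annotation::

    struct test {
            atomic_t a;
            /*
             * Start b on its own cache line so remote writes to a
             * cannot collide with local this_cpu updates of b.
             */
            int b ____cacheline_aligned_in_smp;
    };
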
Even in cases where the remote writes are rare, please bear in
mind that a remote write will evict the cache line from the processor
that most likely will access it. If the processor wakes up and finds a
missing local cache line of a per cpu area, its performance and hence
the wake up times will be affected.