270c10e00a
For atomic backoff, we just loop over an exponentially backed-off counter. This is extremely ineffective as it doesn't actually yield the cpu strand so that other competing strands can use the cpu core.

On cpus prior to SPARC-T4 we have to do this in a slightly hackish way: perform an operation with no side effects that also happens to mark the strand as unavailable. The mechanism we choose for this is three reads of the %ccr (condition-code) register into %g0 (the zero register).

SPARC-T4 has an explicit "pause" instruction, and we'll make use of that in a subsequent commit.

Also yield strands in cpu_relax(). We really should have done this a very long time ago.

Signed-off-by: David S. Miller <davem@davemloft.net>
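As a minimal sketch of the pre-T4 yield idiom described above, assuming a sparc64 GCC-style inline-asm environment: the cpu_relax() form shown here and the backoff_spin() helper (with its illustrative 4096-iteration cap) are written for illustration and are not necessarily the exact kernel definitions.

	/*
	 * Yield the current hardware strand.  The three reads of %ccr
	 * into %g0 have no architectural side effects, but they mark
	 * the strand as unavailable so sibling strands sharing the
	 * core can use its execution resources.
	 */
	#define cpu_relax()					\
		asm volatile("rd	%%ccr, %%g0\n\t"	\
			     "rd	%%ccr, %%g0\n\t"	\
			     "rd	%%ccr, %%g0"		\
			     : : : "memory")

	/*
	 * Hypothetical exponential-backoff spin: busy-wait 'limit'
	 * iterations, yielding the strand on each pass, then double
	 * the limit up to an arbitrary cap before the caller retries
	 * its atomic operation.
	 */
	static inline void backoff_spin(unsigned long *limit)
	{
		unsigned long n;

		for (n = 0; n < *limit; n++)
			cpu_relax();

		if (*limit < 4096)
			*limit <<= 1;
	}

The point of the idiom is that a plain counted loop keeps the strand busy doing nothing; padding each iteration with the %ccr reads (or, on SPARC-T4, a real "pause") lets the core hand the stalled strand's cycles to its competitors.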