Commit Graph

5 Commits

Author SHA1 Message Date
Ralf Baechle 7837314d14 MIPS: Get rid of branches to .subsections.
It was a nice optimization - on paper at least.  In practice it results in
branches that may exceed the maximum legal range for a branch.  We could
work around that with -ffunction-sections, but -ffunction-sections in turn
is incompatible with the -pg option used by the function tracer.

By rewriting the loop around all simple LL/SC blocks in C we reduce the
amount of inline assembler and at the same time often allow GCC to fill
the branch delay slots with something sensible, or to apply whatever other
clever optimizations it may have up its sleeve.

With this optimization gone we also no longer need -ffunction-sections,
so drop it.

This optimization was originally introduced in 2.6.21, as commit
5999eca25c1fd4b9b9aca7833b04d10fe4bc877d on linux-mips.org and commit
f65e4fa8e0 on kernel.org.

The original fix for the issues that caused me to pull this optimization
is by Paul Gortmaker <paul.gortmaker@windriver.com>.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-10-29 19:08:24 +01:00
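
To make the rewrite concrete, here is a minimal sketch of the pattern the
message describes, loosely modeled on the atomic_add() case: the retry
branch moves out of the inline assembler into a C do/while loop, leaving
only a single LL/SC attempt in asm.  It assumes a MIPS target and GCC
extended asm; my_atomic_t and my_atomic_add() are illustrative names, not
the kernel's.

typedef struct { volatile int counter; } my_atomic_t;

static inline void my_atomic_add(int i, my_atomic_t *v)
{
	int temp;

	do {
		__asm__ __volatile__(
		"	.set	mips3			\n"
		"	ll	%0, %1	# load-linked	\n"
		"	addu	%0, %2			\n"
		"	sc	%0, %1	# store-cond.	\n"
		"	.set	mips0			\n"
		: "=&r" (temp), "=m" (v->counter)
		: "Ir" (i), "m" (v->counter));
	} while (__builtin_expect(!temp, 0));	/* retry loop is plain C */
}
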
Ralf Baechle 9eed4124c0 MIPS: cmpxchg.h: Fix excessive indentation.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-04-30 20:52:41 +01:00
David Daney f252ffd50c MIPS: New macro smp_mb__before_llsc.
Replace some instances of smp_llsc_mb() with a new macro
smp_mb__before_llsc().  It is used before ll/sc sequences that are
documented as needing write barrier semantics.

The default implementation of smp_mb__before_llsc() is just smp_llsc_mb(),
so there are no changes in semantics.

Also simplify the definitions of smp_mb(), smp_rmb(), and smp_wmb() to be
just barrier() in the non-SMP case.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/851/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2010-02-27 12:53:06 +01:00
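
As a rough illustration of how the new macro slots in (a sketch only; the
real definitions live in asm/barrier.h and vary by CPU, and the caller
below is made up):

/* Stand-in for the existing barrier, shown here as a plain MIPS sync. */
#ifndef smp_llsc_mb
#define smp_llsc_mb()	__asm__ __volatile__("sync" : : : "memory")
#endif

/* Default per the message above: no change in semantics. */
#ifndef smp_mb__before_llsc
#define smp_mb__before_llsc()	smp_llsc_mb()
#endif

/*
 * Caller side: order earlier stores before the LL of an LL/SC sequence
 * that is documented as needing write-barrier semantics.
 * my_release_and_update() is an illustrative name.
 */
static inline void my_release_and_update(volatile int *flag, int *counter)
{
	*flag = 0;			/* earlier store ...		*/
	smp_mb__before_llsc();		/* ... ordered before the LL	*/
	/* LL/SC read-modify-write on *counter would follow here. */
	(void)counter;
}
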
David Daney b791d1193a MIPS: Allow kernel use of LL/SC to be separate from the presence of LL/SC.
On some CPUs, it is more efficient to disable and enable interrupts in the
kernel rather than use ll/sc for atomic operations.  But if we were to set
cpu_has_llsc to false, we would break the userspace futex interface (in
asm/futex.h).

We separate the two concepts with a new predicate, kernel_uses_llsc, which
lets us disable the kernel's use of ll/sc while still allowing the futex
code to use it.

There were also a couple of cases in bitops.h where we were using ll/sc
unconditionally even when cpu_has_llsc was false.

Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2009-09-17 20:07:50 +02:00
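
A minimal sketch of the split, assuming cpu-features.h-style defaults:
cpu_has_llsc keeps describing the hardware (so asm/futex.h still works),
while a platform may override kernel_uses_llsc to 0 to force the
interrupt-disable path inside the kernel.  my_atomic_t and my_atomic_add()
are illustrative names; raw_local_irq_save()/restore() are the existing
kernel helpers.

/* Default: the kernel follows the hardware unless a platform overrides
 * kernel_uses_llsc (cpu_has_llsc comes from asm/cpu-features.h). */
#ifndef kernel_uses_llsc
#define kernel_uses_llsc	cpu_has_llsc
#endif

typedef struct { volatile int counter; } my_atomic_t;

static inline void my_atomic_add(int i, my_atomic_t *v)
{
	if (kernel_uses_llsc) {
		/* LL/SC read-modify-write, as in the sketch further up */
	} else {
		unsigned long flags;

		raw_local_irq_save(flags);	/* IRQ-disable fallback */
		v->counter += i;
		raw_local_irq_restore(flags);
	}
}
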
Ralf Baechle 384740dc49 MIPS: Move header files to new location below arch/mips/include
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2008-10-11 16:18:52 +01:00