Plug a memory leak in dwarf_unwinder_dump() where we didn't free the
memory that we had previously allocated for the DWARF frames and DWARF
registers.
Now is also an opportune time to implement our own mempool and kmem
cache. It's a good idea to keep a minimum number of frame and register
objects in reserve at all times, so that we are guaranteed to have our
allocations satisfied even when memory is scarce.
Since we have pools to allocate from, we can implement the registers
for each frame as a linked list as opposed to a sparsely populated
array. Whilst the lookup time for a linked list is larger than for an
array, there is usually a maximum of only 8 registers per frame, so the
overhead isn't much of a concern.
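As a rough sketch of the pool setup described above (the symbol names
and the minimum reserve count here are illustrative, not necessarily
the exact ones used):
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>
#include <asm/dwarf.h>		/* struct dwarf_frame (illustrative include) */

#define DWARF_FRAME_MIN_REQ	16	/* frames kept in reserve */

static struct kmem_cache *dwarf_frame_cachep;
static mempool_t *dwarf_frame_pool;

static int __init dwarf_pool_init(void)
{
	dwarf_frame_cachep = kmem_cache_create("dwarf_frames",
				sizeof(struct dwarf_frame), 0,
				SLAB_PANIC, NULL);

	/* Pre-allocate a reserve so unwinding still works under memory pressure. */
	dwarf_frame_pool = mempool_create(DWARF_FRAME_MIN_REQ,
					  mempool_alloc_slab,
					  mempool_free_slab,
					  dwarf_frame_cachep);

	/* A second cache/pool backs the register objects in the same way. */
	return dwarf_frame_pool ? 0 : -ENOMEM;
}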
Signed-off-by: Matt Fleming <matt@console-pimps.org>
This moves the initialization over to an early_initcall(). This fixes up
some lockdep interaction issues. At the same time, kill off some
superfluous locking in the init path.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Also, remove the "fix" to DW_CFA_def_cfa_register where we reset the
frame's cfa_offset to 0. This action is incorrect when handling
DW_CFA_def_cfa_register as the DWARF spec specifically states that the
previous contents of cfa_offset should be used with the new
register. The reason that I thought cfa_offset should be reset to 0
was that it was being assigned a bogus value prior to executing the
DW_CFA_def_cfa_register op. It turns out that the bogus cfa_offset
value came from interpreting .cfi_escape pseudo-ops (those used by the
GNU extensions) as DW_CFA_def_cfa ops.
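A minimal sketch of the corrected behaviour, assuming a frame structure
with cfa_register/cfa_offset members (the helper name is made up for
illustration):
#include <asm/dwarf.h>		/* struct dwarf_frame (illustrative include) */

/*
 * DW_CFA_def_cfa_register: only the CFA base register changes; the
 * rule keeps whatever offset was previously in effect.
 */
static void dwarf_set_cfa_register(struct dwarf_frame *frame,
				   unsigned long reg)
{
	frame->cfa_register = reg;
	/* frame->cfa_offset is deliberately left untouched. */
}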
Signed-off-by: Matt Fleming <matt@console-pimps.org>
The previous hack for calculating the return address for the first frame
we unwind (dwarf_unwinder_dump) didn't always work. The problem was that
it assumed that once it had read the rule for calculating the return
address, there would be no new rules for it. This isn't true because
the way in which the CFA is calculated can change as you progress
through a function and the return address is figured out using the
CFA. Therefore, the way to calculate the return address can change.
So, instead of using some offset from the beginning of
dwarf_unwind_stack(), which is a flaky approach, and instead of
executing instructions from the FDE until the return address is set up,
we now figure out the pc in dwarf_unwind_stack() just before we call
dwarf_cfa_execute_insns().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
This patch updates the SuperH Mobile sleep assembly code with
support for the DBSC memory controller found in the sh7724 processor.
Without this fix the memory hooked up to the sh7724 processor
will never enter self-refresh mode before suspending to RAM. The
effect of this is that the memory contents will most likely be
lost upon resume, which may or may not be what you want.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The way that the CFA is calculated can change as we progress through a
function. If we see a DW_CFA_def_cfa_register op we need to reset the
frame's cfa_offset value, which may have been previously set up.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements EXPMASK initialization code for SH-4A parts, where it is
possible to disable compat features that will go away in newer cores.
Presently this includes disabling support for non-nop instructions in the
rte delay slot, as well as a sleep instruction being placed in a delay
slot (neither of which the kernel does any longer). As a result of this,
any future offenders will have illegal slot exceptions generated for
them.
Associative writes for the memory-mapped cache array are still left
enabled, until such a point that special cache operations for SH-4A are
provided to move off of the current (and rather dated) SH-4 versions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Future SH parts do not support any instruction but a nop in the rte delay
slot, so make the change for all offending parts. SH-5 is excluded from
this, and already has its own set of restrictions with regards to rte
delay slot handling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inserts a ULONG_MAX entry at the end of the valid entries in the
stack trace buffer so the default code doesn't need to scan to the end of
available slots. This also makes the trace buffer termination behaviour
consistent with the other architectures.
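The termination convention amounts to something like this (an
illustrative helper, not the full save_stack_trace() implementation):
#include <linux/kernel.h>
#include <linux/stacktrace.h>

/* Terminate the valid entries the same way the other architectures do. */
static void stack_trace_terminate(struct stack_trace *trace)
{
	if (trace->nr_entries < trace->max_entries)
		trace->entries[trace->nr_entries++] = ULONG_MAX;
}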
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This flags the default unwinder as reliable, as it tends to be reliable
enough for the purposes of the stacktrace buffer. We leave the unreliable
cases for the unwind methods that we know to be completely broken.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adopts the reliability checks from the x86 stacktrace code so known
bad addresses are not recorded in the stack trace buffer.
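The adopted callback looks roughly like the following sketch, modelled
on the x86 code of the time (the name and signature are assumptions):
#include <linux/kernel.h>
#include <linux/stacktrace.h>

static void save_stack_address(void *data, unsigned long addr, int reliable)
{
	struct stack_trace *trace = data;

	/* Drop addresses the unwinder could not vouch for. */
	if (!reliable)
		return;

	/* Drop anything that isn't kernel text, e.g. stale stack slots. */
	if (!__kernel_text_address(addr))
		return;

	if (trace->nr_entries < trace->max_entries)
		trace->entries[trace->nr_entries++] = addr;
}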
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
save_stack_trace_tsk() and friends can be called from atomic context (as
triggered by latencytop), and subsequently hit two problematic allocation
points that were using GFP_KERNEL (these were dwarf_unwind_stack() and
dwarf_frame_alloc_regs()). Convert these over to GFP_ATOMIC and get
latencytop working with the DWARF unwinder.
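The change itself is mechanical; a sketch of the idea, with an
illustrative helper name:
#include <linux/slab.h>
#include <asm/dwarf.h>		/* struct dwarf_frame (illustrative include) */

static struct dwarf_frame *dwarf_frame_alloc(void)
{
	/*
	 * GFP_ATOMIC rather than GFP_KERNEL: this can be reached from
	 * atomic context via save_stack_trace_tsk(), so we must not sleep.
	 */
	return kzalloc(sizeof(struct dwarf_frame), GFP_ATOMIC);
}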
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Trying to figure out the best value for DWARF_ARCH_UNWIND_OFFSET is
tricky at best. Various things can change the size (and offset from the
beginning of the function) of the prologue. Notably, turning on ftrace
adds calls to mcount at the beginning of functions, thereby pushing the
prologue further into the function.
So replace DWARF_ARCH_UNWIND_OFFSET with some code that continues to
execute CFA instructions until the value of the return address
register is defined. This is safe to do because we know that the
return address must
have been pushed onto the frame before our first function call; we just
can't figure out where at compile-time.
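In rough terms the replacement logic looks like this sketch; the helper
names are hypothetical and only meant to show the control flow:
#include <asm/dwarf.h>	/* struct dwarf_cie, struct dwarf_frame (illustrative) */

/* Hypothetical helpers standing in for the real CFA interpreter. */
extern int dwarf_ra_rule_defined(struct dwarf_frame *frame,
				 unsigned int ra_reg);
extern char *dwarf_execute_one_insn(char *insn, char *end,
				    struct dwarf_cie *cie,
				    struct dwarf_frame *frame);

static void dwarf_execute_until_ra(char *insn, char *end,
				   struct dwarf_cie *cie,
				   struct dwarf_frame *frame,
				   unsigned int ra_reg)
{
	/*
	 * No fixed DWARF_ARCH_UNWIND_OFFSET: keep interpreting CFA
	 * instructions until a rule for the return address register
	 * shows up, which must happen before the first call site.
	 */
	while (insn < end && !dwarf_ra_rule_defined(frame, ra_reg))
		insn = dwarf_execute_one_insn(insn, end, cie, frame);
}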
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The destination address might be unaligned, so set it with
put_unaligned() for safety. This restores the previous behaviour, albeit
through the proper API.
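For example, something along these lines (write_insn() is just an
illustrative wrapper):
#include <linux/types.h>
#include <asm/unaligned.h>

static void write_insn(void *dst, u32 insn)
{
	/* dst may not be 4-byte aligned, so don't store through a plain u32 *. */
	put_unaligned(insn, (u32 *)dst);
}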
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This was using internal symbols for unaligned accesses, bypassing the
exposed interface for variable sized safe accesses. This converts all of
the __get_unaligned_cpuXX() users over to get_unaligned() directly,
relying on the cast to select the proper internal routine.
Additionally, the __put_unaligned_cpuXX() case is superfluous given that
the destination address is aligned in all of the current cases, so just
drop that outright.
Furthermore, this switches to the asm/unaligned.h header instead of the
asm-generic version, which was silently bypassing the SH-4A optimized
unaligned ops.
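The conversion boils down to this sort of pattern (read_insn() is an
illustrative wrapper); get_unaligned() picks the right-sized internal
routine from the pointer type:
#include <linux/types.h>
#include <asm/unaligned.h>	/* not asm-generic/unaligned.h */

static u32 read_insn(const void *src)
{
	/* Was __get_unaligned_cpu32(src), bypassing the public interface. */
	return get_unaligned((const u32 *)src);
}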
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Annotate various assembly code paths with CFI assembler directives so
that DWARF unwind info is available for the unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In order to use DWARF unwinder info the frame register has to contain a
valid value. Whilst GCC takes care of this for C code, we have to do it
ourselves for assembly.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is a first cut at a generic DWARF unwinder for the kernel. It's
still lacking DWARF64 support and the DWARF expression support hasn't
been tested very well but it is generating proper stacktraces on SH for
WARN_ON() and NULL dereferences.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Instead of implementing our own stack unwinder via dump_trace() we
should use the new stack unwinder API because it is more modular. This
change allows us to decouple the interface for generating stacktraces
from the implementation of a stack unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Provide an interface for registering stack unwinders, where each
unwinder is given a rating that describes its accuracy and
complexity. The more accurate an unwinder is, the more complex it is.
If the current stack unwinder faults, then the stack unwinder with the
next highest accuracy will be used in its place (provided one is
available). For example, this allows unwinders, such as the DWARF
unwinder, to liberally sprinkle BUG()s to catch badly formed DWARF debug
info.
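The registration interface is roughly of this shape; the exact field
and function names should be treated as a sketch:
#include <linux/list.h>

struct task_struct;
struct pt_regs;
struct stacktrace_ops;

struct unwinder {
	const char *name;
	struct list_head list;
	int rating;	/* higher rating == more accurate (and more complex) */
	void (*dump)(struct task_struct *task, struct pt_regs *regs,
		     unsigned long *sp, const struct stacktrace_ops *ops,
		     void *data);
};

int unwinder_register(struct unwinder *u);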
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Copy the stacktrace ops code from x86 and provide a central function for
use by functions that need to dump a callstack.
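For reference, the ops structure copied over has roughly this shape
(modelled on the x86 layout of the time; treat the exact members as a
sketch):
struct stacktrace_ops {
	void (*warning)(void *data, char *msg);
	void (*warning_symbol)(void *data, char *msg, unsigned long symbol);
	void (*stack)(void *data, char *name);
	void (*address)(void *data, unsigned long addr, int reliable);
};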
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the processor platform device setup functions from
__initcall() and sometimes device_initcall() to arch_initcall().
This makes sure that the platform devices are registered a bit
earlier, so the devices are available when drivers register using
initcall levels earlier than device_initcall().
A good example is the platform devices needed by i2c-sh_mobile.c,
which registers a bit earlier using subsys_initcall().
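In other words, something like the following sketch (the setup
function and device names are only illustrative):
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>

static struct platform_device iic0_device = {
	.name	= "i2c-sh_mobile",
	.id	= 0,
};

static struct platform_device *sh7724_devices[] __initdata = {
	&iic0_device,
};

static int __init sh7724_devices_setup(void)
{
	return platform_add_devices(sh7724_devices,
				    ARRAY_SIZE(sh7724_devices));
}
arch_initcall(sh7724_devices_setup);	/* was device_initcall() */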
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch removes the unused MSTPCRn register definitions
from the SuperH Mobile code for sh7722, sh7723 and sh7724.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds early printk support for SH770x (tested on SH7709 based hp6xx).
Signed-off-by: Rafael Ignacio Zurita <rizurita@yahoo.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This cleans up the irqflags tracing code quite a bit and ties it
in to various missing callsites that caused an imbalance when
CONFIG_PROVE_LOCKING was enabled.
Previously this was catching on:
987 #ifdef CONFIG_PROVE_LOCKING
988 DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
989 DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
990 #endif
991 retval = -EAGAIN;
with hardirqs being doubly enabled, and subsequently bailing out
with the following call trace:
Call trace:
[<88035224>] __lock_acquire+0x616/0x6a6
[<88015a8c>] do_fork+0xf8/0x2b0
[<880331ec>] trace_hardirqs_on_caller+0xd4/0x114
[<88241074>] _spin_unlock_irq+0x20/0x64
[<88035224>] __lock_acquire+0x616/0x6a6
[<8800386c>] kernel_thread+0x48/0x70
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88003894>] kernel_thread_helper+0x0/0x14
[<88024bac>] __call_usermodehelper+0x38/0x70
[<88025dc0>] worker_thread+0x150/0x274
[<88035b9c>] lock_release+0x0/0x198
[<88024b74>] __call_usermodehelper+0x0/0x70
[<88028cf0>] autoremove_wake_function+0x0/0x30
[<88028bf2>] kthread+0x3e/0x70
[<88025c70>] worker_thread+0x0/0x274
[<8800389c>] kernel_thread_helper+0x8/0x14
[<88028bb4>] kthread+0x0/0x70
[<88003894>] kernel_thread_helper+0x0/0x14
Reported-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reverts commit 1d29ebebcb.
Bumping up the earlytimer initialization causes IRQs to be enabled too
early, which blows up lockdep:
...
NR_IRQS:256 nr_irqs:256
------------[ cut here ]------------
Badness at kernel/lockdep.c:2128
Pid : 0, Comm: swapper
CPU : 0 Not tainted (2.6.31-rc3-00205-g3ed6e12-dirty #2443)
PC is at trace_hardirqs_on_caller+0x48/0x10c
PR is at trace_hardirqs_on_caller+0x3c/0x10c
...
Revert it back to late_time_init time, which fixes up lockdep.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the processor platform device setup functions from
__initcall() and sometimes device_initcall() to arch_initcall().
This makes sure that the platform devices are registered a bit
earlier, so the devices are available when drivers register using
initcall levels earlier than device_initcall().
A good example is the platform devices needed by i2c-sh_mobile.c,
which registers a bit earlier using subsys_initcall().
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the m66592-udc driver to use the on_chip flag from platform
data to enable on-chip behaviour instead of relying on
CONFIG_SUPERH_BUILT_IN_M66592 ugliness.
This makes the code cleaner and also allows us to support both
external and internal m66592 controllers with the same kernel.
It also makes the Kconfig part more future proof, since with this
patch we can add support for new processors with an on-chip m66592
without modifying the Kconfig.
The patch adds an m66592 header file for platform data and ties in
platform data to the existing m66592 devices.
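A board or CPU setup would then describe the controller along these
lines (the header path and structure name follow the description above
but should be treated as a sketch):
#include <linux/platform_device.h>
#include <linux/usb/m66592.h>

static struct m66592_platdata usbf_platdata = {
	.on_chip = 1,	/* internal controller; 0 for an external part */
};

static struct platform_device usbf_device = {
	.name	= "m66592_udc",
	.id	= 0,
	.dev	= {
		.platform_data	= &usbf_platdata,
	},
};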
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the r8a66597-hcd driver to use the on_chip flag from platform
data to enable on-chip behaviour instead of relying on
CONFIG_SUPERH_ON_CHIP_R8A66597 ugliness.
This makes the code cleaner and also allows us to support both
external and internal r8a66597 controllers with the same kernel.
It also makes the Kconfig part more future proof, since with this
patch we can add support for new processors with an on-chip r8a66597
without modifying the Kconfig.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Extend the SuperH hwblk code to support more than one counter.
Contains groundwork for the future Runtime PM implementation.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The function prototype for mcount is not defined if we are not building
with ftrace support enabled, so use DECLARE_EXPORT() to stub one in.
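The stub amounts to something like the following sketch (the macro
shown here is a local approximation of what DECLARE_EXPORT() does in
the arch's ksyms code):
#include <linux/module.h>

#define DECLARE_EXPORT(name)	extern void name(void); EXPORT_SYMBOL(name)

/* mcount has no C prototype without ftrace, so declare and export it here. */
DECLARE_EXPORT(mcount);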
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
STACK_DEBUG ties in to mcount in order to do function-granular stack
overflow checks as opposed to lazily checking from IRQ context. As the
default is nohz, the frequency of overflow checking is too irregular to
catch much useful information, and so the mcount approach employed by
sparc64 is adopted instead.
This kills off the old check entirely from the do_IRQ() path and now
adopts CONFIG_MCOUNT instead.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a general CONFIG_MCOUNT in order to permit mcount generation
without ftrace support. This is primarily for allowing platforms to
enable aggressive stack overflow checking without having to enable ftrace
support. Based on the sparc64 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Annotate __switch_to() so that the function graph tracer does not try to
trace it. Use __notrace_funcgraph, as opposed to notrace, so that other
tracers can continue to trace __switch_to().
The reason that we don't want to trace __switch_to() with the function
graph tracer is because of how the return address stack in task_struct
is implemented. When we enter __switch_to we store the real return
address on prev's ret_stack. When we return from __switch_to() we've
patched the return address on the kernel stack to be
return_to_handler. When return_to_handler is called, we go through,
-> ftrace_return_to_handler()
-> ftrace_pop_return_trace()
which tries to pop the real return address from current->ret_stack. The
problem is that we stored the return address on prev->ret_stack, but
current now points to next, and next->ret_stack doesn't contain the
correct return address (and is possibly even empty).
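The annotation itself is a one-liner on the definition; roughly (with
the body elided):
#include <linux/ftrace.h>	/* __notrace_funcgraph */
#include <linux/sched.h>

__notrace_funcgraph struct task_struct *
__switch_to(struct task_struct *prev, struct task_struct *next)
{
	/* ... the usual register/thread-state switch, elided ... */
	return prev;
}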
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add both dynamic and static function graph tracer support for sh.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Enable kernel stack checking code in both the dynamic ftrace and mcount
code paths. Check the stack to see if it's overflowing and make sure
that the stack pointer contains an address that's either in init_stack
or after the bss.
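Expressed in C, the check is essentially the following sketch (the
real test lives in the mcount assembly, and the end-of-bss symbol name
is an assumption):
#include <linux/sched.h>	/* init_thread_union */
#include <asm/thread_info.h>	/* THREAD_SIZE */

extern char _ebss[];	/* assumed end-of-bss marker from the linker script */

static inline int stack_pointer_sane(unsigned long sp)
{
	unsigned long init_lo = (unsigned long)&init_thread_union;
	unsigned long init_hi = init_lo + THREAD_SIZE;

	/* Valid if we're on the init stack, or anywhere above the bss. */
	return (sp >= init_lo && sp < init_hi) ||
	       sp >= (unsigned long)_ebss;
}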
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch converts the sh architecture to use the new linker script
macros in include/asm-generic/vmlinux.lds.h.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that I've added TIF_SYSCALL_FTRACE the thread flags do not fit into
a single byte any more. Code testing them now needs to be aware of the
upper and lower bytes.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds cpuidle support for SuperH Mobile.
The sleep mode selected by cpuidle is compared with the mode selected
by the hwblk sleep code, and the best allowed mode is entered.
At this point "Sleep mode" and "Sleep mode + SF" are supported. This
code can easily be extended to support "Software suspend mode", but
the assembly code must first be updated to avoid losing interrupts.
Also, update the code to only copy the assembly snippet into internal
memory once at bootup.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>