Commit Graph

170 Commits

Author SHA1 Message Date
Josh Poimboeuf 071c44e427 sched/idle: Mark arch_cpu_idle_dead() __noreturn
Before commit 076cbf5d2163 ("x86/xen: don't let xen_pv_play_dead()
return"), in Xen, when a previously offlined CPU was brought back
online, it unexpectedly resumed execution where it left off in the
middle of the idle loop.

There were some hacks to make that work, but the behavior was surprising
as do_idle() doesn't expect an offlined CPU to return from the dead (in
arch_cpu_idle_dead()).

Now that Xen has been fixed, and the arch-specific implementations of
arch_cpu_idle_dead() also don't return, give it a __noreturn attribute.

This will cause the compiler to complain if an arch-specific
implementation might return.  It also improves code generation for both
caller and callee.

Also fixes the following warning:

  vmlinux.o: warning: objtool: do_idle+0x25f: unreachable instruction
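
As a rough illustration of the attribute (a hedged sketch, not the
kernel's actual declaration; the macro and function name below are made
up):

/* __noreturn tells the compiler this function never returns, so an
 * implementation that might fall through is flagged and callers can
 * drop the dead return path. */
#define __noreturn __attribute__((__noreturn__))

static __noreturn void play_dead_sketch(void)
{
	for (;;)
		;	/* park the CPU forever; never return */
}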

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/60d527353da8c99d4cf13b6473131d46719ed16d.1676358308.git.jpoimboe@kernel.org
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
2023-03-08 08:44:28 -08:00
Jason A. Donenfeld 8032bf1233 treewide: use get_random_u32_below() instead of deprecated function
This is a simple mechanical transformation done by:

@@
expression E;
@@
- prandom_u32_max
+ get_random_u32_below
  (E)

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Reviewed-by: SeongJae Park <sj@kernel.org> # for damon
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-11-18 02:15:15 +01:00
Jason A. Donenfeld 81895a65ec treewide: use prandom_u32_max() when possible, part 1
Rather than incurring a division or requesting too many random bytes for
the given range, use the prandom_u32_max() function, which only takes
the minimum required bytes from the RNG and avoids divisions. This was
done mechanically with this coccinelle script:

@basic@
expression E;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
typedef u64;
@@
(
- ((T)get_random_u32() % (E))
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ((E) - 1))
+ prandom_u32_max(E * XXX_MAKE_SURE_E_IS_POW2)
|
- ((u64)(E) * get_random_u32() >> 32)
+ prandom_u32_max(E)
|
- ((T)get_random_u32() & ~PAGE_MASK)
+ prandom_u32_max(PAGE_SIZE)
)

@multi_line@
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
identifier RAND;
expression E;
@@

-       RAND = get_random_u32();
        ... when != RAND
-       RAND %= (E);
+       RAND = prandom_u32_max(E);

// Find a potential literal
@literal_mask@
expression LITERAL;
type T;
identifier get_random_u32 =~ "get_random_int|prandom_u32|get_random_u32";
position p;
@@

        ((T)get_random_u32()@p & (LITERAL))

// Add one to the literal.
@script:python add_one@
literal << literal_mask.LITERAL;
RESULT;
@@

value = None
if literal.startswith('0x'):
        value = int(literal, 16)
elif literal[0] in '123456789':
        value = int(literal, 10)
if value is None:
        print("I don't know how to handle %s" % (literal))
        cocci.include_match(False)
elif value == 2**32 - 1 or value == 2**31 - 1 or value == 2**24 - 1 or value == 2**16 - 1 or value == 2**8 - 1:
        print("Skipping 0x%x for cleanup elsewhere" % (value))
        cocci.include_match(False)
elif value & (value + 1) != 0:
        print("Skipping 0x%x because it's not a power of two minus one" % (value))
        cocci.include_match(False)
elif literal.startswith('0x'):
        coccinelle.RESULT = cocci.make_expr("0x%x" % (value + 1))
else:
        coccinelle.RESULT = cocci.make_expr("%d" % (value + 1))

// Replace the literal mask with the calculated result.
@plus_one@
expression literal_mask.LITERAL;
position literal_mask.p;
expression add_one.RESULT;
identifier FUNC;
@@

-       (FUNC()@p & (LITERAL))
+       prandom_u32_max(RESULT)

@collapse_ret@
type T;
identifier VAR;
expression E;
@@

 {
-       T VAR;
-       VAR = (E);
-       return VAR;
+       return E;
 }

@drop_var@
type T;
identifier VAR;
@@

 {
-       T VAR;
        ... when != VAR
 }
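
For reference, the `((u64)(E) * get_random_u32() >> 32)` pattern matched
above maps a full-range 32-bit random value into [0, E) with a multiply
and shift instead of a division. A small userspace sketch of the same
idiom (rand() here is only a stand-in for the kernel RNG):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Scale a full-range 32-bit random value down to [0, bound). */
static uint32_t bounded_random(uint32_t rnd, uint32_t bound)
{
	return (uint32_t)(((uint64_t)bound * rnd) >> 32);
}

int main(void)
{
	uint32_t rnd = ((uint32_t)rand() << 16) ^ (uint32_t)rand();

	printf("%u\n", bounded_random(rnd, 10));	/* always 0..9 */
	return EXIT_SUCCESS;
}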

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: KP Singh <kpsingh@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz> # for ext4 and sbitmap
Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> # for drbd
Acked-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com> # for s390
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc
Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-10-11 17:42:55 -06:00
Eric W. Biederman 5bd2e97c86 fork: Generalize PF_IO_WORKER handling
Add fn and fn_arg members into struct kernel_clone_args and test for
them in copy_thread (instead of testing for PF_KTHREAD | PF_IO_WORKER).
This allows any task that wants to be a user space task that only runs
in kernel mode to use this functionality.

The code on x86 is an exception and still retains a PF_KTHREAD test
because x86, unlike everything else, handles kthreads slightly
differently than user space tasks that start with a function.

The functions that created tasks that start with a function
have been updated to set ".fn" and ".fn_arg" instead of
".stack" and ".stack_size".  These functions are fork_idle(),
create_io_thread(), kernel_thread(), and user_mode_thread().

Link: https://lkml.kernel.org/r/20220506141512.516114-4-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-07 09:01:59 -05:00
Eric W. Biederman c5febea095 fork: Pass struct kernel_clone_args into copy_thread
With io_uring we have started supporting tasks that are for most
purposes user space tasks that exclusively run code in kernel mode.

The kernel task that exec's init and tasks that exec user mode
helpers are also user mode tasks that just run kernel code
until they call kernel execve.

Pass kernel_clone_args into copy_thread so these oddball
tasks can be supported more cleanly and easily.

v2: Fix spelling of kenrel_clone_args on h8300
Link: https://lkml.kernel.org/r/20220506141512.516114-2-ebiederm@xmission.com
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2022-05-07 09:01:48 -05:00
Thomas Bogendoerfer 455481fc9a MIPS: Remove TX39XX support
No (active) developer owns this hardware, so let's remove Linux support.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
2022-03-01 10:07:22 +01:00
Kees Cook 42a20f86dc sched: Add wrapper for get_wchan() to keep task blocked
Having a stable wchan means the process must be blocked, and must stay
that way while stack unwinding is performed.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> [arm]
Tested-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Link: https://lkml.kernel.org/r/20211008111626.332092234@infradead.org
2021-10-15 11:25:14 +02:00
Sebastian Andrzej Siewior 730d070ae9 MIPS: Replace deprecated CPU-hotplug functions.
The functions get_online_cpus() and put_online_cpus() have been
deprecated during the CPU hotplug rework. They map directly to
cpus_read_lock() and cpus_read_unlock().

Replace deprecated CPU-hotplug functions with the official version.
The behavior remains unchanged.

Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-08-05 10:57:01 +02:00
Peter Zijlstra b03fbd4ff2 sched: Introduce task_is_running()
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
task_is_running(p).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
2021-06-18 11:43:07 +02:00
Thomas Bogendoerfer 04324f44cb MIPS: Remove get_fs/set_fs
All get_fs/set_fs calls in MIPS code are gone, so remove implementation
of it.  With the clear separation of user/kernel space access we no
longer need the EVA special handling, so get rid of that, too.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2021-04-06 15:12:58 +02:00
Jens Axboe 4727dc20e0 arch: setup PF_IO_WORKER threads like PF_KTHREAD
PF_IO_WORKER threads are kernel threads too, but they aren't PF_KTHREAD in the
sense that we don't assign ->set_child_tid with our own structure. Just
ensure that every arch sets up the PF_IO_WORKER threads like kthreads
in the arch implementation of copy_thread().

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-02-21 17:25:22 -07:00
Thomas Bogendoerfer ea4a1ea4c8 Revert "MIPS: microMIPS: Fix the judgment of mm_jr16_op and mm_jalr_op"
This reverts commit 9308579fef.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-02-09 17:12:34 +01:00
Jinyang He 9f0781bac9 MIPS: process: Fix no previous prototype warning
unwind_stack_by_address and unwind_stack need <asm/stacktrace.h>.
arch_align_stack needs <asm/exec.h>.

Link: https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/ZPL2RRA6RZKRQZI5IGOVLFXN2GVZBN3L/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-02-09 11:16:02 +01:00
Jinyang He 50886234e8 MIPS: Add is_jr_ra_ins() to end the loop early
Leaf functions are likely to have no stack operations. Add
is_jr_ra_ins() to determine whether a jr ra instruction has been reached
before the frame_size is found. Without this patch, the frame_size
lookup may go out of range and pick up the frame_size of the next nested
function.

There is no POOL32A format in uapi/asm/inst.h, so some bits here use the
format of r_format instead.
e.g.
---------------------------------------------------------------------
|    format      |  31:26  | 25:21 | 20:16 |    15:6    |    5:0    |
-----------------+---------+-------+-------+------------+------------
| pool32a_format | pool32a |  rt   |  rs   |   jalrc    | pool32axf |
-----------------+---------+-------+-------+------------+------------
|    r_format    |  opcode |  rs   |  rt   | rd:5, re:5 |    func   |
---------------------------------------------------------------------
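
As a hedged sketch of the early-exit idea (classic MIPS32 encoding only;
the real helper must also recognise the microMIPS forms laid out above,
and the function name here is illustrative):

/* "jr ra" in classic MIPS32: opcode SPECIAL, rs = $31 (ra), funct = jr.
 * Hitting it before any stack adjustment means the function is a leaf,
 * so the prologue scan can stop instead of running into the next
 * function's prologue. */
static int looks_like_jr_ra(unsigned int insn)
{
	return insn == 0x03e00008;
}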

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-01-25 12:21:23 +01:00
Jinyang He 2d62f64bcc MIPS: Fix get_frame_info() handling of function size
[1]: Commit b6c7a324df ("MIPS: Fix get_frame_info() handling of
                            microMIPS function size")
[2]: Commit 2b424cfc69 ("MIPS: Remove function size check in
                            get_frame_info()")

The first patch added a constant to check the number of iterations
against. The second patch handled the case where info->func_size is
zero.

However, the func_size member became useless after the second commit.
Without ip_end, the frame_size lookup may go out of range even with
KALLSYMS enabled. Thus, check func_size first. Then make ip_end the sum
of ip and a constant (512) if func_size is equal to 0. Otherwise make
ip_end the sum of ip and func_size.
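
A minimal sketch of the bound described above (illustrative names; 512
is the fallback constant mentioned in the text):

/* Scan from ip up to ip_end: the known function size when KALLSYMS
 * provides one, otherwise a fixed window of 512 (the constant from the
 * changelog). */
static unsigned long scan_end(unsigned long ip, unsigned long func_size)
{
	return ip + (func_size ? func_size : 512);
}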

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-01-25 12:21:23 +01:00
Jinyang He 9308579fef MIPS: microMIPS: Fix the judgment of mm_jr16_op and mm_jalr_op
mm16_r5_format.rt is 5 bits wide, so its value can be compared directly.
mm_jalr_op occupies bits 7 to 16. These 10 bits, obtained by shifting
u_format.uimmediate by 6, may be affected by sign extension. Thus,
extract just those 10 bits for the comparison.

Without this patch, errors may occur, for example when these bits are
all ones.
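
A small userspace illustration of the sign-extension pitfall (the field
width, values and opcode below are made up; only the shift-then-compare
problem mirrors the real decoder):

#include <stdio.h>

int main(void)
{
	/* A 16-bit encoding with its top bit set, held in a signed type
	 * the way a decoder union bitfield might hold it. */
	short uimmediate = (short)0xbc0f;
	unsigned int op = 0x2f0;	/* made-up 10-bit opcode value */

	/* Shifting the sign-extended value keeps the high ones, so a
	 * direct comparison against a 10-bit opcode never matches... */
	printf("unmasked: %#x\n", (unsigned int)(uimmediate >> 6));
	/* ...whereas masking down to the 10 bits of interest does. */
	printf("masked:   %#x (matches %#x)\n",
	       (unsigned int)(uimmediate >> 6) & 0x3ff, op);
	return 0;
}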

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-01-25 12:21:22 +01:00
Jinyang He fa85d6ac2c MIPS: process: Remove unnecessary headers inclusion
Some headers are not necessary, remove them and sort includes.

Signed-off-by: Jinyang He <hejinyang@loongson.cn>
Reviewed-by: Huacai Chen <chenhuacai@kernel.org>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2021-01-25 12:21:22 +01:00
Peter Zijlstra 545b8c8df4 smp: Cleanup smp_call_function*()
Get rid of the __call_single_node union and cleanup the API a little
to avoid external code relying on the structure layout as much.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
2020-11-24 16:47:49 +01:00
Pujin Shi 047248cab1 MIPS: process: include exec.h header in process.c
arch/mips/kernel/process.c:696:15: error: no previous prototype for 'arch_align_stack' [-Werror=missing-prototypes]

Signed-off-by: Pujin Shi <shipujin.t@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-09-30 21:53:00 +02:00
Huacai Chen bc1c969f11 MIPS: Loongson-3: Calculate ra properly when unwinding the stack
Loongson-3 has 16-byte load/store instructions: gslq and gssq. This
patch calculates ra properly when unwinding the stack, if ra is saved
by gssq and restored by gslq.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-09-21 22:15:22 +02:00
Huacai Chen 195615ecc8 MIPS: Loongson-3: Enable COP2 usage in kernel
Loongson-3's COP2 is its Multi-Media coprocessor, which is disabled in
kernel mode by default. However, gslq/gssq (16-byte load/store
instructions) override the instruction format of lwc2/swc2. If we want
to use gslq/gssq for optimization in the kernel, we should enable COP2
usage in kernel mode.

Please pay attention that in this patch we only enable COP2 in the
kernel, which means a process will lose ST0_CU2 when it goes to user
space (trying to use COP2 in user space will trigger an exception and
then grab COP2, similar to the FPU). As a result, we need to modify the
context switching code, because the newly scheduled process probably
doesn't have ST0_CU2 set in its THREAD_STATUS.

For zboot, we prevent the toolchain from generating gslq/gssq.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-09-21 22:15:03 +02:00
Christian Brauner 714acdbd1c
arch: rename copy_thread_tls() back to copy_thread()
Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
back to simply copy_thread(). It's a simpler name, and doesn't imply that only
tls is copied here. This finishes an outstanding chunk of internal process
creation work since we've added clone3().

Cc: linux-arch@vger.kernel.org
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Greentime Hu <green.hu@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
2020-07-04 23:41:37 +02:00
Mike Rapoport e31cf2f4ca mm: don't include asm/pgtable.h if linux/mm.h is already included
Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once.  For
instance, we have 31 definition of pgd_offset() for 25 supported
architectures.

Most of these definitions are actually identical and typically it boils
down to, e.g.

static inline unsigned long pmd_index(unsigned long address)
{
        return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
        return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}

These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

For architectures that really need a custom version there is always
possibility to override the generic version with the usual ifdefs magic.

These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g.  pte_alloc() and
pmd_alloc().  So, there is no point to explicitly include <asm/pgtable.h>
in the files that include <linux/mm.h>.

The include statements in such cases are removed with a simple loop:

	for f in $(git grep -l "include <linux/mm.h>") ; do
		sed -i -e '/include <asm\/pgtable.h>/ d' $f
	done

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-09 09:39:13 -07:00
Yousong Zhou aebdc6ff3b MIPS: Exclude more dsemul code when CONFIG_MIPS_FP_SUPPORT=n
This furthers what commit 42b10815d5 ("MIPS: Don't compile math-emu
when CONFIG_MIPS_FP_SUPPORT=n") has done.

Signed-off-by: Yousong Zhou <yszhou4tech@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
2020-03-25 16:07:14 +01:00
Jun-Ru Chang 2b424cfc69
MIPS: Remove function size check in get_frame_info()
Commit b6c7a324df ("MIPS: Fix get_frame_info() handling of
microMIPS function size.") introduced an additional function size
check for microMIPS by only checking insns between ip and ip + func_size.
However, func_size in get_frame_info() is always 0 if KALLSYMS is not
enabled. This causes get_frame_info() to return immediately without
calculating the correct frame_size, which in turn causes "Can't analyze
schedule() prologue" warning messages at boot time.

This patch removes the func_size check and lets the frame_size check run
up to 128 insns for both MIPS and microMIPS.

Signed-off-by: Jun-Ru Chang <jrjang@realtek.com>
Signed-off-by: Tony Wu <tonywu@realtek.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: b6c7a324df ("MIPS: Fix get_frame_info() handling of microMIPS function size.")
Cc: <ralf@linux-mips.org>
Cc: <jhogan@kernel.org>
Cc: <macro@mips.com>
Cc: <yamada.masahiro@socionext.com>
Cc: <peterz@infradead.org>
Cc: <mingo@kernel.org>
Cc: <linux-mips@vger.kernel.org>
Cc: <linux-kernel@vger.kernel.org>
2019-02-04 15:15:34 -08:00
Paul Burton 41e486f4f6
MIPS: Remove struct mm_context_t fp_mode_switching field
The fp_mode_switching field in struct mm_context_t was left unused by
commit 8c8d953c28 ("MIPS: Schedule on CPUs we need to lose FPU for a
mode switch") in v4.19, with nothing modifying its value & nothing
waiting on it having any particular value after that commit. Remove the
unused field & the one remaining reference to it.

Signed-off-by: Paul Burton <paul.burton@mips.com>
2018-12-17 22:05:40 -08:00
Paul Burton ea7e0480a4
MIPS: VDSO: Always map near top of user memory
When using the legacy mmap layout, for example triggered using ulimit -s
unlimited, get_unmapped_area() fills memory from bottom to top starting
from a fairly low address near TASK_UNMAPPED_BASE.

This placement is suboptimal if the user application wishes to allocate
large amounts of heap memory using the brk syscall. With the VDSO being
located low in the user's virtual address space, the amount of space
available for access using brk is limited much more than it was prior to
the introduction of the VDSO.

For example:

  # ulimit -s unlimited; cat /proc/self/maps
  00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
  004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
  004fd000-0050f000 rwxp 00000000 00:00 0
  00cc3000-00ce4000 rwxp 00000000 00:00 0          [heap]
  2ab96000-2ab98000 r--p 00000000 00:00 0          [vvar]
  2ab98000-2ab99000 r-xp 00000000 00:00 0          [vdso]
  2ab99000-2ab9d000 rwxp 00000000 00:00 0
  ...

Resolve this by adjusting STACK_TOP to reserve space for the VDSO &
providing an address hint to get_unmapped_area() causing it to use this
space even when using the legacy mmap layout.

We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64
bit systems respectively within which we randomize the VDSO base
address. Previously this randomization was taken care of by the mmap
base address randomization performed by arch_mmap_rnd(). The 1MB & 256MB
sizes are somewhat arbitrary but chosen such that we have some
randomization without taking up too much of the user's virtual address
space, which is often in short supply for 32 bit systems.
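
A hedged sketch of the placement rule (illustrative names and arithmetic
only, ignoring page alignment; in practice the random offset comes from
the kernel RNG):

/* Place the VDSO just below the top of the task's address space,
 * shifted down by a random offset within the window (1MB on 32 bit,
 * 256MB on 64 bit, as described above). */
static unsigned long vdso_base_hint(unsigned long task_size,
				    unsigned long vdso_size,
				    unsigned long window,
				    unsigned long random)
{
	return task_size - vdso_size - (random % window);
}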

With this the VDSO is always mapped at a high address, leaving lots of
space for statically linked programs to make use of brk:

  # ulimit -s unlimited; cat /proc/self/maps
  00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
  004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
  004fd000-0050f000 rwxp 00000000 00:00 0
  00c28000-00c49000 rwxp 00000000 00:00 0          [heap]
  ...
  7f67c000-7f69d000 rwxp 00000000 00:00 0          [stack]
  7f7fc000-7f7fd000 rwxp 00000000 00:00 0
  7fcf1000-7fcf3000 r--p 00000000 00:00 0          [vvar]
  7fcf3000-7fcf4000 r-xp 00000000 00:00 0          [vdso]

Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Huacai Chen <chenhc@lemote.com>
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.4+
2018-09-28 12:09:00 -07:00
Linus Torvalds e5a32b5b21 Here are the main MIPS changes for 4.19.

Merge tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux

Pull MIPS updates from Paul Burton:
 "Here are the main MIPS changes for 4.19.

  An overview of the general architecture changes:

   - Massive DMA ops refactoring from Christoph Hellwig (huzzah for
     deleting crufty code!).

   - We introduce NT_MIPS_DSP & NT_MIPS_FP_MODE ELF notes &
     corresponding regsets to expose DSP ASE & floating point mode state
     respectively, both for live debugging & core dumps.

   - We better optimize our code by hard-coding cpu_has_* macros at
     compile time where their values are known due to the ISA revision
     that the kernel build is targeting.

   - The EJTAG exception handler now better handles SMP systems, where
     it was previously possible for CPUs to clobber a register value
     saved by another CPU.

   - Our implementation of memset() gained a couple of fixes for MIPSr6
     systems to return correct values in some cases where stores fault.

   - We now implement ioremap_wc() using the uncached-accelerated cache
     coherency attribute where supported, which is detected during boot,
     and fall back to plain uncached access where necessary. The
     MIPS-specific (and unused in tree) ioremap_uncached_accelerated() &
     ioremap_cacheable_cow() are removed.

   - The prctl(PR_SET_FP_MODE, ...) syscall is better supported for SMP
     systems by reworking the way we ensure remote CPUs that may be
     running threads within the affected process switch mode.

   - Systems using the MIPS Coherence Manager will now set the
     MIPS_IC_SNOOPS_REMOTE flag to avoid some unnecessary cache
     maintenance overhead when flushing the icache.

   - A few fixes were made for building with clang/LLVM, which now
     successfully builds kernels for many of our platforms.

   - Miscellaneous cleanups all over.

  And some platform-specific changes:

   - ar7 gained stubs for a few clock API functions to fix build
     failures for some drivers.

   - ath79 gained support for a few new SoCs, a few fixes & better
     gpio-keys support.

   - Ci20 now exposes its SPI bus using the spi-gpio driver.

   - The generic platform can now auto-detect a suitable value for
     PHYS_OFFSET based upon the memory map described by the device tree,
     allowing us to avoid wasting memory on page book-keeping for
     systems where RAM starts at a non-zero physical address.

   - Ingenic systems using the jz4740 platform code now link their
     vmlinuz higher to allow for kernels of a realistic size.

   - Loongson32 now builds the kernel targeting MIPSr1 rather than
     MIPSr2 to avoid CPU errata.

   - Loongson64 gains a couple of fixes, a workaround for a write
     buffering issue & support for the Loongson 3A R3.1 CPU.

   - Malta now uses the piix4-poweroff driver to handle powering down.

   - Microsemi Ocelot gained support for its SPI bus & NOR flash, its
     second MDIO bus and can now be supported by a FIT/.itb image.

   - Octeon saw a bunch of header cleanups which remove a lot of
     duplicate or unused code"

* tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (123 commits)
  MIPS: Remove remnants of UASM_ISA
  MIPS: netlogic: xlr: Remove erroneous check in nlm_fmn_send()
  MIPS: VDSO: Force link endianness
  MIPS: Always specify -EB or -EL when using clang
  MIPS: Use dins to simplify __write_64bit_c0_split()
  MIPS: Use read-write output operand in __write_64bit_c0_split()
  MIPS: Avoid using array as parameter to write_c0_kpgd()
  MIPS: vdso: Allow clang's --target flag in VDSO cflags
  MIPS: genvdso: Remove GOT checks
  MIPS: Remove obsolete MIPS checks for DST node "chosen@0"
  MIPS: generic: Remove input symbols from defconfig
  MIPS: Delete unused code in linux32.c
  MIPS: Remove unused sys_32_mmap2
  MIPS: Remove nabi_no_regargs
  mips: dts: mscc: enable spi and NOR flash support on ocelot PCB123
  mips: dts: mscc: Add spi on Ocelot
  MIPS: Loongson: Merge load addresses
  MIPS: Loongson: Set Loongson32 to MIPS32R1
  MIPS: mscc: ocelot: add interrupt controller properties to GPIO controller
  MIPS: generic: Select MIPS_AUTO_PFN_OFFSET
  ...
2018-08-13 19:24:32 -07:00
Paul Burton b63e132b64
MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()
The current MIPS implementation of arch_trigger_cpumask_backtrace() is
broken because it attempts to use synchronous IPIs despite the fact that
it may be run with interrupts disabled.

This means that when arch_trigger_cpumask_backtrace() is invoked, for
example by the RCU CPU stall watchdog, we may:

  - Deadlock due to use of synchronous IPIs with interrupts disabled,
    causing the CPU that's attempting to generate the backtrace output
    to hang itself.

  - Not succeed in generating the desired output from remote CPUs.

  - Produce warnings about this from smp_call_function_many(), for
    example:

    [42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks:
    [42760.535755]  0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0
    [42760.547874]  1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0
    [42760.559869]  (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33)
    [42760.568927] ------------[ cut here ]------------
    [42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c
    [42760.587839] Modules linked in:
    [42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2
    [42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8
    [42760.616937]         95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000
    [42760.630095]         00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0
    [42760.643169]         00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0
    [42760.656282]         8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca
    [42760.669424]         ...
    [42760.673919] Call Trace:
    [42760.678672] [<27fde568>] show_stack+0x70/0xf0
    [42760.685417] [<84751641>] dump_stack+0xaa/0xd0
    [42760.692188] [<699d671c>] __warn+0x80/0x92
    [42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36
    [42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c
    [42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a
    [42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98
    [42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac
    [42760.737476] [<059b3b43>] update_process_times+0x18/0x34
    [42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38
    [42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50
    [42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226
    [42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a
    [42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a
    [42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168
    [42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
    [42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86
    [42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14
    [42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
    [42760.820086] [<c7521934>] do_IRQ+0x16/0x20
    [42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94
    [42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78
    [42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c
    [42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c
    [42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98
    [42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44
    [42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e
    [42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66
    [42760.883907] [<b3fce717>] exit_mmap+0x76/0xea
    [42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a
    [42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c
    [42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e
    [42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e
    [42760.917045] [<46cf89d0>] smp_call_function_many+0x1a2/0x20c
    [42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c
    [42760.931765] ---[ end trace 02aa09da9dc52a60 ]---
    [42760.938342] ------------[ cut here ]------------
    [42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8
    ...

This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async
IPIs & smp_call_function_single_async() in order to resolve this
problem. We ensure use of the pre-allocated call_single_data_t
structures is serialized by maintaining a cpumask indicating that
they're busy, and refusing to attempt to send an IPI when a CPU's bit is
set in this mask. This should only happen if a CPU hasn't responded to a
previous backtrace IPI - ie. if it's hung - and we print a warning to
the console in this case.
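
A hedged sketch of that serialization (kernel-style pseudo-code; names
and the warning text are illustrative rather than the exact patch, and
the per-CPU CSD callback is assumed to be initialised elsewhere):

/* One pre-allocated CSD per CPU plus a mask of CPUs with a backtrace
 * IPI still in flight.  A CPU whose busy bit is already set never acked
 * the previous IPI, so skip it and warn instead of reusing its CSD. */
static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
static cpumask_t backtrace_csd_busy;

static void send_backtrace_ipi(int cpu)
{
	if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
		pr_warn("Unable to send backtrace IPI to CPU%d - maybe it hung?\n",
			cpu);
		return;
	}
	smp_call_function_single_async(cpu, &per_cpu(backtrace_csd, cpu));
}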

I've marked this for stable branches as far back as v4.9, to which it
applies cleanly. Strictly speaking the faulty MIPS implementation can be
traced further back to commit 856839b768 ("MIPS: Add
arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel
versions v3.19 through v4.8 will require further work to backport due to
the rework performed in commit 9a01c3ed5c ("nmi_backtrace: add more
trigger_*_cpu_backtrace() methods").

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19597/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Fixes: 856839b768 ("MIPS: Add arch_trigger_all_cpu_backtrace() function")
Fixes: 9a01c3ed5c ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")
2018-06-28 14:14:41 -07:00
Paul Burton 5a267832c2
MIPS: Call dump_stack() from show_regs()
The generic nmi_cpu_backtrace() function calls show_regs() when a struct
pt_regs is available, and dump_stack() otherwise. If we were to make use
of the generic nmi_cpu_backtrace() with MIPS' current implementation of
show_regs() this would mean that we see only register data with no
accompanying stack information, in contrast with our current
implementation which calls dump_stack() regardless of whether register
state is available.

In preparation for making use of the generic nmi_cpu_backtrace() to
implement arch_trigger_cpumask_backtrace(), have our implementation of
show_regs() call dump_stack() and drop the explicit dump_stack() call in
arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace().

This will allow the output we produce to remain the same after a later
patch switches to using nmi_cpu_backtrace(). It may mean that we produce
extra stack output in other uses of show_regs(), but this:

  1) Seems harmless.
  2) Is good for consistency between arch_trigger_cpumask_backtrace()
     and other users of show_regs().
  3) Matches the behaviour of the ARM & PowerPC architectures.

Marked for stable back to v4.9 as a prerequisite of the following patch
"MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()".

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19596/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
2018-06-28 11:48:54 -07:00
Paul Burton 8c8d953c28
MIPS: Schedule on CPUs we need to lose FPU for a mode switch
Commit 6b8322576e ("MIPS: Force CPUs to lose FP context during mode
switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
when necessary. Whilst it achieves that, unfortunately it causes all
sorts of strange race conditions because:

 1) The IPI may arrive at a point where the FPU is in the process of
    being enabled, but that process is not yet complete leading to a
    state we aren't prepared to handle. For example:

    [  370.215903] do_cpu invoked from kernel context![#1]:
    [  370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
    [  370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
    [  370.235399] $ 0   : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
    [  370.243882] $ 4   : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
    [  370.252317] $ 8   : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
    [  370.260753] $12   : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
    [  370.269179] $16   : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
    [  370.277612] $20   : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
    [  370.286038] $24   : 0000000000000000 900000001612048c
    [  370.294476] $28   : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
    [  370.302909] Hi    : 0000000000000000
    [  370.306595] Lo    : 0000000000000200
    [  370.310376] epc   : ffffffff80115d38 _save_fp+0x10/0xa0
    [  370.315784] ra    : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
    [  370.322707] Status: 140084e2 KX SX UX KERNEL EXL
    [  370.327980] Cause : 1080002c (ExcCode 0b)
    [  370.332091] PrId  : 0001a428 (MIPS P6600)
    [  370.336179] Modules linked in:
    [  370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
    [  370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
    [  370.358161]         0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
    [  370.366575]         ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
    [  370.374989]         ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
    [  370.383395]         0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
    [  370.391817]         000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
    [  370.400230]         a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
    [  370.408644]         ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
    [  370.417066]         ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
    [  370.425472]         ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
    [  370.433885]         ...
    [  370.436562] Call Trace:
    [  370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
    [  370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
    [  370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
    [  370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
    [  370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
    [  370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
    [  370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
    [  370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
    [  370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
    [  370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
    [  370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
    [  370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
    [  370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
    [  370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
    [  370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
    [  370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10

 2) The IPI may arrive during kernel use of the FPU, since we generally
    only disable preemption around use of the FPU & leave interrupts
    enabled. This can lead to us unexpectedly losing access to the FPU
    in places where it previously had not been possible. For example:

    do_cpu invoked from kernel context![#2]:
    CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G      D         4.7.0-00424-g49b0c82
    #2
    task: 838e4000 ti: 88d38000 task.ti: 88d38000
    $ 0   : 00000000 00000001 ffffffff 88d3fef8
    $ 4   : 838e4000 88d38004 00000000 00000001
    $ 8   : 3400fc01 801f8020 808e9100 24000000
    $12   : dbffffff 807b69d8 807b0000 00000000
    $16   : 00000000 80786150 00400fc4 809c0398
    $20   : 809c0338 0040273c 88d3ff28 808e9d30
    $24   : 808e9d30 00400fb4
    $28   : 88d38000 88d3fe88 00000000 8011a2ac
    Hi    : 0040273c
    Lo    : 88d3ff28
    epc   : 80114178 _restore_fp+0x10/0xa0
    ra    : 8011a2ac mipsr2_decoder+0xd5c/0x1660
    Status: 1400fc03    KERNEL EXL IE
    Cause : 1080002c (ExcCode 0b)
    PrId  : 0001a920 (MIPS I6400)
    Modules linked in:
    Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
    Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
          808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
          004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
          808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
          00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
          ...
    Call Trace:
    [<80114178>] _restore_fp+0x10/0xa0
    [<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
    [<8010de18>] do_ri+0x90/0x6b8
    [<80105c20>] ret_from_exception+0x0/0x10

At first glance a simple fix may seem to be to disable interrupts around
kernel use of the FPU rather than merely preemption, however this would
introduce further overhead outside of the mode switch path & doesn't
solve the third problem:

 3) The IPI may arrive whilst the kernel is running code that will lead
    to a preempt_disable() call & FPU usage soon. If this happens then
    the IPI will be serviced & we'll proceed to enable an FPU whilst the
    mode switch is in progress, leading to strange & inconsistent
    behaviour.

Further to all of this is a separate but related problem:

 4) There are various paths through which we may enable the FPU without
    the user having triggered a coprocessor 1 disabled exception. These
    paths are those in which we emulate instructions & then enable the
    FPU with the expectation that the user might execute an FP
    instruction shortly afterwards. However these paths have not
    previously checked whether an FP mode switch is underway for the
    task, and therefore could enable the FPU whilst such a mode switch
    is in progress leading to strange & inconsistent behaviour for user
    code.

This patch fixes all of the above by taking a step back & re-examining
our approach to FP mode switches. Up until now we have taken these basic
steps:

 a) Prevent any threads that are part of the affected process from being
    able to obtain ownership of the FPU.

 b) Cause any threads that are part of the affected process and already
    have ownership of an FPU to lose it.

 c) Set the thread flags for each thread that is part of the affected
    process to reflect the new FP mode.

 d) Allow threads to obtain ownership of the FPU again.

This approach is however more complex than necessary. All that we really
require is that the mode switch has occurred for all threads that are
part of the affected process before mips_set_process_fp_mode(), and thus
the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
we stop threads from owning or using an FPU whilst a mode switch occurs,
only that we force them to relinquish it after the mode switch has
occurred such that they next own an FPU with the correct mode
configured. Our basic steps therefore simplify to:

 A) Set the thread flags for each thread that is part of the affected
    process to reflect the new FP mode.

 B) Cause any threads that are part of the affected process and already
    have ownership of an FPU to lose it.

We implement B) by forcing each CPU which might be running a thread
which is part of the affected process to schedule a no-op function,
which causes the affected thread to lose its FPU ownership when it is
descheduled.
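
One plausible reading of step B), as a hedged sketch (illustrative
names; the actual patch may arrange the no-op differently):

/* Run a no-op on every CPU that might be executing a thread of the
 * affected process; the interrupted thread then relinquishes the FPU
 * when it is next descheduled and picks up the new mode afterwards. */
static void fp_mode_switch_noop(void *unused)
{
}

static void kick_process_cpus(const struct cpumask *process_cpus)
{
	smp_call_function_many(process_cpus, fp_mode_switch_noop, NULL, 1);
}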

The end result is simpler FP mode switching with less overhead in the
FPU enable path (ie. enable_restore_fp_context()) and fewer moving
parts.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 9791554b45 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Fixes: 6b8322576e ("MIPS: Force CPUs to lose FP context during mode switches")
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v4.0+
2018-06-24 09:27:27 -07:00
Linus Torvalds 050e9baa9d Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables
The changes to automatically test for working stack protector compiler
support in the Kconfig files removed the special STACKPROTECTOR_AUTO
option that picked the strongest stack protector that the compiler
supported.

That was all a nice cleanup - it makes no sense to have the AUTO case
now that the Kconfig phase can just determine the compiler support
directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong
stackprotector if you had AUTO enabled, because in a legacy config file,
the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes,
it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
used to be disabled (because it was really enabled by AUTO), and would
disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with
the weaker stack protector setup without even realizing.

The solution here is to just rename not just the old REGULAR stack
protector option, but also the strong one.  This does that by just
removing the CC_ prefix entirely for the user choices, because it really
is not about the compiler support (the compiler support now instead
automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their
choice, so that we don't have any silent subtle security model changes.
The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler
infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-06-14 12:21:18 +09:00
Maciej W. Rozycki 28e4213dd3
MIPS: prctl: Disallow FRE without FR with PR_SET_FP_MODE requests
Having PR_FP_MODE_FRE (i.e. Config5.FRE) set without PR_FP_MODE_FR (i.e.
Status.FR) is not supported as the lone purpose of Config5.FRE is to
emulate Status.FR=0 handling on FPU hardware that has Status.FR=1
hardwired[1][2].  Also we do not handle this case elsewhere, and assume
throughout our code that TIF_HYBRID_FPREGS and TIF_32BIT_FPREGS cannot
be set both at once for a task, leading to inconsistent behaviour if
this does happen.

Return unsuccessfully then from prctl(2) PR_SET_FP_MODE calls requesting
PR_FP_MODE_FRE to be set with PR_FP_MODE_FR clear.  This corresponds to
modes allowed by `mips_set_personality_fp'.

References:

[1] "MIPS Architecture For Programmers, Vol. III: MIPS32 / microMIPS32
    Privileged Resource Architecture", Imagination Technologies,
    Document Number: MD00090, Revision 6.02, July 10, 2015, Table 9.69
    "Config5 Register Field Descriptions", p. 262

[2] "MIPS Architecture For Programmers, Volume III: MIPS64 / microMIPS64
    Privileged Resource Architecture", Imagination Technologies,
    Document Number: MD00091, Revision 6.03, December 22, 2015, Table
    9.72 "Config5 Register Field Descriptions", p. 288

Fixes: 9791554b45 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/19327/
Signed-off-by: James Hogan <jhogan@kernel.org>
2018-05-24 13:45:13 +01:00
Peter Zijlstra 6887a56b6e sched/wait, arch/mips: Fix and convert wait_on_atomic_t() usage to the new wait_var_event() API
The old wait_on_atomic_t() is going to get removed, use the more
flexible wait_var_event() API instead.

And while there, fix a bug and add the missing wakeup...

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-03-20 08:23:23 +01:00
Maciej W. Rozycki b67336eee3 MIPS: Validate PR_SET_FP_MODE prctl(2) requests against the ABI of the task
Fix an API loophole introduced with commit 9791554b45 ("MIPS,prctl:
add PR_[GS]ET_FP_MODE prctl options for MIPS"), where the caller of
prctl(2) is incorrectly allowed to make a change to CP0.Status.FR or
CP0.Config5.FRE register bits even if CONFIG_MIPS_O32_FP64_SUPPORT has
not been enabled, despite that an executable requesting the mode
requested via ELF file annotation would not be allowed to run in the
first place, or for n64 and n64 ABI tasks which do not have non-default
modes defined at all.  Add suitable checks to `mips_set_process_fp_mode'
and bail out if an invalid mode change has been requested for the ABI in
effect, even if the FPU hardware or emulation would otherwise allow it.

Always succeed however without taking any further action if the mode
requested is the same as one already in effect, regardless of whether
any mode change, should it be requested, would actually be allowed for
the task concerned.
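
For reference, a userspace caller exercising this interface looks
roughly like the following hedged illustration (error handling trimmed):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_FP_MODE
# define PR_SET_FP_MODE  45
# define PR_GET_FP_MODE  46
# define PR_FP_MODE_FR   (1 << 0)	/* 64b FP registers */
# define PR_FP_MODE_FRE  (1 << 1)	/* 32b compatibility */
#endif

int main(void)
{
	int mode = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);

	printf("FR=%d FRE=%d\n",
	       !!(mode & PR_FP_MODE_FR), !!(mode & PR_FP_MODE_FRE));

	/* With this change, requesting a mode the task's ABI cannot use
	 * fails instead of silently flipping FR/FRE; re-requesting the
	 * current mode always succeeds. */
	if (prctl(PR_SET_FP_MODE, mode, 0, 0, 0) != 0)
		perror("PR_SET_FP_MODE");
	return 0;
}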

Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Fixes: 9791554b45 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Reviewed-by: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <james.hogan@mips.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/17800/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-12-20 11:55:43 +01:00
Tobias Klauser 508c5757a7 MIPS: make thread_saved_pc static
The only user of thread_saved_pc() in non-arch-specific code was removed
in commit 8243d55977 ("sched/core: Remove pointless printout in
sched_show_task()"), so it no longer needs to be globally defined for
MIPS and can be made static.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17303/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-10-09 14:53:56 +02:00
Matt Redfearn 56dfb7001a MIPS: Refactor handling of stack pointer in get_frame_info
Commit 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
added handling of microMIPS instructions to manipulate the stack
pointer. The code that was added violates code style rules with long
lines caused by lots of nested conditionals.

The added code interprets (inline) any known stack pointer manipulation
instruction to find the stack frame size. Handling the microMIPS cases
added quite a bit of complication to this function.

Refactor is_sp_move_ins to perform the interpretation of the immediate
as the instruction manipulating the stack pointer is found. This reduces
the amount of indentation required in get_frame_info, and more closely
matches the operation of is_ra_save_ins.

Suggested-by: Maciej W. Rozycki <macro@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16958/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:21:44 +02:00
Matt Redfearn 41885b0212 MIPS: Stacktrace: Fix microMIPS stack unwinding on big endian systems
The stack unwinding code uses the mips_instruction union to decode the
instructions it finds. That union uses the __BITFIELD_FIELD macro to
reorder depending on endianness. The stack unwinding code always places
16bit instructions in halfword 1 of the union. This makes the union
accesses correct for little endian systems. Similarly, 32bit
instructions are reordered such that they are correct for little endian
systems. This handling leaves unwinding the stack on big endian systems
broken, as the mips_instruction union will then look for the fields in
the wrong halfword.

To fix this, use a logical shift to place the 16bit instruction into the
correct position in the word field of the union. Use the same shifting
to order the 2 halfwords of 32bit instructions. Then replace accesses to
the halfword with accesses to the shifted word.
In the case of the ADDIUS5 instruction, switch to using the
mm16_r5_format union member to avoid the need for a 16bit shift.
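
A small userspace illustration of why the shift-based construction is
endianness-independent while halfword indexing is not (the union layout
here is simplified):

#include <stdint.h>
#include <stdio.h>

union insn {
	uint32_t word;
	uint16_t halfword[2];
};

int main(void)
{
	uint16_t first = 0xcbf1, second = 0x1234;
	union insn u;

	/* Halfword indexing depends on the host byte order... */
	u.halfword[0] = first;
	u.halfword[1] = second;
	printf("via halfwords: %#010x\n", u.word);

	/* ...whereas building the word with shifts yields the same value
	 * on little and big endian hosts alike. */
	u.word = ((uint32_t)first << 16) | second;
	printf("via shifts:    %#010x\n", u.word);
	return 0;
}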

Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16956/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:21:00 +02:00
Matt Redfearn cea8cd498f MIPS: microMIPS: Fix decoding of swsp16 instruction
When the immediate encoded in the instruction is accessed, it is sign
extended due to being a signed value being assigned to a signed integer.
The ISA specifies that this operation is an unsigned operation.
The sign extension leads us to incorrectly decode:

801e9c8e:       cbf1            sw      ra,68(sp)

as having an immediate of 1073741809.

Since the instruction format does not specify signed/unsigned, and this
is currently the only location to use this instruction format, change it
to an unsigned immediate.
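
For illustration only, a toy user-space decode of the example above;
the 5-bit immediate position is taken from the disassembly, not from
the kernel's instruction format definitions:

#include <stdio.h>

int main(void)
{
	unsigned int insn = 0xcbf1;		/* sw ra,68(sp) */
	unsigned int raw  = insn & 0x1f;	/* 5-bit immediate: 0x11 = 17 */

	/* signed interpretation sign-extends bit 4; unsigned keeps the raw value */
	int signed_imm = (raw & 0x10) ? (int)raw - 0x20 : (int)raw;
	unsigned int unsigned_imm = raw;

	printf("signed:   %d (offset %d)\n", signed_imm, signed_imm * 4);
	printf("unsigned: %u (offset %u)\n", unsigned_imm, unsigned_imm * 4);
	/* prints -15/-60 versus 17/68; only the unsigned decode matches */
	return 0;
}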

Fixes: bb9bc4689b ("MIPS: Calculate microMIPS ra properly when unwinding the stack")
Suggested-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Miodrag Dinic <miodrag.dinic@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16957/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:20:25 +02:00
Matt Redfearn a0ae2b0833 MIPS: microMIPS: Fix decoding of addiusp instruction
Commit 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
added handling of microMIPS instructions to manipulate the stack
pointer. Unfortunately the decoding of the addiusp instruction was
incorrect, and performed a left shift by 2 bits to the raw immediate,
rather than decoding the immediate and then performing the shift, as
documented in the ISA.

This led to incomplete stack traces, due to incorrect frame sizes being
calculated. For example the instruction:
801faee0 <do_sys_poll>:
801faee0:       4e25            addiu   sp,sp,-952

as decoded by objdump, would be interpreted by the existing code as
having manipulated the stack pointer by +1096.

Fix this by changing the order of decoding the immediate and applying
the left shift. Also change to accessing the instruction through the
union to avoid the endianness problem of accessing halfword[0], which
will fail on big endian systems.

Cope with the special behaviour of immediates 0x0, 0x1, 0x1fe and 0x1ff
by XORing with 0x100 again if mod(immediate) < 4. This logic was tested
with the following test code:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	unsigned int enc;
	int imm;

	for (enc = 0; enc < 512; ++enc) {
		unsigned short tmp = enc;

		tmp = (tmp ^ 0x100) - 0x100;
		if ((unsigned short)(tmp + 2) < 4)
			tmp ^= 0x100;
		imm = -(signed short)(tmp << 2);
		printf("%#x\t%d\t->\t(%#x\t%d)\t%#x\t%d\n",
		       enc, enc,
		       (short)tmp, (short)tmp,
		       imm, imm);
	}
	return EXIT_SUCCESS;
}

Which generates the table:

input encoding	->	tmp (matching manual)	frame size
-----------------------------------------------------------------------
0	0	->	(0x100		256)	0xfffffc00	-1024
0x1	1	->	(0x101		257)	0xfffffbfc	-1028
0x2	2	->	(0x2		2)	0xfffffff8	-8
0x3	3	->	(0x3		3)	0xfffffff4	-12
...
0xfe	254	->	(0xfe		254)	0xfffffc08	-1016
0xff	255	->	(0xff		255)	0xfffffc04	-1020
0x100	256	->	(0xffffff00	-256)	0x400		1024
0x101	257	->	(0xffffff01	-255)	0x3fc		1020
...
0x1fc	508	->	(0xfffffffc	-4)	0x10		16
0x1fd	509	->	(0xfffffffd	-3)	0xc		12
0x1fe	510	->	(0xfffffefe	-258)	0x408		1032
0x1ff	511	->	(0xfffffeff	-257)	0x404		1028

Thanks to James Hogan for the test code & verifying the logic.

Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Suggested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16955/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:19:49 +02:00
Matt Redfearn b332fec048 MIPS: microMIPS: Fix detection of addiusp instruction
The addiusp instruction uses the pool16d opcode, with bit 0 of the
immediate set. The test for the addiusp opcode erroneously did a logical
and of the immediate with mm_addiusp_func, which has value 1, so this
test always passes when the immediate is non-zero.

Fix the test by replacing the logical and with a bitwise and.
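
A toy illustration of the difference, taking mm_addiusp_func as the
value 1 given above:

#include <stdio.h>

#define mm_addiusp_func 1

int main(void)
{
	int imm;

	for (imm = 0; imm < 4; imm++)
		printf("imm=%d  logical &&: %d  bitwise &: %d\n",
		       imm,
		       imm && mm_addiusp_func,	/* true for any non-zero imm */
		       imm & mm_addiusp_func);	/* true only when bit 0 is set */
	return 0;
}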

Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16954/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:19:08 +02:00
Matt Redfearn 11887ed172 MIPS: Handle non word sized instructions when examining frame
Commit 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
added fairly broken support for handling 16bit microMIPS instructions in
get_frame_info(). It adjusts the instruction pointer by 16bits in the
case of a 16bit sp move instruction, but not any other 16bit
instruction.

Commit b6c7a324df ("MIPS: Fix get_frame_info() handling of microMIPS
function size") goes some way to fixing get_frame_info() to iterate over
microMIPS instructions, but the instruction pointer is still manipulated
using a postincrement, and is of union mips_instruction type. Since the
union is sized to the largest member (a word), but microMIPS
instructions are a mix of halfword and word sizes, the function does not
always iterate correctly, ending up misaligned with the instruction
stream and interpreting it incorrectly.

Since the instruction modifying the stack pointer is usually the first
in the function, that one is usually handled correctly. But the
instruction which saves the return address to the stack is some variable
number of instructions into the frame and is frequently missed due to
not being on a word boundary, leading to incomplete walking of the
stack.

Fix this by incrementing the instruction pointer based on the size of
the previously decoded instruction (& remove the hack introduced by
commit 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
which adjusts the instruction pointer in the case of a 16bit sp move
instruction, but not any other).
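
A sketch of the resulting loop shape (names such as is_mmips,
insn_is_16bit and last_insn_size are illustrative, not the exact
kernel identifiers):

	for (i = 0; i < max_insns; i++) {
		ip = (void *)ip + last_insn_size;	/* advance by what was just decoded */

		if (is_mmips && insn_is_16bit(ip->halfword[0]))
			last_insn_size = 2;	/* 16bit microMIPS encoding */
		else
			last_insn_size = 4;	/* 32bit encoding */

		/* ... build insn from *ip and decode it as before ... */
	}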

Fixes: 34c2f668d0 ("MIPS: microMIPS: Add unaligned access support.")
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16953/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 13:17:52 +02:00
Corey Minyard aee16625b1 MIPS: Fix issues in backtraces
I saw two problems when doing backtraces:

The compiler was putting a "fast return" at the top of some
functions, before it set up the frame.  The backtrace code
would stop when it saw a jump instruction, so it would never
get to the stack frame setup and would thus misinterpret it.
To fix this, don't look for jump instructions until the
frame setup has been seen.

The assembly code here is:

ffffffff80b885a0 <serial8250_handle_irq>:
ffffffff80b885a0:       c8a00003        bbit0   a1,0x0,ffffffff80b885b0 <serial8250_handle_irq+0x10>
ffffffff80b885a4:       0000102d        move    v0,zero
ffffffff80b885a8:       03e00008        jr      ra
ffffffff80b885ac:       00000000        nop
ffffffff80b885b0:       67bdffd0        daddiu  sp,sp,-48
ffffffff80b885b4:       ffb00008        sd      s0,8(sp)

The second problem was that the compiler was putting the last
instruction of the frame save in the delay slot of the jump
instruction.  If it saved the RA in there, the backtrace code would
miss it and misinterpret the frame.  To fix this, make sure to process
the instruction after the first jump seen (see the sketch following
the listing below).

The assembly code for this is:

ffffffff80806fd0 <plat_irq_dispatch>:
ffffffff80806fd0:       67bdffd0        daddiu  sp,sp,-48
ffffffff80806fd4:       ffb30020        sd      s3,32(sp)
ffffffff80806fd8:       24130018        li      s3,24
ffffffff80806fdc:       ffb20018        sd      s2,24(sp)
ffffffff80806fe0:       3c12811c        lui     s2,0x811c
ffffffff80806fe4:       ffb10010        sd      s1,16(sp)
ffffffff80806fe8:       3c11811c        lui     s1,0x811c
ffffffff80806fec:       ffb00008        sd      s0,8(sp)
ffffffff80806ff0:       3c10811c        lui     s0,0x811c
ffffffff80806ff4:       08201c03        j       ffffffff8080700c <plat_irq_dispatch+0x3c>
ffffffff80806ff8:       ffbf0028        sd      ra,40(sp)
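
A sketch of the adjusted scan (the helper names are illustrative,
modelled on the ones used in this file; sizing of microMIPS
instructions is ignored here):

	bool seen_frame_setup = false;

	for (i = 0; i < max_insns; i++, ip++) {
		if (is_sp_move_ins(ip, &frame_size))
			seen_frame_setup = true;

		if (is_ra_save_ins(ip, &ra_offset))
			break;

		/* Only let a jump end the scan once the frame setup has
		 * been seen, and still look at its delay slot before
		 * stopping, since the RA save may live there. */
		if (seen_frame_setup && is_jump_ins(ip)) {
			is_ra_save_ins(ip + 1, &ra_offset);
			break;
		}
	}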

Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16992/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-09-06 11:01:00 +02:00
Vegard Nossum b0f5a8f32e kthread: fix boot hang (regression) on MIPS/OpenRISC
This fixes a regression in commit 4d6501dce0 where I didn't notice
that MIPS and OpenRISC were reinitialising p->{set,clear}_child_tid to
NULL after our initialisation in copy_process().

We can simply get rid of the arch-specific initialisation here since it
is now always done in copy_process() before hitting copy_thread{,_tls}().

Review notes:

 - As far as I can tell, copy_process() is the only user of
   copy_thread_tls(), which is the only caller of copy_thread() for
   architectures that don't implement copy_thread_tls().

 - After this patch, there is no arch-specific code touching
   p->set_child_tid or p->clear_child_tid whatsoever.

 - It may look like MIPS/OpenRISC wanted to always have these fields be
   NULL, but that's not true, as copy_process() would unconditionally
   set them again _after_ calling copy_thread_tls() before commit
   4d6501dce0.

Fixes: 4d6501dce0 ("kthread: Fix use-after-free if kthread fork fails")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net> # MIPS only
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: openrisc@lists.librecores.org
Cc: Jamie Iles <jamie.iles@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-29 09:40:54 -07:00
James Cowgill f9c4e3a6da MIPS: Opt into HAVE_COPY_THREAD_TLS
This is the MIPS version of commit c1bd55f922 ("x86: opt into
HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit").

Simply use the tls system call argument instead of extracting the tls
argument by magic from the pt_regs structure.

See commit 3033f14ab7 ("clone: support passing tls argument via C
rather than pt_regs magic") for more background.
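
Schematically (prototypes only, simplified), the change is from
recovering tls out of the child's registers to receiving it as an
explicit argument:

	/* before: copy_thread() digs the tls value out of pt_regs */
	int copy_thread(unsigned long clone_flags, unsigned long usp,
			unsigned long kthread_arg, struct task_struct *p);

	/* after: copy_thread_tls() is handed the tls value directly */
	int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
			    unsigned long kthread_arg, struct task_struct *p,
			    unsigned long tls);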

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15855/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-04-12 13:52:21 +02:00
Matt Redfearn db8466c581 MIPS: IRQ Stack: Unwind IRQ stack onto task stack
When the separate IRQ stack was introduced, stack unwinding only
proceeded as far as the top of the IRQ stack, leading to kernel
backtraces being less useful, lacking the trace of what was interrupted.

Fix this by providing a means for the kernel to unwind the IRQ stack
onto the interrupted task stack. The processor state is saved to the
kernel task stack on interrupt. The IRQ_STACK_START macro reserves an
unsigned long at the top of the IRQ stack where the interrupted task
stack pointer can be saved. After the active stack is switched to the
IRQ stack, save the interrupted task's stack pointer to the reserved
location.

Fix the stack unwinding code to check whether the current frame is at
the top of the IRQ stack and, if so, to get the next frame from the
saved location. The existing test does not work with the separate
stack since ra no longer points at ret_from_{irq,exception}.

The test to stop unwinding the stack 32 bytes from the top of a stack
must be modified to allow unwinding to continue up to the location of
the saved task stack pointer when on the IRQ stack. The low / high marks
of the stack are set depending on whether the sp is on an irq stack or
not.
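
The hand-over can be sketched roughly as follows (irq_stack_base is an
illustrative name; IRQ_STACK_START is the reserved-slot offset
described above):

	unsigned long irq_stack_top = irq_stack_base + IRQ_STACK_START;

	if (sp == irq_stack_top) {
		/* Reached the reserved slot at the top of the IRQ stack:
		 * continue from the task stack pointer saved there on
		 * interrupt entry. */
		sp = *(unsigned long *)irq_stack_top;
	}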

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: Masanari Iida <standby24x7@gmail.com>
Cc: Chris Metcalf <cmetcalf@mellanox.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jason A. Donenfeld <jason@zx2c4.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15788/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-03-22 11:53:57 +01:00
Ingo Molnar 68db0cf106 sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task_stack.h>
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task_stack.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.
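
The placeholder amounts to little more than a redirect (guard name
assumed):

#ifndef _LINUX_SCHED_TASK_STACK_H
#define _LINUX_SCHED_TASK_STACK_H

#include <linux/sched.h>

#endif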

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:36 +01:00
Ingo Molnar 299300258d sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h>
We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:35 +01:00
Ingo Molnar b17b01533b sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h>
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/debug.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:34 +01:00
Marcin Nowakowski 08c941bf6e MIPS: Move register dump routines out of ptrace code
Current register dump methods for MIPS are implemented inside ptrace
methods, but there will be other uses in the kernel for them, so keep
them separate in process.c and use those definitions for ptrace
instead.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-01-03 16:34:44 +01:00