When memblock allocation APIs are called with align = 0, the alignment
is implicitly set to SMP_CACHE_BYTES.
Implicit alignment is done deep in the memblock allocator and it can
come as a surprise. Not that such an alignment would be wrong even
when used incorrectly but it is better to be explicit for the sake of
clarity and the principle of least surprise.
Replace all such uses of memblock APIs with the 'align' parameter
explicitly set to SMP_CACHE_BYTES and stop implicit alignment assignment
in the memblock internal allocation functions.
For the cases when memblock APIs are used via helper functions, e.g.
iommu_arena_new_node() in Alpha, the helper functions were detected with
Coccinelle's help and then manually examined and updated where
appropriate.
The direct memblock API users were updated using the semantic patch below:
@@
expression size, min_addr, max_addr, nid;
@@
(
|
- memblock_alloc_try_nid_raw(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_raw(size, SMP_CACHE_BYTES, min_addr, max_addr,
nid)
|
- memblock_alloc_try_nid_nopanic(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES, min_addr, max_addr,
nid)
|
- memblock_alloc_try_nid(size, 0, min_addr, max_addr, nid)
+ memblock_alloc_try_nid(size, SMP_CACHE_BYTES, min_addr, max_addr, nid)
|
- memblock_alloc(size, 0)
+ memblock_alloc(size, SMP_CACHE_BYTES)
|
- memblock_alloc_raw(size, 0)
+ memblock_alloc_raw(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from(size, 0, min_addr)
+ memblock_alloc_from(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_nopanic(size, 0)
+ memblock_alloc_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low(size, 0)
+ memblock_alloc_low(size, SMP_CACHE_BYTES)
|
- memblock_alloc_low_nopanic(size, 0)
+ memblock_alloc_low_nopanic(size, SMP_CACHE_BYTES)
|
- memblock_alloc_from_nopanic(size, 0, min_addr)
+ memblock_alloc_from_nopanic(size, SMP_CACHE_BYTES, min_addr)
|
- memblock_alloc_node(size, 0, nid)
+ memblock_alloc_node(size, SMP_CACHE_BYTES, nid)
)
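As an illustration (hypothetical call site, not taken from the patch),
a conversion produced by the rules above looks like this:

    /* before: alignment left implicit */
    ptr = memblock_alloc(sizeof(*ptr) * nr_entries, 0);

    /* after: SMP_CACHE_BYTES spelled out explicitly */
    ptr = memblock_alloc(sizeof(*ptr) * nr_entries, SMP_CACHE_BYTES);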
[mhocko@suse.com: changelog update]
[akpm@linux-foundation.org: coding-style fixes]
[rppt@linux.ibm.com: fix missed uses of implicit alignment]
Link: http://lkml.kernel.org/r/20181016133656.GA10925@rapoport-lnx
Link: http://lkml.kernel.org/r/1538687224-17535-1-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Paul Burton <paul.burton@mips.com> [MIPS]
Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc]
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Move remaining definitions and declarations from include/linux/bootmem.h
into include/linux/memblock.h and remove the redundant header.
The includes were replaced with the semantic patch below and then
semi-automated removal of duplicated '#include <linux/memblock.h>' lines:
@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>
[sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
[sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
[sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
alloc_bootmem(size) is a shortcut for allocating SMP_CACHE_BYTES-aligned
memory. When the align parameter of memblock_alloc() is 0, the
alignment is implicitly set to SMP_CACHE_BYTES, and thus alloc_bootmem(size)
and memblock_alloc(size, 0) are equivalent.
The conversion is done using the following semantic patch:
@@
expression size;
@@
- alloc_bootmem(size)
+ memblock_alloc(size, 0)
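In other words (illustrative, not part of the patch), the following calls
all request the same alignment; the explicit form is what the alignment
cleanup described earlier in this log converges on:

    p = alloc_bootmem(size);                   /* old bootmem shortcut */
    p = memblock_alloc(size, 0);               /* implicit SMP_CACHE_BYTES */
    p = memblock_alloc(size, SMP_CACHE_BYTES); /* explicit alignment */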
Link: http://lkml.kernel.org/r/1536927045-23536-22-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The functions are equivalent; the latter just does not require the
nobootmem translation layer.
The conversion is done using the following semantic patch:
@@
expression size, align, goal;
@@
- __alloc_bootmem(size, align, goal)
+ memblock_alloc_from(size, align, goal)
Link: http://lkml.kernel.org/r/1536927045-23536-21-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'mips_4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS updates from Paul Burton:
- kexec support for the generic MIPS platform when running on a CPU
including the MIPS Coherence Manager & related hardware.
- Improvements to the definition of memory barriers used around MMIO
accesses, and fixes in their use.
- Switch to CONFIG_NO_BOOTMEM from Mike Rapoport, finally dropping
reliance on the old bootmem code.
- A number of fixes & improvements for Loongson 3 systems.
- DT & config updates for the Microsemi Ocelot platform.
- Workaround to enable USB power on the Netgear WNDR3400v3.
- Various cleanups & fixes.
* tag 'mips_4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (51 commits)
MIPS: Cleanup DSP ASE detection
MIPS: dts: Change upper case to lower case
MIPS: generic: Add Network, SPI and I2C to ocelot_defconfig
MIPS: Loongson-3: Fix BRIDGE irq delivery problem
MIPS: Loongson-3: Fix CPU UART irq delivery problem
MIPS: Remove unused PREF, PREFE & PREFX macros
MIPS: lib: Use kernel_pref & user_pref in memcpy()
MIPS: Remove unused CAT macro
MIPS: Add kernel_pref & user_pref helpers
MIPS: Remove unused TTABLE macro
MIPS: Remove unused PIC macros
MIPS: Remove unused MOVN & MOVZ macros
MIPS: Provide actually relaxed MMIO accessors
MIPS: Enforce strong ordering for MMIO accessors
MIPS: Correct `mmiowb' barrier for `wbflush' platforms
MIPS: Define MMIO ordering barriers
MIPS: mscc: add PCB120 to the ocelot fitImage
MIPS: mscc: add DT for Ocelot PCB120
MIPS: memset: Limit excessive `noreorder' assembly mode use
MIPS: memset: Fix CPU_DADDI_WORKAROUNDS `small_fixup' regression
...
Pull timekeeping updates from Thomas Gleixner:
"The timers and timekeeping departement provides:
- Another large y2038 update with further preparations for providing
the y2038 safe timespecs closer to the syscalls.
- An overhaul of the SHCMT clocksource driver
- SPDX license identifier updates
- Small cleanups and fixes all over the place"
* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
tick/sched : Remove redundant cpu_online() check
clocksource/drivers/dw_apb: Add reset control
clocksource: Remove obsolete CLOCKSOURCE_OF_DECLARE
clocksource/drivers: Unify the names to timer-* format
clocksource/drivers/sh_cmt: Add R-Car gen3 support
dt-bindings: timer: renesas: cmt: document R-Car gen3 support
clocksource/drivers/sh_cmt: Properly line-wrap sh_cmt_of_table[] initializer
clocksource/drivers/sh_cmt: Fix clocksource width for 32-bit machines
clocksource/drivers/sh_cmt: Fixup for 64-bit machines
clocksource/drivers/sh_tmu: Convert to SPDX identifiers
clocksource/drivers/sh_mtu2: Convert to SPDX identifiers
clocksource/drivers/sh_cmt: Convert to SPDX identifiers
clocksource/drivers/renesas-ostm: Convert to SPDX identifiers
clocksource: Convert to using %pOFn instead of device_node.name
tick/broadcast: Remove redundant check
RISC-V: Request newstat syscalls
y2038: signal: Change rt_sigtimedwait to use __kernel_timespec
y2038: socket: Change recvmmsg to use __kernel_timespec
y2038: sched: Change sched_rr_get_interval to use __kernel_timespec
y2038: utimes: Rework #ifdef guards for compat syscalls
...
Merge tag 'dma-mapping-4.20' of git://git.infradead.org/users/hch/dma-mapping
Pull dma mapping updates from Christoph Hellwig:
"First batch of dma-mapping changes for 4.20.
There will be a second PR as some big changes were only applied just
before the end of the merge window, and I want to give them a few more
days in linux-next.
Summary:
- mostly more consolidation of the direct mapping code, including
converting over hexagon, and merging the coherent and non-coherent
code into a single dma_map_ops instance (me)
- cleanups for the dma_configure/dma_unconfigure callchains (me)
- better handling of dma_masks in odd setups (me, Alexander Duyck)
- better debugging of passing vmalloc address to the DMA API (Stephen
Boyd)
- CMA command line parsing fix (He Zhe)"
* tag 'dma-mapping-4.20' of git://git.infradead.org/users/hch/dma-mapping: (27 commits)
dma-direct: respect DMA_ATTR_NO_WARN
dma-mapping: translate __GFP_NOFAIL to DMA_ATTR_NO_WARN
dma-direct: document the zone selection logic
dma-debug: Check for drivers mapping invalid addresses in dma_map_single()
dma-direct: fix return value of dma_direct_supported
dma-mapping: move dma_default_get_required_mask under ifdef
dma-direct: always allow dma mask <= physiscal memory size
dma-direct: implement complete bus_dma_mask handling
dma-direct: refine dma_direct_alloc zone selection
dma-direct: add an explicit dma_direct_get_required_mask
dma-mapping: make the get_required_mask method available unconditionally
unicore32: remove swiotlb support
Revert "dma-mapping: clear dev->dma_ops in arch_teardown_dma_ops"
dma-mapping: support non-coherent devices in dma_common_get_sgtable
dma-mapping: consolidate the dma mmap implementations
dma-mapping: merge direct and noncoherent ops
dma-mapping: move the dma_coherent flag to struct device
MIPS: don't select DMA_MAYBE_COHERENT from DMA_PERDEV_COHERENT
dma-mapping: add the missing ARCH_HAS_SYNC_DMA_FOR_CPU_ALL declaration
dma-mapping: fix panic caused by passing empty cma command line argument
...
Currently we hardcode a list of files for which we specify that the
toolchain has DSP ASE support when building for MIPSr2 only. This has a
number of problems:
1) It doesn't actually ensure that the toolchain supports the DSP ASE
at all.
2) It's fragile if we try to use DSP ASE macros in other files.
3) It makes no provision for MIPSr6 & later systems which also support
the DSP ASE & end up using the .word directive implementation of
the DSP macros.
Fix this by detecting assembler support for the DSP ASE globally, not
just for a small set of files, and not just for MIPSr2. This now exposes
use of toolchain DSP support to kernel builds targeting MIPSr1 and
older, so we add .set MIPS_ISA_LEVEL directives prior to all .set dsp
directives in order to prevent the assembler from complaining that the
DSP ASE is only supported with MIPSr2 & higher.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20901/
Cc: linux-mips@linux-mips.org
Commit 8ce355cf2e ("MIPS: Setup boot_command_line before
plat_mem_setup") fixed a problem for systems which have
CONFIG_CMDLINE_BOOL=y & use a DT with a chosen node that has either no
bootargs property or an empty one. In this configuration
early_init_dt_scan_chosen() copies CONFIG_CMDLINE into
boot_command_line, but the MIPS code doesn't know this so it appends
CONFIG_CMDLINE (via builtin_cmdline) to boot_command_line again. The
result is that boot_command_line contains the arguments from
CONFIG_CMDLINE twice.
That commit took the approach of simply setting up boot_command_line
from the MIPS code before early_init_dt_scan_chosen() runs, causing it
not to copy CONFIG_CMDLINE to boot_command_line if a chosen node with no
bootargs property is found.
Unfortunately this is problematic for systems which do have a non-empty
bootargs property & CONFIG_CMDLINE_BOOL=y. There,
early_init_dt_scan_chosen() will overwrite boot_command_line with the
arguments from DT, which means we lose those from CONFIG_CMDLINE
entirely. This breaks CONFIG_MIPS_CMDLINE_DTB_EXTEND. If we have
CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER or
CONFIG_MIPS_CMDLINE_BUILTIN_EXTEND selected and the DT has a bootargs
property which we should ignore, it will instead be honoured, breaking
those configurations too.
Fix this by reverting commit 8ce355cf2e ("MIPS: Setup
boot_command_line before plat_mem_setup") to restore the former
behaviour, and fixing the CONFIG_CMDLINE duplication issue by
initializing boot_command_line to a non-empty string that
early_init_dt_scan_chosen() will not overwrite with CONFIG_CMDLINE.
This is a little ugly, but cleanup in this area is on its way. In the
meantime this is at least easy to backport & contains the ugliness
within arch/mips/.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 8ce355cf2e ("MIPS: Setup boot_command_line before plat_mem_setup")
References: https://patchwork.linux-mips.org/patch/18804/
Patchwork: https://patchwork.linux-mips.org/patch/20813/
Cc: Frank Rowand <frowand.list@gmail.com>
Cc: Jaedon Shin <jaedon.shin@gmail.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.16+
When using the legacy mmap layout, for example triggered using ulimit -s
unlimited, get_unmapped_area() fills memory from bottom to top starting
from a fairly low address near TASK_UNMAPPED_BASE.
This placement is suboptimal if the user application wishes to allocate
large amounts of heap memory using the brk syscall. With the VDSO being
located low in the user's virtual address space, the amount of space
available for access using brk is limited much more than it was prior to
the introduction of the VDSO.
For example:
# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00cc3000-00ce4000 rwxp 00000000 00:00 0 [heap]
2ab96000-2ab98000 r--p 00000000 00:00 0 [vvar]
2ab98000-2ab99000 r-xp 00000000 00:00 0 [vdso]
2ab99000-2ab9d000 rwxp 00000000 00:00 0
...
Resolve this by adjusting STACK_TOP to reserve space for the VDSO &
providing an address hint to get_unmapped_area() causing it to use this
space even when using the legacy mmap layout.
We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64
bit systems respectively within which we randomize the VDSO base
address. Previously this randomization was taken care of by the mmap
base address randomization performed by arch_mmap_rnd(). The 1MB & 256MB
sizes are somewhat arbitrary but chosen such that we have some
randomization without taking up too much of the user's virtual address
space, which is often in short supply for 32 bit systems.
With this the VDSO is always mapped at a high address, leaving lots of
space for statically linked programs to make use of brk:
# ulimit -s unlimited; cat /proc/self/maps
00400000-004ec000 r-xp 00000000 08:00 71436 /usr/bin/coreutils
004fc000-004fd000 rwxp 000ec000 08:00 71436 /usr/bin/coreutils
004fd000-0050f000 rwxp 00000000 00:00 0
00c28000-00c49000 rwxp 00000000 00:00 0 [heap]
...
7f67c000-7f69d000 rwxp 00000000 00:00 0 [stack]
7f7fc000-7f7fd000 rwxp 00000000 00:00 0
7fcf1000-7fcf3000 r--p 00000000 00:00 0 [vvar]
7fcf3000-7fcf4000 r-xp 00000000 00:00 0 [vdso]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Huacai Chen <chenhc@lemote.com>
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.4+
The crash utility initializes cpu state by reading the system kernel
memory, which is copied into vmcore.
It is also natural to preserve the online state for CPUs at crash.
Failing to do so could make the analysis tool present info for only 1 CPU
by default, and leave it unable to find the panic task.
Signed-off-by: Dengcheng Zhu <dzhu@wavecomp.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20809/
Cc: Paul Burton <pburton@wavecomp.com>
Cc: "ralf@linux-mips.org" <ralf@linux-mips.org>
Cc: "linux-mips@linux-mips.org" <linux-mips@linux-mips.org>
Cc: "rachel.mozes@intel.com" <rachel.mozes@intel.com>
In much the same vein as commit ac41f9c462 ("MIPS: Remove a temporary
hack for debugging cache flushes in SMTC configuration") and commit
eb75ecb113 ("MIPS: MT: Remove unused MT single-threaded cache flush
code"), remove the long obsolete ndflush & niflush command line
arguments which provided a hack that should not be useful outside of
debug sessions performed long ago.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Commit ac41f9c462 ("MIPS: Remove a temporary hack for debugging cache
flushes in SMTC configuration") removed an ugly hack that allowed cache
flushing to be performed single-threaded, something which should not be
necessary outside of debug sessions performed long ago.
Whilst the hack was removed from the cache flush code itself, the
mt_protdflush & mt_protiflush variables were left behind along with code
providing the protdflush & protiflush command line arguments. The
mt_cflush_lockdown() & mt_cflush_release() functions were also left
behind but are now entirely unused.
Remove all the unused code to complete the removal of the MT ASE
single-threaded cache flush hack.
Signed-off-by: Paul Burton <paul.burton@mips.com>
MIPSR6 CPUs do not support unaligned load/store instructions
(LWL, LWR, SWL, SWR and LDL, LDR, SDL, SDR for 64bit).
Currently the MIPS tree has some special cases to avoid these
instructions, and the code is testing for !CONFIG_CPU_MIPSR6.
This patch declares a new Kconfig variable:
CONFIG_CPU_HAS_LOAD_STORE_LR.
This variable indicates that the CPU supports these instructions.
Then, the patch does the following:
- Carefully selects this option on all CPUs except MIPSR6.
- Switches all the special cases to test for the new variable,
and inverts the logic:
'#ifndef CONFIG_CPU_MIPSR6' turns into
'#ifdef CONFIG_CPU_HAS_LOAD_STORE_LR'
and vice-versa.
Also, when this variable is NOT selected (e.g. MIPSR6),
CONFIG_GENERIC_CSUM will default to 'y', to compile generic
C checksum code (instead of special assembly code that uses the
unsupported instructions).
This commit should not affect any existing CPU, and is required
for future Lexra CPU support, which lacks these instructions too.
Signed-off-by: Yasha Cherikovsky <yasha.che3@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20808/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Since commit 15f37e1588 ("MIPS: store the appended
dtb address in a variable"),
in kernels with MIPS_RAW_APPENDED_DTB=y, the early boot code detects
the dtb and stores it in the 'fw_passed_dtb' variable.
However, the dtb is not stored in 'fw_passed_dtb' in kernels with
MIPS_ELF_APPENDED_DTB=y.
Under MIPS_ELF_APPENDED_DTB=y, the dtb is also located in the
__appended_dtb section, so we just need to update the #ifdef.
This will allow accessing the dtb in a more uniform way.
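The rough shape of the change (a sketch, not the literal diff):

-#ifdef CONFIG_MIPS_RAW_APPENDED_DTB
+#if defined(CONFIG_MIPS_RAW_APPENDED_DTB) || \
+    defined(CONFIG_MIPS_ELF_APPENDED_DTB)
 	/* detect a dtb in __appended_dtb and record it in fw_passed_dtb */
 #endif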
Fixes: 15f37e1588 ("MIPS: store the appended dtb address in a variable")
Signed-off-by: Yasha Cherikovsky <yasha.che3@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20803/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
It makes the code more readable, especially in the nested ifdefs.
Signed-off-by: Yasha Cherikovsky <yasha.che3@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20802/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Out-of-tree platforms may not be based on Generic, as customer
communication has shown. Share the prepare method with all platforms using
the UHI boot protocol, and put it into machine_kexec.c.
The benefit is that, when having kexec_args related problems, developers
will naturally look into machine_kexec.c, where "CONFIG_UHI_BOOT" will be
found, prompting them to add "select UHI_BOOT" to the platform Kconfig. It
would otherwise require a lot of debugging or online searching to be aware
that the solution is in Generic code.
Tested-by: Rachel Mozes <rachel.mozes@intel.com>
Reported-by: Rachel Mozes <rachel.mozes@intel.com>
Signed-off-by: Dengcheng Zhu <dzhu@wavecomp.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20569/
Cc: pburton@wavecomp.com
Cc: ralf@linux-mips.org
Cc: linux-mips@linux-mips.org
The existing implementation lets the CPU running machine_kexec() jump to
the reboot code buffer, while the other CPUs go to relocated_kexec_smp_wait.
The natural way to bring up an SMP new kernel would be to let CPU0 do it
while the others are halted. For platforms failing to do so, fall back to
the jumping method.
Signed-off-by: Dengcheng Zhu <dzhu@wavecomp.com>
[paul.burton@mips.com: Guard kexec_nonboot_cpu_jump with CONFIG_SMP]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20570/
Cc: pburton@wavecomp.com
Cc: ralf@linux-mips.org
Cc: linux-mips@linux-mips.org
Cc: rachel.mozes@intel.com
While both options select a form of conditional dma coherence, they don't
actually share any code in the implementation, so untangle them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com>
MIPS already has memblock support and all the memory is already registered
with it.
This patch replaces bootmem memory reservations with memblock ones and
removes the bootmem initialization.
Since memblock allocates memory in top-down mode, we ensure that the
memblock limit is max_low_pfn to prevent allocations from high memory.
To keep the exceptions base in the lower 512M of physical memory, its
allocation in arch/mips/kernel/traps.c::traps_init() uses bottom-up mode.
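A sketch of the traps_init() part described above (illustrative only;
'size' and 'align' stand for the values already used there):

    /* temporarily allocate bottom-up so the exception base lands low */
    memblock_set_bottom_up(true);
    ebase = (unsigned long)memblock_alloc_from(size, align, 0);
    memblock_set_bottom_up(false);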
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20560/
Cc: Serge Semin <fancer.lancer@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
The comment describing arch_mem_init() was separated from the definition
of arch_mem_init() by commit a09fc446fb ("[MIPS] setup.c: use
early_param() for early command line parsing"). Move the comment such
that it's next to the definition again for ease of reading.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Clean up instances of casts to the type that a value already has, since
they are effectively no-ops and only serve to complicate the code.
This is the result of the following semantic patch:
@identitycast@
type T;
T *A;
@@
- (T *)(A)
+ A
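A hypothetical example of the kind of change this produces, where 'buf'
already has type 'char *':

-	memcpy(dst, (char *)(buf), len);
+	memcpy(dst, buf, len);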
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19599/
When a system suffers from dcache aliasing a user program may observe
stale VDSO data from an aliased cache line. Notably this can break the
expectation that clock_gettime(CLOCK_MONOTONIC, ...) is, as its name
suggests, monotonic.
In order to ensure that users observe updates to the VDSO data page as
intended, align the user mappings of the VDSO data page such that their
cache colouring matches that of the virtual address range which the
kernel will use to update the data page - typically its unmapped address
within kseg0.
This ensures that we don't introduce aliasing cache lines for the VDSO
data page, and therefore that userland will observe updates without
requiring cache invalidation.
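A sketch of the colouring constraint (illustrative only, not the in-tree
code; 'hint' is an assumed starting address): pick a user base whose bits
under shm_align_mask equal those of the kernel address of the data page,
so both map to the same cache colour:

    unsigned long kaddr = (unsigned long)vdso_data;	/* kernel view */
    unsigned long base;

    base = round_up(hint, shm_align_mask + 1) + (kaddr & shm_align_mask);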
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Hauke Mehrtens <hauke@hauke-m.de>
Reported-by: Rene Nielsen <rene.nielsen@microsemi.com>
Reported-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Patchwork: https://patchwork.linux-mips.org/patch/20344/
Tested-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Tested-by: Hauke Mehrtens <hauke@hauke-m.de>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.4+
Christoph Hellwig suggested a slightly different path for handling
backwards compatibility with the 32-bit time_t based system calls:
Rather than simply reusing the compat_sys_* entry points on 32-bit
architectures unchanged, we get rid of those entry points and the
compat_time types by renaming them to something that makes more sense
on 32-bit architectures (which don't have a compat mode otherwise),
and then share the entry points under the new name with the 64-bit
architectures that use them for implementing the compatibility.
The following types and interfaces are renamed here, and moved
from linux/compat_time.h to linux/time32.h:
old                              new
---                              ---
compat_time_t                    old_time32_t
struct compat_timeval            struct old_timeval32
struct compat_timespec           struct old_timespec32
struct compat_itimerspec         struct old_itimerspec32
ns_to_compat_timeval()           ns_to_old_timeval32()
get_compat_itimerspec64()        get_old_itimerspec32()
put_compat_itimerspec64()        put_old_itimerspec32()
compat_get_timespec64()          get_old_timespec32()
compat_put_timespec64()          put_old_timespec32()
As we already have aliases in place, this patch addresses only the
instances that are relevant to the system call interface in particular,
not those that occur in device drivers and other modules. Those
will get handled separately, while providing the 64-bit version
of the respective interfaces.
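For example, a caller that previously used compat_get_timespec64() now
reads (illustrative sketch):

    struct timespec64 ts;

    if (get_old_timespec32(&ts, uts))	/* was: compat_get_timespec64() */
        return -EFAULT;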
I'm not renaming the timex, rusage and itimerval structures, as we are
still debating what the new interface will look like, and whether we
will need a replacement at all.
This also doesn't change the names of the syscall entry points, which can
be done more easily when we actually switch over the 32-bit architectures
to use them; at that point we need to change COMPAT_SYSCALL_DEFINEx to
SYSCALL_DEFINEx with a new name, e.g. with a _time32 suffix.
Suggested-by: Christoph Hellwig <hch@infradead.org>
Link: https://lore.kernel.org/lkml/20180705222110.GA5698@infradead.org/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Merge tag 'mips_4.19_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS fixes from Paul Burton:
- Fix microMIPS build failures by adding a .insn directive to the
barrier_before_unreachable() asm statement in order to convince the
toolchain that the asm statement is a valid branch target rather
than a bogus attempt to switch ISA.
- Clean up our declarations of TLB functions that we overwrite with
generated code in order to prevent the compiler making assumptions
about alignment that cause microMIPS kernels built with GCC 7 &
above to die early during boot.
- Fix up a regression for MIPS32 kernels which slipped into the main
MIPS pull for 4.19, causing CONFIG_32BIT=y kernels to contain
inappropriate MIPS64 instructions.
- Extend our existing workaround for MIPSr6 builds that end up using
the __multi3 intrinsic to GCC 7 & below, rather than just GCC 7.
* tag 'mips_4.19_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
MIPS: lib: Provide MIPS64r6 __multi3() for GCC < 7
MIPS: Workaround GCC __builtin_unreachable reordering bug
compiler.h: Allow arch-specific asm/compiler.h
MIPS: Avoid move psuedo-instruction whilst using MIPS_ISA_LEVEL
MIPS: Consistently declare TLB functions
MIPS: Export tlbmiss_handler_setup_pgd near its definition
Merge tag 'trace-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing updates from Steven Rostedt:
- Restructure of lockdep and latency tracers
This is the biggest change. Joel Fernandes restructured the hooks
from irqs and preemption disabling and enabling. He got rid of a lot
of the preprocessor #ifdef mess that they caused.
He turned both lockdep and the latency tracers to use trace events
inserted in the preempt/irqs disabling paths. But unfortunately,
these started to cause issues in corner cases. Thus, parts of the
code were reverted back to where lockdep and the latency tracers just
get called directly (without using the trace events). But because the
original change cleaned up the code very nicely we kept that, as well
as the trace events for preempt and irqs disabling, but they are
limited to not being called in NMIs.
- Have trace events use SRCU for "rcu idle" calls. This was required
for the preempt/irqs off trace events. But it also had to not allow
them to be called in NMI context. Waiting till Paul makes an NMI safe
SRCU API.
- New notrace SRCU API to allow trace events to use SRCU.
- Addition of mcount-nop option support
- SPDX headers replacing GPL templates.
- Various other fixes and clean ups.
- Some fixes are marked for stable, but were not fully tested before
the merge window opened.
* tag 'trace-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (44 commits)
tracing: Fix SPDX format headers to use C++ style comments
tracing: Add SPDX License format tags to tracing files
tracing: Add SPDX License format to bpf_trace.c
blktrace: Add SPDX License format header
s390/ftrace: Add -mfentry and -mnop-mcount support
tracing: Add -mcount-nop option support
tracing: Avoid calling cc-option -mrecord-mcount for every Makefile
tracing: Handle CC_FLAGS_FTRACE more accurately
Uprobe: Additional argument arch_uprobe to uprobe_write_opcode()
Uprobes: Simplify uprobe_register() body
tracepoints: Free early tracepoints after RCU is initialized
uprobes: Use synchronize_rcu() not synchronize_sched()
tracing: Fix synchronizing to event changes with tracepoint_synchronize_unregister()
ftrace: Remove unused pointer ftrace_swapper_pid
tracing: More reverting of "tracing: Centralize preemptirq tracepoints and unify their usage"
tracing/irqsoff: Handle preempt_count for different configs
tracing: Partial revert of "tracing: Centralize preemptirq tracepoints and unify their usage"
tracing: irqsoff: Account for additional preempt_disable
trace: Use rcu_dereference_raw for hooks from trace-event subsystem
tracing/kprobes: Fix within_notrace_func() to check only notrace functions
...
Merge tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS updates from Paul Burton:
"Here are the main MIPS changes for 4.19.
An overview of the general architecture changes:
- Massive DMA ops refactoring from Christoph Hellwig (huzzah for
deleting crufty code!).
- We introduce NT_MIPS_DSP & NT_MIPS_FP_MODE ELF notes &
corresponding regsets to expose DSP ASE & floating point mode state
respectively, both for live debugging & core dumps.
- We better optimize our code by hard-coding cpu_has_* macros at
compile time where their values are known due to the ISA revision
that the kernel build is targeting.
- The EJTAG exception handler now better handles SMP systems, where
it was previously possible for CPUs to clobber a register value
saved by another CPU.
- Our implementation of memset() gained a couple of fixes for MIPSr6
systems to return correct values in some cases where stores fault.
- We now implement ioremap_wc() using the uncached-accelerated cache
coherency attribute where supported, which is detected during boot,
and fall back to plain uncached access where necessary. The
MIPS-specific (and unused in tree) ioremap_uncached_accelerated() &
ioremap_cacheable_cow() are removed.
- The prctl(PR_SET_FP_MODE, ...) syscall is better supported for SMP
systems by reworking the way we ensure remote CPUs that may be
running threads within the affected process switch mode.
- Systems using the MIPS Coherence Manager will now set the
MIPS_IC_SNOOPS_REMOTE flag to avoid some unnecessary cache
maintenance overhead when flushing the icache.
- A few fixes were made for building with clang/LLVM, which now
successfully builds kernels for many of our platforms.
- Miscellaneous cleanups all over.
And some platform-specific changes:
- ar7 gained stubs for a few clock API functions to fix build
failures for some drivers.
- ath79 gained support for a few new SoCs, a few fixes & better
gpio-keys support.
- Ci20 now exposes its SPI bus using the spi-gpio driver.
- The generic platform can now auto-detect a suitable value for
PHYS_OFFSET based upon the memory map described by the device tree,
allowing us to avoid wasting memory on page book-keeping for
systems where RAM starts at a non-zero physical address.
- Ingenic systems using the jz4740 platform code now link their
vmlinuz higher to allow for kernels of a realistic size.
- Loongson32 now builds the kernel targeting MIPSr1 rather than
MIPSr2 to avoid CPU errata.
- Loongson64 gains a couple of fixes, a workaround for a write
buffering issue & support for the Loongson 3A R3.1 CPU.
- Malta now uses the piix4-poweroff driver to handle powering down.
- Microsemi Ocelot gained support for its SPI bus & NOR flash, its
second MDIO bus and can now be supported by a FIT/.itb image.
- Octeon saw a bunch of header cleanups which remove a lot of
duplicate or unused code"
* tag 'mips_4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: (123 commits)
MIPS: Remove remnants of UASM_ISA
MIPS: netlogic: xlr: Remove erroneous check in nlm_fmn_send()
MIPS: VDSO: Force link endianness
MIPS: Always specify -EB or -EL when using clang
MIPS: Use dins to simplify __write_64bit_c0_split()
MIPS: Use read-write output operand in __write_64bit_c0_split()
MIPS: Avoid using array as parameter to write_c0_kpgd()
MIPS: vdso: Allow clang's --target flag in VDSO cflags
MIPS: genvdso: Remove GOT checks
MIPS: Remove obsolete MIPS checks for DT node "chosen@0"
MIPS: generic: Remove input symbols from defconfig
MIPS: Delete unused code in linux32.c
MIPS: Remove unused sys_32_mmap2
MIPS: Remove nabi_no_regargs
mips: dts: mscc: enable spi and NOR flash support on ocelot PCB123
mips: dts: mscc: Add spi on Ocelot
MIPS: Loongson: Merge load addresses
MIPS: Loongson: Set Loongson32 to MIPS32R1
MIPS: mscc: ocelot: add interrupt controller properties to GPIO controller
MIPS: generic: Select MIPS_AUTO_PFN_OFFSET
...
Add an additional argument, 'arch_uprobe', to uprobe_write_opcode().
We need this in a later set of patches.
Link: http://lkml.kernel.org/r/20180809041856.1547-3-ravi.bangoria@linux.ibm.com
Reviewed-by: Song Liu <songliubraving@fb.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Since at least the beginning of the git era we've declared our TLB
exception handling functions inconsistently. They're actually functions,
but we declare them as arrays of u32 where each u32 is an encoded
instruction. This has always been the case for arch/mips/mm/tlbex.c, and
has also been true for arch/mips/kernel/traps.c since commit
86a1708a9d ("MIPS: Make tlb exception handler definitions and
declarations match.") which aimed for consistency but did so by
consistently making our C code inconsistent with our assembly.
This is all usually harmless, but when using GCC 7 or newer to build a
kernel targeting microMIPS (ie. CONFIG_CPU_MICROMIPS=y) it becomes
problematic. With microMIPS bit 0 of the program counter indicates the
ISA mode. When bit 0 is zero instructions are decoded using the standard
MIPS32 or MIPS64 ISA. When bit 0 is one instructions are decoded using
microMIPS. This means that function pointers become odd - their least
significant bit is one for microMIPS code. We work around this in cases
where we need to access code using loads & stores with our
msk_isa16_mode() macro which simply clears bit 0 of the value it is
given:
#define msk_isa16_mode(x) ((x) & ~0x1)
For example we do this for our TLB load handler in
build_r4000_tlb_load_handler():
u32 *p = (u32 *)msk_isa16_mode((ulong)handle_tlbl);
We then write code to p, expecting it to be suitably aligned (our LEAF
macro aligns functions on 4 byte boundaries, so (ulong)handle_tlbl will
give a value one greater than a multiple of 4 - ie. the start of a
function on a 4 byte boundary, with the ISA mode bit 0 set).
This worked fine up to GCC 6, but GCC 7 & onwards is smart enough to
presume that handle_tlbl which we declared as an array of u32s must be
aligned sufficiently that bit 0 of its address will never be set, and as
a result optimize out msk_isa16_mode(). This leads to p having an
address with bit 0 set, and when we go on to attempt to store code at
that address we take an address error exception due to the unaligned
memory access.
This leads to an exception prior to the kernel having configured its own
exception handlers, so we jump to whatever handlers the bootloader
configured. In the case of QEMU this results in a silent hang, since it
has no useful general exception vector.
Fix this by consistently declaring our TLB-related functions as
functions. For handle_tlbl(), handle_tlbs() & handle_tlbm() we do this
in asm/tlbex.h & we make use of the existing declaration of
tlbmiss_handler_setup_pgd() in asm/mmu_context.h. Our TLB handler
generation code in arch/mips/mm/tlbex.c is adjusted to deal with these
definitions, in most cases simply by casting the function pointers to
u32 pointers.
This allows us to include asm/mmu_context.h in arch/mips/mm/tlbex.c to
get the definitions of tlbmiss_handler_setup_pgd & pgd_current, removing
some needless duplication. Consistently using msk_isa16_mode() on
function pointers means we no longer need the
tlbmiss_handler_setup_pgd_start symbol so that is removed entirely.
Now that we're declaring our functions as functions GCC stops optimizing
out msk_isa16_mode() & a microMIPS kernel built with either GCC 7.3.0 or
8.1.0 boots successfully.
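The resulting declarations are plain function prototypes along the lines
of the following sketch (exact header contents may differ):

    extern void handle_tlbl(void);
    extern void handle_tlbs(void);
    extern void handle_tlbm(void);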
Signed-off-by: Paul Burton <paul.burton@mips.com>
The A() & AA() macros have been unused since commit 05e4396651
("[MIPS] Use SYSVIPC_COMPAT to fix various problems on N32"), which
switched to the more standard compat_ptr().
RLIM_INFINITY32, RESOURCE32() & struct rlimit32 have been present but
unused since the beginning of the git era.
Remove the dead code.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20108/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
The sys_32_mmap2 function has been unused since we started using syscall
wrappers in commit dbda6ac089 ("MIPS: CVE-2009-0029: Enable syscall
wrappers."), and is indeed identical to the sys_mips_mmap2 function that
replaced it in sys32_call_table.
Remove the dead code.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20107/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Our sigreturn functions make use of a macro named nabi_no_regargs to
declare 8 dummy arguments to a function, forcing the compiler to expect
a pt_regs structure on the stack rather than in argument registers. This
is an ugly hack which unnecessarily causes these sigreturn functions to
need to care about the calling convention of the ABI the kernel is built
for. Although this is abstracted via nabi_no_regargs, it's still ugly &
unnecessary.
Remove nabi_no_regargs & the struct pt_regs argument from sigreturn
functions, and instead use current_pt_regs() to find the struct pt_regs
on the stack, which works cleanly regardless of ABI.
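A sketch of the shape of the change for one of the sigreturn functions
(illustrative, not the full diff):

-asmlinkage void sys_rt_sigreturn(nabi_no_regargs struct pt_regs regs)
+asmlinkage void sys_rt_sigreturn(void)
 {
+	struct pt_regs *regs = current_pt_regs();
 	...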
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20106/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
On systems where physical memory begins at a non-zero address, defining
PHYS_OFFSET (which influences ARCH_PFN_OFFSET) can save us time & memory
by avoiding book-keeping for pages from address zero to the start of
memory.
Some MIPS platforms already make use of this, but with the definition of
PHYS_OFFSET being compile-time constant it hasn't been possible to
enable this optimization for a kernel which may run on systems with
varying physical memory base addresses.
Introduce a new Kconfig option CONFIG_MIPS_AUTO_PFN_OFFSET which, when
enabled, makes ARCH_PFN_OFFSET a variable & detects it from the boot
memory map (which for example may have been populated from DT). The
relationship with PHYS_OFFSET is reversed, with PHYS_OFFSET now being
based on ARCH_PFN_OFFSET. This is because ARCH_PFN_OFFSET is used far
more often, so avoiding the need for runtime calculation gives us a
smaller impact on kernel text size (0.1% rather than 0.15% for
64r6el_defconfig).
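A rough sketch of the reversed relationship (illustrative only; the
variable name is an assumption, not the in-tree one):

#ifdef CONFIG_MIPS_AUTO_PFN_OFFSET
extern unsigned long mips_pfn_offset;	/* detected from the boot memory map */
# define ARCH_PFN_OFFSET	mips_pfn_offset
# define PHYS_OFFSET	((phys_addr_t)ARCH_PFN_OFFSET << PAGE_SHIFT)
#else
# define ARCH_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)
#endif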
Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Patchwork: https://patchwork.linux-mips.org/patch/20048/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Define an NT_MIPS_FP_MODE core file note and implement a corresponding
regset holding the state handled by PR_SET_FP_MODE and PR_GET_FP_MODE
prctl(2) requests. This lets debug software correctly interpret the
contents of floating-point general registers both in live debugging and
in core files, and also switch floating-point modes of a live process.
[paul.burton@mips.com:
- Changed NT_MIPS_FP_MODE to 0x801 to match first nibble of
NT_MIPS_DSP, which was also changed to avoid a conflict.]
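For reference, the note type values mentioned above end up defined as:

    #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
    #define NT_MIPS_FP_MODE	0x801		/* MIPS floating-point mode */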
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19331/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Define an NT_MIPS_DSP core file note type and implement a corresponding
regset holding the DSP ASE register context, following the layout of the
`mips_dsp_state' structure, except for the DSPControl register stored as
a 64-bit rather than 32-bit quantity in a 64-bit note.
The lack of DSP ASE register saving to core files can be considered a
design flaw with commit e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP
ASE."), leading to an incomplete state being saved. Consequently no DSP
ASE regset has been created with commit 7aeb753b53 ("MIPS: Implement
task_user_regset_view."), when regset support was added to the MIPS
port.
Additionally there is no way for ptrace(2) to correctly access the DSP
accumulator registers in n32 processes with the existing interfaces.
This is due to 32-bit truncation of data passed with PTRACE_PEEKUSR and
PTRACE_POKEUSR requests, which cannot be avoided owing to how the data
types for ptrace(3) have been defined. This new NT_MIPS_DSP regset
fills the missing interface gap.
[paul.burton@mips.com:
- Change NT_MIPS_DSP to 0x800 to avoid conflict with NT_VMCOREDD
introduced by commit 2724273e8f ("vmcore: add API to collect
hardware dump in second kernel").
- Drop stable tag. Whilst I agree the lack of this functionality can
be considered a flaw in earlier DSP ASE support, it's still new
functionality which doesn't meet the requirements set out in
Documentation/process/stable-kernel-rules.rst.]
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
References: 7aeb753b53 ("MIPS: Implement task_user_regset_view.")
Patchwork: https://patchwork.linux-mips.org/patch/19330/
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Use the `unsigned long' rather than `__u32' type for DSP accumulator
registers, like with the regular MIPS multiply/divide accumulator and
general-purpose registers, as all are 64-bit in 64-bit implementations
and using a 32-bit data type leads to contents truncation on context
saving.
Update `arch_ptrace' and `compat_arch_ptrace' accordingly, removing
casts that are similarly not used with multiply/divide accumulator or
general-purpose register accesses.
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP ASE.")
Patchwork: https://patchwork.linux-mips.org/patch/19329/
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # 2.6.15+
prom_putchar() is used centrally in the early printk infrastructure,
therefore at least MIPS should agree on the function return type.
[paul.burton@mips.com:
- Include linux/types.h in asm/setup.h to gain the bool typedef before
we start including asm/setup.h elsewhere.
- Include asm/setup.h in all files that use or define prom_putchar().
- Also standardise on signed rather than unsigned char argument.]
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19842/
Cc: linux-mips@linux-mips.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Jonas Gorski <jonas.gorski@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Annotate cpu_wait implementations using the __cpuidle macro which
places these functions in the .cpuidle.text section. This allows
cpu_in_idle() to return true for PC values which fall within these
functions, allowing nmi_backtrace() to produce cleaner output for CPUs
running idle functions. For example:
# echo l >/proc/sysrq-trigger
[ 38.587170] sysrq: SysRq : Show backtrace of all active CPUs
[ 38.593657] NMI backtrace for cpu 1
[ 38.597611] CPU: 1 PID: 161 Comm: sh Not tainted 4.18.0-rc1+ #27
[ 38.604306] Stack : 00000000 00000004 00000006 80486724 00000000 00000000 00000000 00000000
[ 38.613647] 80e17eda 00000034 00000000 00000000 80d20000 80b67e98 8e559c90 0ffe1e88
[ 38.622986] 00000000 00000000 80e70000 00000000 8f61db18 38312e34 722d302e 202b3163
[ 38.632324] 8e559d3c 8e559adc 00000001 6b636162 80d20000 80000000 00000000 80d1cfa4
[ 38.641664] 00000001 80d20000 80d19520 00000000 00000003 80836724 00000004 80e10004
[ 38.650993] ...
[ 38.653724] Call Trace:
[ 38.656499] [<8040cdd0>] show_stack+0xa0/0x144
[ 38.661475] [<80b67e98>] dump_stack+0xe8/0x120
[ 38.666455] [<80b6f6d4>] nmi_cpu_backtrace+0x1b4/0x1cc
[ 38.672189] [<80b6f81c>] nmi_trigger_cpumask_backtrace+0x130/0x1e4
[ 38.679081] [<808295d8>] __handle_sysrq+0xc0/0x180
[ 38.684421] [<80829b84>] write_sysrq_trigger+0x50/0x64
[ 38.690176] [<8061c984>] proc_reg_write+0xd0/0xfc
[ 38.695447] [<805aac1c>] __vfs_write+0x54/0x194
[ 38.700500] [<805aaf24>] vfs_write+0xe0/0x18c
[ 38.705360] [<805ab190>] ksys_write+0x7c/0xf0
[ 38.710238] [<80416018>] syscall_common+0x34/0x58
[ 38.715558] Sending NMI from CPU 1 to CPUs 0,2-3:
[ 38.720916] NMI backtrace for cpu 0 skipped: idling at r4k_wait_irqoff+0x2c/0x34
[ 38.729186] NMI backtrace for cpu 3 skipped: idling at r4k_wait_irqoff+0x2c/0x34
[ 38.737449] NMI backtrace for cpu 2 skipped: idling at r4k_wait_irqoff+0x2c/0x34
Without this we get register value & backtrace output from all CPUs,
which is generally useless for those running the idle function & serves
only to overwhelm & obfuscate the meaningful output from non-idle CPUs.
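A minimal kernel-style sketch of the annotation (names hypothetical, not
the actual MIPS code): marking an idle routine __cpuidle places it in
.cpuidle.text, which is the section cpu_in_idle() checks PC values
against.

#include <linux/cpu.h>

static void __cpuidle example_cpu_wait(void)
{
	/*
	 * The architecture-specific wait-for-interrupt sequence would go
	 * here; what matters to nmi_backtrace() is only that the function
	 * lives in the .cpuidle.text section.
	 */
}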
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19598/
The current MIPS implementation of arch_trigger_cpumask_backtrace() is
broken because it attempts to use synchronous IPIs despite the fact that
it may be run with interrupts disabled.
This means that when arch_trigger_cpumask_backtrace() is invoked, for
example by the RCU CPU stall watchdog, we may:
- Deadlock due to use of synchronous IPIs with interrupts disabled,
causing the CPU that's attempting to generate the backtrace output
to hang itself.
- Not succeed in generating the desired output from remote CPUs.
- Produce warnings about this from smp_call_function_many(), for
example:
[42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks:
[42760.535755] 0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0
[42760.547874] 1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0
[42760.559869] (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33)
[42760.568927] ------------[ cut here ]------------
[42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c
[42760.587839] Modules linked in:
[42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2
[42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8
[42760.616937] 95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000
[42760.630095] 00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0
[42760.643169] 00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0
[42760.656282] 8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca
[42760.669424] ...
[42760.673919] Call Trace:
[42760.678672] [<27fde568>] show_stack+0x70/0xf0
[42760.685417] [<84751641>] dump_stack+0xaa/0xd0
[42760.692188] [<699d671c>] __warn+0x80/0x92
[42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36
[42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c
[42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a
[42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98
[42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac
[42760.737476] [<059b3b43>] update_process_times+0x18/0x34
[42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38
[42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50
[42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226
[42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a
[42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a
[42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168
[42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86
[42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14
[42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.820086] [<c7521934>] do_IRQ+0x16/0x20
[42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94
[42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78
[42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c
[42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c
[42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98
[42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44
[42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e
[42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66
[42760.883907] [<b3fce717>] exit_mmap+0x76/0xea
[42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a
[42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c
[42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e
[42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e
[42760.917045] [<46cf89d0>] smp_call_function_many+0x1a2/0x20c
[42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c
[42760.931765] ---[ end trace 02aa09da9dc52a60 ]---
[42760.938342] ------------[ cut here ]------------
[42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8
...
This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async
IPIs & smp_call_function_single_async() in order to resolve this
problem. We ensure use of the pre-allocated call_single_data_t
structures is serialized by maintaining a cpumask indicating that
they're busy, and refusing to attempt to send an IPI when a CPU's bit is
set in this mask. This should only happen if a CPU hasn't responded to a
previous backtrace IPI - ie. if it's hung - and we print a warning to
the console in this case.
I've marked this for stable branches as far back as v4.9, to which it
applies cleanly. Strictly speaking the faulty MIPS implementation can be
traced further back to commit 856839b768 ("MIPS: Add
arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel
versions v3.19 through v4.8 will require further work to backport due to
the rework performed in commit 9a01c3ed5c ("nmi_backtrace: add more
trigger_*_cpu_backtrace() methods").
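A minimal sketch of the approach described above (abridged, not the
exact implementation): pre-allocated per-CPU call_single_data_t entries
guarded by a busy cpumask, sent with smp_call_function_single_async().

#include <linux/cpumask.h>
#include <linux/nmi.h>
#include <linux/percpu.h>
#include <linux/printk.h>
#include <linux/smp.h>
#include <asm/irq_regs.h>

static cpumask_t backtrace_csd_busy;
static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);

static void handle_backtrace(void *info)
{
	nmi_cpu_backtrace(get_irq_regs());
	cpumask_clear_cpu(smp_processor_id(), &backtrace_csd_busy);
}

static void raise_backtrace(cpumask_t *mask)
{
	call_single_data_t *csd;
	int cpu;

	for_each_cpu(cpu, mask) {
		/* Refuse to reuse a CSD whose previous IPI was never acked. */
		if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
			pr_warn("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
				cpu);
			continue;
		}

		csd = &per_cpu(backtrace_csd, cpu);
		csd->func = handle_backtrace;
		smp_call_function_single_async(cpu, csd);
	}
}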
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19597/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Fixes: 856839b768 ("MIPS: Add arch_trigger_all_cpu_backtrace() function")
Fixes: 9a01c3ed5c ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")
The generic nmi_cpu_backtrace() function calls show_regs() when a struct
pt_regs is available, and dump_stack() otherwise. If we were to make use
of the generic nmi_cpu_backtrace() with MIPS' current implementation of
show_regs() this would mean that we see only register data with no
accompanying stack information, in contrast with our current
implementation which calls dump_stack() regardless of whether register
state is available.
In preparation for making use of the generic nmi_cpu_backtrace() to
implement arch_trigger_cpumask_backtrace(), have our implementation of
show_regs() call dump_stack() and drop the explicit dump_stack() call in
arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace().
This will allow the output we produce to remain the same after a later
patch switches to using nmi_cpu_backtrace(). It may mean that we produce
extra stack output in other uses of show_regs(), but this:
1) Seems harmless.
2) Is good for consistency between arch_trigger_cpumask_backtrace()
and other users of show_regs().
3) Matches the behaviour of the ARM & PowerPC architectures.
Marked for stable back to v4.9 as a prerequisite of the following patch,
which switches arch_trigger_cpumask_backtrace() over to async IPIs.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19596/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Commit 784e0300fe ("rseq: Avoid infinite recursion when delivering
SIGSEGV") added a new ksig argument to the rseq_signal_deliver() &
rseq_handle_notify_resume() functions, and was merged in v4.18-rc2.
Meanwhile MIPS support for restartable sequences was also merged in
v4.18-rc2 with commit 9ea141ad54 ("MIPS: Add support for restartable
sequences"), and therefore didn't get updated for the API change.
This results in build failures like the following:
CC arch/mips/kernel/signal.o
arch/mips/kernel/signal.c: In function 'handle_signal':
arch/mips/kernel/signal.c:804:22: error: passing argument 1 of
'rseq_signal_deliver' from incompatible pointer type
[-Werror=incompatible-pointer-types]
rseq_signal_deliver(regs);
^~~~
In file included from ./include/linux/context_tracking.h:5,
from arch/mips/kernel/signal.c:12:
./include/linux/sched.h:1811:56: note: expected 'struct ksignal *' but
argument is of type 'struct pt_regs *'
static inline void rseq_signal_deliver(struct ksignal *ksig,
~~~~~~~~~~~~~~~~^~~~
arch/mips/kernel/signal.c:804:2: error: too few arguments to function
'rseq_signal_deliver'
rseq_signal_deliver(regs);
^~~~~~~~~~~~~~~~~~~
Fix this by adding the ksig argument as was done for other architectures
in commit 784e0300fe ("rseq: Avoid infinite recursion when delivering
SIGSEGV").
Signed-off-by: Paul Burton <paul.burton@mips.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Patchwork: https://patchwork.linux-mips.org/patch/19603/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Commit 6b8322576e ("MIPS: Force CPUs to lose FP context during mode
switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
when necessary. Whilst it achieves that, unfortunately it causes all
sorts of strange race conditions because:
1) The IPI may arrive at a point where the FPU is in the process of
being enabled, but that process is not yet complete leading to a
state we aren't prepared to handle. For example:
[ 370.215903] do_cpu invoked from kernel context![#1]:
[ 370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
[ 370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
[ 370.235399] $ 0 : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
[ 370.243882] $ 4 : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
[ 370.252317] $ 8 : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
[ 370.260753] $12 : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
[ 370.269179] $16 : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
[ 370.277612] $20 : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
[ 370.286038] $24 : 0000000000000000 900000001612048c
[ 370.294476] $28 : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
[ 370.302909] Hi : 0000000000000000
[ 370.306595] Lo : 0000000000000200
[ 370.310376] epc : ffffffff80115d38 _save_fp+0x10/0xa0
[ 370.315784] ra : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.322707] Status: 140084e2 KX SX UX KERNEL EXL
[ 370.327980] Cause : 1080002c (ExcCode 0b)
[ 370.332091] PrId : 0001a428 (MIPS P6600)
[ 370.336179] Modules linked in:
[ 370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
[ 370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
[ 370.358161] 0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
[ 370.366575] ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
[ 370.374989] ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
[ 370.383395] 0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
[ 370.391817] 000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
[ 370.400230] a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
[ 370.408644] ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
[ 370.417066] ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
[ 370.425472] ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
[ 370.433885] ...
[ 370.436562] Call Trace:
[ 370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
[ 370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
[ 370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
[ 370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
[ 370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
[ 370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
[ 370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
[ 370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
[ 370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
[ 370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
[ 370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
[ 370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
[ 370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10
2) The IPI may arrive during kernel use of the FPU, since we generally
only disable preemption around use of the FPU & leave interrupts
enabled. This can lead to us unexpectedly losing access to the FPU
in places where it previously had not been possible. For example:
do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82
#2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10
At first glance a simple fix may seem to be to disable interrupts around
kernel use of the FPU rather than merely preemption; however, this would
introduce further overhead outside of the mode switch path & doesn't
solve the third problem:
3) The IPI may arrive whilst the kernel is running code that will lead
to a preempt_disable() call & FPU usage soon. If this happens then
the IPI will be serviced & we'll proceed to enable an FPU whilst the
mode switch is in progress, leading to strange & inconsistent
behaviour.
Further to all of this is a separate but related problem:
4) There are various paths through which we may enable the FPU without
the user having triggered a coprocessor 1 disabled exception. These
paths are those in which we emulate instructions & then enable the
FPU with the expectation that the user might execute an FP
instruction shortly afterwards. However these paths have not
previously checked whether an FP mode switch is underway for the
task, and therefore could enable the FPU whilst such a mode switch
is in progress leading to strange & inconsistent behaviour for user
code.
This patch fixes all of the above by taking a step back & re-examining
our approach to FP mode switches. Up until now we have taken these basic
steps:
a) Prevent any threads that are part of the affected process from being
able to obtain ownership of the FPU.
b) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
c) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
d) Allow threads to obtain ownership of the FPU again.
This approach is however more complex than necessary. All that we really
require is that the mode switch has occurred for all threads that are
part of the affected process before mips_set_process_fp_mode(), and thus
the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
we stop threads from owning or using an FPU whilst a mode switch occurs,
only that we force them to relinquish it after the mode switch has
occurred such that they next own an FPU with the correct mode
configured. Our basic steps therefore simplify to:
A) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
B) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
We implement B) by forcing each CPU that might be running a thread of
the affected process to schedule a no-op function, which causes the
affected thread to lose its FPU ownership when it is descheduled.
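A minimal sketch of step B) follows (hypothetical names; the actual
patch may use a different mechanism): running a synchronous no-op on
each CPU that might be executing a thread of the affected process forces
that CPU through the scheduler, so the thread relinquishes the FPU and
picks up the new mode the next time it acquires it.

#include <linux/cpumask.h>
#include <linux/workqueue.h>

static long fp_mode_switch_noop(void *unused)
{
	/* Nothing to do: being scheduled on this CPU is the whole point. */
	return 0;
}

static void force_fp_mode_switch(const struct cpumask *process_cpus)
{
	int cpu;

	for_each_cpu_and(cpu, process_cpus, cpu_online_mask)
		work_on_cpu(cpu, fp_mode_switch_noop, NULL);
}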
The end result is simpler FP mode switching with less overhead in the
FPU enable path (ie. enable_restore_fp_context()) and fewer moving
parts.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 9791554b45 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Fixes: 6b8322576e ("MIPS: Force CPUs to lose FP context during mode switches")
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v4.0+
On SMP systems, the shared ejtag debug buffer may be overwritten by
other cores, because every core can take an ejtag exception at the
same time.
Unfortunately, in that context it's difficult to free up enough
registers to address per-CPU buffers, so use ll/sc to serialize access
to the shared buffer.
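A C-level sketch of the idea, with hypothetical names (the real
exception handler does this in assembly with very few registers to
spare): an ll/sc-backed compare-and-swap lets one core claim the shared
buffer while the others spin.

#include <linux/atomic.h>
#include <asm/processor.h>

static atomic_t ejtag_buf_owner = ATOMIC_INIT(-1);	/* -1: buffer free */

static void claim_debug_buffer(int cpu)
{
	/* atomic_cmpxchg() compiles down to an ll/sc loop on MIPS. */
	while (atomic_cmpxchg(&ejtag_buf_owner, -1, cpu) != -1)
		cpu_relax();
}

static void release_debug_buffer(void)
{
	atomic_set(&ejtag_buf_owner, -1);
}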
[paul.burton@mips.com:
This could in theory be backported at least as far back as the
beginning of the git era, however in general it's exceedingly rare
that anyone would hit this without further changes, so it doesn't seem
worthwhile marking for backport.]
Signed-off-by: Heiher <r@hev.cc>
Patchwork: https://patchwork.linux-mips.org/patch/19507/
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: jhogan@kernel.org
Cc: ralf@linux-mips.org