Commit Graph

436 Commits

Author SHA1 Message Date
Linus Torvalds 6325e940e7 arm64 updates for 3.18:
- eBPF JIT compiler for arm64
 - CPU suspend backend for PSCI (firmware interface) with standard idle
   states defined in DT (generic idle driver to be merged via a different
   tree)
 - Support for CONFIG_DEBUG_SET_MODULE_RONX
 - Support for unmapped cpu-release-addr (outside kernel linear mapping)
 - set_arch_dma_coherent_ops() implemented and bus notifiers removed
 - EFI_STUB improvements when base of DRAM is occupied
 - Typos in KGDB macros
 - Clean-up to (partially) allow kernel building with LLVM
 - Other clean-ups (extern keyword, phys_addr_t usage)

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
 - eBPF JIT compiler for arm64
 - CPU suspend backend for PSCI (firmware interface) with standard idle
   states defined in DT (generic idle driver to be merged via a
   different tree)
 - Support for CONFIG_DEBUG_SET_MODULE_RONX
 - Support for unmapped cpu-release-addr (outside kernel linear mapping)
 - set_arch_dma_coherent_ops() implemented and bus notifiers removed
 - EFI_STUB improvements when base of DRAM is occupied
 - Typos in KGDB macros
 - Clean-up to (partially) allow kernel building with LLVM
 - Other clean-ups (extern keyword, phys_addr_t usage)

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
  arm64: Remove unneeded extern keyword
  ARM64: make of_device_ids const
  arm64: Use phys_addr_t type for physical address
  aarch64: filter $x from kallsyms
  arm64: Use DMA_ERROR_CODE to denote failed allocation
  arm64: Fix typos in KGDB macros
  arm64: insn: Add return statements after BUG_ON()
  arm64: debug: don't re-enable debug exceptions on return from el1_dbg
  Revert "arm64: dmi: Add SMBIOS/DMI support"
  arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
  of: amba: use of_dma_configure for AMBA devices
  arm64: dmi: Add SMBIOS/DMI support
  arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
  arm64:mm: initialize max_mapnr using function set_max_mapnr
  setup: Move unmask of async interrupts after possible earlycon setup
  arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
  arm64: pageattr: Correctly adjust unaligned start addresses
  net: bpf: arm64: fix module memory leak when JIT image build fails
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  ...
2014-10-08 05:34:24 -04:00
Uwe Kleine-König c8fdd497a4 ARM64: make of_device_ids const
of_device_ids (i.e. compatible strings and the respective data) are not
supposed to change at runtime. All functions working with of_device_ids
provided by <linux/of.h> work with const of_device_ids. So mark the
only non-const struct in arch/arm64 as const, too.
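
For illustration, a hedged sketch of the resulting pattern (the table
below is illustrative, not necessarily the exact table this patch
touches):

/* sketch - match-table entries are only ever read, so the whole array
 * can be const and live in rodata */
#include <linux/of.h>

static const struct of_device_id example_of_match[] = {
        { .compatible = "arm,psci" },
        { /* sentinel */ },
};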

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-03 14:49:28 +01:00
Russell King d5d1689224 Merge branches 'fiq' (early part), 'fixes', 'l2c' (early part) and 'misc' into for-next 2014-10-02 21:47:02 +01:00
Yalin Wang 562c85cadb ARM: 8168/1: extend __init_end to a page align address
This patch changes the __init_end address to a
page-aligned address, so that free_initmem() can
free the whole .init section: if the end address
is not page aligned, it is rounded down to a page
boundary, and the unaligned tail page is never
freed.
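
A small standalone C sketch of the rounding arithmetic (the end
address below is hypothetical):

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
        unsigned long init_end = 0x40a123UL; /* hypothetical __init_end */

        /* freeing rounds the end down, so the tail page is never freed */
        printf("end rounded down: %#lx\n", init_end & PAGE_MASK);

        /* aligning __init_end up in the linker script avoids the loss */
        printf("end aligned up:   %#lx\n",
               (init_end + PAGE_SIZE - 1) & PAGE_MASK);
        return 0;
}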

Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-10-02 21:28:16 +01:00
Catalin Marinas 7acf71d1a2 arm64: Fix typos in KGDB macros
Some of the KGDB macros used for generating the BRK instructions had the
wrong spelling for DBG and KGDB abbreviations.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-25 15:35:41 +01:00
Mark Brown a9ae04c9fa arm64: insn: Add return statements after BUG_ON()
Following a recent series of enhancements to the insn code the ARMv8
allnoconfig build has been generating a large number of warnings in the
form of:

arch/arm64/kernel/insn.c:689:8: warning: 'insn' may be used uninitialized in this function [-Wmaybe-uninitialized]

This is because BUG() and related macros can be compiled out, so
execution paths which would normally result in a panic compile down to
noops instead.

I wasn't able to immediately identify a sensible return value to use in
these cases, so just return AARCH64_BREAK_FAULT - this is all "should
never happen" code, so hopefully it never has a practical impact.

Signed-off-by: Mark Brown <broonie@kernel.org>
[catalin.marinas@arm.com: AARCH64_BREAK_FAULT definition contributed by Daniel Borkmann]
[catalin.marinas@arm.com: replace return 0 with AARCH64_BREAK_FAULT]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-25 15:32:48 +01:00
Will Deacon 1059c6bf85 arm64: debug: don't re-enable debug exceptions on return from el1_dbg
When returning from a debug exception taken from EL1, we unmask debug
exceptions after handling the exception. This is crucial for debug
exceptions taken from EL0, so that any kernel work on the ret_to_user
path can be debugged by kgdb.

However, when returning back to EL1 the only thing left to do is to
restore the original register state before the exception return. If
single-step has been enabled by the debug exception handler, we will
get stuck in an infinite debug exception loop, since we will take the
step exception as soon as we unmask debug exceptions.

This patch avoids unmasking debug exceptions on the debug exception
return path when the exception was taken from EL1.

Fixes: 2a2830703a (arm64: debug: avoid accessing mdscr_el1 on fault paths where possible)
Cc: <stable@vger.kernel.org> #3.16+
Reported-by: David Long <dave.long@linaro.org>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-23 15:49:34 +01:00
Catalin Marinas 6f325eaa86 Revert "arm64: dmi: Add SMBIOS/DMI support"
This reverts commit 668ebd1068.

... because of lots of warnings during boot if Linux isn't started as an EFI
application:

WARNING: CPU: 4 PID: 1 at
/work/Linux/linux-2.6-aarch64/drivers/firmware/dmi_scan.c:591 dmi_matches+0x10c/0x110()
dmi check: not initialized yet.
Modules linked in:
CPU: 4 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc4+ #606
Call trace:
[<ffffffc000087fb0>] dump_backtrace+0x0/0x124
[<ffffffc0000880e4>] show_stack+0x10/0x1c
[<ffffffc0004d58f8>] dump_stack+0x74/0xb8
[<ffffffc0000ab640>] warn_slowpath_common+0x8c/0xb4
[<ffffffc0000ab6b4>] warn_slowpath_fmt+0x4c/0x58
[<ffffffc0003f2d7c>] dmi_matches+0x108/0x110
[<ffffffc0003f2da8>] dmi_check_system+0x24/0x68
[<ffffffc0006974c4>] atkbd_init+0x10/0x34
[<ffffffc0000814ac>] do_one_initcall+0x88/0x1a0
[<ffffffc00067aab4>] kernel_init_freeable+0x148/0x1e8
[<ffffffc0004d2c64>] kernel_init+0x10/0xd4

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22 18:12:37 +01:00
Yi Li 668ebd1068 arm64: dmi: Add SMBIOS/DMI support
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform, such as serial
numbers, physical layout of the ports, build configuration data, and the
like.

This has been tested by dmidecode and lshw tools.

This patch adds the call to dmi_scan_machine() to arm64_enter_virtual_mode(),
as that is the point where the EFI Configuration Tables are registered as
being available. It needs to be in an early_initcall anyway as dmi_id_init(),
which is an arch_initcall itself, depends on dmi_scan_machine() having been
called already.
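
A minimal sketch of the wiring described above, assuming an
early_initcall in the arm64 EFI code:

/* sketch - scan the SMBIOS tables once the EFI config tables are
 * known; early_initcall so it runs before dmi_id_init(), which is an
 * arch_initcall */
static int __init arm64_dmi_init(void)
{
        dmi_scan_machine();
        return 0;
}
early_initcall(arm64_dmi_init);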

Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22 11:11:18 +01:00
Catalin Marinas 9f1ae7596a arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
The aarch64_insn_gen_branch_imm() function takes an enum as the last
argument rather than a bool. It happens to work because
AARCH64_INSN_BRANCH_LINK matches 'true', but it is better to pass the
actual type.
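
A hedged before/after sketch of one such call site (variable names
assumed):

/* before: relies on 'true' happening to equal AARCH64_INSN_BRANCH_LINK */
new = aarch64_insn_gen_branch_imm(pc, addr, true);

/* after: pass the enum value the prototype actually asks for */
new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);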

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-19 12:05:45 +01:00
Jon Masters 7a9c43bed8 setup: Move unmask of async interrupts after possible earlycon setup
The kernel wants to enable reporting of asynchronous interrupts (i.e.
System Errors) as early as possible. But if this happens too early then
any pending System Error on initial entry into the kernel may never be
reported where a user can see it. This situation will occur if the kernel
is configured with CONFIG_PANIC_ON_OOPS enabled (by default or on the
command line), in which case the kernel will panic as intended, but the
associated log messages indicating this failure condition will remain
only in the kernel ring buffer and never be flushed out to the (not yet
configured) console. Therefore, this patch moves the enabling of
asynchronous interrupts during early setup to as early as reasonable,
but after parsing any earlycon parameters and setting up the earlycon.
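
A hedged sketch of the resulting ordering in setup_arch():

void __init setup_arch(char **cmdline_p)
{
        /* ... */
        parse_early_param();    /* may configure an earlycon */

        /*
         * Unmask asynchronous aborts only now, after earlycon had a
         * chance to come up, so a pending System Error is reported
         * somewhere the user can see it.
         */
        local_async_enable();
        /* ... */
}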

Signed-off-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-15 18:15:09 +01:00
Catalin Marinas c2eb6b6139 Merge arm64 CPU suspend branch
* cpuidle:
  arm64: add PSCI CPU_SUSPEND based cpu_suspend support
  arm64: kernel: introduce cpu_init_idle CPU operation
  arm64: kernel: refactor the CPU suspend API for retention states
  Documentation: arm: define DT idle states bindings
2014-09-12 10:50:21 +01:00
Lorenzo Pieralisi 18910ab0d9 arm64: add PSCI CPU_SUSPEND based cpu_suspend support
This patch implements the cpu_suspend cpu operations method through
the PSCI CPU SUSPEND API. The PSCI implementation translates the idle state
index passed by the cpu_suspend core call into a valid PSCI state according to
the PSCI states initialized at boot through the cpu_init_idle() CPU
operations hook.

The PSCI CPU suspend operation hook checks whether the PSCI state is a
standby state. If it is, it calls the PSCI suspend implementation
straight away, without saving any context. If the state is a power-down
state, the kernel calls the __cpu_suspend API (which saves the CPU
context) and passes the PSCI suspend finisher as a parameter, so that
PSCI can be called by the __cpu_suspend implementation after saving and
flushing the context, as the last function before power-down.

For power-down states, the entry point is set to the physical address of
cpu_resume, which represents the default kernel execution address
following a CPU reset.
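
A sketch along the lines of the patch (details hedged; see
arch/arm64/kernel/psci.c for the authoritative code):

static int cpu_psci_cpu_suspend(unsigned long index)
{
        struct psci_power_state *state = __this_cpu_read(psci_power_state);

        if (!state)
                return -EOPNOTSUPP;

        /* standby states retain context: call PSCI directly */
        if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
                return psci_ops.cpu_suspend(state[index - 1], 0);

        /* power-down states: save context, then run the PSCI finisher */
        return __cpu_suspend(index, psci_suspend_finisher);
}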

Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12 10:48:56 +01:00
Lorenzo Pieralisi d64f84f696 arm64: kernel: introduce cpu_init_idle CPU operation
The CPUidle subsystem on ARM64 machines requires the idle states
implementation back-end to initialize idle state parameters at
boot. This patch adds a hook to the CPU operations structure that
should be initialized by the CPU operations back-end in order to
provide a function that initializes cpu idle states.

This patch also adds the infrastructure to the arm64 kernel required
to export the CPU-operations-based initialization interface, so
that drivers (i.e. CPUidle) can use it when they are initialized
at probe time.
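
A trimmed sketch of how the hook might sit in struct cpu_operations
(other members elided; the exact layout is an assumption):

struct cpu_operations {
        const char      *name;
        int             (*cpu_init)(struct device_node *, unsigned int);
        int             (*cpu_prepare)(unsigned int);
        /* ... */
        /* new: parse DT and initialize idle states for the given cpu */
        int             (*cpu_init_idle)(struct device_node *, unsigned int);
};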

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12 10:48:55 +01:00
Lorenzo Pieralisi 714f599255 arm64: kernel: refactor the CPU suspend API for retention states
CPU suspend is the standard kernel interface for entering low-power
states on ARM64 systems. The current cpu_suspend implementation assumes
by default that all low-power states lose the CPU context, so the CPU
registers must be saved and cleaned to DRAM upon state entry.
Furthermore, the current cpu_suspend() implementation assumes that if
the CPU suspend back-end method returns when called, this has to be
considered an error regardless of the return code (which can indicate
success), since the CPU was not expected to return from a code path
different from the cpu_resume code path - eg returning from the reset
vector.

All in all, this means that the current API does not cope well with
low-power states that preserve the CPU context when entered (ie
retention states): the context is saved for nothing on entry into those
states, and a successful state entry can return as a normal function
return, which the current CPU suspend implementation considers an
error.

This patch refactors the cpu_suspend() API so that it is split into
two separate functionalities. The arm64 cpu_suspend API now just
provides a wrapper around the CPU suspend operation hook. A new
function is introduced (for architecture code use only) for states
that require context saving upon entry:

__cpu_suspend(unsigned long arg, int (*fn)(unsigned long))

__cpu_suspend() saves the context on function entry and calls the
so-called suspend finisher (ie fn) to complete the suspend operation.
The finisher is not expected to return, unless it fails, in which case
the error is propagated back to the __cpu_suspend caller.

The API refactoring results in the following pseudo code call sequence for a
suspending CPU, when triggered from a kernel subsystem:

/*
 * int cpu_suspend(unsigned long idx)
 * @idx: idle state index
 */
{
-> cpu_suspend(idx)
	|---> CPU operations suspend hook called, if present
		|--> if (retention_state)
			|--> direct suspend back-end call (eg PSCI suspend)
		     else
			|--> __cpu_suspend(idx, &back_end_finisher);
}

By refactoring the cpu_suspend API this way, the CPU operations back-end
has a chance to detect whether idle states require state saving or not,
and can call the required suspend operations accordingly, either through
a simple function call or indirectly through __cpu_suspend(), which
carries out state saving and suspend-finisher dispatching to complete
idle state entry.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-12 10:48:55 +01:00
Will Deacon eb35bdd7bc arm64: flush TLS registers during exec
Nathan reports that we leak TLS information from the parent context
during an exec, as we don't clear the TLS registers when flushing the
thread state.

This patch updates the flushing code so that we:

  (1) Unconditionally zero the tpidr_el0 register (since this is fully
      context switched for native tasks and zeroed for compat tasks)

  (2) Zero the tp_value state in thread_info before clearing the
      tpidrro_el0 register for compat tasks (since this is only writable
      by the set_tls compat syscall and therefore not fully switched).

A missing compiler barrier is also added to the compat set_tls syscall.
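
A hedged sketch of the updated flush path:

/* sketch - context: arch/arm64/kernel/process.c */
static void tls_thread_flush(void)
{
        asm ("msr tpidr_el0, xzr");             /* (1) always zeroed */

        if (is_compat_task()) {
                current->thread.tp_value = 0;

                /*
                 * Publish the zeroed tp_value before clearing the
                 * compat TLS register, so a preempting context switch
                 * cannot restore a stale value into tpidrro_el0.
                 */
                barrier();
                asm ("msr tpidrro_el0, xzr");   /* (2) compat only */
        }
}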

Cc: <stable@vger.kernel.org>
Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-11 18:34:58 +01:00
Zi Shen Lim 5e6e15a2c4 arm64: introduce aarch64_insn_gen_logical_shifted_reg()
Introduce function to generate logical (shifted register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:21 +01:00
Zi Shen Lim 27f95ba59b arm64: introduce aarch64_insn_gen_data3()
Introduce function to generate data-processing (3 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 6481063989 arm64: introduce aarch64_insn_gen_data2()
Introduce function to generate data-processing (2 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 546dd36b44 arm64: introduce aarch64_insn_gen_data1()
Introduce function to generate data-processing (1 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 5fdc639a7a arm64: introduce aarch64_insn_gen_add_sub_shifted_reg()
Introduce function to generate add/subtract (shifted register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 6098f2d5c7 arm64: introduce aarch64_insn_gen_movewide()
Introduce function to generate move wide (immediate) instructions.
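
These generators share a common shape; one hedged usage sketch (the
argument order is an assumption based on how the eBPF JIT consumes
these helpers):

/* encode "movz x0, #0x1234" without hand-assembling the opcode */
u32 insn = aarch64_insn_gen_movewide(AARCH64_INSN_REG_0,
                                     0x1234, 0, /* imm16, shift */
                                     AARCH64_INSN_VARIANT_64BIT,
                                     AARCH64_INSN_MOVEWIDE_ZERO);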

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 4a89d2c98e arm64: introduce aarch64_insn_gen_bitfield()
Introduce function to generate bitfield instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 9951a157fa arm64: introduce aarch64_insn_gen_add_sub_imm()
Introduce function to generate add/subtract (immediate) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 1bba567d0f arm64: introduce aarch64_insn_gen_load_store_pair()
Introduce function to generate load/store pair instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim 17cac17988 arm64: introduce aarch64_insn_gen_load_store_reg()
Introduce function to generate load/store (register offset)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim 345e0d35ec arm64: introduce aarch64_insn_gen_cond_branch_imm()
Introduce function to generate conditional branch (immediate)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim c0cafbae20 arm64: introduce aarch64_insn_gen_branch_reg()
Introduce function to generate unconditional branch (register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim 617d2fbc45 arm64: introduce aarch64_insn_gen_comp_branch_imm()
Introduce function to generate compare & branch (immediate)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster a4ceab1adb arm64: LLVMLinux: Use global stack pointer in return_address()
The global register current_stack_pointer holds the current stack pointer.
This change makes it possible to compile the kernel with both gcc and clang.
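
The underlying definition is roughly a named global register:

/* sketch - declare "sp" as a global named register so C code can read
 * the stack pointer the same way under both gcc and clang */
register unsigned long current_stack_pointer asm ("sp");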

Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster 2128df143d arm64: LLVMLinux: Use current_stack_pointer in kernel/traps.c
Use the global current_stack_pointer to get the value of the stack pointer.
This change makes it possible to compile the kernel with both gcc and clang.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster bb28cec4ea arm64: LLVMLinux: Use current_stack_pointer in save_stack_trace_tsk
Use the global current_stack_pointer to get the value of the stack pointer.
This change makes it possible to compile the kernel with both gcc and clang.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Arun Chandran 5e05153144 arm64: convert part of soft_restart() to assembly
The current soft_restart() and setup_restart implementations incorrectly
assume that the compiler will not spill/fill values to/from the stack.
However, this assumption turns out to be wrong, as revealed by the
disassembly of the current code (v3.16) built with Linaro GCC 4.9-2014.05.

ffffffc000085224 <soft_restart>:
ffffffc000085224:  a9be7bfd  stp    x29, x30, [sp,#-32]!
ffffffc000085228:  910003fd  mov    x29, sp
ffffffc00008522c:  f9000fa0  str    x0, [x29,#24]
ffffffc000085230:  94003d21  bl     ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234:  94003b33  bl     ffffffc000093f00 <flush_cache_all>
ffffffc000085238:  94003dfa  bl     ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c:  94003b31  bl     ffffffc000093f00 <flush_cache_all>
ffffffc000085240:  b0003321  adrp   x1, ffffffc0006ea000 <reset_devices>

ffffffc000085244:  f9400fa0  ldr    x0, [x29,#24] ----> spilled addr
ffffffc000085248:  f942fc22  ldr    x2, [x1,#1528] ----> global memstart_addr

ffffffc00008524c:  f0000061  adrp   x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250:  91290021  add    x1, x1, #0xa40
ffffffc000085254:  8b010041  add    x1, x2, x1
ffffffc000085258:  d2c00802  mov    x2, #0x4000000000           // #274877906944
ffffffc00008525c:  8b020021  add    x1, x1, x2
ffffffc000085260:  d63f0020  blr    x1
...

Here the compiler generates memory accesses after the cache is disabled,
loading stale values for the spilled value and the global variable. As we
cannot control when the compiler will access memory, we must rewrite the
functions in assembly to stash the values we need in registers prior to
disabling the cache, avoiding the use of memory.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel 58015ec6b8 arm64/efi: efistub: don't abort if base of DRAM is occupied
If we cannot relocate the kernel Image to its preferred offset (base of
DRAM plus TEXT_OFFSET), relocate it instead to the lowest available 2 MB
boundary plus TEXT_OFFSET. We may lose a bit of memory at the low end,
but we can still proceed normally otherwise.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel c16173fa56 arm64/efi: efistub: cover entire static mem footprint in PE/COFF .text
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.

However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.

Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Mark Rutland 113954c646 arm64: spin-table: handle unmapped cpu-release-addrs
In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due to
the presence of other images in memory). This is problematic for the
spin-table code as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.

This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.
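
A hedged sketch of the approach:

/* sketch - context: arch/arm64/kernel/smp_spin_table.c */
static int smp_spin_table_cpu_prepare(unsigned int cpu)
{
        __le64 __iomem *release_addr;

        if (!cpu_release_addr[cpu])
                return -ENODEV;

        /* map the release address via a temporary cached mapping,
         * wherever it lives */
        release_addr = ioremap_cache(cpu_release_addr[cpu],
                                     sizeof(*release_addr));
        if (!release_addr)
                return -ENOMEM;

        writeq_relaxed(__pa(secondary_holding_pen), release_addr);
        __flush_dcache_area((__force void *)release_addr,
                            sizeof(*release_addr));
        sev();                          /* kick the spinning CPU */

        iounmap(release_addr);
        return 0;
}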

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: added (__force void *) cast]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel 169c018de7 arm64: don't flag non-aliasing VIPT I-caches as aliasing
VIPT caches are non-aliasing if the index is derived from address bits that
are always equal between VA and PA. Classifying these as aliasing results in
unnecessary flushing which may hurt performance.
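
The distinction reduces to a simple check; a sketch, assuming
way size = sets * line size:

/* a VIPT cache can alias only if index bits come from above the page
 * offset, i.e. if one way is larger than a page */
static inline bool vipt_icache_is_aliasing(unsigned long way_size)
{
        return way_size > PAGE_SIZE;
}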

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel 80c517b0ff arm64: add helper functions to read I-cache attributes
This adds helper functions and #defines to <asm/cachetype.h> to read the
line size and the number of sets from the level 1 instruction cache.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Sudeep Holla 3d8afe3099 arm64: use irq_set_affinity with force=false when migrating irqs
The arm64 interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. Originally
this argument had no effect because it was not used by any interrupt
chip driver and there was no semantics defined.

This changed with commit 01f8fa4f01 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de640
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.

As a consequence, the cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The subsequent first_cpu(mask) selection then
always selects CPU0 as the target.

Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced
irq_set_affinity") intended to fix the above issue but introduced
another one, where affinity can be migrated to a wrong CPU due to an
unconditional copy of cpu_online_mask.

As was done for arm, solve the issue by calling irq_set_affinity() with
force=false from the CPU offline irq migration code, so the GIC driver
validates the affinity mask against the CPU online mask and therefore
removes CPU0 from the possible target candidates. Also revert the
changes done in commit 601c942176 as they are no longer needed.

Tested on Juno platform.
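
A hedged, simplified sketch of the migration call after this change
(the real function also checks whether the affinity mask still
intersects the online mask):

static bool migrate_one_irq(struct irq_desc *desc)
{
        struct irq_data *d = irq_desc_get_irq_data(desc);
        const struct cpumask *affinity = d->affinity;
        struct irq_chip *c = irq_data_get_irq_chip(d);

        /*
         * force=false: the driver (e.g. the GIC) validates the mask
         * against cpu_online_mask and drops the CPU going offline.
         */
        if (c->irq_set_affinity)
                c->irq_set_affinity(d, affinity, false);

        return true;
}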

Fixes: 601c942176d8("arm64: use cpu_online_mask when using forced
	irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-03 19:24:38 +01:00
Will Deacon 5e39977edf Revert "arm64: cpuinfo: print info for all CPUs"
It turns out that vendors are relying on the format of /proc/cpuinfo,
and we've even spotted out-of-tree hacks attempting to make it look
identical to the format used by arch/arm/. That means we can't afford to
churn this interface in mainline, so revert the recent reformatting of
the file for arm64 pending discussions on the list to find out what
people actually want.

This reverts commit d7a49086f2.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-01 15:55:22 +01:00
Leo Yan 7c68a9cc04 arm64: fix bug for reloading FPSIMD state after cpu power off
arm64 now defers reloading FPSIMD state, but this optimization also
introduces a bug after a cpu resumes from low power mode.

The reason is that after the cpu has been powered off, software must set
the cpu's fpsimd_last_state to NULL to force a reload of the FPSIMD
state for the thread. Otherwise it is possible that the task's
fpsimd_state.cpu field contains the id of the current cpu while the
cpu's fpsimd_last_state per-cpu variable still points to the task's
fpsimd_state, in which case the kernel will skip reloading the context
on the return to userland.
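
A hedged sketch of the fix in the FPSIMD CPU PM notifier:

/* sketch - context: arch/arm64/kernel/fpsimd.c */
static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
                                  unsigned long cmd, void *v)
{
        switch (cmd) {
        case CPU_PM_EXIT:
                if (current->mm)
                        set_thread_flag(TIF_FOREIGN_FPSTATE);
                /* the registers were lost across power-down: forget the
                 * cached owner so the next userland return reloads them */
                this_cpu_write(fpsimd_last_state, NULL);
                break;
        }
        return NOTIFY_OK;
}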

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-01 12:55:21 +01:00
Will Deacon 5b75a6af11 arm64: perf: don't rely on layout of pt_regs when grabbing sp or pc
The current perf_regs code relies on sp and pc sitting just off the end
of the pt_regs->regs array. This is ugly and fragile, so this patch
checks for these registers explicitly and returns the appropriate field.
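
A hedged sketch of the explicit handling:

u64 perf_reg_value(struct pt_regs *regs, int idx)
{
        if (WARN_ON_ONCE((u32)idx >= PERF_REG_ARM64_MAX))
                return 0;

        /* sp and pc live in dedicated pt_regs fields, not in regs[] */
        if ((u32)idx == PERF_REG_ARM64_SP)
                return regs->sp;
        if ((u32)idx == PERF_REG_ARM64_PC)
                return regs->pc;

        return regs->regs[idx];
}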

Acked-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:50 +01:00
Will Deacon 85487edd25 arm64: ptrace: fix compat reg getter/setter return values
copy_{to,from}_user return the number of bytes remaining on failure, not
an error code.

This patch returns -EFAULT when the copy operation didn't complete,
rather than expose the number of bytes not copied directly to userspace.
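
The corrected pattern, sketched (the wrapper below is hypothetical):

static int compat_reg_to_user(void __user *ubuf, compat_ulong_t reg)
{
        /* copy_to_user() returns the number of bytes NOT copied; never
         * hand that count back to userspace as a return value */
        if (copy_to_user(ubuf, &reg, sizeof(reg)))
                return -EFAULT;
        return 0;
}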

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:42 +01:00
Will Deacon 27d7ff273c arm64: ptrace: fix compat hardware watchpoint reporting
I'm not sure what I was on when I wrote this, but when iterating over
the hardware watchpoint array (hbp_watch_array), our index is off by
ARM_MAX_BRP, so we walk off the end of our thread_struct...

... except, a dodgy condition in the loop means that it never executes
at all (bp cannot be NULL).

This patch fixes the code so that we remove the bp check and use the
correct index for accessing the watchpoint structures.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:36 +01:00
Geoff Levand 5843be2279 arm64: Remove unused variable in head.S
Remove an unused local variable from head.S.  It seems this was never
used even from the initial commit
9703d9d7f7 (arm64: Kernel booting and
initialisation), and is a leftover from a previous implementation
of __calc_phys_offset.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-26 19:24:00 +01:00
Linus Torvalds 7be141d055 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
 "A couple of EFI fixes, plus misc fixes all around the map"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  efi/arm64: Store Runtime Services revision
  firmware: Do not use WARN_ON(!spin_is_locked())
  x86_32, entry: Clean up sysenter_badsys declaration
  x86/doc: Fix the 'tlb_single_page_flush_ceiling' sysconfig path
  x86/mm: Fix sparse 'tlb_single_page_flush_ceiling' warning and make the variable read-mostly
  x86/mm: Fix RCU splat from new TLB tracepoints
2014-08-24 16:17:41 -07:00
Semen Protsenko 6a7519e813 efi/arm64: Store Runtime Services revision
"efi" global data structure contains "runtime_version" field which must
be assigned in order to use it later in Runtime Services virtual calls
(virt_efi_* functions).

Before this patch "runtime_version" was unassigned (0), so each
Runtime Service virtual call that checks revision would fail.
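
The fix is essentially a one-liner in the arm64 EFI setup path,
sketched here:

/* record the revision advertised by the firmware system table before
 * any virt_efi_* call checks it */
efi.runtime_version = efi.systab->hdr.revision;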

Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-08-22 08:45:41 +01:00
Will Deacon 44b375070f Revert "arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL"
For some reason, the audit patches didn't make it out of -next this
merge window, so revert our temporary hack and let the audit guys deal
with fixing up -next.

This reverts commit 2a8f45b040.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 22:05:45 +01:00
Leif Lindholm 86c8b27a01 arm64: ignore DT memreserve entries when booting in UEFI mode
UEFI provides its own method for marking regions to reserve, via the
memory map which is also used to initialise memblock. So when using the
UEFI memory map, ignore any memreserve entries present in the DT.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 20:22:03 +01:00
Ard Biesheuvel 4190312beb arm64: align randomized TEXT_OFFSET on 4 kB boundary
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
the embedded EFI stub is executed in place. The EFI stub relocates the
Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
the kernel proper.

In AArch64, PC relative symbol references are emitted using adrp/add or
adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
separate :lo12: relocation. This implicitly assumes that the code will
always be executed at the same relative offset with respect to a 4 kB
boundary, or the references will point to the wrong address.

This means we should link the kernel at a 4 kB aligned base address in
order to remain compatible with the base address the UEFI loader uses
when doing the initial load of Image. So update the code that generates
TEXT_OFFSET to choose a multiple of 4 kB.

At the same time, update the code so it chooses from the interval [0..2MB)
as the author originally intended.
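
A standalone C sketch of the constrained randomization (the kernel
build does this in the Makefile; this translation is purely
illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SZ_4K 0x1000UL
#define SZ_2M 0x200000UL

int main(void)
{
        srand((unsigned)time(NULL));

        /* pick a random multiple of 4 kB in the interval [0, 2 MB) */
        unsigned long text_offset = (rand() % (SZ_2M / SZ_4K)) * SZ_4K;

        printf("TEXT_OFFSET := %#08lx\n", text_offset);
        return 0;
}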

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 19:26:09 +01:00