Commit Graph

93468 Commits

Leonid Yegoshin 2f284eac28 MIPS: malta: malta-init: Fix System Controller memory mapping for EVA
Shift System Controller memory mapping to 0x80000000

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:20 +01:00
Markos Chandras d1965c0616 MIPS: malta: malta-memory: Add free_init_pages_eva() callback
Use a Malta-specific function to free the init section once the
kernel has booted. When operating in EVA mode, the physical memory
is shifted to 0x80000000. The kernel is loaded at 0x80000000 (virtual),
so the offset between physical and virtual addresses is 0.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:20 +01:00
Markos Chandras 3bdd8e6e09 MIPS: malta: malta-memory: Use the PHYS_OFFSET to build the memory map
PHYS_OFFSET is used to denote the physical start address of the
first bank of RAM. When the Malta board is in EVA mode, the physical
start address of RAM is shifted to 0x80000000 so it's necessary to use
this macro in order to make the code EVA agnostic.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:20 +01:00
Markos Chandras e6ca4e5bf1 MIPS: malta: malta-memory: Add support for the 'ememsize' variable
The 'ememsize' variable is used to denote the real RAM which is
present on the Malta board. This is different from 'memsize', which
is capped at 256MB. The 'ememsize' value is used to get the actual
physical memory when setting up the Malta memory layout. This only
makes sense when the core operates in EVA mode, and it's ignored
otherwise.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:20 +01:00
Markos Chandras c9fede2afc MIPS: malta: spaces.h: Add spaces.h file for Malta (EVA)
Add a spaces.h file for Malta to override certain memory macros
when operating in EVA mode.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:19 +01:00
Markos Chandras f8b7faf17b MIPS: malta: Configure Segment Control registers for EVA boot
The Malta board aliases 0x80000000 - 0xffffffff to 0x00000000
- 0x7fffffff, ignoring the 256 MB I/O hole at 0x10000000.
The physical memory is shifted to 0x80000000 so up to 2GB
can be used. Kuseg is expanded to 3GB (due to board limitations
only 2GB can be accessed) and lowmem (kernel space) is expanded to 2GB.

The Segment Control registers are programmed as follows:

Virtual memory           Physical memory          Mapping
0x00000000 - 0x7fffffff  0x80000000 - 0xffffffff  MUSUK (kuseg)
0x80000000 - 0x9fffffff  0x00000000 - 0x1fffffff  MUSUK (kseg0)
0xa0000000 - 0xbfffffff  0x00000000 - 0x1fffffff  MUSUK (kseg1)
0xc0000000 - 0xdfffffff             -               MK  (kseg2)
0xe0000000 - 0xffffffff             -               MK  (kseg3)

The location of the exception vectors remains the same since 0xbfc00000
(the traditional exception base) still maps to 0x1fc00000 physical.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:19 +01:00
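
To make the table above concrete, here is a minimal sketch of the resulting
address arithmetic (illustration only; the kernel programs this through the
Segment Control registers rather than open-coding any arithmetic):

    #include <stdint.h>

    /* Sketch of the aliasing described above:
     * kuseg: virtual 0x00000000-0x7fffffff -> physical 0x80000000-0xffffffff
     * kseg0: virtual 0x80000000-0x9fffffff -> physical 0x00000000-0x1fffffff
     * kseg1: virtual 0xa0000000-0xbfffffff -> physical 0x00000000-0x1fffffff
     */
    static uint32_t eva_kuseg_to_phys(uint32_t va) { return va + 0x80000000u; }
    static uint32_t eva_kseg0_to_phys(uint32_t va) { return va - 0x80000000u; }
    static uint32_t eva_kseg1_to_phys(uint32_t va) { return va - 0xa0000000u; }
    /* e.g. the exception base 0xbfc00000 still resolves to 0x1fc00000 physical */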
Leonid Yegoshin 4676f9359f MIPS: mm: c-r4k: Flush scache to avoid cache aliases
There is a chance for the secondary cache to contain memory
aliases. This can happen if the bootloader runs in non-EVA mode
(or even in EVA mode but with a different mapping from the kernel's)
and the kernel switches to EVA afterwards. It's best to flush
the scache to avoid having the secondary CPUs fetch stale
data from it.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:19 +01:00
Markos Chandras 80ca69f40f MIPS: mm: c-r4k: Add support for flushing user pages from cache
Use the userspace cache flushing functions if the interrupted
process is a userspace one.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:19 +01:00
Leonid Yegoshin 4caa906ee9 MIPS: mm: c-r4k: Build EVA {d,i}cache flushing functions
Build EVA specific cache flushing functions (ie cachee).
They will be used by a subsequent patch.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:19 +01:00
Markos Chandras 0893d3fb8d MIPS: mm: init: Add free_init_pages() callback for EVA
A core in EVA mode can have any possible segment mapping, so the
default free_initmem_default() function may not always work as expected.
Therefore, add a callback that platforms can use to free up the init section.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
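
A hedged sketch of the callback pattern this describes (the hook name and
exact signature here are assumptions for illustration, not necessarily what
the patch uses):

    /* Hypothetical hook: a platform sets it, the generic init code calls it. */
    void (*free_init_pages_eva)(void *begin, void *end) = NULL;

    void free_initmem(void)
    {
            if (free_init_pages_eva)
                    free_init_pages_eva((void *)&__init_begin,
                                        (void *)&__init_end);
            else
                    free_initmem_default(POISON_FREE_INITMEM);
    }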
Markos Chandras 91119686f3 MIPS: kernel: proc: Add EVA to the list of CPU features
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Markos Chandras 49016748ec MIPS: kernel: cpu-probe: Enable EVA option on supported cores
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Markos Chandras 7ae6696656 MIPS: asm: cpu: Add cpu flag for Enhanced Virtual Addressing
The MIPS *Aptiv family uses bit 28 in the Config5 CP0 register to
indicate whether the core supports EVA or not.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin 27b3db2031 MIPS: asm: page: Allow __pa_symbol overrides
This will allow platforms to use an alternative way to get
the physical address of a symbol.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin 6ebda44f36 MIPS: kernel: {ftrace,kgdb}: Set correct address limit for cache flushes
When flushing the icache, make sure the address limit is correct
so the appropriate 'cache' instruction will be used. This has no
impact on cores operating in non-EVA mode. However, when EVA is
enabled, we ensure that 'cache' will be used instead of 'cachee'.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin de8974e3f7 MIPS: asm: r4kcache: Add EVA cache flushing functions
Add EVA cache flushing functions similar to non-EVA configurations.
Because the cache may or may not contain user virtual addresses, we
need to use the 'cache' or 'cachee' instruction based on whether we
flush the cache on behalf of kernel or user respectively.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin a805385499 MIPS: asm: r4kcache: Add protected cache operation for EVA
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin 41e62b0411 MIPS: asm: r4kcache: Build flushing code for instruction cache
Build code to invalidate an address range in the instruction cache
using the Hit Invalidate cache operation.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:18 +01:00
Leonid Yegoshin ca750649e0 MIPS: kernel: signal: Prevent save/restore FPU context in user memory
EVA does not have FPU specific instructions for reading or writing
FPU registers from userspace memory.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Leonid Yegoshin c2d85bc104 MIPS: asm: checksum: Add MIPS specific csum_and_copy_from_user function
A MIPS specific csum_and_copy_from_user function is necessary because
the generic one from include/net/checksum.h will not work for EVA.
This is because the generic one will link to symbols from lib/checksum.c
which are not EVA aware.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Leonid Yegoshin fb316913f8 MIPS: asm: checksum: Split kernel and user copy operations
In EVA mode, different instructions need to be used to read/write
from kernel and userland. In non-EVA mode, there is no functional
difference. The current address limit is checked to decide the
type of operation that will be performed.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Markos Chandras 6f85cebe49 MIPS: lib: csum_partial: Add EVA support
Use EVA specific functions to read and write data to
user address space.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Markos Chandras e89fb56c8b MIPS: lib: csum_partial: Add macro to build csum_partial symbols
In preparation for EVA support, we use a macro to build the
__csum_partial_copy_user main code so it can be shared across
multiple implementations. EVA uses the same code but it replaces
the load/store/prefetch instructions with the EVA specific ones
therefore using a macro avoids unnecessary code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Markos Chandras 2ab82e6648 MIPS: lib: csum_partial: Merge EXC and load/store macros
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. There are cases where a load instruction may
never fail, such as when we are sure the load happens in the kernel
address space. Therefore, we merge the EXC and LOADX/STOREX
macros into a single one. We also expand the argument list in the EXC
macro to make it more flexible. The extra 'type' argument is not
used by this commit, but it will be used when EVA support is added to
memcpy.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Markos Chandras ac85227f76 MIPS: checksum: Split the 'copy_user' symbol
The 'copy_user' symbol can be used to copy from or to
userland, so we will use two different symbols for these
operations. This makes no difference in the existing code,
but when the core is operating in EVA mode, different instructions
need to be used to read from and write to the userland address space.
The old function has also been renamed to 'copy_kernel' to denote
that it is suitable for copying data to and from kernel space.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:17 +01:00
Leonid Yegoshin c1771216ab MIPS: kernel: unaligned: Handle unaligned accesses for EVA
Handle unaligned accesses when we access userspace memory in
EVA mode.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
Markos Chandras 9d8e573683 MIPS: kernel: unaligned: Add EVA instruction wrappers
Use the load/store instruction wrappers from asm/asm.h to
perform such operations when operating in EVA mode.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
Markos Chandras e3a9b07a9c MIPS: asm: uaccess: Add EVA support for str*_user operations
The str*_user functions are used to securely access NULL-terminated
strings from userland, so it's necessary to use the appropriate
EVA functions. However, if the string is in kernel space, the normal
instructions are used to access it. The __str*_kernel_asm and
__str*_user_asm symbols are the same in non-EVA mode, so there is no
functional change for non-EVA kernels.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
Markos Chandras 05c6516005 MIPS: asm: uaccess: Add EVA support to copy_{in, to,from}_user
Use the EVA specific functions from memcpy.S to perform
userspace operations. When get_fs() == get_ds(), the usual load/store
instructions are used because the destination address is located in
the kernel address space region. Otherwise, the EVA specific load/store
instructions are used, which go through the TLB to perform the virtual
to physical translation for the userspace address.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
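
In rough outline, the dispatch described above looks like this sketch (the
helper names are made up for illustration; the real code lives in the uaccess
macros and memcpy.S):

    static inline long eva_aware_copy_to_user(void __user *to,
                                              const void *from, long n)
    {
            if (segment_eq(get_fs(), get_ds()))
                    /* Address limit covers the kernel range: plain lw/sw. */
                    return invoke_copy_to_kernel(to, from, n);

            /* Genuine user pointer: lwe/swe translate through the user
             * mapping in the TLB. */
            return invoke_copy_to_user_eva(to, from, n);
    }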
Markos Chandras 0081ad2486 MIPS: asm: uaccess: Rename {get,put}_user_asm macros
The {get,put}_user_asm functions can be used to load data from
kernel or the user address space so rename them to avoid
confusion.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
Markos Chandras ac1d8590d3 MIPS: asm: uaccess: Use EVA instructions wrappers
Use the EVA instruction wrappers from asm.h to perform
read/write operations from userland.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:16 +01:00
Leonid Yegoshin 18e900185b MIPS: asm: uaccess: Disable unaligned access macros for EVA
ulb, ulh and ulw are macros which emulate unaligned accesses for MIPS.
However, no such macros exist for EVA mode, so the only way to do
EVA unaligned accesses is in the ADE exception handler. As a result,
disable these macros for EVA.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:15 +01:00
Markos Chandras ec56b1d461 MIPS: asm: uaccess: Move duplicated code to common function
Similar to __get_user_* functions, move common code to
__put_user_*_common so it can be shared among similar users.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:15 +01:00
Markos Chandras d84869a19f MIPS: asm: uaccess: Add instruction argument to __{put,get}_user_asm
In preparation for EVA support, an instruction argument is needed
for the __get_user_asm{,_ll32} functions to allow instruction overrides in
EVA mode. Even though EVA only works for MIPS 32-bit, both codepaths are
changed (32-bit and 64-bit) for consistency reasons.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:15 +01:00
Markos Chandras fd9720e96e MIPS: lib: memset: Add EVA support for the __bzero function.
Build the __bzero function using the EVA load/store instructions
when operating in EVA mode. This function is only used when
accessing user code so there is no need to build two distinct symbols
for user and kernel operations respectively.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:15 +01:00
Markos Chandras 6d5155c2a6 MIPS: lib: memset: Use macro to build the __bzero symbol
Build the __bzero symbol using a macro. In EVA mode we will
need to use similar code to do the userspace store operations, so
it is better if we use a macro to avoid code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:15 +01:00
Markos Chandras 8483b14aaa MIPS: lib: memset: Whitespace fixes
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras cd26cb41ec MIPS: lib: memcpy: Add EVA support
Add copy_{to,from,in}_user when the CPU operates in EVA mode.
This is necessary so the EVA specific instructions can be used
to perform the virtual to physical translation for user space
addresses. We will use the non-EVA functions to read from kernel
if needed.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras cf62a8b813 MIPS: lib: memcpy: Use macro to build the copy_user code
The code can be shared between EVA and non-EVA configurations,
therefore use a macro to build it to avoid code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras bda4d986a6 MIPS: lib: memcpy: Split source and destination prefetch macros
In preparation for EVA support, the PREF macro is split into two
separate macros, PREFS and PREFD, for source and destination data
prefetching respectively.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras 5bc05971d3 MIPS: lib: memcpy: Merge EXC and load/store macros
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. Therefore, these load/store macros are now merged
with the EXC one. The argument list is also expanded in order to make
the macro more flexible. The extra 'type' argument is not used by this
commit, but it will be used when EVA support is added to memcpy.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras b3c3025b2c MIPS: lib: strncpy_user: Add EVA support
In non-EVA mode, strncpy_from_user* aliases are used for the
strncpy_from_kernel* symbols since the code is identical. In EVA
mode, new strncpy_from_user* symbols are used which use the EVA
specific instructions to load values from userspace.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras cc59fe5b88 MIPS: lib: strncpy_user: Use macro to build the strncpy_from_user symbol
Build the __strncpy_from_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations so
it is better if we use a macro to avoid code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:14 +01:00
Markos Chandras 053970542f MIPS: lib: strlen_user: Add EVA support
In non-EVA mode, strlen_user* aliases are used for the
strlen_kernel* symbols since the code is identical. In EVA
mode, new strlen_user* symbols are used which use the EVA
specific instructions to load values from userspace.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Markos Chandras 5cc494972a MIPS: lib: strlen_user: Use macro to build the strlen_user symbol
Build the __strlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations so
it is better if we use a macro to avoid code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Markos Chandras 4968db4b9c MIPS: lib: strnlen_user: Add EVA support
In non-EVA mode, a strnlen_user* alias is used for the
strnlen_kernel* symbols since the code is identical. In EVA
mode, a new strnlen_user* symbol is used which uses the EVA
specific instructions to load values from userspace.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Markos Chandras c48be43eb5 MIPS: lib: strnlen_user: Use macro to build the strnlen_user symbol
Build the __strnlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations so
it is better if we use a macro to avoid code duplication.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Leonid Yegoshin 078dde5e21 MIPS: traps: Set correct address limit for breakpoints and traps
When a breakpoint or trap happens while operating in kernel mode but
on the user's behalf (e.g. a syscall), it is necessary to change the
address limit to KERNEL_DS so that address checking can be bypassed
and the correct stack trace printed.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Markos Chandras b08a9c95f8 MIPS: kernel: traps: Whitespace clean up
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Markos Chandras 86bdb2779d MIPS: kernel: scall32-o32: Use EVA wrappers to fetch syscall arguments
Arguments 4-8 are stored on the user's stack, so use the EVA instructions
to fetch them if EVA is enabled.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:13 +01:00
Leonid Yegoshin aa1af47f5d MIPS: uapi: inst: Add instruction format for SPECIAL3 instructions
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
Leonid Yegoshin a442c9e7b8 MIPS: uapi: inst: Add new EVA opcodes
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
Markos Chandras a6813fe5c2 MIPS: futex: Add EVA support for futex operations
Use LLE/SCE instructions for performing an address translation for
userspace when EVA is enabled.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
Markos Chandras 9324494595 MIPS: asm: Add wrappers for EVA/non-EVA instructions
EVA uses specific instructions for accessing user memory.
Instead of polluting the kernel with numerous #ifdef CONFIG_EVA
we add wrappers for all the instructions that need special
handling when EVA is enabled.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
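
The wrappers are roughly of this shape (a sketch; the exact names and set of
wrappers in asm.h may differ):

    /* Sketch: these macros are meant for assembler sources, so each wrapper
     * simply expands to the right opcode for the configuration. */
    #ifdef CONFIG_EVA
    #define user_lw(reg, addr)      lwe reg, addr
    #define user_sw(reg, addr)      swe reg, addr
    #define user_ll(reg, addr)      lle reg, addr
    #define user_sc(reg, addr)      sce reg, addr
    #else
    #define user_lw(reg, addr)      lw  reg, addr
    #define user_sw(reg, addr)      sw  reg, addr
    #define user_ll(reg, addr)      ll  reg, addr
    #define user_sc(reg, addr)      sc  reg, addr
    #endif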
Leonid Yegoshin 5b736cd243 MIPS: asm: Add prefetch instruction for EVA
EVA can use the PREFE instruction to perform the virtual address
translation using the user mapping of the address rather than the
kernel mapping.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
Leonid Yegoshin a6e18781c5 MIPS: Kconfig: Add Kconfig symbols for EVA support
Add basic Kconfig support for EVA. Not selectable by any platform
at this point.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
2014-03-26 23:09:12 +01:00
James Hogan 8c7f6ba342 MIPS: OProfile: Add CPU_P5600 cases
Add a CPU_P5600 cpu type case in oprofile_arch_init() to use the MIPS
model, and in mipsxx_init() to set the cpu_type string to "mips/P5600".

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Robert Richter <rric@kernel.org>
Cc: oprofile-list@lists.sf.net
Patchwork: https://patchwork.linux-mips.org/patch/6410/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:12 +01:00
James Hogan d83b0e82ad MIPS: Allow FTLB to be turned on for CPU_P5600
Allow FTLB to be turned on or off for CPU_P5600 as well as CPU_PROAPTIV.
The existing if statement is converted into a switch to allow for future
expansion.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6411/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
James Hogan 829dcc0a95 MIPS: Add MIPS P5600 probe support
Add a case in cpu_probe_mips for the MIPS P5600 processor ID, which sets
the CPU type to the new CPU_P5600.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6409/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
James Hogan aced4cbd6e MIPS: Add cases for CPU_P5600
Add a CPU_P5600 case to various switch statements, doing the same thing
as for CPU_PROAPTIV.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6408/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
James Hogan f43e4dfd39 MIPS: Add MIPS P5600 PRid and cputype identifiers
Add a Processor ID and CPU type for the MIPS P5600 core.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6407/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
Paul Burton eec43a224c MIPS: Save/restore MSA context around signals
This patch extends sigcontext in order to hold the most significant 64
bits of each vector register in addition to the MSA control & status
register. The least significant 64 bits are already saved as the scalar
FP context. This makes things a little awkward since the least & most
significant 64 bits of each vector register are not contiguous in
memory. Thus the copy_u & insert instructions are used to transfer the
values of the most significant 64 bits via GP registers.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6533/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
Paul Burton a8ad136789 MIPS: Warn if vector register partitioning is implemented
No current systems implementing MSA include support for vector register
partitioning which makes it somewhat difficult to implement support for
it in the kernel. Thus for the moment the kernel includes no such
support. However if the kernel were to be run on a system which
implemented register partitioning then it would not function correctly,
mishandling MSA disabled exceptions. Print a warning if run on a system
with vector register partitioning implemented to indicate this problem
should it occur.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6494/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:11 +01:00
Paul Burton 2bcb3fbc3f MIPS: Dumb MSA FP exception handler
This patch adds a simple handler for MSA FP exceptions which delivers a
SIGFPE to the running task. In the future it should probably be extended
to re-execute the instruction with the MSACSR.NX bit set in order to
generate results for any elements which did not cause an exception
before delivering the SIGFPE signal.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6432/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton 1db1af84d6 MIPS: Basic MSA context switching support
This patch adds support for context switching the MSA vector registers.
These 128 bit vector registers are aliased with the FP registers - an
FP register accesses the least significant bits of the vector register
with which it is aliased (ie. the register with the same index). Due to
both this & the requirement that the scalar FPU must be 64-bit (FR=1) if
enabled at the same time as MSA, the kernel will enable MSA & scalar FP
at the same time for tasks which use MSA. If we restore the MSA vector
context then we might as well enable the scalar FPU since the reason it
was left disabled was to allow for lazy FP context restoring - but we
just restored the FP context as it's a subset of the vector context. If
we restore the FP context and have previously used MSA then we have to
restore the whole vector context anyway (see comment in
enable_restore_fp_context for details) so similarly we might as well
enable MSA.

Thus if a task does not use MSA then it will continue to behave as
without this patch - the scalar FP context will be saved & restored as
usual. But if a task executes an MSA instruction then it will save &
restore the vector context forever more.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
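
The enable decisions described in the commit boil down to roughly the
following (pseudo-C sketch; the helper names are invented for illustration):

    if (task_has_used_msa(tsk)) {
            /* The vector context is a superset of the scalar FP context,
             * so restore all of it and enable both units. */
            restore_msa_vector_context(tsk);
            enable_fpu();           /* FR=1: MSA requires the 64-bit FPU */
            enable_msa();
    } else {
            /* Task has never used MSA: keep the old lazy scalar-FP path. */
            restore_scalar_fp_context(tsk);
            enable_fpu();
    }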
Paul Burton a5e9a69e2c MIPS: Detect the MSA ASE
This patch adds support for probing the MSAP bit within the Config3
register in order to detect the presence of the MSA ASE. Presence of the
ASE will be indicated in /proc/cpuinfo. The value of the MSA
implementation register will be displayed at boot to aid debugging and
verification of a correct setup, as is done for the FPU.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6430/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton 7f65afb97f MIPS: Add MSA register definitions & access
This patch introduces definitions for the MSA control registers and
functions which allow access to both the control & vector registers. If
the toolchain being used to build the kernel includes support for MSA
then this patch will make use of that support & use MSA instructions
directly. However toolchain support for MSA is very new & far from a
point where it can be reasonably expected that everyone building the
kernel uses a toolchain with support. Thus fallbacks using .word
assembler directives are also provided for now as a temporary measure.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6429/
Patchwork: https://patchwork.linux-mips.org/patch/6607/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton 02987633df MIPS: Don't assume 64-bit FP registers for context switch
When saving or restoring scalar FP context we want to access the least
significant 64 bits of each FP register. When the FP registers are 64
bits wide, that is trivially the start of the register's value in memory.
However when the FP registers are wider this equivalence will no longer
be true for big endian systems. Define a new set of offset macros for
the least significant 64 bits of each saved FP register within thread
context, and make use of them when saving and restoring scalar FP
context.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6428/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
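
For example, with 128-bit registers the least significant 64 bits of saved
register i no longer sit at the start of its slot on big endian systems. A
sketch of the offset calculation (the macro name is illustrative):

    /* Byte offset of the least significant 64 bits of saved FP register i,
     * for an FP register width of 'width' bits. */
    #ifdef __MIPSEB__   /* big endian: LS 64 bits are at the end of the slot */
    #define LS64_OFF(i, width)   ((i) * ((width) / 8) + ((width) / 8) - 8)
    #else               /* little endian: LS 64 bits are at the start */
    #define LS64_OFF(i, width)   ((i) * ((width) / 8))
    #endif
    /* width == 64 gives i * 8 either way; width == 128 differs on big endian */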
Paul Burton 72b22bbad1 MIPS: Don't assume 64-bit FP registers for FP regset
When we want to access 64-bit FP register values we can only treat
consecutive registers as being consecutive in memory when the width of
an FP register equals 64 bits. This assumption will not remain true once
MSA support is introduced, so provide a code path which copies each 64
bit FP register value in turn when the width of an FP register differs
from 64 bits.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6427/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton 6cec7c4ad7 MIPS: Don't assume 64-bit FP registers for dump_{,task_}fpu
This code assumed that saved FP registers are 64 bits wide, an
assumption which will no longer be true once MSA is introduced. This
patch modifies the code to copy the lower 64 bits of each register in
turn, which is safe for any FP register width >= 64 bits.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6425/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton ef1c47afc0 MIPS: Clear upper bits of FP registers on emulator writes
The upper bits of an FP register are architecturally defined as
unpredictable following an instructions which only writes the lower
bits. The prior behaviour of the kernel is to leave them unmodified.
This patch modifies that to clear the upper bits to zero. This is what
the MSA architecture reference manual specifies should happen for its
wider registers and is still permissible for scalar FP instructions
given the bits' unpredictability there.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: sergei.shtylyov@cogentembedded.com
Patchwork: https://patchwork.linux-mips.org/patch/6435/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:10 +01:00
Paul Burton 6bbfd65e28 MIPS: Replace hardcoded 32 with NUM_FPU_REGS in ptrace
NUM_FPU_REGS just makes it clearer what's going on, rather than the
magic hard coded 32.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6424/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
Paul Burton ff3aa5f296 MIPS: Don't require FPU on sigcontext setup/restore
When a task which has used the FPU at some point in its past takes a
signal the kernel would previously always require the task to take
ownership of the FPU whilst setting up or restoring from the sigcontext.
That means that if the task has not used the FPU within this timeslice
then the kernel would enable the FPU, restore the task's FP context into
FPU registers and then save them into the sigcontext. This seems
inefficient, and if the signal handler doesn't use FP then enabling the
FPU & the extra memory accesses are entirely wasted work.

This patch modifies the sigcontext setup & restore code to copy directly
between the task's saved FP context & the sigcontext for any tasks which
have used FP in the past but are not currently the FPU owner (ie. have
not used FP in this timeslice).

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6423/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
Paul Burton b2ead52828 MIPS: Move & rename fpu_emulator_{save,restore}_context
These functions aren't directly related to the FPU emulator at all, they
simply copy between a thread's saved context & a sigcontext. Thus move
them to the appropriate signal files & rename them accordingly. This
makes it clearer that the functions don't require the FPU emulator in
any way.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6422/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
Paul Burton e87ce94822 MIPS: Update outdated comment
The hard-coded offsets mentioned in this comment seem to not exist
anymore, so remove mention of them from the comment.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6421/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
Paul Burton bbd426f542 MIPS: Simplify FP context access
This patch replaces the fpureg_t typedef with a "union fpureg" enabling
easier access to 32 & 64 bit values. This allows the access macros used
in cp1emu.c to be simplified somewhat. It will also make it easier to
expand the width of the FP registers as will be done in a future
patch in order to support the 128 bit registers introduced with MSA.

No behavioural change is intended by this patch.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Qais Yousef <qais.yousef@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6532/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
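
Conceptually the union is of this shape (a sketch; the field names are
assumptions, and __u32/__u64 come from the kernel's linux/types.h):

    #define FPU_REG_WIDTH 128   /* wide enough for an MSA vector register */

    union fpureg {
            __u32 val32[FPU_REG_WIDTH / 32];
            __u64 val64[FPU_REG_WIDTH / 64];
    };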
Markos Chandras 490b004feb MIPS: Select HAVE_ARCH_SECCOMP_FILTER
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6401/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:09 +01:00
Markos Chandras 4c21b8fd8f MIPS: seccomp: Handle indirect system calls (o32)
When userland uses syscall() to perform an indirect system call,
the actual system call that needs to be checked by the filter
is in the first argument. The kernel code needs to handle this case
by looking at the original syscall number in v0, and if it's
NR_syscall, it then needs to examine the first argument to
identify the real system call that will be executed.
Similarly, we need to 'virtually' shift the syscall() arguments
so the syscall_get_arguments() function can fetch the correct
arguments for the indirect system call.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6404/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
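
In rough pseudo-C the check amounts to the following (a sketch only, using the
o32 convention that v0 is regs[2] and a0 is regs[4]):

    unsigned long nr = regs->regs[2];        /* v0: syscall number */

    if (nr == __NR_syscall) {                /* the indirect syscall() */
            nr = regs->regs[4];              /* real number is in a0 */
            /* ...and the remaining arguments are shifted by one slot, so
             * syscall_get_arguments() still returns what the filter expects. */
    }
    /* 'nr' is what gets reported to the seccomp filter. */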
Markos Chandras 9d37c405ed MIPS: kernel: scalls: Skip the syscall if denied by the seccomp filter
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6399/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Markos Chandras 1225eb8252 MIPS: ptrace: Move away from secure_computing_strict
MIPS now has the infrastructure for dynamic seccomp-bpf
filtering.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6400/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Markos Chandras 137f7df8ce MIPS: asm: thread_info: Add _TIF_SECCOMP flag
Add _TIF_SECCOMP flag to _TIF_WORK_SYSCALL_ENTRY to indicate
that the system call needs to be checked against a seccomp filter.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6405/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Markos Chandras 6e34574603 MIPS: asm: syscall: Define syscall_get_arch
This effectively renames __syscall_get_arch to syscall_get_arch
and implements a compatible interface for the seccomp API.
The seccomp code (kernel/seccomp.c) expects a syscall_get_arch
function to be defined for every architecture, so we drop
the leading underscores from the existing function.

This also makes use of the 'task' argument to determine the type of
the process instead of assuming the process has the same
characteristics as the kernel it's running on.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6398/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Markos Chandras 22feadbe4c MIPS: asm: syscall: Add the syscall_rollback function
The syscall_rollback function is used by seccomp-bpf but it was never
added for MIPS. It doesn't need to do anything as none of the registers
are clobbered if the system call has been denied by the seccomp filter.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6403/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
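
Since there is nothing to undo, the function can be an empty inline; a sketch:

    static inline void syscall_rollback(struct task_struct *task,
                                        struct pt_regs *regs)
    {
            /* Nothing to do: no registers are clobbered before the
             * seccomp decision on MIPS. */
    }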
Paul Burton 5cac93b35c MIPS: Deprecate CONFIG_MIPS_CMP
CONFIG_MIPS_CPS is a better option for systems where it is supported,
which as far as I am aware should be all systems where CONFIG_MIPS_CMP
could provide any value (ie. where there are multiple cores for YAMON to
bring up). This option is therefore deprecated, and marked as such. It
is left intact for the time being in order to provide a fallback should
someone find a system where CONFIG_MIPS_CPS will not function (ie. where
the reset vector cannot be moved), and should be removed entirely in the
future assuming that does not happen.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6369/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Paul Burton a6ce202ead MIPS: MIPS_CMP should depend upon !SMTC, not upon SMVP
Commit f55afb0969cc "MIPS: Clean up MIPS MT and CMP configuration
options." introduced a dependency upon MIPS_MT_SMP (ie. SMVP) for the
MIPS_CMP (ie. CMP framework support) Kconfig option. It did not specify
why, and that dependency is bogus. It is perfectly valid to have a
multi-core system with the YAMON bootloader but without MT support -
an example of this would be any multi-core proAptiv bitstream running on
a Malta. Forcing MT support to be enabled in a kernel for such a system
is incorrect. I suspect that the dependency was actually meant to
reflect the fact that YAMON will only bind 1 TC per VPE on an MT system,
and only describe those 1:1 TC:VPE pairs as CPUs through the AMON
interface. Thus an SMTC kernel makes little sense on a system using
MIPS_CMP, and the Kconfig dependencies should reflect that rather than
introducing the bogus SMVP dependency.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6368/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:08 +01:00
Paul Burton 044505c780 MIPS: More helpful CONFIG_MIPS_CMP label, help text
The prior help text introduced in commit f55afb0969cc "MIPS: Clean up
MIPS MT and CMP configuration options." reads as though this option
enables the kernel to make use of the CM hardware, which is not true.
What it actually does is allow the kernel to interact with the YAMON
bootloader which actually interacts with the CM hardware to bring up
secondary cores. Re-introduce the word "framework" which that commit
removed to avoid misleading people.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6367/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:07 +01:00
Paul Burton 9d9812706f MIPS: Remove gcmpregs.h
This header was used only by Malta but is used no longer. Remove it. It
was also included unnecessarily in irq-gic.c, so that include is also
removed.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6366/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:07 +01:00
Paul Burton e56b6aa6da MIPS: Malta: Allow use of MIPS CPS SMP implementation
This patch simply attempts to register the MIPS Coherent Processing
System SMP implementation when it is enabled. If registering that fails
for some reason (like the Kconfig option being disabled or a lack of
hardware support) then we fall back to the same SMP implementations as
before.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6365/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:07 +01:00
Paul Burton 7dc2834fd5 MIPS: Malta: Probe CPC when supported
When CPC support is compiled into the kernel (ie. CONFIG_MIPS_CPC=y),
probe the CPC on boot for Malta in order to allow any users of the CPC
to detect its presence & function correctly.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:06 +01:00
Paul Burton 237036de65 MIPS: Malta: Make use of generic CM support
Remove the Malta-specific CM probe code and instead make use of the
newly added generic CM code.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6364/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:09:06 +01:00
Paul Burton 0ee958e102 MIPS: Coherent Processing System SMP implementation
This patch introduces a new SMP implementation for systems implementing
the MIPS Coherent Processing System architecture. The kernel will make
use of the Coherence Manager, Cluster Power Controller & Global
Interrupt Controller in order to detect, bring up & make use of other
cores in the system. SMTC is not supported, so only a single TC per VPE
in the system is used. That is, this option enables an SMVP style setup
but across multiple cores.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6362/
Patchwork: https://patchwork.linux-mips.org/patch/6611/
Patchwork: https://patchwork.linux-mips.org/patch/6651/
Patchwork: https://patchwork.linux-mips.org/patch/6652/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:00:12 +01:00
Paul Burton b86c2247a2 MIPS: Add cpu_vpe_id macro
The vpe_id field of struct cpuinfo_mips is only present when one of
CONFIG_MIPS_MT_{SMP,SMTC} is enabled. That means that any code accessing
it which may be compiled without MT is currently forced to use an #ifdef.
Instead this patch provides an accessor macro, #ifdef'd appropriately
to prevent further #ifdef's elsewhere.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6646/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-26 23:00:05 +01:00
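
A sketch of such an accessor (the real macro may differ in detail):

    /* Hide the conditional presence of the vpe_id field. */
    #if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
    #define cpu_vpe_id(cpuinfo)     ((cpuinfo)->vpe_id)
    #else
    #define cpu_vpe_id(cpuinfo)     0
    #endif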
Tom Herbert 61b905da33 net: Rename skb->rxhash to skb->hash
The packet hash can be considered a property of the packet, not just
on RX path.

This patch changes name of rxhash and l4_rxhash skbuff fields to be
hash and l4_hash respectively. This includes changing uses of the
field in the code which don't call the access functions.

Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Mahesh Bandewar <maheshb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-26 15:58:20 -04:00
Matt Fleming 204b0a1a4b x86, efi: Abstract x86 efi_early calls
The ARM EFI boot stub doesn't need to care about the efi_early
infrastructure that x86 requires in order to do mixed mode thunking. So
wrap everything up in an efi_call_early() macro.

This allows x86 to do the necessary indirection jumps to call whatever
firmware interface is necessary (native or mixed mode), but also allows
the ARM folks to mask the fact that they don't support relocation in the
boot stub and need to pass 'sys_table_arg' to every function.

[ hpa: there are no object code changes from this patch ]

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/20140326091011.GB2958@console-pimps.org
Cc: Roy Franz <roy.franz@linaro.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-26 11:30:03 -07:00
Anton Blanchard 7505258c5f KVM: PPC: Book3S HV: Fix KVM hang with CONFIG_KVM_XICS=n
I noticed KVM is broken when KVM in-kernel XICS emulation
(CONFIG_KVM_XICS) is disabled.

The problem was introduced in 48eaef05 (KVM: PPC: Book3S HV: use
xics_wake_cpu only when defined). It used CONFIG_KVM_XICS to wrap
xics_wake_cpu, where CONFIG_PPC_ICP_NATIVE should have been
used.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Scott Wood <scottwood@freescale.com>
2014-03-26 23:34:56 +11:00
Laurent Dufour 69e9fbb278 KVM: PPC: Book3S: Introduce hypervisor call H_GET_TCE
This introduces the H_GET_TCE hypervisor call, which is basically the
reverse of H_PUT_TCE, as defined in the Power Architecture Platform
Requirements (PAPR).

The hcall H_GET_TCE is required by the kdump kernel, which uses it to
retrieve TCEs set up by the previous (panicked) kernel.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2014-03-26 23:34:27 +11:00
Greg Kurz e59d24e612 KVM: PPC: Book3S HV: Fix incorrect userspace exit on ioeventfd write
When the guest does an MMIO write which is handled successfully by an
ioeventfd, ioeventfd_write() returns 0 (success) and
kvmppc_handle_store() returns EMULATE_DONE.  Then
kvmppc_emulate_mmio() converts EMULATE_DONE to RESUME_GUEST_NV and
this causes an exit from the loop in kvmppc_vcpu_run_hv(), causing an
exit back to userspace with a bogus exit reason code, typically
causing userspace (e.g. qemu) to crash with a message about an unknown
exit code.

This adds handling of RESUME_GUEST_NV in kvmppc_vcpu_run_hv() in order
to fix that.  For generality, we define a helper to check for either
of the return-to-guest codes we use, RESUME_GUEST and RESUME_GUEST_NV,
to make it easy to check for either and provide one place to update if
any other return-to-guest code gets defined in future.

Since it only affects Book3S HV for now, the helper is added to
the kvm_book3s.h header file.

We use the helper in two places in kvmppc_run_core() as well for
future-proofing, though we don't see RESUME_GUEST_NV in either place
at present.

[paulus@samba.org - combined 4 patches into one, rewrote description]

Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2014-03-26 23:33:44 +11:00
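
The helper described above is essentially a one-liner; a sketch (the helper
name here may not match the patch exactly):

    /* True for any of the codes that mean "re-enter the guest". */
    static inline bool is_kvmppc_resume_guest(int r)
    {
            return (r == RESUME_GUEST || r == RESUME_GUEST_NV);
    }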
David S. Miller 04f58c8854 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	Documentation/devicetree/bindings/net/micrel-ks8851.txt
	net/core/netpoll.c

The net/core/netpoll.c conflict is a bug fix in 'net' happening
to code which is completely removed in 'net-next'.

In micrel-ks8851.txt we simply have overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-25 20:29:20 -04:00
Andy Lutomirski b9a4a56c1e x86, vdso, build: Don't rebuild 32-bit vdsos on every make
vdso32/vclock_gettime.o was confusing kbuild.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/d741449340642213744dd659471a35bb970a0c4c.1395789923.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-25 17:08:58 -07:00
Carlo Caione 8ff973a267 ARM: sun7i/sun6i: dts: Add NMI irqchip support
This patch adds DTS entries for NMI controller as child of GIC.

Signed-off-by: Carlo Caione <carlo@caione.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-sunxi@googlegroups.com
Cc: mark.rutland@arm.com
Cc: hdegoede@redhat.com
Acked-by: maxime.ripard@free-electrons.com
Link: http://lkml.kernel.org/r/1395256879-8475-3-git-send-email-carlo@caione.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-26 01:00:50 +01:00
Mark Brown a23af4ab6b Merge remote-tracking branches 'asoc/fix/cs42l51', 'asoc/fix/cs42l52', 'asoc/fix/cs42l73', 'asoc/fix/rcar', 'asoc/fix/spear' and 'asoc/fix/tegra' into asoc-linus 2014-03-25 21:21:57 +00:00
H. Peter Anvin 26f5ef2e3c x86, vdso: Actually discard the .discard sections
The .discard/.discard.* sections are used to generate intermediate
results for the assembler (effectively "test assembly".)  The output
is waste and should not be retained.

Cc: Stefani Seibold <stefani@seibold.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-psizrnant8x3nrhbgvq2vekr@git.kernel.org
2014-03-25 13:41:36 -07:00
Bartlomiej Zolnierkiewicz 080c492df0 ARM: davinci: da850: update SATA AHCI support
Update SATA AHCI support to make use of the new ahci_da850 host
driver (instead of the generic ahci_platform one) and remove
deprecated ahci_platform_data code.

Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2014-03-25 15:17:13 -04:00
Thomas Gleixner b712c8dae0 x86: hpet: Use proper destructor for delayed work
destroy_timer_on_stack() is hardly the right thing for a delayed
work. We leak a tracking object for the work itself when DEBUG_OBJECTS
is enabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20140323141940.034005322@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-25 17:34:01 +01:00
Paolo Bonzini f7b9ddb8a5 3 fixes
- memory leak on certain SIGP conditions
- wrong size for idle bitmap (always too big)
- clear local interrupts on initial CPU reset

1 performance improvement
- improve performance with many guests on certain workloads

Merge tag 'kvm-s390-20140325' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-next

2014-03-25 15:44:06 +01:00
Jens Freimann 2ed10cc15e KVM: s390: clear local interrupts at cpu initial reset
Empty the list of local interrupts when a vcpu goes through initial
reset to provide a clean state.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-03-25 13:27:12 +01:00
Thomas Huth 91880d07fc KVM: s390: Fix possible memory leak in SIGP functions
When kvm_get_vcpu() returned NULL for the destination CPU in
__sigp_emergency() or __sigp_external_call(), the memory for the
"inti" structure was not released anymore. This patch fixes this
issue by moving the check for !dst_vcpu before the kzalloc() call.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-03-25 13:27:11 +01:00
Jens Freimann 609433fbed KVM: s390: fix calculation of idle_mask array size
We need BITS_TO_LONGS, not sizeof(long), to calculate the correct size.

idle_mask is a bitmask, each bit representing the state of a cpu. The
desired outcome is an array of unsigned long fields that can fit
KVM_MAX_VCPUS bits. We should not use sizeof(long), which returns the
size in bytes, but BITS_TO_LONGS.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2014-03-25 13:27:11 +01:00
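
The arithmetic difference, sketched for a 64-bit host with an assumed
KVM_MAX_VCPUS of 64:

    #define BITS_PER_LONG    64                           /* 64-bit host */
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* BITS_TO_LONGS(64) == 1 unsigned long == 8 bytes, which is what the
     * bitmask needs; dividing the bit count by sizeof(long) == 8 instead
     * would yield 8 unsigned longs == 64 bytes, i.e. always too big. */
    unsigned long idle_mask[BITS_TO_LONGS(64)];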
Christian Borntraeger f6c137ff00 KVM: s390: randomize sca address
We allocate a page for the 2k sca, so let's use the space to improve
hit rate of some internal cpu caches. No need to change the freeing
of the page, as this will shift away the page offset bits anyway.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
2014-03-25 13:27:10 +01:00
Mathias Krause 37b2894717 crypto: x86/sha1 - reduce size of the AVX2 asm implementation
There is really no need to page align sha1_transform_avx2. The default
alignment is just fine. This is not the hot code but only the entry
point, after all.

Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Reviewed-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-25 20:25:43 +08:00
Mathias Krause 6c8c17cc7a crypto: x86/sha1 - fix stack alignment of AVX2 variant
The AVX2 implementation might waste up to a page of stack memory because
of a wrong alignment calculation. This will, in the worst case, increase
the stack usage of sha1_transform_avx2() alone to 5.4 kB -- way too big
for a kernel function. Even worse, it might also allocate *less* bytes
than needed if the stack pointer is already aligned because in that case
the 'sub %rbx, %rsp' is effectively moving the stack pointer upwards,
not downwards.

Fix those issues by changing and simplifying the alignment calculation
to use a 32 byte alignment, the alignment really needed.

Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Reviewed-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-25 20:25:43 +08:00
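
As a plain-C illustration of the fix's arithmetic, a 32-byte alignment only
needs masking and always rounds the pointer down, never up:

    #include <stdint.h>

    /* Round an address down to a 32-byte boundary; wastes at most 31 bytes
     * and can never move the pointer upwards. */
    static inline uintptr_t align_down_32(uintptr_t p)
    {
            return p & ~(uintptr_t)31;
    }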
Mathias Krause 6ca5afb8c2 crypto: x86/sha1 - re-enable the AVX variant
Commit 7c1da8d0d0 "crypto: sha - SHA1 transform x86_64 AVX2"
accidentally disabled the AVX variant by making the avx_usable() test
fail not only when the CPU doesn't support AVX or OSXSAVE but also
when it doesn't support AVX2.

Fix that regression by splitting up the AVX/AVX2 test into two
functions. Also test for the BMI1 extension in the avx2_usable() test,
as the AVX2 implementation makes use not only of BMI2 but also of BMI1
instructions.

Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Reviewed-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-25 20:25:42 +08:00
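
A sketch of the split described above; the predicate names follow the commit text, the feature-test calls are illustrative and the OSXSAVE/XGETBV details are omitted, so this is not the exact glue code:

    /* AVX is usable only when the CPU has it and the OS saves YMM state. */
    static bool avx_usable(void)
    {
            return boot_cpu_has(X86_FEATURE_AVX) &&
                   boot_cpu_has(X86_FEATURE_OSXSAVE);
            /* (a full check would also verify YMM state via XGETBV) */
    }

    /* The AVX2 path additionally uses BMI1 and BMI2 instructions. */
    static bool avx2_usable(void)
    {
            return avx_usable() &&
                   boot_cpu_has(X86_FEATURE_AVX2) &&
                   boot_cpu_has(X86_FEATURE_BMI1) &&
                   boot_cpu_has(X86_FEATURE_BMI2);
    }
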
David Vrabel 5926f87fda Revert "xen: properly account for _PAGE_NUMA during xen pte translations"
This reverts commit a9c8e4beee.

PTEs in Xen PV guests must contain machine addresses if _PAGE_PRESENT
is set and pseudo-physical addresses if _PAGE_PRESENT is clear.

This is because during a domain save/restore (migration) the page
table entries are "canonicalised" and "uncanonicalised"; i.e., MFNs are
converted to PFNs during domain save so that on a restore the page
table entries may be rewritten with the new MFNs on the destination.
This canonicalisation is only done for PTEs that are present.

This change resulted in writing PTEs with MFNs if _PAGE_PROTNONE (or
_PAGE_NUMA) was set but _PAGE_PRESENT was clear.  These PTEs would be
migrated as-is which would result in unexpected behaviour in the
destination domain.  Either a) the MFN would be translated to the
wrong PFN/page; b) setting the _PAGE_PRESENT bit would clear the PTE
because the MFN is no longer owned by the domain; or c) the present
bit would not get set.

Symptoms include "Bad page" reports when munmapping after migrating a
domain.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <stable@vger.kernel.org>        [3.12+]
2014-03-25 11:11:42 +00:00
Linus Torvalds 822316461b Merge branch 'parisc-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
Pull parisc updates from Helge Deller:
 - revert parts of the latest patch regarding font selection with STICON
   console
 - wire up the utimes() syscall for parisc
 - remove the unused parisc tmpalias code and unnecessary arch*relax
   defines

* 'parisc-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: locks: remove redundant arch_*_relax operations
  parisc: wire up sys_utimes
  parisc: Remove unused CONFIG_PARISC_TMPALIAS code
  partly revert commit 8a10bc9: parisc/sti_console: prefer Linux fonts over built-in ROM fonts
2014-03-24 17:36:58 -07:00
Linus Torvalds 56f1f4b24e Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc
Pull sparc fixes from David Miller:

 1) Do serial locking in a way that makes things clear that these are
    IRQ spinlocks.

 2) Conversion to generic idle loop broke first generation Niagara
    machines, need to have %pil interrupts enabled during cpu yield
    hypervisor call.

 3) Do not use magic constants for iterations over tsb tables, from Doug
    Wilson.

 4) Fix erroneous truncation of 64-bit system call return values to
    32-bit.  From Dave Kleikamp.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Make sure %pil interrupts are enabled during hypervisor yield.
  sparc64:tsb.c:use array size macro rather than number
  sparc64: don't treat 64-bit syscall return codes as 32-bit
  sparc: serial: Clean up the locking for -rt
2014-03-24 17:30:44 -07:00
Eric W. Biederman fabfb91d50 uml/net_kern: Call dev_consume_skb_any instead of dev_kfree_skb.
Replace dev_kfree_skb with dev_consume_skb_any in uml_net_start_xmit
as it can be called in hard irq and other contexts.

dev_consume_skb_any is used as uml_net_start_xmit typically
consumes (not drops) packets.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2014-03-24 15:13:35 -07:00
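
A sketch of the pattern in a generic ndo_start_xmit() handler; the driver and function names are placeholders, the point is the dev_consume_skb_any() call:

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *dev)
    {
            /* ... hand the frame to the transport here ... */

            /* The skb was transmitted (consumed), not dropped, and this
             * handler may run in hard-IRQ context, hence the _any variant. */
            dev_consume_skb_any(skb);
            return NETDEV_TX_OK;
    }
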
David S. Miller cb3042d609 sparc64: Make sure %pil interrupts are enabled during hypervisor yield.
In arch_cpu_idle() we must enable %pil based interrupts before
potentially invoking the hypervisor cpu yield call.

As per the Hypervisor API documentation for cpu_yield:

	Interrupts which are blocked by some mechanism other than
	pstate.ie (for example %pil) are not guaranteed to cause
	a return from this service.

It seems that only first generation Niagara chips are hit by this
bug.  My best guess is that later chips implement this in hardware
and wake up anyways from %pil events, whereas in first generation
chips the yield is implemented completely in hypervisor code and
requires %pil to be enabled in order to wake properly from this
call.

Fixes: 87fa05aeb3 ("sparc: Use generic idle loop")
Reported-by: Fabio M. Di Nitto <fabbione@fabbione.net>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Tested-by: Jan Engelhardt <jengelh@inai.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-03-24 14:45:12 -04:00
Kees Cook 9dd721c6db x86, kaslr: fix module lock ordering problem
There was a potential lock ordering problem with the module kASLR patch
("x86, kaslr: randomize module base load address"). This patch removes
the usage of the module_mutex and creates a new mutex to protect the
module base address offset value.

Chain exists of:
  text_mutex --> kprobe_insn_slots.mutex --> module_mutex

[    0.515561]  Possible unsafe locking scenario:
[    0.515561]
[    0.515561]        CPU0                    CPU1
[    0.515561]        ----                    ----
[    0.515561]   lock(module_mutex);
[    0.515561]                                lock(kprobe_insn_slots.mutex);
[    0.515561]                                lock(module_mutex);
[    0.515561]   lock(text_mutex);
[    0.515561]
[    0.515561]  *** DEADLOCK ***

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andy Honig <ahonig@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-24 10:18:26 -07:00
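
A sketch of the locking scheme described above: the randomized module base offset gets its own mutex so module_mutex never enters this path. The helper below is illustrative rather than the exact patch:

    static DEFINE_MUTEX(module_kaslr_mutex);
    static unsigned long module_load_offset;

    /* Compute the randomized offset once, under a dedicated mutex
     * instead of module_mutex. */
    static unsigned long get_module_load_offset(void)
    {
            mutex_lock(&module_kaslr_mutex);
            if (!module_load_offset)
                    module_load_offset =
                            (get_random_int() % 1024 + 1) * PAGE_SIZE;
            mutex_unlock(&module_kaslr_mutex);

            return module_load_offset;
    }
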
Stefani Seibold 645a387ecb x86, vdso: Fix size of get_unmapped_area()
The size of the reserved memory for a 32 bit vdso must be the size of the
32 bit vDSO in pages + HPET page + VVAR page.

One page is not enough for this. Grrrr.... a silly copy-and-paste bug;
it was right in the previous patch.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/1395592694-20571-1-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-24 09:31:23 -07:00
Kuninori Morimoto 6fa387b2ed ARM: bockw: fixup SND_SOC_DAIFMT_CBx_CFx flags
SND_SOC_DAIFMT_CBx_CFx means "codec" side master/slave mode.
Thus, rcar will be in master mode if the flag is SND_SOC_DAIFMT_CBS_CFS.

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-03-24 10:41:20 +00:00
Catalin Marinas 196adf2f30 arm64: Remove pgprot_dmacoherent()
Since this macro is identical to pgprot_writecombine() and is only used
in a single place, remove it completely to avoid confusion. On ARMv7+
processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a.
writecombine) to avoid mismatched hardware attribute aliases (with the
kernel linear mapping as Normal Cacheable).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-24 10:35:35 +00:00
Laura Abbott 214fdbe74a arm64: Support DMA_ATTR_WRITE_COMBINE
DMA_ATTR_WRITE_COMBINE is currently ignored. Set the pgprot
appropriately for non-coherent operations.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-24 10:26:10 +00:00
Laura Abbott 6e8d7968e9 arm64: Implement custom mmap functions for dma mapping
The current dma_ops do not specify an mmap function, so mapping
falls back to the default implementation. There are at least
two issues with using the default implementation:

1) The pgprot is always pgprot_noncached (strongly ordered)
memory even with coherent operations
2) dma_common_mmap calls virt_to_page on the remapped non-coherent
address which leads to invalid memory being mapped.

Fix both of these issues by implementing a custom mmap function which
correctly accounts for remapped addresses and sets vm_page_prot
appropriately.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[catalin.marinas@arm.com: replaced "arm64_" with "__" prefix for consistency]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-24 10:01:17 +00:00
Benjamin Herrenschmidt cd42748535 Merge remote-tracking branch 'scott/next' into next
Freescale updates from Scott. Mostly support for critical
and machine check exceptions on 64-bit BookE, some new
PCI suspend/resume work and misc bits.
2014-03-24 10:26:10 +11:00
Mahesh Salgaonkar d410ae2126 powerpc/book3s: Fix CFAR clobbering issue in machine check handler.
While checking power-saving mode in the machine check handler at 0x200, we
clobber the CFAR register. Fix this by saving and restoring it across the beq/bgt.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 10:16:09 +11:00
Anton Blanchard 422b9b9684 powerpc/compat: 32-bit little endian machine name is ppcle, not ppc
I noticed this when testing setarch. No, we don't magically
support a big endian userspace on a little endian kernel.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org # v3.10+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 10:16:04 +11:00
Greg Kurz 599d287042 powerpc/le: Big endian arguments for ppc_rtas()
The ppc_rtas() syscall allows userspace to interact directly with RTAS.
For the moment, it assumes everything is big endian and returns either
EINVAL or EFAULT when called in a little endian environment.

As suggested by Benjamin, to avoid bugs when userspace wants to pass
a non-32-bit value to RTAS, it is far better to stick with a simple
rationale: ppc_rtas() should be called with a big endian rtas_args
structure.

With this patch, it is now up to userspace to forge big endian arguments,
as expected by RTAS.

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:36 +11:00
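
A userspace-side sketch of what forging big endian arguments means here: every 32-bit field handed to ppc_rtas() is stored big endian regardless of the endianness the process runs in. The struct below is a simplified stand-in, not the exact kernel rtas_args layout:

    #include <endian.h>
    #include <stdint.h>
    #include <string.h>

    struct rtas_args_be {           /* simplified illustration only */
            uint32_t token;
            uint32_t nargs;
            uint32_t nret;
            uint32_t args[16];
    };

    static void fill_rtas_args(struct rtas_args_be *ra,
                               uint32_t token, uint32_t arg0)
    {
            memset(ra, 0, sizeof(*ra));
            ra->token   = htobe32(token);   /* big endian, even on LE */
            ra->nargs   = htobe32(1);
            ra->nret    = htobe32(1);
            ra->args[0] = htobe32(arg0);
    }
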
Anton Blanchard e45fbae515 powerpc: Use default set of netfilter modules (CONFIG_NETFILTER_ADVANCED=n)
Our netfilter options are stale and important things like masquerading
are no longer enabled. Instead of trying to keep up with any updates,
set CONFIG_NETFILTER_ADVANCED=n on ppc64* and pseries* defconfigs.
This enables the most common netfilter modules for us.

While here, enable the network bridge module which is heavily used in
KVM setups.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:35 +11:00
Aneesh Kumar K.V 1907a2c2c3 powerpc/defconfigs: Enable THP in pseries defconfig
We also set it to be always enabled. This helps with wider testing.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:35 +11:00
Aneesh Kumar K.V 346519a160 powerpc/mm: Make sure a local_irq_disable prevent a parallel THP split
We have generic code, like that in get_futex_key, which assumes that
a local_irq_disable prevents a parallel THP split. Support that by
adding a dummy smp call function after setting _PAGE_SPLITTING. Code
paths like get_user_pages_fast still need to check for _PAGE_SPLITTING
after disabling IRQs, which indicates that a parallel THP split is
ongoing. If they don't find _PAGE_SPLITTING set, then we can be
sure that a parallel split will block in pmdp_splitting_flush
until we re-enable IRQs.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:34 +11:00
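
A simplified sketch of the ordering this relies on; set_pmd_splitting() is a hypothetical helper and the real powerpc code differs in detail, but kick_all_cpus_sync() is the kind of dummy smp call mentioned above:

    /* Splitting side: mark the pmd, then force an IPI round trip so any
     * CPU that disabled IRQs before the mark has finished its walk, and
     * later walkers will see _PAGE_SPLITTING. */
    static void thp_mark_splitting(pmd_t *pmdp)
    {
            set_pmd_splitting(pmdp);        /* hypothetical: sets _PAGE_SPLITTING */
            kick_all_cpus_sync();           /* dummy smp call to all CPUs */
    }

    /* Lockless reader side (e.g. get_user_pages_fast), called with local
     * IRQs disabled: bail out if a parallel split is in progress. */
    static bool thp_pmd_stable(pmd_t pmd)
    {
            return !(pmd_val(pmd) & _PAGE_SPLITTING);
    }
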
Michael Neuling ee4ed6fa5a powerpc: Rate-limit users spamming kernel log buffer
The facility unavailable exception can be triggered from userspace by
accessing PMU registers when EBB is not enabled.  This causes the
included pr_err() to run, hence spamming the kernel log buffer.

Avoid this by rate-limiting these messages.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:34 +11:00
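
A sketch of the change: the unconditional pr_err() becomes the rate-limited variant. The message text and variables below are placeholders rather than the exact ones in the facility-unavailable handler:

    /* Before: pr_err(...) fired on every user-triggerable exception.
     * After: the same message goes through the rate-limited helper. */
    pr_err_ratelimited("Facility '%s' unavailable, exception at 0x%lx, MSR=%lx\n",
                       facility_name, regs->nip, regs->msr);
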
Michael Ellerman e9aaac1ac3 powerpc/perf: Fix handling of L3 events with bank == 1
Currently we reject events which have the L3 bank == 1, such as
0x000084918F, because the cache field is non-zero.

However that is incorrect, because although the bank is non-zero, the
value we would write into MMCRC is zero, and so we can count the event.

So fix the check to ignore the bank selector when checking whether the
cache selector is non-zero.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:33 +11:00
Cody P Schafer 30daeb6c8f powerpc/perf: Add kconfig option for hypervisor provided counters
The commit adds a Kconfig option which allows the hv_gpci and hv_24x7
PMUs, added in the preceding commits, to be built.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:32 +11:00
Cody P Schafer 0e93a6edd9 powerpc/perf: Add support for the hv 24x7 interface
This provides a basic interface between hv_24x7 and perf. Similar to
the one provided for gpci, it lacks transaction support and does not
list any events.

Example usage via perf tool:

	perf stat -e 'hv_24x7/domain=2,offset=8,starting_index=0,lpar=0xffffffff/' -r 0 -C 0 -x ' ' sleep 0.1

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:32 +11:00
Cody P Schafer 220a0c609a powerpc/perf: Add support for the hv gpci (get performance counter info) interface
This provides a basic link between perf and hv_gpci. Notably, it does
not yet support transactions and does not list any events (they can
still be manually composed).

Example usage via perf tool:

	perf stat -e 'hv_gpci/counter_info_version=3,offset=0,length=8,secondary_index=0,starting_index=0xffffffff,request=0x10/' -r 0 -C 0 -x ' ' sleep 0.1

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:31 +11:00
Cody P Schafer 7b43c67950 powerpc/perf: Add macros for defining event fields & formats
Add two macros which generate functions to extract the relevant bits
from event->attr.config{,1,2}.

EVENT_DEFINE_RANGE() defines an accessor for a range of bits in the
event, as well as a "max" function that gives the maximum value of the
field based on the bit width.

EVENT_DEFINE_RANGE_FORMAT() defines the accessor & max routine and also
a format attribute for use in the PMU's attr_groups.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
[mpe: move to powerpc, ugly but descriptive macro names]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:31 +11:00
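
An illustrative expansion of what such macros can look like; the real powerpc definitions may differ in detail:

    /* Defines event_get_<name>() returning bits [bit_start..bit_end] of the
     * given attr field, plus event_get_<name>_max() giving the field's
     * maximum value based on its width (assumes widths below 64 bits). */
    #define EVENT_DEFINE_RANGE(name, attr_var, bit_start, bit_end)         \
    static u64 event_get_##name##_max(void)                                \
    {                                                                       \
            return (1ULL << ((bit_end) - (bit_start) + 1)) - 1;             \
    }                                                                       \
    static u64 event_get_##name(struct perf_event *event)                   \
    {                                                                       \
            return (event->attr.attr_var >> (bit_start)) &                  \
                   event_get_##name##_max();                                \
    }

    /* Example: treat bits 0-31 of attr.config as "request". */
    EVENT_DEFINE_RANGE(request, config, 0, 31)
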
Cody P Schafer 2d1b21ad7d powerpc/perf: Add a shared interface to get gpci version and capabilities
This exposes a simple way to grab the firmware-provided
collect_privileged, ga, expanded, and lab capability bits. All of these
bits come in from the same gpci request, so we've exposed all of them.

Only the collect_privileged bit is really used by the hv-gpci/hv-24x7
code; the other bits are simply exposed in sysfs to inform the user.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:30 +11:00
Cody P Schafer a8b2c43671 powerpc/perf: Add 24x7 interface headers
24x7 (also called hv_24x7 or H_24X7) is an interface to obtain
performance counters from the hypervisor. These counters do not have a
fixed format/position and are instead documented in a "24x7 Catalog",
which is provided by the hypervisor (that interface is also documented
partially in the included hv-24x7-catalog.h and fully at
https://raw.githubusercontent.com/jmesmon/catalog-24x7/master/hv-24x7-catalog.h ).

The 24x7 data access is simply a copy operation into a 4-dimensional
array of 64-bit counters (from hypervisor to kernel memory). There is no
interrupt triggered on overflow; these counters are completely disjoint
from the typical power PMU.

This method of obtaining performance counters from the hypervisor is
intended to partially replace the gpci interface.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:29 +11:00
Cody P Schafer a67f144739 powerpc/perf: Add hv_gpci interface header
"H_GetPerformanceCounterInfo" (refered to as hv_gpci or just gpci from
here on) is an interface to retrieve specific performance counters and
other data from the hypervisor. All outputs have a fixed format. This
header only describes the portions of the interface that we plan on
using in linux at this time.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:29 +11:00
Cody P Schafer 827f798ac1 powerpc: Add hvcalls for 24x7 and gpci (Get Performance Counter Info)
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:28 +11:00
Michael Ellerman 76cb8a783a powerpc/perf: Enable BHRB access for EBB events
The previous commit added constraint and register handling to allow
processes using EBB (Event Based Branches) to request access to the BHRB
(Branch History Rolling Buffer).

With that in place we can allow processes using EBB to access the BHRB.
This is achieved by setting BHRBA in MMCR0 when we enable EBB access. We
must also clear BHRBA when we are disabling.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:27 +11:00
Michael Ellerman ba969237cf powerpc/perf: Add BHRB constraint and IFM MMCRA handling for EBB
We want a way for users of EBB (Event Based Branches) to also access the
BHRB (Branch History Rolling Buffer). EBB does not interoperate with our
existing BHRB support, which is wired into the generic Linux branch
stack sampling support.

To support EBB & BHRB we add three new bits to the event code. The first
bit indicates that the event wants access to the BHRB, and the other two
bits indicate the desired IFM (Instruction Filtering Mode).

We allow multiple events to request access to the BHRB, but they must
agree on the IFM value. Events which are not interested in the BHRB can
also interoperate with events which do.

Finally we program the desired IFM value into MMCRA. Although we do this
for every event, we know that the value will be identical for all events
that request BHRB access.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:27 +11:00
Michael Ellerman 7cbba63028 powerpc/perf: Avoid mutating event in power8_get_constraint()
We only need to mask the EBB bit out of the event for the check of the
special PMC 5 & 6 events. So use a local to do it just for that code,
rather than changing the event value for the life of the function.

While we're there move the set of mask and value after all the checks.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:26 +11:00
Michael Ellerman fb568d763f powerpc/perf: Clean up the EBB hash defines a little
Rather than using PERF_EVENT_CONFIG_EBB_SHIFT everywhere, add an
EVENT_EBB_SHIFT like every other event and use that.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:26 +11:00
Michael Ellerman 58b5fb0049 powerpc/perf: Reject EBB events which specify a sample_type
Although we already block EBB events which request sampling using
sample_period, technically it's possible for an event to set sample_type
but not sample_period.

Nothing terrible will happen if an EBB event does specify sample_type,
but it signals a major confusion on the part of userspace, and so we do
them the favor of rejecting it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:25 +11:00
Michael Ellerman c2e37a2626 powerpc/perf: Add lost exception workaround
Some power8 revisions have a hardware bug where we can lose a PMU
exception. This commit adds a workaround to detect the bad condition and
rectify the situation.

See the comment in the commit for a full description.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:25 +11:00
Michael Ellerman 68f2f0d431 powerpc: Add a cpu feature CPU_FTR_PMAO_BUG
Some power8 revisions have a hardware bug where we can lose a
Performance Monitor (PMU) exception under certain circumstances.

We will be adding a workaround for this case, see the next commit for
details. The observed behaviour is that writing PMAO doesn't cause an
exception as we would expect, hence the name of the feature.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:24 +11:00
Anshuman Khandual 5f6d0380c6 powerpc/perf: Define perf_event_print_debug() to print PMU register values
Currently the sysrq ShowRegs command does not print any PMU registers as
we have an empty definition for perf_event_print_debug(). This patch
defines perf_event_print_debug() to print various PMU registers.

Example output:

CPU: 0 PMU registers, ppmu = POWER7 n_counters = 6
PMC1:  00000000 PMC2: 00000000 PMC3: 00000000 PMC4: 00000000
PMC5:  00000000 PMC6: 00000000 PMC7: deadbeef PMC8: deadbeef
MMCR0: 0000000080000000 MMCR1: 0000000000000000 MMCRA: 0f00000001000000
SIAR:  0000000000000000 SDAR:  0000000000000000 SIER:  0000000000000000

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[mpe: Fix 32 bit build and rework formatting for compactness]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:23 +11:00
Anshuman Khandual 2f0695232c powerpc/perf: Make some new raw event codes available in sysfs
This patch adds some missing events to the list of POWER7 PMU raw
events exported through the sysfs interface. It also updates
the ABI documentation to include all the sysfs-exported raw events.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:23 +11:00
Neelesh Gupta 7224adbbb8 powerpc/powernv: Enable fetching of platform sensor data
This patch enables fetching of various platform sensor data through
OPAL and expects a sensor handle from the driver to pass to OPAL.

Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:48:21 +11:00
Neelesh Gupta 4029cd6654 powerpc/powernv: Enable reading and updating of system parameters
This patch enables reading and updating of system parameters through
OPAL call.

Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:47:30 +11:00
Neelesh Gupta 8d72482322 powerpc/powernv: Infrastructure to support OPAL async completion
This patch adds support for notifying clients of their request
completion. Clients request a token before making an OPAL call
and then wait for the response.

This patch uses the messaging infrastructure to pull the data into Linux
by registering for the message type OPAL_MSG_ASYNC_COMP.

Signed-off-by: Neelesh Gupta <neelegup@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-03-24 09:45:22 +11:00
Will Deacon a34fe10750 parisc: locks: remove redundant arch_*_relax operations
Now that the arch_{spin,read,write}_relax macros default to cpu_relax(),
remove the redundant definitions for parisc.

Cc: Helge Deller <deller@gmx.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Helge Deller <deller@gmx.de>
2014-03-23 17:01:23 +01:00
Helge Deller e9af8b7aba parisc: wire up sys_utimes
We seem to be nearly the only platform which does not provide the
sys_utimes syscall.  Adding it now makes our life much easier with
userspace applications (like dietlibc and e2fsprogs) since we then
behave like all other platforms too and don't need extra patches which
are hard to get upstream anyway because we are not a mainstream
architecture.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v3.13
2014-03-23 16:57:13 +01:00
John David Anglin 4b02a72a26 parisc: Remove unused CONFIG_PARISC_TMPALIAS code
The attached change removes the unused and experimental
CONFIG_PARISC_TMPALIAS code. It doesn't work and I don't believe it will
ever be used.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2014-03-23 16:46:30 +01:00
Linus Torvalds 084c6c5013 ARM: SoC fixes for 3.14
Only two patches this time, one to fix ethernet probe order on at91 (better
 fix with proper device aliasing will be done for 3.15, this is stop-gap), and
 one update to MAINTAINERS due to Freescale moving their repo to kernel.org.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJTLRrzAAoJEIwa5zzehBx3RR8P/1IdwjlFC7ait4LxUvSfpPEw
 laHatndu1IBc+SoUDlPqddvEZlbPGelSRRUXg/5RbRwedkZRvI1gLE3KmqbxCLW3
 SmdnSbhZlFY55c4Qjf1EWbRvVs/eQWADBowxd7xHHH17Rr/nKNrRxXME0tSjtScm
 uN0ZrhjJPfdRIo7pYI29CV7E1hy+9WKKEDO6mWUu7aC/R5n8L5fPthNiLiuDE4e4
 rILenZAUhxhHGrn+NBy2LXm2QuhS2T38OYKDGm4wCRnzUJphN7dHAwH8xz+n8yTl
 mRN3shGrzaPgNt1UR8cBrEWhUjFem7n8XBV7dgaGTD6LZvUbThLtY6g1PK2bOlHp
 qnm99UHRPjMpQXg19ssbXuYOcRFyvDxBwCNYUwTbyGy5fb5XmpiTzyR4n0j2LFkU
 Svv4H1UUaSeicnBZwheB6idY6fd5yGU6pMfdsWUgMS03Gbpt9v71DKwvM3ZJYTWh
 A0ZSwOHdUf3wiEkxLkpVXA8kPBU2La17wEYV/US1M/ZCwk2t1UHmVAdtqpZZqng4
 4uqkIBwJot5s/kbi+m5I/dx9elZYAhQU41bjVaUJab8rjqrADdTToRsLouEsjGfE
 WuqGDhmYM4k+I88xxFXgA5LxcJHsX/bzFexwkUPCJmvJOz5qtUdU25NMw+fmNVhy
 xeKODx1dWmajrFou0yjw
 =Dh0X
 -----END PGP SIGNATURE-----

Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "Only two patches this time, one to fix ethernet probe order on at91
  (better fix with proper device aliasing will be done for 3.15, this is
  stop-gap), and one update to MAINTAINERS due to Freescale moving their
  repo to kernel.org"

* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: at91: fix network interface ordering for sama5d36
  MAINTAINERS: update IMX kernel git tree
2014-03-22 09:13:13 -07:00
Rafael J. Wysocki 884411d9a5 Merge branch 'acpi-processor'
* acpi-processor:
  ACPI: Move BAD_MADT_ENTRY() to linux/acpi.h
  ACPI / processor: Make it possible to get APIC ID via GIC
  ACPI / processor: Build idle_boot_override on x86 and ia64
  ACPI / processor: Use ACPI_PROCESSOR_DEVICE_HID instead of "ACPI0007"
  ACPI / processor: Fix acpi_processor_eval_pdc() return value type
2014-03-21 16:53:28 +01:00
chandramouli narayanan 7c1da8d0d0 crypto: sha - SHA1 transform x86_64 AVX2
This git patch adds x86_64 AVX2 optimization of SHA1
transform to crypto support. The patch has been tested with the 3.14.0-rc1
kernel.

On a Haswell desktop, with turbo disabled and all cpus running
at maximum frequency, tcrypt shows AVX2 performance improvement
from 3% for 256 bytes update to 16% for 1024 bytes update over
AVX implementation.

This patch adds sha1_avx2_transform(), the glue, build and
configuration changes needed for AVX2 optimization of
SHA1 transform to crypto support.

sha1-ssse3 is one module which adds the necessary optimization
support (SSSE3/AVX/AVX2) for the low-level SHA1 transform function.
With better optimization support, the transform function is overridden
as the case may be. In the case of AVX2, due to performance reasons
across datablock sizes, the AVX or AVX2 transform function is used
at run-time as it suits best. The Makefile change therefore appends
the necessary objects to the linkage. Due to this, the patch merely
appends AVX2 transform to the existing build mix and Kconfig support
and leaves the configuration build support as is.

Signed-off-by: Chandramouli Narayanan <mouli@linux.intel.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-03-21 21:54:30 +08:00
Cornelia Huck 8422359877 KVM: s390: irq routing for adapter interrupts.
Introduce a new interrupt class for s390 adapter interrupts and enable
irqfds for s390.

This is depending on a new s390 specific vm capability, KVM_CAP_S390_IRQCHIP,
that needs to be enabled by userspace.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2014-03-21 13:43:00 +01:00
Cornelia Huck 841b91c584 KVM: s390: adapter interrupt sources
Add a new interface to register/deregister sources of adapter interrupts
identified by a unique id via the flic. Adapters may also be maskable
and carry a list of pinned pages.

These adapters will be used by irq routing later.

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2014-03-21 13:42:49 +01:00
Cornelia Huck d938dc5522 KVM: Add per-vm capability enablement.
Allow KVM_ENABLE_CAP to act on a vm as well as on a vcpu. This makes more
sense when the caller wants to enable a vm-related capability.

s390 will be the first user; wire it up.

Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2014-03-21 13:42:39 +01:00
Dominik Dingel 6e5a40a49f s390/mm: remove unnecessary parameter from pgste_ipte_notify
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-03-21 09:58:01 +01:00
Dominik Dingel aaeff84a2d s390/mm: remove unnecessary parameter from gmap_do_ipte_notify
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-03-21 09:57:58 +01:00
Dominik Dingel c7c5be73cc s390/mm: fixing comment so that parameter names match
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-03-21 09:57:55 +01:00
Andy Lutomirski 3c1b63b9e4 x86, vdso: Finish removing VDSO32_PRELINK
It's a declaration of a nonexistent symbol.  We can get rid of the
64-bit versions, too, but that's more intrusive.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/2ce2ce18447d8a0b78d44a278a066b6c0af06b32.1395366931.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-03-20 20:20:18 -07:00
Andy Lutomirski 9e6f450f94 x86, vdso: Move more vdso definitions into vdso.h
This fixes the Xen build and gets rid of a silly header file.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1df77311795aff75f5742c787d277518314a38d3.1395366931.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-03-20 20:20:08 -07:00
Chris Bainbridge 69f2366c94 x86, cpu: Add forcepae parameter for booting PAE kernels on PAE-disabled Pentium M
Many Pentium M systems disable PAE but may have a functionally usable PAE
implementation. This adds the "forcepae" parameter which bypasses the boot
check for PAE, and sets the CPU as being PAE capable. Using this parameter
will taint the kernel with TAINT_CPU_OUT_OF_SPEC.

Signed-off-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Link: http://lkml.kernel.org/r/20140307114040.GA4997@localhost
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-03-20 16:31:54 -07:00
Dave Jones 8c90487cdc Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose
the flag to encompass a wider range of pushing the CPU beyond its
warranty.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Link: http://lkml.kernel.org/r/20140226154949.GA770@redhat.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-03-20 16:28:09 -07:00
Andy Lutomirski b67e612cef x86: Load the 32-bit vdso in place, just like the 64-bit vdsos
This replaces a decent amount of incomprehensible and buggy code
with much more straightforward code.  It also brings the 32-bit vdso
more in line with the 64-bit vdsos, so maybe someday they can share
even more code.

This wastes a small amount of kernel .data and .text space, but it
avoids a couple of allocations on startup, so it should be more or
less a wash memory-wise.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/b8093933fad09ce181edb08a61dcd5d2592e9814.1395352498.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-20 15:19:14 -07:00
Linus Torvalds 3fb725c48b Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS fixes from Ralf Baechle:
 "Another set of five fixes.  The most interesting one is a fix for race
  condition in the local_irq_disable() implementation used by .S code
  for pre-MIPS R2 processors only.  It leaves a race that's hard but not
  impossible to hit; the others are fairly obvious"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: Make local_irq_disable macro safe for non-Mipsr2
  MIPS: Octeon: Fix warning in of_device_alloc on cn3xxx
  MIPS: ftrace: Tweak safe_load()/safe_store() macros
  MIPS: BCM47XX: Check all (32) GPIOs when looking for a pin
  MIPS: Fix possible build error with transparent hugepages enabled
2014-03-20 10:57:52 -07:00
Christopher Covington 31b1e940c5 arm64: Fix __range_ok macro
Without this, the following scenario is incorrectly determined
to be invalid.

addr 0x7f_ffffe000 size 8192 addr_limit 0x80_00000000

This behavior was observed while trying to vmsplice the stack
as part of a CRIU dump of a process on a system started with the
norandmaps kernel parameter.

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-03-20 17:41:27 +00:00
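
A small C model of the check the fixed macro has to perform, using the numbers from the commit message; the point is that addr + size may reach the limit exactly and must not be rejected, while the comparison must also be safe against wrap-around:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* The access [addr, addr + size) is OK if it ends at or below the
     * limit; written so the addition cannot overflow. */
    static bool range_ok(uint64_t addr, uint64_t size, uint64_t limit)
    {
            return size <= limit && addr <= limit - size;
    }

    int main(void)
    {
            /* The case from the commit: must be accepted. */
            printf("%d\n", range_ok(0x7fffffe000ULL, 8192, 0x8000000000ULL)); /* 1 */
            /* A slightly higher address: crosses the limit, must be rejected. */
            printf("%d\n", range_ok(0x7fffffff00ULL, 8192, 0x8000000000ULL)); /* 0 */
            return 0;
    }
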
Jim Quinlan 71ca758889 MIPS: Make local_irq_disable macro safe for non-Mipsr2
For non-mipsr2 processors, the local_irq_disable macro contains an mfc0-mtc0
pair with instructions in between.  With preemption enabled, this sequence
may get preempted, so that the mtc0 instruction writes back a stale value
of CP0_STATUS.  This commit avoids this scenario by incrementing
the preempt count before the mfc0 and decrementing it after the mtc0.

[ralf@linux-mips.org: This patch sorts out the parts that were missed
by e97c5b6098 [MIPS: Make irqflags.h functions preempt-safe for non-mipsr2
cpus.]  I also re-enabled the inclusion of <asm/asm-offsets.h> at the top
of <asm/asmmacro.h>].

Signed-off-by: Jim Quinlan <jim2101024@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: cernekee@gmail.com
Patchwork: https://patchwork.linux-mips.org/patch/6164/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-20 13:46:15 +01:00
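
The fix itself lives in the assembly macros, but the idea can be modelled in C roughly as below; the wrapper name is made up, while read_c0_status()/write_c0_status() and ST0_IE are the usual MIPS accessors:

    static inline void example_local_irq_disable(void)
    {
            unsigned long status;

            preempt_disable();                      /* no preemption between read and write */
            status = read_c0_status();              /* mfc0 */
            write_c0_status(status & ~ST0_IE);      /* mtc0 with the IE bit cleared */
            preempt_enable();
    }
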
Rafael J. Wysocki 5a2d853ffc Merge branch 'pm-cpufreq'
* pm-cpufreq: (30 commits)
  intel_pstate: Set core to min P state during core offline
  cpufreq: Add stop CPU callback to cpufreq_driver interface
  cpufreq: Remove unnecessary braces
  cpufreq: Fix checkpatch errors and warnings
  cpufreq: powerpc: add cpufreq transition latency for FSL e500mc SoCs
  cpufreq: remove unused notifier: CPUFREQ_{SUSPENDCHANGE|RESUMECHANGE}
  cpufreq: Do not allow ->setpolicy drivers to provide ->target
  cpufreq: arm_big_little: set 'physical_cluster' for each CPU
  cpufreq: arm_big_little: make vexpress driver depend on bL core driver
  cpufreq: SPEAr: Instantiate as platform_driver
  cpufreq: Remove unnecessary variable/parameter 'frozen'
  cpufreq: Remove cpufreq_generic_exit()
  cpufreq: add 'freq_table' in struct cpufreq_policy
  cpufreq: Reformat printk() statements
  cpufreq: Tegra: Use cpufreq_generic_suspend()
  cpufreq: s5pv210: Use cpufreq_generic_suspend()
  cpufreq: exynos: Use cpufreq_generic_suspend()
  cpufreq: Implement cpufreq_generic_suspend()
  cpufreq: suspend governors on system suspend/hibernate
  cpufreq: move call to __find_governor() to cpufreq_init_policy()
  ...
2014-03-20 13:26:12 +01:00
Thomas Gleixner b524ca742e arm: omap: Fix typo in ams-delta-fiq.c
8435cf757 (arm: Replace various irq_desc accesses) typoed
irq_get_irq_chip() instead of irq_get_chip().

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-20 12:44:02 +01:00
Thomas Gleixner b718102e7d m68k: atari: Fix the last kernel_stat.h fallout
Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-20 12:40:30 +01:00
Wang Dongsheng 48b16180d0 fsl/pci: The new pci suspend/resume implementation
If we do nothing in suspend/resume, some platform PCIe IP blocks
can't guarantee that the link comes back to the L0 state from sleep,
and reads of the EP device will then hang. Only if we send a PME turnoff
message in PCI controller suspend and a PME exit message in resume will
the link state be normal.

When we send the PME turnoff message in PCI controller suspend, the
links go into L2/L3 ready and the host can no longer communicate with
the EP device, but the PCI driver will call back into the EP devices to
save their state. So we need to change platform_driver->suspend/resume to
syscore->suspend/resume.

The new suspend/resume implementation sends the PME turnoff message
in suspend and the PME exit message in resume, and adds a PME handler
to respond to PME and message interrupts.

Change platform_driver->suspend/resume to syscore->suspend/resume.
The PCI driver will call back into the EP devices to save their state in
pci_pm_suspend_noirq, so we need to keep the link up until
pci_pm_suspend_noirq finishes.

Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 22:37:44 -05:00
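
A structural sketch of the move to syscore ops, which run after all device (and therefore EP) suspend callbacks have completed; the function bodies are placeholders:

    static int fsl_pci_syscore_suspend(void)
    {
            /* send PME_Turn_Off; wait for the link to reach L2/L3 ready */
            return 0;
    }

    static void fsl_pci_syscore_resume(void)
    {
            /* send PME exit so the link returns to L0 before EP resume */
    }

    static struct syscore_ops fsl_pci_syscore_ops = {
            .suspend = fsl_pci_syscore_suspend,
            .resume  = fsl_pci_syscore_resume,
    };

    /* registered once at init time: register_syscore_ops(&fsl_pci_syscore_ops); */
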
Scott Wood 609af38f8f powerpc/booke64: Critical and machine check exception support
Add special state saving for critical and machine check exceptions.

Most of this code could be used to handle debug exceptions taken from
kernel space, but actually doing so is outside the scope of this patch.

The various critical and machine check exceptions now point to their
real handlers, rather than hanging the kernel.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:27 -05:00
Scott Wood 31f7124828 powerpc/booke64: Add crit/mc/debug support to EXCEPTION_COMMON
Use the proper scratch SPRG and PACA region.  Introduce level-specific
macros to simplify usage and avoid needing to do a bunch of token
pasting throughout EXCEPTION_COMMON().

Now that EXCEPTION_COMMON_DBG() is properly using the debug scratch
register, there's no more need for the caller to move the value to the
GEN scratch first.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:17 -05:00
Scott Wood 28a3ded1d6 powerpc/booke64: Remove ints from EXCEPTION_COMMON
The ints parameter was used to optionally insert RECONCILE_IRQ_STATE
into EXCEPTION_COMMON.  However, since it came at the end of
EXCEPTION_COMMON, there was no real benefit for it to be there as
opposed to being called separately by the caller of EXCEPTION_COMMON.

The ints parameter was causing some hassle when trying to add an extra
macro layer.  Besides avoiding that, moving "ints" to the caller makes
the code simpler by:
 - avoiding the asymmetry where INTS_RESTORE_HARD is called separately
by the individual exception, but INTS_DISABLE was not
 - removing the no-op INTS_KEEP
 - not having an unnecessary macro parameter

It also turned out to be necessary to delay the INTS_DISABLE
in the case of special level exceptions until after we saved the
old value of PACAIRQHAPPENED.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:16 -05:00
Scott Wood a3dc620743 powerpc/booke64: Use SPRG_TLB_EXFRAME on bolted handlers
While bolted handlers (including e6500) do not need to deal with a TLB
miss recursively causing another TLB miss, nested TLB misses can still
happen with crit/mc/debug exceptions -- so we still need to honor
SPRG_TLB_EXFRAME.

We don't need to spend time modifying it in the TLB miss fastpath,
though -- the special level exception will handle that.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
Cc: kvm-ppc@vger.kernel.org
2014-03-19 19:57:15 -05:00
Scott Wood 9d378dfac8 powerpc/booke64: Use SPRG7 for VDSO
Previously SPRG3 was marked for use by both VDSO and critical
interrupts (though critical interrupts were not fully implemented).

In commit 8b64a9dfb0 ("powerpc/booke64:
Use SPRG0/3 scratch for bolted TLB miss & crit int"), Mihai Caraman
made an attempt to resolve this conflict by restoring the VDSO value
early in the critical interrupt, but this has some issues:

 - It's incompatible with EXCEPTION_COMMON which restores r13 from the
   by-then-overwritten scratch (this cost me some debugging time).
 - It forces critical exceptions to be a special case handled
   differently from even machine check and debug level exceptions.
 - It didn't occur to me that it was possible to make this work at all
   (by doing a final "ld r13, PACA_EXCRIT+EX_R13(r13)") until after
   I made (most of) this patch. :-)

It might be worth investigating using a load rather than SPRG on return
from all exceptions (except TLB misses where the scratch never leaves
the SPRG) -- it could save a few cycles.  Until then, let's stick with
SPRG for all exceptions.

Since we cannot use SPRG4-7 for scratch without corrupting the state of
a KVM guest, move VDSO to SPRG7 on book3e.  Since neither SPRG4-7 nor
critical interrupts exist on book3s, SPRG3 is still used for VDSO
there.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Mihai Caraman <mihai.caraman@freescale.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: kvm-ppc@vger.kernel.org
2014-03-19 19:57:14 -05:00
Scott Wood 82d86de25b powerpc/e6500: Make TLB lock recursive
Once special level interrupts are supported, we may take nested TLB
misses -- so allow the same thread to acquire the lock recursively.

The lock will not be effective against the nested TLB miss handler
trying to write the same entry as the interrupted TLB miss handler, but
that's also a problem on non-threaded CPUs that lack TLB write
conditional.  This will be addressed in the patch that enables crit/mc
support by invalidating the TLB on return from level exceptions.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:13 -05:00
Scott Wood c4787d1ecf powerpc/booke64: Fix exception numbers
altivec_unavailable was commented as 0xf20 but the code uses 0x200.
Note that 0xf20 is also used by ap_unavailable.

altivec_assist was commented as 0x1700 but the code uses 0x220.

critical_input was commented as 0x580 but the code uses 0x100.

machine_check was commented and implemented as 0x200, which conflicts
with altivec_assist (it only builds because MC_EXCEPTION_PROLOG is
commented out).  Changed to the fixed IVOR value of 0x000.

Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:12 -05:00
Tiejun Chen 19007b340d powerpc/book3e: store crit/mc/dbg exception thread info
We need to store thread info for these exception stacks, as we already
do for PPC32.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:10 -05:00
Tiejun Chen 160c732433 powerpc/book3e: initialize crit/mc/dbg kernel stack pointers
We already allocate the critical/machine check/debug exception stacks, but
we should also initialize the associated kernel stack pointers in the PACA
for use by these special exceptions.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:57:09 -05:00
Wang Dongsheng b0b7dcbdf3 powerpc/fsl: add PVR definition for E500MC and E5500
Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:39:25 -05:00
Zhao Qiang 0f5a869600 Corenet: Add QE platform support for Corenet
The T104x platforms have a QE, so add support for it.
Call qe_ic_init() and qe_init() if CONFIG_QUICC_ENGINE is defined.

Signed-off-by: Zhao Qiang <B45475@freescale.com>
[scottwood@freesacle.com: whitespace fix]
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:37:59 -05:00
Zhao Qiang 706f4aa035 QE: split function mpc85xx_qe_init() into two functions.
The new QE doesn't have par_io, so it doesn't need par_io
initialization.
Split the function mpc85xx_qe_init() into mpc85xx_qe_init()
and mpc85xx_qe_par_io_init().
Call mpc85xx_qe_init() for both new and old QE, and call
mpc85xx_qe_par_io_init() after mpc85xx_qe_init() only for the old QE.

Signed-off-by: Zhao Qiang <B45475@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:37:31 -05:00
Sebastian Siewior b0ad062cc4 powerpc: 85xx rdb: move np pointer to avoid builderror
If CONFIG_UCC_GETH or CONFIG_SERIAL_QE is not defined, we get a
warning about an unused variable, which leads to a build error.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 19:02:42 -05:00
Jan Beulich 7a5917e978 x86, hash: Simplify switch, add __init annotation
Minor cleanups:

- simplify switch statement
- add __init annotation to setup_arch_fast_hash()

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09CE020000780011FBEF@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 16:51:04 -07:00
Jan Beulich c5cdfdf909 x86, hash: Swap arguments passed to crc32_u32()
... to match the function's parameters. While reportedly commutative,
using the proper order allows for leveraging the instruction permitting
the source operand to be in memory.

[ hpa: This code originated in the dpdk toolkit.  This was a bug in dpdk
  which has recently been fixed in part due to an earlier version of
  this patch. ]

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09B6020000780011FBEB@nat28.tlf.novell.com
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 16:51:04 -07:00
Jan Beulich 06325190bd x86, hash: Fix build failure with older binutils
Just like for other ISA extension instruction uses we should check
whether the assembler actually supports them. The fallback here is simply
to encode the instruction with fixed operands (%eax and %ecx).

[ hpa: tagging for -stable as a build fix ]

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F0996020000780011FBE7@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.14
2014-03-19 16:51:04 -07:00
Linus Torvalds 7c1cfacca2 PCI updates for v3.14:
Resource management
     - Revert "Insert GART region into resource map"
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.11 (GNU/Linux)
 
 iQIcBAABAgAGBQJTKdCYAAoJEFmIoMA60/r88lMP/3Njn9KD69FNU+Hphs5ExbQG
 OtqudAWiJBPAabsvXAJNHh/osCJJcPgewzKGOX9dboEP3eL/bCibT2dKgXIjxRav
 WULAlINj6x1h3K+/+VlBVsW0W/DcnK7zIQ0O7Qjqh8f0E2v7+nB5/tOOQ6gQeqxH
 kp7jzBxdTUfVeZtfwY80ccHM2zI/nqm6/p5PobMt3G8vsdamxNGDjHDlOlqARgNs
 /nP26/XusS9Uw0MI7wwuALgLNLNMbLWMmKBfz1n7Gw3wUvWvQeK+ib67gyTgoVT7
 VVDuFn8bg3hMqVwlCrBU8SnS83/dQlSl0fze0gL4UMPH+AGxHnTPCeirOkEhvtDY
 krVuiber7PAkj5dz0ApRR9nawHSh20Y7WJxLDYrpILo8C3e4pXcGRVsm4d8e3GuP
 zYwD/6OH6SgMdozc456Jb1qSaXDfLfVffYKcjLS+5bPstP+QIFSUdJdhRrsOKZGn
 VuIvqXioKeBYe4QqJYGt7cMWdRXTqkrYXOV5N1qPOqBloc83sRdQWtSqHivISma9
 rIeyBCAA30SYtr9FF0n/up8Mw9AhNukfcKuouWUuDV/FeMRbMbblvw8Vgnq8eXY9
 fLqUMOTPnE0nz5hScHPfTolQ6ATghh/sk6EOkD4gvvfS7+Ul5oiduE+rBOb2eY2Q
 M6/H1MT1OiNKcRROt0q6
 =yDUb
 -----END PGP SIGNATURE-----

Merge tag 'pci-v3.14-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI resource management fix from Bjorn Helgaas:
 "This is a fix for an AGP regression exposed by e501b3d87f ("agp:
  Support 64-bit APBASE"), which we merged in v3.14-rc1.

  We've warned about the conflict between the GART and PCI resources and
  cleared out the PCI resource for a long time, but after e501b3d87f,
  we still *use* that cleared-out PCI resource.  I think the GART
  resource is incorrect, so this patch removes it"

* tag 'pci-v3.14-fixes-3' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
  Revert "[PATCH] Insert GART region into resource map"
2014-03-19 16:15:54 -07:00
Andreas Herrmann 2eddb708d8 MIPS: Octeon: Fix warning in of_device_alloc on cn3xxx
Starting with commit 3da5278727 (of/irq:
Rework of_irq_count()) the following warning is triggered on octeon
cn3xxx:

[    0.887281] WARNING: CPU: 0 PID: 1 at drivers/of/platform.c:171 of_device_alloc+0x228/0x230()
[    0.895642] Modules linked in:
[    0.898689] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.14.0-rc7-00012-g9ae51f2-dirty #41
[    0.906860] Stack : c8b439581166d96e ffffffff816b0000 0000000040808000 ffffffff81185ddc
[    0.906860] 	  0000000000000000 0000000000000000 0000000000000000 000000000000000b
[    0.906860] 	  000000000000000a 000000000000000a 0000000000000000 0000000000000000
[    0.906860] 	  ffffffff81740000 ffffffff81720000 ffffffff81615900 ffffffff816b0177
[    0.906860] 	  ffffffff81727d10 800000041f868fb0 0000000000000001 0000000000000000
[    0.906860] 	  0000000000000000 0000000000000038 0000000000000001 ffffffff81568484
[    0.906860] 	  800000041f86faa8 ffffffff81145ddc 0000000000000000 ffffffff811873f4
[    0.906860] 	  800000041f868b88 800000041f86f9c0 0000000000000000 ffffffff81569c9c
[    0.906860] 	  0000000000000000 0000000000000000 0000000000000000 0000000000000000
[    0.906860] 	  0000000000000000 ffffffff811205e0 0000000000000000 0000000000000000
[    0.906860] 	  ...
[    0.971695] Call Trace:
[    0.974139] [<ffffffff811205e0>] show_stack+0x68/0x80
[    0.979183] [<ffffffff81569c9c>] dump_stack+0x8c/0xe0
[    0.984196] [<ffffffff81145efc>] warn_slowpath_common+0x84/0xb8
[    0.990110] [<ffffffff81436888>] of_device_alloc+0x228/0x230
[    0.995726] [<ffffffff814368d8>] of_platform_device_create_pdata+0x48/0xd0
[    1.002593] [<ffffffff81436a94>] of_platform_bus_create+0x134/0x1e8
[    1.008837] [<ffffffff81436af8>] of_platform_bus_create+0x198/0x1e8
[    1.015064] [<ffffffff81436cc4>] of_platform_bus_probe+0xa4/0x100
[    1.021149] [<ffffffff81100570>] do_one_initcall+0xd8/0x128
[    1.026701] [<ffffffff816e2a10>] kernel_init_freeable+0x144/0x210
[    1.032753] [<ffffffff81564bc4>] kernel_init+0x14/0x110
[    1.037973] [<ffffffff8111bb44>] ret_from_kernel_thread+0x14/0x1c

With that commit the kernel starts mapping the interrupts listed for
the gpio-controller node. The irq_domain_ops for CIU (octeon_irq_ciu_map and
octeon_irq_ciu_xlat) refuse to handle the GPIO lines (returning -EINVAL),
and this causes the above warning in of_device_alloc().

Modify the irq_domain_ops for CIU and CIU2 to "gracefully handle" GPIO
lines (neither returning an error code nor calling octeon_irq_set_ciu_mapping
for them). This should avoid the warning.

(As before the real setup for GPIO lines will happen using
irq_domain_ops of gpio-controller.)

This patch is based on Wei's patch v2 (see
http://marc.info/?l=linux-mips&m=139511814813247).

Signed-off-by: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Reported-by: Yang Wei <wei.yang@windriver.com>
Acked-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6624/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2014-03-19 23:50:30 +01:00
Vivek Goyal 04999550f9 x86, boot: Move memset() definition in compressed/string.c
Currently compressed/misc.c needs to link against memset(). I think one of
the reasons for this is the inclusion of various header files which define
static inline functions and use memset() inside them. For example,
include/linux/bitmap.h

I think trying to include "../string.h" and using the builtin version of memset
does not work because by the time "#define memset" shows up, it is too
late. Some other header file has already used memset() and expects to
find a definition during link phase.

Currently we have a C definition of memset() in misc.c. Move it to
compressed/string.c so that others can use it if need be.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-6-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 15:44:09 -07:00
Vivek Goyal fb4cac573e x86, boot: Move memcmp() into string.h and string.c
Try to treat memcmp() in same way as memcpy() and memset(). Provide a
declaration in boot/string.h and by default user gets a memcmp() which
maps to builtin function.

Move the optimized definition of memcmp() into boot/string.c. Now a user can
do #undef memcmp and link against string.c to use the optimized memcmp().

It also simplifies boot/compressed/string.c where we had to redefine
memcmp(). That extra definition is gone now.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-5-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 15:44:04 -07:00
Vivek Goyal 820e8feca0 x86, boot: Move optimized memcpy() 32/64 bit versions to compressed/string.c
Move the optimized versions of memcpy to compressed/string.c. This will allow
any other code to use these functions too if need be in the future. Again,
this tries to put the definition in a common place instead of hiding it in misc.c.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-4-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 15:43:59 -07:00
Vivek Goyal c041b5ad86 x86, boot: Create a separate string.h file to provide standard string functions
Create a separate arch/x86/boot/string.h file to provide declaration of
some of the common string functions.

By default the memcpy, memset and memcmp functions will map to the gcc
builtin functions. If code wants to use an optimized version of any
of these functions, they need to #undef the respective macro and link
against a local file providing definition of undefed function.

For example, arch/x86/boot/* code links against copy.S to get memcpy()
and memcmp() definitions. arch/86/boot/compressed/* links against
compressed/string.c.

There are quite a few places in arch/x86/ where these functions are
used. The idea is to consolidate their declarations and possibly
definitions so that they can be reused.

I am planning to reuse boot/string.h in arch/x86/purgatory/ and use
gcc builtin functions for memcpy, memset and memcmp.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-3-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 15:43:45 -07:00
Vivek Goyal aad830938e x86, boot: Undef memcmp before providing a new definition
With CONFIG_X86_32=y, string_32.h gets pulled into compressed/string.c by
"misc.h". string_32.h defines a macro to map memcmp to __builtin_memcmp(),
and that macro in turn changes the name of the memcmp() defined here and
converts it to __builtin_memcmp().

That's not the intention, though. We want to provide
our own optimized definition of memcmp(), so undef the memcmp macro
before defining the new memcmp.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-2-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-03-19 15:43:37 -07:00
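
A stand-alone illustration of the problem and the fix; the #define below stands in for what string_32.h does when it is pulled in via misc.h:

    #include <stddef.h>

    #define memcmp __builtin_memcmp        /* what the header effectively does */

    /* Without the #undef, the definition below would itself be renamed to
     * __builtin_memcmp by the macro above. */
    #undef memcmp
    int memcmp(const void *s1, const void *s2, size_t len)
    {
            const unsigned char *a = s1, *b = s2;

            while (len--) {                 /* simple byte loop as a stand-in */
                    int d = *a++ - *b++;
                    if (d)
                            return d;
            }
            return 0;
    }
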
Prabhakar Kushwaha 2eb1b9a41b powerpc/config: Remove unnecessary CONFIG_FSL_IFC
CONFIG_FSL_IFC gets enabled by Kconfig dependencies,
so remove the unnecessary define from the defconfigs.

Signed-off-by: Prabhakar Kushwaha <prabhakar@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
2014-03-19 17:38:13 -05:00