Update code that relied on <linux/sched.h> including various MM types
on its behalf.
This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split more MM APIs out of <linux/sched.h>, which
will have to be picked up from a couple of .c files.
The APIs that we are going to move are:
arch_pick_mmap_layout()
arch_get_unmapped_area()
arch_get_unmapped_area_topdown()
mm_update_next_owner()
Include the header in the files that are going to need it.
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.
Create a trivial placeholder <linux/sched/signal.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.
Include the new header in the files that are going to need it.
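For reference, such a placeholder can be as small as the following
sketch (the include guard name is assumed, not taken from this patch):

#ifndef _LINUX_SCHED_SIGNAL_H
#define _LINUX_SCHED_SIGNAL_H
#include <linux/sched.h>
#endif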
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bart Van Assche noted that the ib DMA mapping code was similar enough
to the core DMA mapping code that, with a few changes, it was possible
to remove the IB DMA mapping code entirely and switch the RDMA stack
to use the core DMA mapping code. This resulted
in a nice set of cleanups, but touched the entire tree. This branch
will be submitted separately to Linus at the end of the merge window
as per normal practice for tree wide changes like this.
Merge tag 'for-next-dma_ops' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma
Pull rdma DMA mapping updates from Doug Ledford:
"Drop IB DMA mapping code and use core DMA code instead.
Bart Van Assche noted that the ib DMA mapping code was similar enough
to the core DMA mapping code that, with a few changes, it was
possible to remove the IB DMA mapping code entirely and switch the
RDMA stack to use the core DMA mapping code.
This resulted in a nice set of cleanups, but touched the entire tree
and has been kept separate for that reason."
* tag 'for-next-dma_ops' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma: (37 commits)
IB/rxe, IB/rdmavt: Use dma_virt_ops instead of duplicating it
IB/core: Remove ib_device.dma_device
nvme-rdma: Switch from dma_device to dev.parent
RDS: net: Switch from dma_device to dev.parent
IB/srpt: Modify a debug statement
IB/srp: Switch from dma_device to dev.parent
IB/iser: Switch from dma_device to dev.parent
IB/IPoIB: Switch from dma_device to dev.parent
IB/rxe: Switch from dma_device to dev.parent
IB/vmw_pvrdma: Switch from dma_device to dev.parent
IB/usnic: Switch from dma_device to dev.parent
IB/qib: Switch from dma_device to dev.parent
IB/qedr: Switch from dma_device to dev.parent
IB/ocrdma: Switch from dma_device to dev.parent
IB/nes: Remove a superfluous assignment statement
IB/mthca: Switch from dma_device to dev.parent
IB/mlx5: Switch from dma_device to dev.parent
IB/mlx4: Switch from dma_device to dev.parent
IB/i40iw: Remove a superfluous assignment statement
IB/hns: Switch from dma_device to dev.parent
...
The callers of the DMA alloc functions already provide the proper
context GFP flags. Make sure to pass them through to the CMA allocator,
to make the CMA compaction context aware.
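A hedged sketch of the pass-through, assuming cma_alloc() grows a gfp
argument as described (dev_get_cma_area() used for illustration):

struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
				       unsigned int align, gfp_t gfp)
{
	/* forward the caller's context GFP flags instead of assuming GFP_KERNEL */
	return cma_alloc(dev_get_cma_area(dev), count, align, gfp);
}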
Link: http://lkml.kernel.org/r/20170127172328.18574-3-l.stach@pengutronix.de
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alexander Graf <agraf@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
MIPS dependencies for KVM
Miscellaneous MIPS architecture changes depended on by the MIPS KVM
changes in the KVM tree.
- Move pgd_alloc() out of header.
- Exports so KVM can access page table management and TLBEX functions.
- Add return errors to protected cache ops.
Export the TLB exception code generating functions so that KVM can
construct a fast TLB refill handler for guest context without
reinventing the wheel quite so much.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
GPL kernel modules so that MIPS KVM can use the inline page table
management functions and switch between page tables:
- pmd_init() will be used directly by KVM to initialise newly allocated
pmd tables with invalid lower level table pointers.
- invalid_pmd_table is used by pud_present(), pud_none(), and
pud_clear(), which KVM will use to test and clear pud entries.
- tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
to the appropriate GVA page tables.
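The exports themselves are one-liners next to the definitions, along
these lines (sketch):

EXPORT_SYMBOL_GPL(invalid_pmd_table);
EXPORT_SYMBOL_GPL(pmd_init);
EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);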
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
pgd_alloc() references init_mm which is not exported to modules. In
order for KVM to be able to use pgd_alloc() to allocate GVA page tables,
move pgd_alloc() into a new pgtable.c file and export it to modules.
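A sketch of the new file, following the usual MIPS pgd_alloc() pattern
(illustrative, not the verbatim code):

/* arch/mips/mm/pgtable.c */
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/pgalloc.h>

pgd_t *pgd_alloc(struct mm_struct *mm)
{
	pgd_t *ret, *init;

	ret = (pgd_t *)__get_free_pages(GFP_KERNEL, PGD_ORDER);
	if (ret) {
		init = pgd_offset(&init_mm, 0UL);
		pgd_init((unsigned long)ret);
		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
		       (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
	}

	return ret;	/* now usable from GPL modules such as KVM */
}
EXPORT_SYMBOL_GPL(pgd_alloc);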
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
In systems with CM3 & higher, the L2 cache is inclusive of the L1
dcache. Indicate this such that cpu_has_inclusive_pcaches evaluates true
and we avoid some unnecessary cache ops during DMA cache maintenance.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14018/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Physically indexed caches cannot suffer from virtual aliasing, so clear
the MIPS_CACHE_ALIASES bit in order to ensure we don't do extra work
avoiding aliasing that cannot happen.
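i.e. during cache probing, something along these lines (the condition
name is illustrative):

/* physically indexed dcaches cannot alias; skip alias avoidance work */
if (dcache_physically_indexed)
	c->dcache.flags &= ~MIPS_CACHE_ALIASES;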
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14017/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The L1 data cache in I6400 CPUs is indexed by physical address bits if
an entry for the address is present in the DTLB early enough in the
pipelined execution of a memory access instruction. If an entry is not
present then it's indexed by virtual address bits, but hardware will
check in a later pipeline stage when a DTLB entry has been created
whether the virtual address bits used match the physical address bits,
and if not will transparently restart the memory access instruction.
This means that although it isn't always physically indexed, it appears
so to software & we can treat the I6400 L1 data cache as being
physically indexed in order to avoid considering aliasing.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14016/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Now that EXPORT_SYMBOL can be used from assembly source, move the
EXPORT_SYMBOL invocations for the copy_page & clear_page functions to be
alongside their definitions.
With this change there are no longer any symbols exported from
mips_ksyms.c so remove the file.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14515/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's unclear to me why this wasn't always the case, but move the
EXPORT_SYMBOL invocation for invalid_pte_table to be alongside its
definition.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14511/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When generating TLB exception handling code we write to memory reserved
at the handle_tlbl, handle_tlbm & handle_tlbs symbols. Up until now the
ISA bit has always been clear simply because the assembly code reserving
the space for those functions places no instructions in them. In
preparation for marking all LEAF functions as containing code,
explicitly clear the ISA bit when calculating the addresses at which to
write TLB exception handling code.
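Roughly (a sketch; the exact masking helper is an implementation
detail):

extern char handle_tlbl[];

/* clear the ISA bit of the symbol address before writing code there */
u32 *p = (u32 *)((unsigned long)handle_tlbl & ~1UL);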
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14507/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
arch_mmap_rnd() uses hard-coded limits of 16MB for the randomisation
of mmap within 32bit processes and 256MB in 64bit processes. Since v4.4
other arches support tuning this value in /proc/sys/vm/mmap_rnd_bits.
Add support for this to MIPS.
Set the minimum (default) number of bits of randomisation for 32bit to
8, which with a 4k pagesize is unchanged from the current 16MB of total
randomness. The minimum (default) for 64bit is 12 bits; again with a 4k
pagesize this is the same as the current 256MB.
This patch is necessary for MIPS32 to pass the Android CTS tests, with
the number of random bits set to 15.
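A sketch of the tunable version (mmap_rnd_bits/mmap_rnd_compat_bits are
the generic sysctl-backed variables; the exact MIPS details may
differ):

unsigned long arch_mmap_rnd(void)
{
	unsigned long rnd;

#ifdef CONFIG_COMPAT
	if (TASK_IS_32BIT_ADDR)
		rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
	else
#endif
		rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

	return rnd << PAGE_SHIFT;
}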
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Daniel Cashman <dcashman@android.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: kernel-hardening@lists.openwall.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14617/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix the following build error with binutils 2.25.
CC arch/mips/mm/sc-ip22.o
{standard input}: Assembler messages:
{standard input}:132: Error: number (0x9000000080000000) larger than 32 bits
{standard input}:159: Error: number (0x9000000080000000) larger than 32 bits
{standard input}:200: Error: number (0x9000000080000000) larger than 32 bits
scripts/Makefile.build:293: recipe for target 'arch/mips/mm/sc-ip22.o' failed
make[1]: *** [arch/mips/mm/sc-ip22.o] Error 1
MIPS has used .set mips3 to temporarily switch the assembler to 64 bit
mode in 64 bit kernels virtually forever. Binutils 2.25 partially broke
this behaviour by happily accepting 64 bit instructions in .set mips3
mode but choking on 64 bit constants when generating 32 bit ELF.
Binutils 2.26 restored the old behaviour again.
Fix build with binutils 2.25 by open coding the offending
dli $1, 0x9000000080000000
as
li $1, 0x9000
dsll $1, $1, 48
which is ugly but the only thing that will build on all binutils
vintages.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: stable@vger.kernel.org
This was entirely automated, using the script by Al:
PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window.
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This change allows us to pass DMA_ATTR_SKIP_CPU_SYNC, which avoids
invoking cache line invalidation if the driver will handle it via a
sync_for_cpu or sync_for_device call.
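A usage sketch (not from this patch):

/* map without CPU sync; the driver syncs explicitly before CPU access */
dma = dma_map_page_attrs(dev, page, 0, size, DMA_FROM_DEVICE,
			 DMA_ATTR_SKIP_CPU_SYNC);
/* ... device writes to the buffer ... */
dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);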
Link: http://lkml.kernel.org/r/20161110113513.76501.32321.stgit@ahduyck-blue-test.jf.intel.com
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Keguang Zhang <keguang.zhang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since commit 4bcc595ccd ("printk: reinstate KERN_CONT for printing
continuation lines") the output from __do_page_fault on MIPS has been
pretty unreadable due to the lack of KERN_CONT markers. Use pr_cont
to provide the appropriate markers & restore the expected output.
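i.e. continuation pieces of the fault message are now emitted along
these lines (message text and variable names illustrative):

pr_alert("CPU %d Unable to handle kernel paging request",
	 raw_smp_processor_id());
pr_cont(" at virtual address %0*lx\n", field, address);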
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14544/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Since MIPSr6 the Wired register is split into 2 fields, with the upper
16 bits of the register indicating a limit on the value that the wired
entry count in the bottom 16 bits of the register can take. This means
that simply reading the wired register doesn't get us a valid TLB entry
index any longer, and we instead need to retrieve only the lower 16 bits
of the register. Introduce a new num_wired_entries() function which does
this on MIPSr6 or higher and simply returns the value of the wired
register on older architecture revisions, and make use of it when
reading the number of wired entries.
Since commit e710d66683 ("MIPS: tlb-r4k: If there are wired entries,
don't use TLBINVF") we have been using a non-zero number of wired
entries to determine whether we should avoid use of the tlbinvf
instruction (which would invalidate wired entries) and instead loop over
TLB entries in local_flush_tlb_all(). This loop begins with the number
of wired entries, or before this patch some large bogus TLB index on
MIPSr6 systems. Thus since the aforementioned commit some MIPSr6 systems
with FTLBs have been prone to leaving stale address translations in the
FTLB & crashing in various weird & wonderful ways when we later observe
the wrong memory.
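A sketch of the helper (MIPSR6_WIRED_WIRED standing in for the lower
16 bit mask):

static inline unsigned int num_wired_entries(void)
{
	unsigned int wired = read_c0_wired();

	/* MIPSr6: only the bottom 16 bits are the wired entry count */
	if (cpu_has_mips_r6)
		wired &= MIPSR6_WIRED_WIRED;

	return wired;
}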
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14557/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Merge the gup_flags cleanups from Lorenzo Stoakes:
"This patch series adjusts functions in the get_user_pages* family such
that desired FOLL_* flags are passed as an argument rather than
implied by flags.
The purpose of this change is to make the use of FOLL_FORCE explicit
so it is easier to grep for and clearer to callers that this flag is
being used. The use of FOLL_FORCE is an issue as it overrides missing
VM_READ/VM_WRITE flags for the VMA whose pages we are reading
from/writing to, which can result in surprising behaviour.
The patch series came out of the discussion around commit 38e0885465
("mm: check VMA flags to avoid invalid PROT_NONE NUMA balancing"),
which addressed a BUG_ON() being triggered when a page was faulted in
with PROT_NONE set but having been overridden by FOLL_FORCE.
do_numa_page() was run on the assumption the page _must_ be one marked
for NUMA node migration as an actual PROT_NONE page would have been
dealt with prior to this code path, however FOLL_FORCE introduced a
situation where this assumption did not hold.
See
https://marc.info/?l=linux-mm&m=147585445805166
for the patch proposal"
Additionally, there's a fix for an ancient bug related to FOLL_FORCE and
FOLL_WRITE by me.
[ This branch was rebased recently to add a few more acked-by's and
reviewed-by's ]
* gup_flag-cleanups:
mm: replace access_process_vm() write parameter with gup_flags
mm: replace access_remote_vm() write parameter with gup_flags
mm: replace __access_remote_vm() write parameter with gup_flags
mm: replace get_user_pages_remote() write/force parameters with gup_flags
mm: replace get_user_pages() write/force parameters with gup_flags
mm: replace get_vaddr_frames() write/force parameters with gup_flags
mm: replace get_user_pages_locked() write/force parameters with gup_flags
mm: replace get_user_pages_unlocked() write/force parameters with gup_flags
mm: remove write/force parameters from __get_user_pages_unlocked()
mm: remove write/force parameters from __get_user_pages_locked()
mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
This removes the 'write' and 'force' use from get_user_pages_unlocked()
and replaces them with 'gup_flags' to make the use of FOLL_FORCE
explicit in callers as use of this flag can result in surprising
behaviour (and hence bugs) within the mm subsystem.
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On some MIPS systems, a subset of devices may have DMA coherent with CPU
caches. For example in systems including a MIPS I/O Coherence Unit
(IOCU), some devices may be connected to that IOCU whilst others are
not.
Prior to this patch, we have a plat_device_is_coherent() function but no
implementation which does anything besides return a global true or
false, optionally chosen at runtime. For devices such as those described
above this is insufficient.
Fix this by tracking DMA coherence on a per-device basis with a
dma_coherent field in struct dev_archdata. Setting this from
arch_setup_dma_ops() takes care of devices which set the dma-coherent
property via device tree, and any PCI devices beneath a bridge described
in DT, automatically.
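In outline (a sketch of the mechanism; the exact signature may differ):

void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
			const struct iommu_ops *iommu, bool coherent)
{
	/* record per-device coherence, e.g. from a DT "dma-coherent" property */
	dev->archdata.dma_coherent = coherent;
}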
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14349/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
There are no cases where plat_device_is_coherent() will return zero
whilst hw_coherentio is non-zero, and acting any differently in such a
case doesn't make much sense - if a device is non-coherent with the CPU
caches then access to memory "coherent" with DMA must be uncached. Clean
up the nonsensical case.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14348/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The coherentio variable has previously been used as a boolean value,
indicating whether the user specified that coherent I/O should be
enabled or disabled. It failed to take into account the case where the
user does not specify any preference, in which case it makes sense that
we should default to coherent I/O if the hardware supports it
(hw_coherentio is non-zero).
Introduce an enum to clarify the 3 different values of coherentio & use
it throughout the code, modifying plat_device_is_coherent() &
r4k_cache_init() to take into account the default case.
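Along these lines (identifier names illustrative):

enum coherentio_setting {
	IO_COHERENCE_DEFAULT,	/* user expressed no preference */
	IO_COHERENCE_ENABLED,
	IO_COHERENCE_DISABLED,
};

static inline int plat_device_is_coherent(struct device *dev)
{
	switch (coherentio) {
	case IO_COHERENCE_DEFAULT:
		return hw_coherentio;	/* default to what the hardware supports */
	case IO_COHERENCE_ENABLED:
		return 1;
	case IO_COHERENCE_DISABLED:
	default:
		return 0;
	}
}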
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/14347/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h
in code that is obj-y Makefile or bool Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers;
adding significantly to what we feed cpp, and it can obscure what
headers we are effectively using.
Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
for the presence of either and replace as needed.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14033/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When adding a wired entry to the TLB via add_wired_entry(), the TLB is
flushed with local_flush_tlb_all(), which on CPUs with TLBINV results
in the new wired entry being flushed along with everything else.
The TLBINV instruction operates on all applicable TLB entries,
regardless of the setting of the Wired register. Therefore if the TLB
has any wired entries, fall back to iterating over the entries rather
than blasting them all using TLBINVF.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: lisa.parratt@imgtec.com
Cc: Hugh Dickins <hughd@google.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-remoteproc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14283/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
flush_icache_range() flushes icache lines in a protected fashion for
kernel addresses, however this isn't correct with EVA where protected
cache ops only operate on user addresses, making flush_icache_range()
ineffective.
Split the implementations of __flush_icache_user_range() from
flush_icache_range(), changing the normal flush_icache_range() to use
unprotected normal cache ops.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14156/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The cacheflush(2) system call uses flush_icache_range() to flush a range
of usermode addresses from the icache, so change it to utilise the new
__flush_icache_user_range() API to allow the more generic
flush_icache_range() to be changed to work on kernel addresses only.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14153/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
flush_icache_range() is used for both user addresses (i.e.
cacheflush(2)), and kernel addresses (as the API documentation
describes).
This isn't really suitable however for Enhanced Virtual Addressing (EVA)
where cache operations on usermode addresses must use a different
instruction, and the protected cache ops assume user addresses, making
flush_icache_range() ineffective on kernel addresses.
Split out a new __flush_icache_user_range() and
__local_flush_icache_user_range() for users which actually want to flush
usermode addresses (note that flush_icache_user_range() already exists
on various architectures but with different arguments).
The implementation of flush_icache_range() will be changed in an
upcoming commit to use unprotected normal cache ops so as to always work
on the kernel mode address space.
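The resulting API split looks roughly like:

/* kernel addresses: may use unprotected, normal cache ops */
void flush_icache_range(unsigned long start, unsigned long end);

/* user addresses: protected cache ops, correct under EVA */
void __flush_icache_user_range(unsigned long start, unsigned long end);
void __local_flush_icache_user_range(unsigned long start,
				     unsigned long end);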
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14152/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The EVA conditional bc_wback_inv() at the end of flush_icache_range() to
flush the modified code all the way back to RAM was apparently there for
debug purposes and to accommodate the Malta EVA configuration which
makes use of a physical alias, and didn't use the CP0_EBase.WG (Write
Gate) bit to put the exception vector in the same physical alias where
the exception vector code is written and is being flushed.
Now that CP0_EBase.WG is used, let's drop this flush.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14151/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
These files were only including module.h for exception table
related functions. We've now separated that content out into its
own file "extable.h" so now move over to that and avoid all the
extra header content in module.h that we don't really need to compile
these files.
In the case of traps.c we can't dump the module.h include since it is
also used to provide "print_modules".
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13934/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The paging_init() function contains code which detects that highmem is
in use but unsupported due to dcache aliasing. However this code was
ineffective because it was being run before the caches are probed,
meaning that cpu_has_dc_aliases would always evaluate to false (unless a
platform overrides it to a compile-time constant) and the detection of
the unsupported case is never triggered. The kernel would then go on to
attempt to use highmem & either hit coherency issues or trigger the
BUG_ON in flush_kernel_dcache_page().
Fix this by running paging_init() later than cpu_cache_init(), such that
the cpu_has_dc_aliases macro will evaluate correctly & the unsupported
highmem case will be detected successfully.
This then leads to a formerly hidden issue in that
mem_init_free_highmem() will attempt to free all highmem pages, even
though we're avoiding use of them & don't have valid page structs for
them. This leads to an invalid pointer dereference & a TLB exception.
Avoid this by skipping the loop in mem_init_free_highmem() if
cpu_has_dc_aliases evaluates true.
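i.e. in the arch setup path the ordering becomes (sketch):

cpu_cache_init();	/* probe caches so cpu_has_dc_aliases is meaningful */
paging_init();		/* can now detect the unsupported highmem case */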
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Rabin Vincent <rabinv@axis.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Jaedon Shin <jaedon.shin@gmail.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14184/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit f70ddc07b6 ("MIPS: c-r4k: Avoid small flush_icache_range SMP
calls") adds checks to force use of hit-type cache ops for small icache
flushes where they are globalised & index-type cache ops aren't, in
order to avoid the overhead of IPIs in those cases. However it
calculated the size of the region being flushed incorrectly, subtracting
the end address from the start address rather than the reverse. This
would have led to an overflow with size wrapping round to some large
value, and likely to the special case for avoiding IPIs not actually
being hit.
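In other words the fix is simply (sketch):

/* before: wraps to a huge value whenever end > start */
unsigned long size = start - end;
/* after */
unsigned long size = end - start;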
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Fixes: f70ddc07b6 ("MIPS: c-r4k: Avoid small flush_icache_range SMP calls")
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14211/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The alignment of MIPS MAAR region addresses isn't quite right.
- It rounds an already 64 KiB aligned start address up to the next
64 KiB boundary, e.g. 0x80000000 is rounded up to 0x80010000.
- It assumes the end address is already on a 64 KiB boundary and doesn't
round it down. Should that not be the case it will hit the second
BUG_ON() in write_maar_pair().
Both cases are addressed by rounding up and down to 64 KiB boundaries in
the more traditional way of adding 0xffff (for rounding up) and masking
off the low 16 bits.
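That is (sketch):

/* clamp the region inwards to 64 KiB boundaries */
start = (start + 0xffff) & ~0xffffull;	/* round up */
end &= ~0xffffull;			/* round down */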
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13858/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Pull MIPS updates from Ralf Baechle:
"This is the main pull request for MIPS for 4.8. Also includes is a
minor SSB cleanup as SSB code traditionally is merged through the MIPS
tree:
ATH25:
- MIPS: Add default configuration for ath25
Boot:
- For zboot, copy appended dtb to the end of the kernel
- store the appended dtb address in a variable
BPF:
- Fix off by one error in offset allocation
Cobalt code:
- Fix typos
Core code:
- debugfs_create_file returns NULL on error, so don't use IS_ERR for
testing for errors.
- Fix double locking issue in RM7000 S-cache code. This would only
affect RM7000 ARC systems on reboot.
- Fix page table corruption on THP permission changes.
- Use compat_sys_keyctl for 32 bit userspace on 64 bit kernels.
David says, there are no compatibility issues raised by this fix.
- Move some signal code around.
- Rewrite r4k count/compare clockevent device registration such that
min_delta_ticks/max_delta_ticks files are guaranteed to be
initialized.
- Only register r4k count/compare as clockevent device if we can
assume the clock to be constant.
- Fix MSA asm warnings in control reg accessors
- uasm and tlbex fixes and tweaking.
- Print segment physical address when EU=1.
- Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO.
- CP: Allow booting by VP other than VP 0
- Cache handling fixes and optimizations for r4k class caches
- Add hotplug support for R6 processors
- Cleanup hotplug bits in kconfig
- traps: return correct si code for accessing nonmapped addresses
- Remove cpu_has_safe_index_cacheops
Lantiq:
- Register IRQ handler for virtual IRQ number
- Fix EIU interrupt loading code
- Use the real EXIN count
- Fix build error.
Loongson 3:
- Increase HPET_MIN_PROG_DELTA and decrease HPET_MIN_CYCLES
Octeon:
- Delete built-in DTB pruning code for D-Link DSR-1000N.
- Clean up GPIO definitions in dlink_dsr-1000n.dts.
- Add more LEDs to the DSR-100n DTS
- Fix off by one in octeon_irq_gpio_map()
- Typo fixes
- Enable SATA by default in cavium_octeon_defconfig
- Support readq/writeq()
- Remove forced mappings of USB interrupts.
- Ensure DMA descriptors are always in the low 4GB
- Improve USB reset code for OCTEON II.
Pistachio:
- Add maintainers entry for pistachio SoC Support
- Remove plat_setup_iocoherency
Ralink:
- Fix pwm UART in spis group pinmux.
SSB:
- Change bare unsigned to unsigned int to suit coding style
Tools:
- Fix reloc tool compiler warnings.
Other:
- Delete use of ARCH_WANT_OPTIONAL_GPIOLIB"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (61 commits)
MIPS: mm: Fix definition of R6 cache instruction
MIPS: tools: Fix relocs tool compiler warnings
MIPS: Cobalt: Fix typo
MIPS: Octeon: Fix typo
MIPS: Lantiq: Fix build failure
MIPS: Use CPHYSADDR to implement mips32 __pa
MIPS: Octeon: Dlink_dsr-1000n.dts: add more leds.
MIPS: Octeon: Clean up GPIO definitions in dlink_dsr-1000n.dts.
MIPS: Octeon: Delete built-in DTB pruning code for D-Link DSR-1000N.
MIPS: store the appended dtb address in a variable
MIPS: ZBOOT: copy appended dtb to the end of the kernel
MIPS: ralink: fix spis group pinmux
MIPS: Factor o32 specific code into signal_o32.c
MIPS: non-exec stack & heap when non-exec PT_GNU_STACK is present
MIPS: Use per-mm page to execute branch delay slot instructions
MIPS: Modify error handling
MIPS: c-r4k: Use SMP calls for CM indexed cache ops
MIPS: c-r4k: Avoid small flush_icache_range SMP calls
MIPS: c-r4k: Local flush_icache_range cache op override
MIPS: c-r4k: Split r4k_flush_kernel_vmap_range()
...
The dma-mapping core and the implementations do not change the DMA
attributes passed by pointer. Thus the pointer can point to const data.
However the attributes do not have to be a bitfield. Instead unsigned
long will do fine:
1. This is just simpler. Both in terms of reading the code and setting
attributes. Instead of initializing local attributes on the stack
and passing a pointer to them to dma_set_attr(), just set the bits
(see the sketch after this list).
2. It brings safeness and checking for const correctness because the
attributes are passed by value.
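A before/after sketch of a caller (assuming the old DEFINE_DMA_ATTRS()
style on the "before" side):

/* before: attrs live on the stack and are passed by pointer */
DEFINE_DMA_ATTRS(attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, gfp, &attrs);

/* after: attrs are plain bits in an unsigned long */
cpu_addr = dma_alloc_attrs(dev, size, &dma_handle, gfp,
			   DMA_ATTR_WRITE_COMBINE);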
Semantic patches for this change (at least most of them):
virtual patch
virtual context
@r@
identifier f, attrs;
@@
f(...,
- struct dma_attrs *attrs
+ unsigned long attrs
, ...)
{
...
}
@@
identifier r.f;
@@
f(...,
- NULL
+ 0
)
and
// Options: --all-includes
virtual patch
virtual context
@r@
identifier f, attrs;
type t;
@@
t f(..., struct dma_attrs *attrs);
@@
identifier r.f;
@@
f(...,
- NULL
+ 0
)
Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The use of config_enabled() against config options is ambiguous. In
practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
author might have used it for the meaning of IS_ENABLED(). Using
IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc. makes the intention
clearer.
This commit replaces config_enabled() with IS_ENABLED() where possible.
This commit is only touching bool config options.
I noticed two cases where config_enabled() is used against a tristate
option:
- config_enabled(CONFIG_HWMON)
[ drivers/net/wireless/ath/ath10k/thermal.c ]
- config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
[ drivers/gpu/drm/gma500/opregion.c ]
I did not touch them because they should be converted to IS_BUILTIN()
in order to keep the logic, but I was not sure it was the authors'
intention.
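For reference, the semantics for a tristate option (illustration):

config_enabled(CONFIG_HWMON)	/* true only for =y, i.e. IS_BUILTIN() */
IS_BUILTIN(CONFIG_HWMON)	/* true only for =y */
IS_ENABLED(CONFIG_HWMON)	/* true for =y or =m */
IS_MODULE(CONFIG_HWMON)		/* true only for =m */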
Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Stas Sergeev <stsp@list.ru>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Jiri Slaby <jslaby@suse.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: "Dmitry V. Levin" <ldv@altlinux.org>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Will Drewry <wad@chromium.org>
Cc: Nikolay Martynov <mar.kolya@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Rafal Milecki <zajec5@gmail.com>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Kalle Valo <kvalo@qca.qualcomm.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Tony Wu <tung7970@gmail.com>
Cc: Huaitong Han <huaitong.han@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Rabin Vincent <rabin@rab.in>
Cc: "Maciej W. Rozycki" <macro@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge yet more updates from Andrew Morton:
- the rest of ocfs2
- various hotfixes, mainly MM
- quite a bit of misc stuff - drivers, fork, exec, signals, etc.
- printk updates
- firmware
- checkpatch
- nilfs2
- more kexec stuff than usual
- rapidio updates
- w1 things
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (111 commits)
ipc: delete "nr_ipc_ns"
kcov: allow more fine-grained coverage instrumentation
init/Kconfig: add clarification for out-of-tree modules
config: add android config fragments
init/Kconfig: ban CONFIG_LOCALVERSION_AUTO with allmodconfig
relay: add global mode support for buffer-only channels
init: allow blacklisting of module_init functions
w1:omap_hdq: fix regression
w1: add helper macro module_w1_family
w1: remove need for ida and use PLATFORM_DEVID_AUTO
rapidio/switches: add driver for IDT gen3 switches
powerpc/fsl_rio: apply changes for RIO spec rev 3
rapidio: modify for rev.3 specification changes
rapidio: change inbound window size type to u64
rapidio/idt_gen2: fix locking warning
rapidio: fix error handling in mbox request/release functions
rapidio/tsi721_dma: advance queue processing from transfer submit call
rapidio/tsi721: add messaging mbox selector parameter
rapidio/tsi721: add PCIe MRRS override parameter
rapidio/tsi721_dma: add channel mask and queue size parameters
...
There was only one use of __initdata_refok and __exit_refok
__init_refok was used 46 times against 82 for __ref.
Those definitions are obsolete since commit 312b1485fb ("Introduce new
section reference annotations tags: __ref, __refdata, __refconst")
This patch removes the following compatibility definitions and replaces
them treewide.
/* compatibility defines */
#define __init_refok __ref
#define __initdata_refok __refdata
#define __exit_refok __ref
I can also provide separate patches if necessary.
(One patch per tree and check in 1 month or 2 to remove old definitions)
[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/1466796271-3043-1-git-send-email-fabf@skynet.be
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- ARM: GICv3 ITS emulation and various fixes. Removal of the old
VGIC implementation.
- s390: support for trapping software breakpoints, nested virtualization
(vSIE), the STHYI opcode, initial extensions for CPU model support.
- MIPS: support for MIPS64 hosts (32-bit guests only) and lots of cleanups,
preliminary to this and the upcoming support for hardware virtualization
extensions.
- x86: support for execute-only mappings in nested EPT; reduced vmexit
latency for TSC deadline timer (by about 30%) on Intel hosts; support for
more than 255 vCPUs.
- PPC: bugfixes.
The ugly bit is the conflicts. A couple of them are simple conflicts due
to 4.7 fixes, but most of them are with other trees. There was definitely
too much reliance on Acked-by here. Some conflicts are for KVM patches
where _I_ gave my Acked-by, but the worst are for this pull request's
patches that touch files outside arch/*/kvm. KVM submaintainers should
probably learn to synchronize better with arch maintainers, with the
latter providing topic branches whenever possible instead of Acked-by.
This is what we do with arch/x86. And I should learn to refuse pull
requests when linux-next sends scary signals, even if that means that
submaintainers have to rebase their branches.
Anyhow, here's the list:
- arch/x86/kvm/vmx.c: handle_pcommit and EXIT_REASON_PCOMMIT was removed
by the nvdimm tree. This tree adds handle_preemption_timer and
EXIT_REASON_PREEMPTION_TIMER at the same place. In general all mentions
of pcommit have to go.
There is also a conflict between a stable fix and this patch, where the
stable fix removed the vmx_create_pml_buffer function and its call.
- virt/kvm/kvm_main.c: kvm_cpu_notifier was removed by the hotplug tree.
This tree adds kvm_io_bus_get_dev at the same place.
- virt/kvm/arm/vgic.c: a few final bugfixes went into 4.7 before the
file was completely removed for 4.8.
- include/linux/irqchip/arm-gic-v3.h: this one is entirely our fault;
this is a change that should have gone in through the irqchip tree and
pulled by kvm-arm. I think I would have rejected this kvm-arm pull
request. The KVM version is the right one, except that it lacks
GITS_BASER_PAGES_SHIFT.
- arch/powerpc: what a mess. For the idle_book3s.S conflict, the KVM
tree is the right one; everything else is trivial. In this case I am
not quite sure what went wrong. The commit that is causing the mess
(fd7bacbca4, "KVM: PPC: Book3S HV: Fix TB corruption in guest exit
path on HMI interrupt", 2016-05-15) touches both arch/powerpc/kernel/
and arch/powerpc/kvm/. It's large, but at 396 insertions/5 deletions
I guessed that it wasn't really possible to split it and that the 5
deletions wouldn't conflict. That wasn't the case.
- arch/s390: also messy. First is hypfs_diag.c where the KVM tree
moved some code and the s390 tree patched it. You have to reapply the
relevant part of commits 6c22c98637, plus all of e030c1125e, to
arch/s390/kernel/diag.c. Or pick the linux-next conflict
resolution from http://marc.info/?l=kvm&m=146717549531603&w=2.
Second, there is a conflict in gmap.c between a stable fix and 4.8.
The KVM version here is the correct one.
I have pushed my resolution at refs/heads/merge-20160802 (commit
3d1f53419842) at git://git.kernel.org/pub/scm/virt/kvm/kvm.git.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
- ARM: GICv3 ITS emulation and various fixes. Removal of the
old VGIC implementation.
- s390: support for trapping software breakpoints, nested
virtualization (vSIE), the STHYI opcode, initial extensions
for CPU model support.
- MIPS: support for MIPS64 hosts (32-bit guests only) and lots
of cleanups, preliminary to this and the upcoming support for
hardware virtualization extensions.
- x86: support for execute-only mappings in nested EPT; reduced
vmexit latency for TSC deadline timer (by about 30%) on Intel
hosts; support for more than 255 vCPUs.
- PPC: bugfixes.
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits)
KVM: PPC: Introduce KVM_CAP_PPC_HTM
MIPS: Select HAVE_KVM for MIPS64_R{2,6}
MIPS: KVM: Reset CP0_PageMask during host TLB flush
MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX()
MIPS: KVM: Sign extend MFC0/RDHWR results
MIPS: KVM: Fix 64-bit big endian dynamic translation
MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase
MIPS: KVM: Use 64-bit CP0_EBase when appropriate
MIPS: KVM: Set CP0_Status.KX on MIPS64
MIPS: KVM: Make entry code MIPS64 friendly
MIPS: KVM: Use kmap instead of CKSEG0ADDR()
MIPS: KVM: Use virt_to_phys() to get commpage PFN
MIPS: Fix definition of KSEGX() for 64-bit
KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD
kvm: x86: nVMX: maintain internal copy of current VMCS
KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
KVM: arm64: vgic-its: Simplify MAPI error handling
KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers
KVM: arm64: vgic-its: Turn device_id validation into generic ID validation
...
debugfs_create_file returns NULL on error so an IS_ERR test is
incorrect here and a NULL check is required.
The Coccinelle semantic patch used to make this change is as follows:
@@
expression e;
@@
e = debugfs_create_file(...);
if(
- IS_ERR(e)
+ !e
)
{
<+...
return
- PTR_ERR(e)
+ -ENOMEM
;
...+>
}
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Cc: julia.lawall@lip6.fr
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13834/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The MIPS Coherence Manager (CM) can propagate address-based ("hit")
cache operations to other cores in the coherent system, alleviating
software of the need to use SMP calls, however indexed cache operations
are not propagated by hardware since doing so makes no sense for
separate caches.
Update r4k_op_needs_ipi() to report that only hit cache operations are
globalized by the CM, requiring indexed cache operations to be
globalized by software via an SMP call.
r4k_on_each_cpu() previously had a special case for CONFIG_MIPS_MT_SMP,
intended to avoid the SMP calls when the only other CPUs in the system
were other VPEs in the same core, and hence sharing the same caches.
This was changed by commit cccf34e941 ("MIPS: c-r4k: Fix cache
flushing for MT cores") to apparently handle multi-core multi-VPE
systems, but it focussed mainly on hit cache ops, so the SMP calls were
still disabled entirely for CM systems.
This doesn't normally cause problems, but tests can be written to hit
these corner cases by using multiple threads, or changing task
affinities to force the process to migrate cores. For example the
failure of mprotect RW->RX to globally sync icaches (via
flush_cache_range) can be detected by modifying and mprotecting a code
page on one core, and migrating to a different core to execute from it.
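A sketch of the updated check (R4K_HIT/R4K_INDEX are the op types
described later in this series; mips_cm_present() and the foreign-CPU
map details are assumptions about the exact implementation):

static bool r4k_op_needs_ipi(unsigned int type)
{
	/* The CM propagates address-based ("hit") ops to other cores */
	if (type == R4K_HIT && mips_cm_present())
		return false;

	/* Indexed ops need an SMP call if any foreign CPUs are present */
	return !cpumask_empty(&cpu_foreign_map[smp_processor_id()]);
}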
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13807/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Avoid SMP calls for flushing small icache ranges. On non-CM platforms,
and CM platforms too after we make r4k_on_each_cpu() take the cache op
type into account, it will be called on multiple CPUs due to the
possibility that local_r4k_flush_icache_range_ipi() could do
non-globalized indexed cache ops. This roughly copies the range size
check out into r4k_flush_icache_range(), which can disallow indexed
cache ops and allow r4k_on_each_cpu() to skip the SMP call.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13805/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Allow the permitted cache op types used by
local_r4k_flush_icache_range_ipi() to be overridden by the SMP caller.
This will allow SMP calls to be avoided under certain circumstances,
falling back to a single CPU performing globalized hit cache ops only.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13803/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Split the operation of r4k_flush_kernel_vmap_range() into separate
SMP callbacks for the indexed cache flush and hit cache flush cases,
since the logic to determine which to use can be determined by the
initiating CPU prior to doing any SMP calls.
This will help when we change r4k_on_each_cpu() to distinguish indexed
and hit cache ops in a later patch, preventing globalized hit cache ops
being performed redundantly on multiple CPUs.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13806/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When performing SMP calls to foreign cores, exclude sibling CPUs from
the provided map, as we already handle the local core on the current
CPU. This prevents an SMP call from for example core 0, VPE 1 to VPE 0
on the same core.
In the process the cpu_foreign_map cpumask is turned into an array of
cpumasks, so that each CPU has its own version of it which excludes
sibling CPUs. r4k_op_needs_ipi() is also updated to reflect that cache
management SMP calls are not needed when all CPUs are siblings (i.e.
there are no foreign CPUs according to the new cpu_foreign_map[]
semantics which exclude siblings).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Jayachandran C. <jchandra@broadcom.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13801/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Several cache operations are optimised to return early from the SMP call
handler if the memory map in question has no valid ASID on the current
CPU, or any online CPU in the case of MIPS_MT_SMP. The idea is that if a
memory map has never been used on a CPU it shouldn't have cache lines in
need of flushing.
However this doesn't cover all cases when ASIDs for other CPUs need to
be checked:
- Offline VPEs may have recently been online and brought lines into the
(shared) cache, so they should also be checked, rather than only
online CPUs.
- SMP systems with a Coherence Manager (CM), but with MT disabled still
have globalized hit cache ops, but don't use SMP calls, so all present
CPUs should be taken into account.
- R6 systems have a different multithreading implementation, so
MIPS_MT_SMP won't be set, but as above may still have a CM which
globalizes hit cache ops.
Additionally for non-globalized cache operations where an SMP call to a
single VPE in each foreign core is used, it is not necessary to check
every CPU in the system, only sibling CPUs sharing the same first level
cache.
Fix this by making has_valid_asid() take a cache op type argument like
r4k_on_each_cpu(), so it can determine whether r4k_on_each_cpu() will
have done SMP calls to other cores. It can then determine which set of
CPUs to check the ASIDs of based on that, excluding foreign CPUs if an
SMP call will have been performed.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The r4k_on_each_cpu() function calls the specified cache flush helper on
other CPUs if deemed necessary due to the cache ops not being
globalized by hardware. However this really depends on the cache op
addressing type, as the MIPS Coherence Manager (CM) if present will
globalize "hit" cache ops (addressed by virtual address), but not
"index" cache ops (addressed by cache index). This results in index
cache ops only being performed on a single CPU when CM is present.
Most (but not all) of the functions called by r4k_on_each_cpu() perform
cache operations exclusively with a single cache op type, so add a type
argument and modify the callers to pass in some combination of R4K_HIT
(global kernel virtual addressing or user virtual addressing
conditional upon matching active_mm) and R4K_INDEX (index into cache).
This will allow r4k_on_each_cpu() to later distinguish these cases and
decide whether to perform an SMP call based on it.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13798/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Avoid the dcache and scache flush in local_r4k_flush_cache_sigtramp() if
the icache fills straight from the dcache.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13802/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix r4k_flush_cache_sigtramp() and local_r4k_flush_cache_sigtramp() to
flush the delay slot emulation trampoline cacheline through a kmap
rather than directly when the active_mm doesn't match that of the task
initiating the flush, a bit like local_r4k_flush_cache_page() does.
This would fix a corner case on SMP systems without hardware globalized
hit cache ops, where a migration to another CPU after the flush, where
that CPU did not have the same mm active at the time of the flush, could
result in stale icache content being executed instead of the trampoline,
e.g. from a previous delay slot emulation with a similar stack pointer.
This case was artificially triggered by replacing the icache flush with
a full indexed flush (not globalized on CM systems) and forcing the SMP
call to take place, with a test program that alternated two FPU delay
slots with a parent process repeatedly changing scheduler affinity.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13797/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The SMP flush_tlb_*() functions may clear the memory map's ASIDs for
other CPUs if the mm has only a single user (the current CPU) in order
to avoid SMP calls. However this makes it appear to has_valid_asid(),
which is used by various cache flush functions, as if the CPUs have
never run in the mm, and therefore can't have cached any of its memory.
For flush_tlb_mm() this doesn't sound unreasonable.
flush_tlb_range() corresponds to flush_cache_range() which does do full
indexed cache flushes, but only on the icache if the specified mapping
is executable, otherwise it doesn't guarantee that there are no cache
contents left for the mm.
flush_tlb_page() corresponds to flush_cache_page(), which will perform
address based cache ops on the specified page only, and also only
touches the icache if the page is executable. It does not guarantee that
there are no cache contents left for the mm.
For example, this affects flush_cache_range() which uses the
has_valid_asid() optimisation. It is required to flush the icache when
mappings are made executable (e.g. using mprotect) so they are
immediately usable. If some code is changed to non executable in order
to be modified then it will not be flushed from the icache during that
time, but the ASID on other CPUs may still be cleared for TLB flushing.
When the code is changed back to executable, flush_cache_range() will
assume the code hasn't run on those other CPUs due to the zero ASID, and
won't invalidate the icache on them.
This is fixed by setting the other CPUs' ASIDs to 1 instead of 0 for the
above two flush_tlb_*() functions when the corresponding cache flushes
are likely to be incomplete (non executable range flush, or any page
flush). This ASID appears valid to has_valid_asid(), but still triggers
ASID regeneration due to the upper ASID version bits being 0, which is
less than the minimum ASID version of 1 and so always treated as stale.
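A hedged sketch of the trick (the loop and helper usage here are
illustrative):

int cpu;

/* The value 1 has all-zero version bits, below the minimum version of
 * 1, so ASID regeneration is still forced; yet it is non-zero, so
 * has_valid_asid() keeps treating the mm as having run on that CPU. */
for_each_online_cpu(cpu)
	if (cpu != smp_processor_id() && cpu_context(cpu, mm))
		cpu_context(cpu, mm) = 1;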
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13795/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The expression "uasm_in_compat_space_p(swpd) && !uasm_rel_lo(swpd)" is
used twice in build_get_pgd_vmalloc64(), one of which is assigned to the
local variable single_insn_swpd. Update the other use to just use
single_insn_swpd instead to remove the duplication.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13779/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
uasm_in_compat_space_p() determines whether the given value is in the
32-bit compatibility part of the 64-bit address space, i.e. is in 32-bit
sign-extended form; however, it only handles the top half of the value
space (corresponding to the kernel compatibility segments in the upper
half of the address space). Since values < 2^31 (corresponding to the
low 2GiB of the address space) can also be handled using 32-bit
instructions (e.g. a LUI and ADDIU) rather than convoluted 64-bit
immediate generation, rewrite it with a cast to check whether the
address matches its own 32-bit sign-extended form.
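The rewritten check then presumably reduces to comparing the value
against its own sign extension, along these lines:

static int uasm_in_compat_space_p(long addr)
{
	/* Is this address in 32-bit compat space? */
	return addr == (int)addr;
}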
This allows UASM_i_LA to be used to generate arbitrary 32-bit immediates
more efficiently on 64-bit CPUs, i.e. more like the li (load immediate)
pseudo-instruction.
For example this code to load the immediate (ST0_EXL | KSU_USER |
ST0_BEV | ST0_KX) into k0 with UASM_i_LA():
lui k0,0x0
dsll k0,k0,0x10
daddiu k0,k0,64
dsll k0,k0,0x10
daddiu k0,k0,146
Changes to this more efficient version:
lui k0,0x40
addiu k0,k0,146
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Very early versions of the 1004K had a hardware issue that made index
cache ops unsafe, so they had to be avoided and hit ops used instead.
This may significantly slow down cache maintenance operations. Only
very early FPGA versions of the 1004K were affected so let's get rid
of the workaround which was only implemented for the DMA cache
maintenance operations anyway.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add the R6 MUL instruction encoding for 3 operand signed multiply to
uasm so that KVM can use uasm for generating its entry point code at
runtime on R6.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add MTHI/MTLO instructions for writing to the hi & lo registers to uasm
so that KVM can use uasm for generating its entry point code at runtime.
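These uasm additions all follow the same usage pattern. A hedged
illustration (the register numbers and buffer handling here are
hypothetical, and the operand order follows uasm's usual single-operand
helpers):

u32 *p = buf;		/* buffer receiving the generated code */

uasm_i_mthi(&p, 8);	/* mthi t0: write HI from register t0 */
uasm_i_mtlo(&p, 9);	/* mtlo t1: write LO from register t1 */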
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add DI instruction for disabling interrupts to uasm so that KVM can use
uasm for generating its entry point code at runtime.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add CFCMSA/CTCMSA instructions for accessing MSA control registers to
uasm so that KVM can use uasm for generating its entry point code at
runtime.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add CFC1/CTC1 instructions for accessing FP control registers to uasm so
that KVM can use uasm for generating its entry point code at runtime.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Config.VI bit specifies that the instruction cache is virtually
tagged, which is checked in c-r4k.c's probe_pcache(). Add a proper
definition for it in mipsregs.h and make use of it.
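The definition and its use presumably look like this (bit position per
the MIPS32 Config register layout; MIPS_CACHE_VTAG is the existing flag
for virtually tagged icaches):

#define MIPS_CONF_VI		(_ULCAST_(1) << 3)

/* in probe_pcache(): */
if (config & MIPS_CONF_VI)
	c->icache.flags |= MIPS_CACHE_VTAG;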
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The Hardware page Table Walker (HTW) is being misconfigured on 64-bit
kernels. The PWSize.PS (pointer size) bit determines whether pointers
within directories are loaded as 32-bit or 64-bit addresses, but was
never being set to 1 for 64-bit kernels where the unsigned long in pgd_t
is 64-bits wide.
This actually reduces rather than improves performance when the HTW is
enabled on P6600, since the HTW is initiated frequently but the walks
are all aborted, seemingly due to bad intermediate pointers.
Since we were already taking the width of the PTEs into account by
setting PWSize.PTEW, which is the left shift applied to the page table
index *in addition to* the native pointer size, we also need to reduce
PTEW by 1 when PS=1. This is done by calculating PTEW based on the
relative size of pte_t compared to pgd_t.
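Roughly the shape of the resulting configuration, assuming field macros
that follow the existing mipsregs.h naming:

/* 64-bit directory pointers */
if (IS_ENABLED(CONFIG_64BIT))
	pwsize |= MIPS_PWSIZE_PS_MASK;
/* PTEW: shift of the PTE index relative to the native pointer size,
 * so an 8-byte PTE with 8-byte pointers (PS=1) yields PTEW=0 */
pwsize |= ((PTE_T_LOG2 - PGD_T_LOG2) << MIPS_PWSIZE_PTEW_SHIFT)
		& MIPS_PWSIZE_PTEW_MASK;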
Finally in order for the HTW to be used when PS=1, the appropriate
XK/XS/XU bits corresponding to the different 64-bit segments need to be
set in PWCtl. We enable only XU for now to enable walking for XUSeg.
Supporting walking for XKSeg would be a bit more involved so is left for
a future patch. It would either require the use of a per-CPU top level
base directory if supported by the HTW (a bit like pgd_current but with
a second entry pointing at swapper_pg_dir), or the HTW would prepend bit
63 of the address to the global directory index which doesn't really
match how we split user and kernel page directories.
Fixes: cab25bc753 ("MIPS: Extend hardware table walking support to MIPS64")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13364/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add field definitions for some of the 64-bit specific Hardware page
Table Walker (HTW) register fields in PWSize and PWCtl, in preparation
for fixing the 64-bit HTW configuration.
Also print these fields out along with the others in print_htw_config().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Merge updates from Andrew Morton:
- fsnotify fix
- poll() timeout fix
- a few scripts/ tweaks
- debugobjects updates
- the (small) ocfs2 queue
- Minor fixes to kernel/padata.c
- Maybe half of the MM queue
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (117 commits)
mm, page_alloc: restore the original nodemask if the fast path allocation failed
mm, page_alloc: uninline the bad page part of check_new_page()
mm, page_alloc: don't duplicate code in free_pcp_prepare
mm, page_alloc: defer debugging checks of pages allocated from the PCP
mm, page_alloc: defer debugging checks of freed pages until a PCP drain
cpuset: use static key better and convert to new API
mm, page_alloc: inline pageblock lookup in page free fast paths
mm, page_alloc: remove unnecessary variable from free_pcppages_bulk
mm, page_alloc: pull out side effects from free_pages_check
mm, page_alloc: un-inline the bad part of free_pages_check
mm, page_alloc: check multiple page fields with a single branch
mm, page_alloc: remove field from alloc_context
mm, page_alloc: avoid looking up the first zone in a zonelist twice
mm, page_alloc: shortcut watermark checks for order-0 pages
mm, page_alloc: reduce cost of fair zone allocation policy retry
mm, page_alloc: shorten the page allocator fast path
mm, page_alloc: check once if a zone has isolated pageblocks
mm, page_alloc: move __GFP_HARDWALL modifications out of the fastpath
mm, page_alloc: simplify last cpupid reset
mm, page_alloc: remove unnecessary initialisation from __alloc_pages_nodemask()
...
I've just discovered that the useful-sounding has_transparent_hugepage()
is actually an architecture-dependent minefield: on some arches it only
builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
not, but on some of those (arm and arm64) it then gives the wrong
answer; and on mips alone it's marked __init, which would crash if
called later (but so far it has not been called later).
Straighten this out: make it available to all configs, with a sensible
default in asm-generic/pgtable.h, removing its definitions from those
arches (arc, arm, arm64, sparc, tile) which are served by the default,
adding #define has_transparent_hugepage has_transparent_hugepage to
those (mips, powerpc, s390, x86) which need to override the default at
runtime, and removing the __init from mips (but maybe that kind of code
should be avoided after init: set a static variable the first time it's
called).
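The asm-generic default can plausibly be expressed like this (a
compile-time constant unless an arch defines its own override):

#ifndef has_transparent_hugepage
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define has_transparent_hugepage() 1
#else
#define has_transparent_hugepage() 0
#endif
#endif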
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [arch/s390]
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
XPA kernels hardcode for the presence of RIXI - the PTE format & its
handling presume RI & XI bits. Make this dependence explicit by
panicking if we run on a system that violates it.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Performing an MTHC0 instruction without XPA being present will trigger a
reserved instruction exception, therefore conditionalise the use of this
instruction when building TLB handlers (build_update_entries()), and in
__update_tlb().
This allows an XPA kernel to run on non XPA hardware without that
instruction implemented, just like it can run on XPA capable hardware
without XPA in use (with the noxpa kernel argument) or with XPA not
configured in hardware.
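On the TLB-handler side the conditionalised emission presumably looks
something like this (names as used around the MIPS TLB handler builder;
treat the snippet as illustrative):

/* only emit mthc0 when XPA is actually usable */
if (cpu_has_xpa)
	uasm_i_mthc0(&p, tmp, C0_ENTRYLO0);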
[paul.burton@imgtec.com:
- Rebase atop other TLB work.
- Add "mm" to subject.
- Handle the __kmap_pgprot case.]
Fixes: c5b367835c ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13124/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We can simplify build_update_entries() by unifying the case of 36-bit
physical addressing on MIPS32 with the general case: use pte_off_
variables in all cases & handle the trivial _PAGE_GLOBAL_SHIFT == 0
case in build_convert_pte_to_entrylo(). This leaves XPA as the only
special case.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13123/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The XPA case in iPTE_SW ORs software mode bits into the pte_low value
(which is what actually ends up in the high 32 bits of EntryLo...). It
does this presuming that only bits in the upper 16 bits of the 32-bit
pte_low value will be set. Make this assumption explicit with a BUG_ON.
A similar assumption is made for the hardware mode bits, which are ORed
in with a single ori instruction. Make that assumption explicit with a
BUG_ON too.
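A sketch of the two assertions (variable names hypothetical):

BUG_ON(mode & 0xffff);		/* sw bits only in the upper 16 bits */
BUG_ON(hwmode & ~0xffff);	/* hw bits must fit one ori immediate */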
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13122/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Rather than hardcode a scratch register for the XPA case in iPTE_SW,
pass one through from the work registers allocated by the caller. This
allows for the XPA path to function correctly regardless of the work
registers in use.
Without doing this there are cases (where KScratch registers are
unavailable) in which iPTE_SW will incorrectly clobber $1 despite it
already being in use for the PTE or PTE pointer.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13121/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
For XPA kernels build_update_entries() uses $1 (at) as a scratch
register, but doesn't arrange for it to be preserved, so it will always
be clobbered by the TLB refill exception. Although this register
normally has a very short lifetime that doesn't cross memory accesses,
TLB refills due to instruction fetches (either on a page boundary or
after preemption) could clobber live data, and it's easy to reproduce
the clobber with a little bit of assembler code.
Note that the use of a hardware page table walker will partly mask the
problem, as the TLB refill handler will not always be invoked.
This is fixed by avoiding the use of the extra scratch register. The
pte_high parts (going into the lower half of the EntryLo registers) are
loaded and manipulated separately so as to keep the PTE pointer around
for the other halves (instead of storing in the scratch register), and
the pte_low parts (going into the high half of the EntryLo registers)
are masked with 0x00ffffff using an ext instruction (instead of loading
0x00ffffff into the scratch register and AND'ing).
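The masking step presumably becomes a single ext, shown here with
uasm's helper (operand order assumed to be rt, rs, pos, size):

/* pte &= 0x00ffffff, with no scratch register needed */
uasm_i_ext(&p, pte, pte, 0, 24);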
[paul.burton@imgtec.com:
- Rebase atop other TLB work.
- Use ext instead of an sll, srl sequence.
- Use cpu_has_xpa instead of #ifdefs.
- Modify commit subject to include "mm".]
Fixes: c5b367835c ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13120/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
There are 2 distinct cases in which a kernel for a MIPS32 CPU
(CONFIG_CPU_MIPS32=y) may use 64 bit physical addresses
(CONFIG_PHYS_ADDR_T_64BIT=y):
- 36 bit physical addressing as used by RMI Alchemy & Netlogic XLP/XLR
CPUs.
- MIPS32r5 eXtended Physical Addressing (XPA).
These 2 cases are distinct in that they require different behaviour from
the kernel - the EntryLo registers have different formats. Until Linux
v4.1 we only supported the first case, with code conditional upon the 2
aforementioned Kconfig variables being set. Commit c5b367835c ("MIPS:
Add support for XPA.") added support for the second case, but did so by
modifying the code that existed for the first case rather than treating
the 2 cases as distinct. Since the EntryLo registers have different
formats this breaks the 36 bit Alchemy/XLP/XLR case. Fix this by
splitting the 2 cases, with XPA cases now being conditional upon
CONFIG_XPA and the non-XPA case matching the code as it existed prior to
commit c5b367835c ("MIPS: Add support for XPA.").
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Manuel Lauss <manuel.lauss@gmail.com>
Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
Fixes: c5b367835c ("MIPS: Add support for XPA.")
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13119/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Ever since support for RI/XI was implemented by commit 6dd9344cfc
("MIPS: Implement Read Inhibit/eXecute Inhibit") we've had a mixture of
_PAGE_READ & _PAGE_NO_READ bits. Rather than keep both around, switch
away from using _PAGE_READ to determine page presence & instead invert
the use to _PAGE_NO_READ. Wherever we formerly had no definition for
_PAGE_NO_READ, change what was _PAGE_READ to _PAGE_NO_READ. The end
result is that we consistently use _PAGE_NO_READ to determine whether a
page is readable, regardless of whether RI/XI is implemented.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13116/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The hardware page table walker (HTW) configuration is broken on XPA
kernels where XPA couldn't be enabled (either noxpa on the command line
or the hardware doesn't support it). This is because the PWSize.PTEW
field (PTE width) was only set for 8-byte PTEs (an extra shift of 1) in
config_htw_params() if PageGrain.ELPA (enable large physical
addressing) is set. On an XPA kernel though the size of PTEs is fixed
at 8 bytes regardless of whether XPA could actually be enabled.
Fix the initialisation of this field based on sizeof(pte_t) instead.
Fixes: c5b367835c ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven J. Hill <sjhill@realitydiluted.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13113/
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for supporting variable ASID masks, retrieve ASID masks
using functions in asm/cpu-info.h which accept struct cpuinfo_mips. This
will allow those functions to determine the ASID mask based upon the CPU
in a later patch. This also allows for the r3k & r8k cases to be handled
in Kconfig, which is arguably cleaner than the previous #ifdefs.
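The accessors presumably take this shape in asm/cpu-info.h:

static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
{
	return cpuinfo->asid_mask;
}

static inline void set_cpu_asid_mask(struct cpuinfo_mips *cpuinfo,
				     unsigned long asid_mask)
{
	cpuinfo->asid_mask = asid_mask;
}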
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13210/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
New Loongson 3 CPUs (since Loongson-3A R2, as opposed to Loongson-3A
R1, Loongson-3B R1 and Loongson-3B R2) have many enhancements, such as
FTLB,
L1-VCache, EI/DI/Wait/Prefetch instruction, DSP/DSPv2 ASE, User Local
register, Read-Inhibit/Execute-Inhibit, SFB (Store Fill Buffer), Fast
TLB refill support, etc.
This patch introduces a config option, CONFIG_LOONGSON3_ENHANCEMENT, to
enable those enhancements which are not probed at run time. If you want
a generic kernel to run on all Loongson 3 machines, please say 'N'
here. If you want a high-performance kernel to run on new Loongson 3
machines only, please say 'Y' here.
Some additional explanations:
1) The SFB sits between the core and the L1 cache; it causes memory
accesses to complete out of order, so writel/outl (and other similar
functions) need an I/O reorder barrier.
2) Loongson 3 has a bug whereby the di instruction cannot save the
irqflag, so arch_local_irq_save() is modified. Since CPU_MIPSR2 is
selected by CONFIG_LOONGSON3_ENHANCEMENT, the generic kernel doesn't
use ei/di at all.
3) CPU_HAS_PREFETCH is selected by CONFIG_LOONGSON3_ENHANCEMENT, so
MIPS_CPU_PREFETCH (used by uasm) probing is also put in this patch.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J . Hill <sjhill@realitydiluted.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12755/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Loongson-3A R2 has pwbase/pwfield/pwsize/pwctl registers in CP0 (this
is very similar to HTW) and lwdir/lwpte/lddir/ldpte instructions which
can be used for fast TLB refill.
[ralf@linux-mips.org: Resolve conflict.]
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J . Hill <sjhill@realitydiluted.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12754/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Loongson-2 has a 4-entry itlb which is a subset of the jtlb; Loongson-3
has a 4-entry itlb and a 4-entry dtlb which are subsets of the jtlb. We
should write the diag register to invalidate the itlb/dtlb when
flushing the jtlb, because the itlb/dtlb are not totally transparent to
software.
For Loongson-3A R2 (and newer), we should invalidate ITLB, DTLB, VTLB
and FTLB before we enable/disable FTLB.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J . Hill <sjhill@realitydiluted.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12753/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Loongson-3 maintains cache coherency in hardware, which means:
1) Its icache is coherent with the dcache.
2) Its dcaches don't alias (this may depend on PAGE_SIZE).
3) It maintains cache coherency across cores (and for DMA).
So we can skip most cache flush operations by setting relevant handlers
to `cache_noop' in `r4k_cache_init'.
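A sketch of what r4k_cache_init() can do for this case (the handler
list here is abridged and illustrative):

/* Loongson-3 caches are coherent in hardware */
__flush_cache_all	= cache_noop;
flush_icache_all	= cache_noop;
flush_data_cache_page	= (void *)cache_noop;
local_flush_data_cache_page = (void *)cache_noop;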
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J . Hill <sjhill@realitydiluted.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12752/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
local_r4k___flush_cache_all() is missing a special check for BMIPS5000
processors: we need to blast the S-cache, just like on other MTI
processors, since we have an inclusive cache. We also need an
additional __sync() to make sure this is completed.
Fixes: d74b0172e4 ("MIPS: BMIPS: Add special cache handling in c-r4k.c")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13012/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
BMIPS5000 and BMIPS5200 processors have no D-cache aliases, and this is
properly handled by the per-CPU override added at the end of
r4k_cache_init(). The problem is that the output of probe_pcache()
disagrees with that, since the override comes too late:
Primary instruction cache 32kB, VIPT, 4-way, linesize 64 bytes.
Primary data cache 32kB, 4-way, VIPT, cache aliases, linesize 32 bytes
With the change moved earlier, we now have a consistent output with the
settings we are intending to have:
Primary instruction cache 32kB, VIPT, 4-way, linesize 64 bytes.
Primary data cache 32kB, 4-way, VIPT, no aliases, linesize 32 bytes
Fixes: d74b0172e4 ("MIPS: BMIPS: Add special cache handling in c-r4k.c")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13011/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
BMIPS5000 and BMIPS5200 processors have their I-cache filling from the
D-cache. Since BMIPS_GENERIC does not (yet) provide a
cpu-feature-overrides.h file, this was not set anywhere, so make sure
the R4K cache detection takes care of that.
Fixes: d74b0172e4 ("MIPS: BMIPS: Add special cache handling in c-r4k.c")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13010/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's possible for pages to become visible prior to update_mmu_cache
running if a thread within the same address space preempts the current
thread or runs simultaneously on another CPU. That is, the following
scenario is possible:
CPU0                          CPU1
write to page
flush_dcache_page
flush_icache_page
set_pte_at
                              map page
update_mmu_cache
If CPU1 maps the page in between CPU0's set_pte_at, which marks it valid
& visible, and update_mmu_cache where the dcache flush occurs, then
CPU1's icache will fill from stale data (unless it fills from the
dcache, in which case all is good, but most MIPS CPUs don't have this
property).
Commit 4d46a67a3e ("MIPS: Fix race condition in lazy cache flushing.")
attempted to fix that by performing the dcache flush in
flush_icache_page such that it occurs before the set_pte_at call makes
the page visible. However it has the problem that not all code that
writes to pages exposed to userland calls flush_icache_page. There are
many callers of set_pte_at under mm/ and only 2 of them do call
flush_icache_page. Thus the race window between a page becoming visible
& being coherent between the icache & dcache remains open in some cases.
To illustrate some of the cases, a WARN was added to __update_cache with
this patch applied that triggered in cases where a page about to be
flushed from the dcache was not the last page provided to
flush_icache_page. That is, backtraces were obtained for cases in which
the race window is left open without this patch. The 2 standout examples
follow.
When forking a process:
[ 15.271842] [<80417630>] __update_cache+0xcc/0x188
[ 15.277274] [<80530394>] copy_page_range+0x56c/0x6ac
[ 15.282861] [<8042936c>] copy_process.part.54+0xd40/0x17ac
[ 15.289028] [<80429f80>] do_fork+0xe4/0x420
[ 15.293747] [<80413808>] handle_sys+0x128/0x14c
When exec'ing an ELF binary:
[ 14.445964] [<80417630>] __update_cache+0xcc/0x188
[ 14.451369] [<80538d88>] move_page_tables+0x414/0x498
[ 14.457075] [<8055d848>] setup_arg_pages+0x220/0x318
[ 14.462685] [<805b0f38>] load_elf_binary+0x530/0x12a0
[ 14.468374] [<8055ec3c>] search_binary_handler+0xbc/0x214
[ 14.474444] [<8055f6c0>] do_execveat_common+0x43c/0x67c
[ 14.480324] [<8055f938>] do_execve+0x38/0x44
[ 14.485137] [<80413808>] handle_sys+0x128/0x14c
These code paths write into a page, call flush_dcache_page then call
set_pte_at without flush_icache_page in between. The end result is that
the icache can become corrupted & userland processes may execute
unexpected or invalid code, typically resulting in a reserved
instruction exception, a trap or a segfault.
Fix this race condition fully by performing any cache maintenance
required to keep the icache & dcache in sync in set_pte_at, before the
page is made valid. This has the added bonus of ensuring the cache
maintenance always happens in one location, rather than being duplicated
in flush_icache_page & update_mmu_cache. It also matches the way other
architectures solve the same problem (see arm, ia64 & powerpc).
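The fix then roughly takes this shape (a simplified sketch; the real
implementation in arch/mips/include/asm/pgtable.h also avoids redundant
flushes):

static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pteval)
{
	extern void __update_cache(unsigned long address, pte_t pte);

	/* sync icache & dcache before the page becomes valid & visible */
	if (pte_present(pteval))
		__update_cache(addr, pteval);
	set_pte(ptep, pteval);
}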
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Ionela Voinescu <ionela.voinescu@imgtec.com>
Cc: Lars Persson <lars.persson@axis.com>
Fixes: 4d46a67a3e ("MIPS: Fix race condition in lazy cache flushing.")
Cc: Steven J. Hill <sjhill@realitydiluted.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable <stable@vger.kernel.org> # v4.1+
Patchwork: https://patchwork.linux-mips.org/patch/12722/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The following patch will expose __update_cache to highmem pages. Handle
them by mapping them in for the duration of the cache maintenance, just
like in __flush_dcache_page. The code for that isn't shared because we
need the page address in __update_cache, so sharing became messy. Given
that the entirety is an extra 5 lines, just duplicate it.
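The duplicated handling is essentially the usual highmem pattern,
sketched here:

unsigned long addr;

if (PageHighMem(page))
	addr = (unsigned long)kmap_atomic(page);
else
	addr = (unsigned long)page_address(page);

/* ... perform the cache maintenance on addr ... */

if (PageHighMem(page))
	kunmap_atomic((void *)addr);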
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Lars Persson <lars.persson@axis.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable <stable@vger.kernel.org> # v4.1+
Patchwork: https://patchwork.linux-mips.org/patch/12721/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When flush_dcache_page is called on an executable page, that page is
about to be provided to userland & we can presume that the icache
contains no valid entries for its address range. However if the icache
does not fill from the dcache then we cannot presume that the page's
content has been written back as far as the memories that the dcache
will fill from (i.e. L2 or further out).
This was being done for lowmem pages, but not for highmem which can lead
to icache corruption. Fix this by mapping highmem pages & flushing their
content from the dcache in __flush_dcache_page before providing the page
to userland, just as is done for lowmem pages.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Lars Persson <lars.persson@axis.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12720/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add cases supporting the M6250 CPU to various switch statements in the
core MIPS kernel code that define behaviour dependent upon the CPU.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Maciej W. Rozycki <macro@codesourcery.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12374/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add cases supporting the P6600 CPU to various switch statements in
core MIPS kernel code that define behaviour dependent upon the CPU.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Petri Gynther <pgynther@google.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12343/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch ensures that the dev parameter is checked for NULL before it
is dereferenced in massage_gfp_flags(). If dev is NULL, then fall back
to setting the GFP flags as requested and available.
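A simplified sketch of the guarded shape (the real function has more
zones and config dependencies; the flag choices here are illustrative):

static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
{
	gfp_t dma_flag;

	/* check dev before any dereference */
	if (dev == NULL)
		dma_flag = __GFP_DMA;
	else if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
		dma_flag = __GFP_DMA;
	else
		dma_flag = 0;

	return gfp | dma_flag;
}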
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11919/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Coherence Manager 3 (CM3) as present in I6400 can fill icache lines
effectively from dirty dcaches, so there is no need to flush dirty lines
from dcaches through to L2 prior to icache invalidation.
Set the MIPS_CACHE_IC_F_DC flag such that cpu_has_ic_fills_f_dc
evaluates to true, which avoids those dcache flushes.
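In probe_pcache() this presumably amounts to setting the flag for the
affected core:

case CPU_I6400:
	c->icache.flags |= MIPS_CACHE_IC_F_DC;
	break;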
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Manuel Lauss <manuel.lauss@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12180/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It is still necessary to handle icache coherency in flush_cache_range()
and copy_to_user_page() when the icache fills from the dcache, even
though the dcache does not need to be written back. However when this
handling was added in commit 2eaa7ec286 ("[MIPS] Handle I-cache
coherency in flush_cache_range()"), it did not do any icache flushing
when the icache fills from the dcache.
Therefore fix r4k_flush_cache_range() to run
local_r4k_flush_cache_range() without taking into account whether icache
fills from dcache, so that the icache coherency gets handled. Checks are
also added in local_r4k_flush_cache_range() so that the dcache blast
doesn't take place when icache fills from dcache.
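After the fix, the top-level function plausibly reduces to:

static void r4k_flush_cache_range(struct vm_area_struct *vma,
				  unsigned long start, unsigned long end)
{
	int exec = vma->vm_flags & VM_EXEC;

	/* no cpu_has_ic_fills_f_dc check here, so icache coherency is
	 * handled; local_r4k_flush_cache_range() now decides whether
	 * the dcache blast is needed */
	if (cpu_has_dc_aliases || exec)
		r4k_on_each_cpu(local_r4k_flush_cache_range, vma);
}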
A test to mmap a page PROT_READ|PROT_WRITE, modify code in it, and
mprotect it to VM_READ|VM_EXEC (similar to the case described in the
above commit) can hit this case quite easily to verify the fix.
A similar check was added in commit f8829caee3 ("[MIPS] Fix aliasing
bug in copy_to_user_page / copy_from_user_page"), so also fix
copy_to_user_page() similarly, to call flush_cache_page() without taking
into account whether icache fills from dcache, since flush_cache_page()
already takes that into account to avoid performing a dcache flush.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: Manuel Lauss <manuel.lauss@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12179/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>