Currently flush_tlb_mm flushes the entire TLB. Switch it to a
PID-aware flush. This also improves the readability of flush_tlb_pid.
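As a rough sketch of the shape of the change (flush_tlb_pid() and the
get_pid_from_context() helper are stand-ins here, not quoted from the
diff):

	/* Sketch only, not the literal nios2 implementation. */
	void flush_tlb_mm(struct mm_struct *mm)
	{
		/* Flush only TLB entries tagged with this mm's PID,
		 * instead of invalidating every entry in the TLB.
		 */
		flush_tlb_pid(get_pid_from_context(&mm->context));
	}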
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
This matches the other functions in this file that use TLBMISC.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
TLBMISC_RD does not use the PID bits, and when writing invalid TLB
entries the PID is not required because the address will never match.
This is just a tidy-up.
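Illustratively (WRCTL/CTL_TLBMISC are the nios2 control-register
accessors; the PID/WAY field shifts below are placeholders, not the
kernel's exact macro names):

	/* Before: PID bits were computed and OR'd in, although a
	 * TLBMISC read ignores them.
	 */
	WRCTL(CTL_TLBMISC, TLBMISC_RD | (pid << PID_SHIFT) | (way << WAY_SHIFT));

	/* After: drop the PID for reads, and likewise when writing
	 * invalid entries whose address can never match.
	 */
	WRCTL(CTL_TLBMISC, TLBMISC_RD | (way << WAY_SHIFT));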
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
There is no need for a complicated calculation to find an invalid
address that maps to the same TLB index as the entry being
invalidated. Taking the TLB entry's address and setting the two top
bits puts the address into the kernel TLB bypass range, where it can
never match a translation, while still mapping to the same TLB line.
This is also a bug fix for flush_tlb_pid, which is currently unused
but does not set PTEADDR to an invalid address.
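A minimal standalone illustration of the trick (PAGE_SHIFT and the
0xC0000000 bypass base are assumptions taken from the description
above, not quoted from the patch):

	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PAGE_MASK	(~((1UL << PAGE_SHIFT) - 1))

	/* OR in the two top bits: the result lands in the MMU bypass
	 * range (>= 0xC0000000), so it can never match a translation,
	 * while the TLB line-index bits below remain unchanged.
	 */
	static unsigned long invalid_match_addr(unsigned long addr)
	{
		return (addr & PAGE_MASK) | 0xC0000000UL;
	}

	int main(void)
	{
		printf("%#lx -> %#lx\n", 0x00401000UL,
		       invalid_match_addr(0x00401000UL));
		return 0;
	}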
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
flush_tlb_page is for flushing user pages, so it should not be using
flush_tlb_one (which flushes a page regardless of PID).

Implement it with flush_tlb_range instead, which is a user flush that
does the right thing.

flush_tlb_one is made static to mm/tlb.c because it is a confusing
interface. It is used in do_page_fault to flush the kernel non-linear
mappings, so that call is replaced with flush_tlb_kernel_page. The
end result is functionally identical.
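The resulting flush_tlb_page then reduces to a one-page range flush,
along the lines of:

	void flush_tlb_page(struct vm_area_struct *vma, unsigned long address)
	{
		/* PID-aware user flush of exactly one page. */
		flush_tlb_range(vma, address, address + PAGE_SIZE);
	}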
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>