kmap_coherent needs to run with preemption disabled so that we cannot
be scheduled away in the critical section, just like kmap_coherent on
mips and kmap_atomic in general.
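Roughly, the fix brackets the existing pagefault_disable()/
pagefault_enable() pair with preempt_disable()/preempt_enable(). A
minimal sketch of the shape of the change (the fixmap slot handling
from arch/sh/mm/kmap.c is elided and stubbed out here):

#include <linux/highmem.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>

void *kmap_coherent(struct page *page, unsigned long addr)
{
	/* No scheduling away while the per-CPU fixmap slot is in use. */
	preempt_disable();
	pagefault_disable();

	/* ... per-CPU, per-colour fixmap mapping set up here ... */

	return page_address(page);	/* stand-in for the real fixmap address */
}

void kunmap_coherent(void *kvaddr)
{
	/* ... mapping teardown and deferred TLB flush here ... */

	pagefault_enable();
	preempt_enable();
}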
Fixes: 8222dbe21e ("sched/preempt, mm/fault: Decouple preemption from the page fault logic")
Reported-by: Hans Verkuil <hverkuil@xs4all.nl>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Tested-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Rich Felker <dalias@libc.org>

This follows the ARM change c01778001a
("ARM: 6379/1: Assume new page cache pages have dirty D-cache") for the
same rationale:
    There are places in Linux where writes to newly allocated page
    cache pages happen without a subsequent call to flush_dcache_page()
    (several PIO drivers including USB HCD). This patch changes the
    meaning of PG_arch_1 to be PG_dcache_clean and always flush the
    D-cache for a newly mapped page in update_mmu_cache().

This addresses issues seen with executing binaries from MMC, in
addition to some of the other HCDs that don't explicitly do cache
management for their pipe-in buffers.
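For reference, the flush for not-yet-clean pages ends up in
__update_cache() along these lines (a sketch along the lines of the sh
implementation; note the PG_dcache_clean test is the inverse of the old
PG_dcache_dirty logic, so unflagged pages are assumed dirty):

#include <linux/mm.h>
#include <asm/cacheflush.h>	/* __flush_purge_region(), sh-specific */
#include <asm/processor.h>	/* boot_cpu_data, sh-specific */

void __update_cache(struct vm_area_struct *vma,
		    unsigned long address, pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);

	if (!boot_cpu_data.dcache.n_aliases)
		return;

	if (pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		/*
		 * New page cache pages start out without PG_dcache_clean
		 * set, so anything we have no record of having cleaned is
		 * flushed before first use; the bit then records that the
		 * flush has happened.
		 */
		if (!test_and_set_bit(PG_dcache_clean, &page->flags))
			__flush_purge_region(page_address(page), PAGE_SIZE);
	}
}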
Requested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

In the SMP VIPT case the page copy/clear ops still perform colouring,
so care needs to be taken that CPUs don't end up stepping on each
other; we give them a bit of room to work with.
At the same time, we reduce the worst-case colouring given that these
pages are always consumed.
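Concretely, the room comes from giving each CPU its own block of
fixmap slots, one per colour, and indexing into the block with the
colour of the address. A sketch of that slot selection (cmap_idx() is
a hypothetical helper name; the constants follow the sh fixmap layout):

#include <linux/smp.h>
#include <asm/fixmap.h>

#ifndef FIX_N_COLOURS
#define FIX_N_COLOURS 8		/* worst-case colours per CPU */
#endif

/*
 * Each CPU owns FIX_N_COLOURS consecutive fixmap slots, so
 * copy/clear ops running on different CPUs can never step on
 * each other's mappings.
 */
static enum fixed_addresses cmap_idx(unsigned long addr)
{
	return FIX_CMAP_END -
		(((addr >> PAGE_SHIFT) & (FIX_N_COLOURS - 1)) +
		 (FIX_N_COLOURS * smp_processor_id()));
}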
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This fixes up a number of outstanding issues observed with old
mappings on the same colour hanging around. This will eventually want
more optimal handling, but it is a safe fallback until all of the
corner cases have been handled.
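The safe fallback amounts to unconditionally purging the slot's cache
lines when the temporary mapping is torn down, so stale data on that
colour cannot survive to the next user. A minimal sketch of the idea
(kunmap_coherent_purge() is a hypothetical helper name; the full
teardown also clears the PTE and flushes the TLB):

#include <asm/cacheflush.h>
#include <asm/fixmap.h>

static void kunmap_coherent_purge(void *kvaddr)
{
	unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;

	/*
	 * Not optimal -- this flushes even if nothing was written
	 * through this mapping -- but it is safe until the remaining
	 * corner cases are handled.
	 */
	__flush_purge_region((void *)vaddr, PAGE_SIZE);

	/* ... pte_clear() and TLB flush follow ... */
}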
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This fixes up the kmap_coherent()/kunmap_coherent() interface for
recent changes both in the page fault path and the shared cache
flushers, and adds some optimizations.

One of the key things to note here is that the TLB flush is deferred
until the unmap, while the call to update_mmu_cache() goes away
entirely, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
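A sketch of the reworked pair along these lines (slot selection as in
the colouring sketch above; the init-time setup of kmap_coherent_pte,
the PTE array backing the fixmap slots, is elided, and the later
preemption fix adds preempt_disable()/preempt_enable() around this
pair as well):

#include <linux/highmem.h>
#include <linux/uaccess.h>
#include <asm/fixmap.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>

static pte_t *kmap_coherent_pte;	/* initialized at boot, elided */

void *kmap_coherent(struct page *page, unsigned long addr)
{
	enum fixed_addresses idx;
	unsigned long vaddr;

	pagefault_disable();

	idx = FIX_CMAP_END -
		(((addr >> PAGE_SHIFT) & (FIX_N_COLOURS - 1)) +
		 (FIX_N_COLOURS * smp_processor_id()));
	vaddr = __fix_to_virt(idx);

	/* Just install the PTE: no TLB flush, no update_mmu_cache(). */
	set_pte(kmap_coherent_pte - idx, mk_pte(page, PAGE_KERNEL));

	return (void *)vaddr;
}

void kunmap_coherent(void *kvaddr)
{
	unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;
	enum fixed_addresses idx = __virt_to_fix(vaddr);

	pte_clear(&init_mm, vaddr, kmap_coherent_pte - idx);

	/* The TLB flush is deferred to here, at unmap time. */
	local_flush_tlb_one(get_asid(), vaddr);

	pagefault_enable();
}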
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
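To illustrate the generic usage, the shared copy path can pick
kmap_coherent() when the CPU reports D-cache aliases and fall back to
kmap_atomic() otherwise. A sketch along the lines of the shared
copy_user_highpage() (guard conditions abbreviated, modern one-argument
kmap_atomic() signature assumed):

#include <linux/highmem.h>
#include <asm/cacheflush.h>	/* pages_do_alias(), sh-specific */
#include <asm/processor.h>	/* boot_cpu_data, sh-specific */

void copy_user_highpage(struct page *to, struct page *from,
			unsigned long vaddr, struct vm_area_struct *vma)
{
	void *vfrom, *vto;

	vto = kmap_atomic(to);

	if (boot_cpu_data.dcache.n_aliases && page_mapped(from)) {
		/* Aliasing CPU: read through a congruent kernel mapping. */
		vfrom = kmap_coherent(from, vaddr);
		copy_page(vto, vfrom);
		kunmap_coherent(vfrom);
	} else {
		/* No aliases (or unmapped page): any kernel mapping will do. */
		vfrom = kmap_atomic(from);
		copy_page(vto, vfrom);
		kunmap_atomic(vfrom);
	}

	if (pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK))
		__flush_purge_region(vto, PAGE_SIZE);

	kunmap_atomic(vto);
	/* Make sure the copy is visible to other CPUs before use. */
	smp_wmb();
}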
Signed-off-by: Paul Mundt <lethal@linux-sh.org>