MIPS: Perform post-DMA cache flushes on systems with MAARs
Recent CPUs from Imagination Technologies such as the I6400 or P6600 are able to speculatively fetch data from memory into caches. This means that if used in a system with non-coherent DMA they require that caches be invalidated after a device performs DMA, and before the CPU reads the DMA'd data, in order to ensure that stale values weren't speculatively prefetched.

Such CPUs also introduced Memory Accessibility Attribute Registers (MAARs) in order to control the regions in which they are allowed to speculate. Thus we can use the presence of MAARs as a good indication that the CPU requires the above cache maintenance. Use the presence of MAARs to determine the result of cpu_needs_post_dma_flush() in the default case, in order to handle these recent CPUs correctly.

Note that the return type of cpu_needs_post_dma_flush() is changed to bool, such that it's clearer what's happening when cpu_has_maar is cast to bool for the return value. If this patch were backported to a pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is currently 1ull<<30, so when truncated to an int it gives a non-zero value anyway, but even so the implicit conversion from long long int to bool makes it clearer to understand what will happen than the implicit conversion from long long int to int would. The bool return type also fits this usage better semantically, so it seems like an all-round win.

Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting the return type change.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Ed Blake <ed.blake@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
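A small standalone C sketch of the int-vs-bool point above. This is illustrative only: the FAKE_* names and the user-space harness are invented for the example, and converting an out-of-range value to int is formally implementation-defined, though on common targets it simply truncates to the low 32 bits.

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for a pre-v4.7 MIPS_CPU_MAAR-style option bit above bit 31. */
#define FAKE_CPU_MAAR		(1ull << 34)

static unsigned long long fake_cpu_options = FAKE_CPU_MAAR;

/* Returning the masked option through int drops bit 34 when int is 32 bits. */
static int needs_flush_as_int(void)
{
	return fake_cpu_options & FAKE_CPU_MAAR;	/* 0: flag looks absent */
}

/* Returning it through bool first tests for non-zero, so the flag survives. */
static bool needs_flush_as_bool(void)
{
	return fake_cpu_options & FAKE_CPU_MAAR;	/* true: correct */
}

int main(void)
{
	printf("int  return: %d\n", needs_flush_as_int());	/* prints 0 */
	printf("bool return: %d\n", needs_flush_as_bool());	/* prints 1 */
	return 0;
}

With the current 1ull<<30 value both conversions happen to yield a non-zero result, which is why the problem would only bite on a backport to a pre-v4.7 kernel.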
parent d8550860d9
commit cad482c1b1
@@ -68,12 +68,25 @@ static inline struct page *dma_addr_to_page(struct device *dev,
  * systems and only the R10000 and R12000 are used in such systems, the
  * SGI IP28 Indigo² rsp. SGI IP32 aka O2.
  */
-static inline int cpu_needs_post_dma_flush(struct device *dev)
+static inline bool cpu_needs_post_dma_flush(struct device *dev)
 {
-	return !plat_device_is_coherent(dev) &&
-	       (boot_cpu_type() == CPU_R10000 ||
-		boot_cpu_type() == CPU_R12000 ||
-		boot_cpu_type() == CPU_BMIPS5000);
+	if (plat_device_is_coherent(dev))
+		return false;
+
+	switch (boot_cpu_type()) {
+	case CPU_R10000:
+	case CPU_R12000:
+	case CPU_BMIPS5000:
+		return true;
+
+	default:
+		/*
+		 * Presence of MAARs suggests that the CPU supports
+		 * speculatively prefetching data, and therefore requires
+		 * the post-DMA flush/invalidate.
+		 */
+		return cpu_has_maar;
+	}
 }
 
 static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
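For context, here is a rough user-space model of how a helper like this gates the post-DMA cache maintenance in a sync-for-CPU path. All fake_* names are invented for the sketch; this is not the kernel's dma-default.c, only an illustration of the decision flow under the assumption of a non-coherent device on a MAAR-capable CPU.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Invented stand-ins; none of these are the kernel's real symbols. */
enum fake_cpu { FAKE_CPU_R10000, FAKE_CPU_R12000, FAKE_CPU_BMIPS5000, FAKE_CPU_OTHER };

struct fake_device {
	bool coherent_dma;		/* models plat_device_is_coherent() */
};

static enum fake_cpu fake_boot_cpu_type(void)
{
	return FAKE_CPU_OTHER;		/* e.g. an I6400 or P6600 */
}

static bool fake_cpu_has_maar = true;

/* Mirrors the decision structure of the patched cpu_needs_post_dma_flush(). */
static bool fake_cpu_needs_post_dma_flush(const struct fake_device *dev)
{
	if (dev->coherent_dma)
		return false;

	switch (fake_boot_cpu_type()) {
	case FAKE_CPU_R10000:
	case FAKE_CPU_R12000:
	case FAKE_CPU_BMIPS5000:
		return true;
	default:
		/* MAARs imply speculative prefetch, so invalidate after DMA. */
		return fake_cpu_has_maar;
	}
}

/* Sketch of a sync-for-CPU path: only invalidate caches when required. */
static void fake_dma_sync_for_cpu(const struct fake_device *dev,
				  void *buf, size_t size)
{
	if (fake_cpu_needs_post_dma_flush(dev))
		printf("invalidate %zu bytes at %p before the CPU reads them\n",
		       size, buf);
	else
		printf("no post-DMA cache maintenance needed\n");
}

int main(void)
{
	struct fake_device noncoherent = { .coherent_dma = false };
	char dma_buffer[64];

	fake_dma_sync_for_cpu(&noncoherent, dma_buffer, sizeof(dma_buffer));
	return 0;
}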