arm64: turn flush_dcache_mmap_lock into a no-op
ARM64 doesn't walk the VMA tree in its flush_dcache_page() implementation, so has no need to take the tree_lock.

Link: http://lkml.kernel.org/r/20180313132639.17387-4-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 427c896f26
parent 60a052719a
@@ -140,10 +140,8 @@ static inline void __flush_icache_all(void)
 	dsb(ish);
 }
 
-#define flush_dcache_mmap_lock(mapping) \
-	spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
-	spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping)		do { } while (0)
+#define flush_dcache_mmap_unlock(mapping)	do { } while (0)
 
 /*
  * We don't appear to need to do anything here.  In fact, if we did, we'd
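For context on the reasoning in the commit message: flush_dcache_mmap_lock() exists for architectures whose flush_dcache_page() must walk the VMA interval tree in mapping->i_mmap to find every user alias of a page. The sketch below illustrates that pattern; it is not arm64 code, and the function name and body are illustrative only (loosely modelled on what aliasing-cache architectures such as 32-bit ARM do). Since arm64's flush_dcache_page() never performs this walk, its lock and unlock macros can safely become no-ops.

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Illustrative sketch only -- not taken from the arm64 tree. */
static void flush_dcache_aliases_sketch(struct address_space *mapping,
					struct page *page)
{
	struct vm_area_struct *vma;
	pgoff_t pgoff = page->index;

	/* Architectures that need the walk take the lock here... */
	flush_dcache_mmap_lock(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
		/* ...and flush the user-space alias for each mapping
		 * (the actual flush is architecture-specific). */
	}
	flush_dcache_mmap_unlock(mapping);
}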