Merge tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull kmap updates from Thomas Gleixner:
 "The new preemptible kmap_local() implementation:

   - Consolidate all kmap_atomic() internals into a generic
     implementation which builds the base for the kmap_local() API, and
     make the kmap_atomic() interfaces wrappers which handle the
     disabling/enabling of preemption and pagefaults.

   - Switch the storage from per-CPU to per-task and provide scheduler
     support for clearing mappings when scheduling out and restoring
     them when scheduling back in.

   - Merge the migrate_disable/enable() code, which is also part of the
     scheduler pull request. This was required to make the kmap_local()
     interface available which does not disable preemption when a
     mapping is established. It has to disable migration instead to
     guarantee that the virtual address of the mapped slot is the same
     across preemption.

   - Provide better debug facilities: guard pages and enforced
     utilization of the mapping mechanics on 64bit systems when the
     architecture allows it.

   - Provide the new kmap_local() API which can now be used to clean up
     the kmap_atomic() usage sites all over the place. Most of the
     usage sites do not require the implicit disabling of preemption
     and pagefaults, so the penalty on 64bit and 32bit non-highmem
     systems is removed and quite some of the code can be simplified.
     A wholesale conversion is not possible because some usages depend
     on the implicit side effects and some need to be cleaned up
     because they work around these side effects.

     The migrate-disable side effect is only effective on highmem
     systems and when enforced debugging is enabled. On 64bit and
     32bit non-highmem systems the overhead is completely avoided"

* tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
  ARM: highmem: Fix cache_is_vivt() reference
  x86/crashdump/32: Simplify copy_oldmem_page()
  io-mapping: Provide iomap_local variant
  mm/highmem: Provide kmap_local*
  sched: highmem: Store local kmaps in task struct
  x86: Support kmap_local() forced debugging
  mm/highmem: Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
  mm/highmem: Provide and use CONFIG_DEBUG_KMAP_LOCAL
  microblaze/mm/highmem: Add dropped #ifdef back
  xtensa/mm/highmem: Make generic kmap_atomic() work correctly
  mm/highmem: Take kmap_high_get() properly into account
  highmem: High implementation details and document API
  Documentation/io-mapping: Remove outdated blurb
  io-mapping: Cleanup atomic iomap
  mm/highmem: Remove the old kmap_atomic cruft
  highmem: Get rid of kmap_types.h
  xtensa/mm/highmem: Switch to generic kmap atomic
  sparc/mm/highmem: Switch to generic kmap atomic
  powerpc/mm/highmem: Switch to generic kmap atomic
  nds32/mm/highmem: Switch to generic kmap atomic
  ...
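To make the practical difference concrete, here is a minimal sketch (not taken from the series; the copy_from_page_*() helpers are invented for illustration, only kmap_atomic()/kunmap_atomic() and kmap_local_page()/kunmap_local() are the real interfaces) contrasting the old pattern with the new one:

	#include <linux/highmem.h>
	#include <linux/string.h>

	/* Old style: the mapped section implicitly runs with preemption
	 * and pagefaults disabled.
	 */
	static void copy_from_page_atomic(struct page *page, void *dst, size_t len)
	{
		void *src = kmap_atomic(page);

		memcpy(dst, src, len);
		kunmap_atomic(src);
	}

	/* New style: only migration is disabled (and only on highmem or
	 * forced-debug configurations), so the section stays preemptible
	 * and may take pagefaults.
	 */
	static void copy_from_page_local(struct page *page, void *dst, size_t len)
	{
		void *src = kmap_local_page(page);

		memcpy(dst, src, len);
		kunmap_local(src);
	}

Nested kmap_local() mappings, like the atomic ones, have to be unmapped in reverse order because the slots are managed as a stack.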
@@ -20,78 +20,72 @@ A mapping object is created during driver initialization using::
 mappable, while 'size' indicates how large a mapping region to
 enable. Both are in bytes.
 
-This _wc variant provides a mapping which may only be used
-with the io_mapping_map_atomic_wc or io_mapping_map_wc.
+This _wc variant provides a mapping which may only be used with
+io_mapping_map_atomic_wc(), io_mapping_map_local_wc() or
+io_mapping_map_wc().
 
-With this mapping object, individual pages can be mapped either atomically
-or not, depending on the necessary scheduling environment. Of course, atomic
-maps are more efficient::
+With this mapping object, individual pages can be mapped either temporarily
+or long term, depending on the requirements. Of course, temporary maps are
+more efficient. They come in two flavours::
+
+	void *io_mapping_map_local_wc(struct io_mapping *mapping,
+				      unsigned long offset)
 
 	void *io_mapping_map_atomic_wc(struct io_mapping *mapping,
 				       unsigned long offset)
 
-'offset' is the offset within the defined mapping region.
-Accessing addresses beyond the region specified in the
-creation function yields undefined results. Using an offset
-which is not page aligned yields an undefined result. The
-return value points to a single page in CPU address space.
+'offset' is the offset within the defined mapping region. Accessing
+addresses beyond the region specified in the creation function yields
+undefined results. Using an offset which is not page aligned yields an
+undefined result. The return value points to a single page in CPU address
+space.
 
-This _wc variant returns a write-combining map to the
-page and may only be used with mappings created by
-io_mapping_create_wc
+This _wc variant returns a write-combining map to the page and may only be
+used with mappings created by io_mapping_create_wc()
 
-Note that the task may not sleep while holding this page
-mapped.
+Temporary mappings are only valid in the context of the caller. The mapping
+is not guaranteed to be globaly visible.
+
+io_mapping_map_local_wc() has a side effect on X86 32bit as it disables
+migration to make the mapping code work. No caller can rely on this side
+effect.
+
+io_mapping_map_atomic_wc() has the side effect of disabling preemption and
+pagefaults. Don't use in new code. Use io_mapping_map_local_wc() instead.
+
+Nested mappings need to be undone in reverse order because the mapping
+code uses a stack for keeping track of them::
+
+ addr1 = io_mapping_map_local_wc(map1, offset1);
+ addr2 = io_mapping_map_local_wc(map2, offset2);
+ ...
+ io_mapping_unmap_local(addr2);
+ io_mapping_unmap_local(addr1);
 
 The mappings are released with::
 
+	void io_mapping_unmap_local(void *vaddr)
 	void io_mapping_unmap_atomic(void *vaddr)
 
-'vaddr' must be the value returned by the last
-io_mapping_map_atomic_wc call. This unmaps the specified
-page and allows the task to sleep once again.
+'vaddr' must be the value returned by the last io_mapping_map_local_wc() or
+io_mapping_map_atomic_wc() call. This unmaps the specified mapping and
+undoes the side effects of the mapping functions.
 
-If you need to sleep while holding the lock, you can use the non-atomic
-variant, although they may be significantly slower.
-
-::
+If you need to sleep while holding a mapping, you can use the regular
+variant, although this may be significantly slower::
 
 	void *io_mapping_map_wc(struct io_mapping *mapping,
 				unsigned long offset)
 
-This works like io_mapping_map_atomic_wc except it allows
-the task to sleep while holding the page mapped.
-
+This works like io_mapping_map_atomic/local_wc() except it has no side
+effects and the pointer is globaly visible.
 
 The mappings are released with::
 
 	void io_mapping_unmap(void *vaddr)
 
-This works like io_mapping_unmap_atomic, except it is used
-for pages mapped with io_mapping_map_wc.
+Use for pages mapped with io_mapping_map_wc().
 
 At driver close time, the io_mapping object must be freed::
 
 	void io_mapping_free(struct io_mapping *mapping)
-
-Current Implementation
-======================
-
-The initial implementation of these functions uses existing mapping
-mechanisms and so provides only an abstraction layer and no new
-functionality.
-
-On 64-bit processors, io_mapping_create_wc calls ioremap_wc for the whole
-range, creating a permanent kernel-visible mapping to the resource. The
-map_atomic and map functions add the requested offset to the base of the
-virtual address returned by ioremap_wc.
-
-On 32-bit processors with HIGHMEM defined, io_mapping_map_atomic_wc uses
-kmap_atomic_pfn to map the specified page in an atomic fashion;
-kmap_atomic_pfn isn't really supposed to be used with device pages, but it
-provides an efficient mapping for this usage.
-
-On 32-bit processors without HIGHMEM defined, io_mapping_map_atomic_wc and
-io_mapping_map_wc both use ioremap_wc, a terribly inefficient function which
-performs an IPI to inform all processors about the new mapping. This results
-in a significant performance penalty.
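For context, a minimal usage sketch of the local variant documented above. It is not part of the patch: the my_dev structure, the my_dev_*() helpers and the register offsets are invented here; only the io_mapping_*() calls and memcpy_toio() are real kernel interfaces.

	#include <linux/io-mapping.h>
	#include <linux/io.h>
	#include <linux/errno.h>

	struct my_dev {
		struct io_mapping *aperture;	/* hypothetical device aperture */
	};

	static int my_dev_init(struct my_dev *dev, resource_size_t base,
			       unsigned long size)
	{
		/* Create the mapping object once at driver init time. */
		dev->aperture = io_mapping_create_wc(base, size);
		return dev->aperture ? 0 : -ENOMEM;
	}

	static void my_dev_write_tile(struct my_dev *dev, unsigned long offset,
				      const void *data, size_t len)
	{
		/* 'offset' must be page aligned and within the created region. */
		void __iomem *vaddr = io_mapping_map_local_wc(dev->aperture, offset);

		memcpy_toio(vaddr, data, len);
		io_mapping_unmap_local(vaddr);	/* undo in reverse order if nested */
	}

	static void my_dev_fini(struct my_dev *dev)
	{
		io_mapping_free(dev->aperture);
	}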
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-/* Dummy header just to define km_type. */
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
@@ -507,6 +507,7 @@ config LINUX_RAM_BASE
 config HIGHMEM
 	bool "High Memory Support"
 	select ARCH_DISCONTIGMEM_ENABLE
+	select KMAP_LOCAL
 	help
 	  With ARC 2G:2G address split, only upper 2G is directly addressable by
 	  kernel. Enable this to potentially allow access to rest of 2G and PAE
@ -9,17 +9,29 @@
|
|||
#ifdef CONFIG_HIGHMEM
|
||||
|
||||
#include <uapi/asm/page.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
#define FIXMAP_SIZE PGDIR_SIZE
|
||||
#define PKMAP_SIZE PGDIR_SIZE
|
||||
|
||||
/* start after vmalloc area */
|
||||
#define FIXMAP_BASE (PAGE_OFFSET - FIXMAP_SIZE - PKMAP_SIZE)
|
||||
#define FIXMAP_SIZE PGDIR_SIZE /* only 1 PGD worth */
|
||||
#define KM_TYPE_NR ((FIXMAP_SIZE >> PAGE_SHIFT)/NR_CPUS)
|
||||
#define FIXMAP_ADDR(nr) (FIXMAP_BASE + ((nr) << PAGE_SHIFT))
|
||||
|
||||
#define FIX_KMAP_SLOTS (KM_MAX_IDX * NR_CPUS)
|
||||
#define FIX_KMAP_BEGIN (0UL)
|
||||
#define FIX_KMAP_END ((FIX_KMAP_BEGIN + FIX_KMAP_SLOTS) - 1)
|
||||
|
||||
#define FIXADDR_TOP (FIXMAP_BASE + (FIX_KMAP_END << PAGE_SHIFT))
|
||||
|
||||
/*
|
||||
* This should be converted to the asm-generic version, but of course this
|
||||
* is needlessly different from all other architectures. Sigh - tglx
|
||||
*/
|
||||
#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT))
|
||||
#define __virt_to_fix(x) (((FIXADDR_TOP - ((x) & PAGE_MASK))) >> PAGE_SHIFT)
|
||||
|
||||
/* start after fixmap area */
|
||||
#define PKMAP_BASE (FIXMAP_BASE + FIXMAP_SIZE)
|
||||
#define PKMAP_SIZE PGDIR_SIZE
|
||||
#define LAST_PKMAP (PKMAP_SIZE >> PAGE_SHIFT)
|
||||
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
|
||||
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
|
||||
|
@ -29,11 +41,13 @@
|
|||
|
||||
extern void kmap_init(void);
|
||||
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
|
||||
|
||||
static inline void flush_cache_kmaps(void)
|
||||
{
|
||||
flush_cache_all();
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
||||
|
|
|
@@ -1,14 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2015 Synopsys, Inc. (www.synopsys.com)
- */
-
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-/*
- * We primarily need to define KM_TYPE_NR here but that in turn
- * is a function of PGDIR_SIZE etc.
- * To avoid circular deps issue, put everything in asm/highmem.h
- */
-#endif
@ -36,9 +36,8 @@
|
|||
* This means each only has 1 PGDIR_SIZE worth of kvaddr mappings, which means
|
||||
* 2M of kvaddr space for typical config (8K page and 11:8:13 traversal split)
|
||||
*
|
||||
* - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE
|
||||
* slots across NR_CPUS would be more than sufficient (generic code defines
|
||||
* KM_TYPE_NR as 20).
|
||||
* - The fixed KMAP slots for kmap_local/atomic() require KM_MAX_IDX slots per
|
||||
* CPU. So the number of CPUs sharing a single PTE page is limited.
|
||||
*
|
||||
* - pkmap being preemptible, in theory could do with more than 256 concurrent
|
||||
* mappings. However, generic pkmap code: map_new_virtual(), doesn't traverse
|
||||
|
@ -47,48 +46,6 @@
|
|||
*/
|
||||
|
||||
extern pte_t * pkmap_page_table;
|
||||
static pte_t * fixmap_page_table;
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
int idx, cpu_idx;
|
||||
unsigned long vaddr;
|
||||
|
||||
cpu_idx = kmap_atomic_idx_push();
|
||||
idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
|
||||
vaddr = FIXMAP_ADDR(idx);
|
||||
|
||||
set_pte_at(&init_mm, vaddr, fixmap_page_table + idx,
|
||||
mk_pte(page, prot));
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kv)
|
||||
{
|
||||
unsigned long kvaddr = (unsigned long)kv;
|
||||
|
||||
if (kvaddr >= FIXMAP_BASE && kvaddr < (FIXMAP_BASE + FIXMAP_SIZE)) {
|
||||
|
||||
/*
|
||||
* Because preemption is disabled, this vaddr can be associated
|
||||
* with the current allocated index.
|
||||
* But in case of multiple live kmap_atomic(), it still relies on
|
||||
* callers to unmap in right order.
|
||||
*/
|
||||
int cpu_idx = kmap_atomic_idx();
|
||||
int idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
|
||||
|
||||
WARN_ON(kvaddr != FIXMAP_ADDR(idx));
|
||||
|
||||
pte_clear(&init_mm, kvaddr, fixmap_page_table + idx);
|
||||
local_flush_tlb_kernel_range(kvaddr, kvaddr + PAGE_SIZE);
|
||||
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
|
||||
{
|
||||
|
@ -108,10 +65,9 @@ void __init kmap_init(void)
|
|||
{
|
||||
/* Due to recursive include hell, we can't do this in processor.h */
|
||||
BUILD_BUG_ON(PAGE_OFFSET < (VMALLOC_END + FIXMAP_SIZE + PKMAP_SIZE));
|
||||
|
||||
BUILD_BUG_ON(KM_TYPE_NR > PTRS_PER_PTE);
|
||||
pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE);
|
||||
|
||||
BUILD_BUG_ON(LAST_PKMAP > PTRS_PER_PTE);
|
||||
fixmap_page_table = alloc_kmap_pgtable(FIXMAP_BASE);
|
||||
BUILD_BUG_ON(FIX_KMAP_SLOTS > PTRS_PER_PTE);
|
||||
|
||||
pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE);
|
||||
alloc_kmap_pgtable(FIXMAP_BASE);
|
||||
}
|
||||
|
|
|
@@ -1499,6 +1499,7 @@ config HAVE_ARCH_PFN_VALID
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU
+	select KMAP_LOCAL
 	help
 	  The address space of ARM processors is only 4 Gigabytes large
 	  and it has to accommodate user address space, kernel address
@ -7,14 +7,14 @@
|
|||
#define FIXADDR_TOP (FIXADDR_END - PAGE_SIZE)
|
||||
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
enum fixed_addresses {
|
||||
FIX_EARLYCON_MEM_BASE,
|
||||
__end_of_permanent_fixed_addresses,
|
||||
|
||||
FIX_KMAP_BEGIN = __end_of_permanent_fixed_addresses,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
|
||||
/* Support writing RO kernel text via kprobes, jump labels, etc. */
|
||||
FIX_TEXT_POKE0,
|
||||
|
|
|
@ -2,7 +2,8 @@
|
|||
#ifndef _ASM_HIGHMEM_H
|
||||
#define _ASM_HIGHMEM_H
|
||||
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/cachetype.h>
|
||||
#include <asm/fixmap.h>
|
||||
|
||||
#define PKMAP_BASE (PAGE_OFFSET - PMD_SIZE)
|
||||
#define LAST_PKMAP PTRS_PER_PTE
|
||||
|
@ -46,19 +47,32 @@ extern pte_t *pkmap_page_table;
|
|||
|
||||
#ifdef ARCH_NEEDS_KMAP_HIGH_GET
|
||||
extern void *kmap_high_get(struct page *page);
|
||||
#else
|
||||
|
||||
static inline void *arch_kmap_local_high_get(struct page *page)
|
||||
{
|
||||
if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt())
|
||||
return NULL;
|
||||
return kmap_high_get(page);
|
||||
}
|
||||
#define arch_kmap_local_high_get arch_kmap_local_high_get
|
||||
|
||||
#else /* ARCH_NEEDS_KMAP_HIGH_GET */
|
||||
static inline void *kmap_high_get(struct page *page)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
#endif
|
||||
#endif /* !ARCH_NEEDS_KMAP_HIGH_GET */
|
||||
|
||||
/*
|
||||
* The following functions are already defined by <linux/highmem.h>
|
||||
* when CONFIG_HIGHMEM is not set.
|
||||
*/
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
extern void *kmap_atomic_pfn(unsigned long pfn);
|
||||
#endif
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) \
|
||||
local_flush_tlb_kernel_page(vaddr)
|
||||
|
||||
#define arch_kmap_local_pre_unmap(vaddr) \
|
||||
do { \
|
||||
if (cache_is_vivt()) \
|
||||
__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); \
|
||||
} while (0)
|
||||
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
local_flush_tlb_kernel_page(vaddr)
|
||||
|
||||
#endif
|
||||
|
|
|
@@ -1,10 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ARM_KMAP_TYPES_H
-#define __ARM_KMAP_TYPES_H
-
-/*
- * This is the "bare minimum". AIO seems to require this.
- */
-#define KM_TYPE_NR 16
-
-#endif
@ -19,7 +19,6 @@ obj-$(CONFIG_MODULES) += proc-syms.o
|
|||
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
|
||||
|
||||
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
|
||||
obj-$(CONFIG_HIGHMEM) += highmem.o
|
||||
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
|
||||
obj-$(CONFIG_ARM_PV_FIXUP) += pv-fixup-asm.o
|
||||
|
||||
|
|
|
@ -1,121 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* arch/arm/mm/highmem.c -- ARM highmem support
|
||||
*
|
||||
* Author: Nicolas Pitre
|
||||
* Created: september 8, 2008
|
||||
* Copyright: Marvell Semiconductors Inc.
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/highmem.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <asm/fixmap.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/tlbflush.h>
|
||||
#include "mm.h"
|
||||
|
||||
static inline void set_fixmap_pte(int idx, pte_t pte)
|
||||
{
|
||||
unsigned long vaddr = __fix_to_virt(idx);
|
||||
pte_t *ptep = virt_to_kpte(vaddr);
|
||||
|
||||
set_pte_ext(ptep, pte, 0);
|
||||
local_flush_tlb_kernel_page(vaddr);
|
||||
}
|
||||
|
||||
static inline pte_t get_fixmap_pte(unsigned long vaddr)
|
||||
{
|
||||
pte_t *ptep = virt_to_kpte(vaddr);
|
||||
|
||||
return *ptep;
|
||||
}
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned int idx;
|
||||
unsigned long vaddr;
|
||||
void *kmap;
|
||||
int type;
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
/*
|
||||
* There is no cache coherency issue when non VIVT, so force the
|
||||
* dedicated kmap usage for better debugging purposes in that case.
|
||||
*/
|
||||
if (!cache_is_vivt())
|
||||
kmap = NULL;
|
||||
else
|
||||
#endif
|
||||
kmap = kmap_high_get(page);
|
||||
if (kmap)
|
||||
return kmap;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
|
||||
idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
|
||||
vaddr = __fix_to_virt(idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
/*
|
||||
* With debugging enabled, kunmap_atomic forces that entry to 0.
|
||||
* Make sure it was indeed properly unmapped.
|
||||
*/
|
||||
BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
|
||||
#endif
|
||||
/*
|
||||
* When debugging is off, kunmap_atomic leaves the previous mapping
|
||||
* in place, so the contained TLB flush ensures the TLB is updated
|
||||
* with the new mapping.
|
||||
*/
|
||||
set_fixmap_pte(idx, mk_pte(page, prot));
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
int idx, type;
|
||||
|
||||
if (kvaddr >= (void *)FIXADDR_START) {
|
||||
type = kmap_atomic_idx();
|
||||
idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
|
||||
|
||||
if (cache_is_vivt())
|
||||
__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(vaddr != __fix_to_virt(idx));
|
||||
set_fixmap_pte(idx, __pte(0));
|
||||
#else
|
||||
(void) idx; /* to kill a warning */
|
||||
#endif
|
||||
kmap_atomic_idx_pop();
|
||||
} else if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) {
|
||||
/* this address was obtained through kmap_high_get() */
|
||||
kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
struct page *page = pfn_to_page(pfn);
|
||||
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
if (!PageHighMem(page))
|
||||
return page_address(page);
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
|
||||
vaddr = __fix_to_virt(idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
|
||||
#endif
|
||||
set_fixmap_pte(idx, pfn_pte(pfn, kmap_prot));
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
|
@ -286,6 +286,7 @@ config NR_CPUS
|
|||
config HIGHMEM
|
||||
bool "High Memory Support"
|
||||
depends on !CPU_CK610
|
||||
select KMAP_LOCAL
|
||||
default y
|
||||
|
||||
config FORCE_MAX_ZONEORDER
|
||||
|
|
|
@ -8,7 +8,7 @@
|
|||
#include <asm/memory.h>
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#endif
|
||||
|
||||
enum fixed_addresses {
|
||||
|
@ -17,7 +17,7 @@ enum fixed_addresses {
|
|||
#endif
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_BEGIN,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
#endif
|
||||
__end_of_fixed_addresses
|
||||
};
|
||||
|
|
|
@ -9,7 +9,7 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#include <asm/cache.h>
|
||||
|
||||
/* undef for production */
|
||||
|
@ -32,10 +32,12 @@ extern pte_t *pkmap_page_table;
|
|||
|
||||
#define ARCH_HAS_KMAP_FLUSH_TLB
|
||||
extern void kmap_flush_tlb(unsigned long addr);
|
||||
extern void *kmap_atomic_pfn(unsigned long pfn);
|
||||
|
||||
#define flush_cache_kmaps() do {} while (0)
|
||||
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) kmap_flush_tlb(vaddr)
|
||||
#define arch_kmap_local_post_unmap(vaddr) kmap_flush_tlb(vaddr)
|
||||
|
||||
extern void kmap_init(void);
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
|
|
|
@ -9,8 +9,6 @@
|
|||
#include <asm/tlbflush.h>
|
||||
#include <asm/cacheflush.h>
|
||||
|
||||
static pte_t *kmap_pte;
|
||||
|
||||
unsigned long highstart_pfn, highend_pfn;
|
||||
|
||||
void kmap_flush_tlb(unsigned long addr)
|
||||
|
@ -19,67 +17,7 @@ void kmap_flush_tlb(unsigned long addr)
|
|||
}
|
||||
EXPORT_SYMBOL(kmap_flush_tlb);
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(*(kmap_pte - idx)));
|
||||
#endif
|
||||
set_pte(kmap_pte-idx, mk_pte(page, prot));
|
||||
flush_tlb_one((unsigned long)vaddr);
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
int idx;
|
||||
|
||||
if (vaddr < FIXADDR_START)
|
||||
return;
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
idx = KM_TYPE_NR*smp_processor_id() + kmap_atomic_idx();
|
||||
|
||||
BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
|
||||
pte_clear(&init_mm, vaddr, kmap_pte - idx);
|
||||
flush_tlb_one(vaddr);
|
||||
#else
|
||||
(void) idx; /* to kill a warning */
|
||||
#endif
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
/*
|
||||
* This is the same as kmap_atomic() but can map memory that doesn't
|
||||
* have a struct page associated with it.
|
||||
*/
|
||||
void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
pagefault_disable();
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
|
||||
flush_tlb_one(vaddr);
|
||||
|
||||
return (void *) vaddr;
|
||||
}
|
||||
|
||||
static void __init kmap_pages_init(void)
|
||||
void __init kmap_init(void)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
pgd_t *pgd;
|
||||
|
@ -96,14 +34,3 @@ static void __init kmap_pages_init(void)
|
|||
pte = pte_offset_kernel(pmd, vaddr);
|
||||
pkmap_page_table = pte;
|
||||
}
|
||||
|
||||
void __init kmap_init(void)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
|
||||
kmap_pages_init();
|
||||
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN);
|
||||
|
||||
kmap_pte = pte_offset_kernel((pmd_t *)pgd_offset_k(vaddr), vaddr);
|
||||
}
|
||||
|
|
|
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_IA64_KMAP_TYPES_H
-#define _ASM_IA64_KMAP_TYPES_H
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif /* _ASM_IA64_KMAP_TYPES_H */
@ -155,6 +155,7 @@ config XILINX_UNCACHED_SHADOW
|
|||
config HIGHMEM
|
||||
bool "High memory support"
|
||||
depends on MMU
|
||||
select KMAP_LOCAL
|
||||
help
|
||||
The address space of Microblaze processors is only 4 Gigabytes large
|
||||
and it has to accommodate user address space, kernel address
|
||||
|
|
|
@ -20,7 +20,7 @@
|
|||
#include <asm/page.h>
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#endif
|
||||
|
||||
#define FIXADDR_TOP ((unsigned long)(-PAGE_SIZE))
|
||||
|
@ -47,7 +47,7 @@ enum fixed_addresses {
|
|||
FIX_HOLE,
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * num_possible_cpus()) - 1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * num_possible_cpus()) - 1,
|
||||
#endif
|
||||
__end_of_fixed_addresses
|
||||
};
|
||||
|
|
|
@ -25,7 +25,6 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <asm/fixmap.h>
|
||||
|
||||
extern pte_t *kmap_pte;
|
||||
extern pte_t *pkmap_page_table;
|
||||
|
||||
/*
|
||||
|
@ -52,6 +51,11 @@ extern pte_t *pkmap_page_table;
|
|||
|
||||
#define flush_cache_kmaps() { flush_icache(); flush_dcache(); }
|
||||
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) \
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
|
||||
#endif /* _ASM_HIGHMEM_H */
|
||||
|
|
|
@ -6,4 +6,3 @@
|
|||
obj-y := consistent.o init.o
|
||||
|
||||
obj-$(CONFIG_MMU) += pgtable.o mmu_context.o fault.o
|
||||
obj-$(CONFIG_HIGHMEM) += highmem.o
|
||||
|
|
|
@ -1,78 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* highmem.c: virtual kernel memory mappings for high memory
|
||||
*
|
||||
* PowerPC version, stolen from the i386 version.
|
||||
*
|
||||
* Used in CONFIG_HIGHMEM systems for memory pages which
|
||||
* are not addressable by direct kernel virtual addresses.
|
||||
*
|
||||
* Copyright (C) 1999 Gerhard Wichert, Siemens AG
|
||||
* Gerhard.Wichert@pdb.siemens.de
|
||||
*
|
||||
*
|
||||
* Redesigned the x86 32-bit VM architecture to deal with
|
||||
* up to 16 Terrabyte physical memory. With current x86 CPUs
|
||||
* we now support up to 64 Gigabytes physical RAM.
|
||||
*
|
||||
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
|
||||
*
|
||||
* Reworked for PowerPC by various contributors. Moved from
|
||||
* highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
|
||||
*/
|
||||
|
||||
#include <linux/export.h>
|
||||
#include <linux/highmem.h>
|
||||
|
||||
/*
|
||||
* The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
|
||||
* gives a more generic (and caching) interface. But kmap_atomic can
|
||||
* be used in IRQ contexts, so in some (very limited) cases we need
|
||||
* it.
|
||||
*/
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(*(kmap_pte-idx)));
|
||||
#endif
|
||||
set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot));
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
|
||||
return (void *) vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
int type;
|
||||
unsigned int idx;
|
||||
|
||||
if (vaddr < __fix_to_virt(FIX_KMAP_END))
|
||||
return;
|
||||
|
||||
type = kmap_atomic_idx();
|
||||
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
#endif
|
||||
/*
|
||||
* force other mappings to Oops if they'll try to access
|
||||
* this pte without first remap it
|
||||
*/
|
||||
pte_clear(&init_mm, vaddr, kmap_pte-idx);
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
|
@ -50,16 +50,11 @@ EXPORT_SYMBOL(min_low_pfn);
|
|||
EXPORT_SYMBOL(max_low_pfn);
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
pte_t *kmap_pte;
|
||||
EXPORT_SYMBOL(kmap_pte);
|
||||
|
||||
static void __init highmem_init(void)
|
||||
{
|
||||
pr_debug("%x\n", (u32)PKMAP_BASE);
|
||||
map_page(PKMAP_BASE, 0, 0); /* XXX gross */
|
||||
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
|
||||
|
||||
kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
|
||||
}
|
||||
|
||||
static void highmem_setup(void)
|
||||
|
|
|
@ -2719,6 +2719,7 @@ config WAR_MIPS34K_MISSED_ITLB
|
|||
config HIGHMEM
|
||||
bool "High Memory Support"
|
||||
depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA
|
||||
select KMAP_LOCAL
|
||||
|
||||
config CPU_SUPPORTS_HIGHMEM
|
||||
bool
|
||||
|
|
|
@ -17,7 +17,7 @@
|
|||
#include <spaces.h>
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#endif
|
||||
|
||||
/*
|
||||
|
@ -52,7 +52,7 @@ enum fixed_addresses {
|
|||
#ifdef CONFIG_HIGHMEM
|
||||
/* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_BEGIN = FIX_CMAP_END + 1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
#endif
|
||||
__end_of_fixed_addresses
|
||||
};
|
||||
|
|
|
@ -24,7 +24,7 @@
|
|||
#include <linux/interrupt.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <asm/cpu-features.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
/* declarations for highmem.c */
|
||||
extern unsigned long highstart_pfn, highend_pfn;
|
||||
|
@ -48,11 +48,11 @@ extern pte_t *pkmap_page_table;
|
|||
|
||||
#define ARCH_HAS_KMAP_FLUSH_TLB
|
||||
extern void kmap_flush_tlb(unsigned long addr);
|
||||
extern void *kmap_atomic_pfn(unsigned long pfn);
|
||||
|
||||
#define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases)
|
||||
|
||||
extern void kmap_init(void);
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) local_flush_tlb_one(vaddr)
|
||||
#define arch_kmap_local_post_unmap(vaddr) local_flush_tlb_one(vaddr)
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
|
||||
|
|
|
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
@ -8,8 +8,6 @@
|
|||
#include <asm/fixmap.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
static pte_t *kmap_pte;
|
||||
|
||||
unsigned long highstart_pfn, highend_pfn;
|
||||
|
||||
void kmap_flush_tlb(unsigned long addr)
|
||||
|
@ -17,78 +15,3 @@ void kmap_flush_tlb(unsigned long addr)
|
|||
flush_tlb_one(addr);
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_flush_tlb);
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(*(kmap_pte - idx)));
|
||||
#endif
|
||||
set_pte(kmap_pte-idx, mk_pte(page, prot));
|
||||
local_flush_tlb_one((unsigned long)vaddr);
|
||||
|
||||
return (void*) vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
int type __maybe_unused;
|
||||
|
||||
if (vaddr < FIXADDR_START)
|
||||
return;
|
||||
|
||||
type = kmap_atomic_idx();
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
{
|
||||
int idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
|
||||
BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
|
||||
/*
|
||||
* force other mappings to Oops if they'll try to access
|
||||
* this pte without first remap it
|
||||
*/
|
||||
pte_clear(&init_mm, vaddr, kmap_pte-idx);
|
||||
local_flush_tlb_one(vaddr);
|
||||
}
|
||||
#endif
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
/*
|
||||
* This is the same as kmap_atomic() but can map memory that doesn't
|
||||
* have a struct page associated with it.
|
||||
*/
|
||||
void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
|
||||
flush_tlb_one(vaddr);
|
||||
|
||||
return (void*) vaddr;
|
||||
}
|
||||
|
||||
void __init kmap_init(void)
|
||||
{
|
||||
unsigned long kmap_vstart;
|
||||
|
||||
/* cache the first kmap pte */
|
||||
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
|
||||
kmap_pte = virt_to_kpte(kmap_vstart);
|
||||
}
|
||||
|
|
|
@ -36,7 +36,6 @@
|
|||
#include <asm/cachectl.h>
|
||||
#include <asm/cpu.h>
|
||||
#include <asm/dma.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/maar.h>
|
||||
#include <asm/mmu_context.h>
|
||||
#include <asm/sections.h>
|
||||
|
@ -402,9 +401,6 @@ void __init paging_init(void)
|
|||
|
||||
pagetable_init();
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
kmap_init();
|
||||
#endif
|
||||
#ifdef CONFIG_ZONE_DMA
|
||||
max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
|
||||
#endif
|
||||
|
|
|
@ -157,6 +157,7 @@ config HW_SUPPORT_UNALIGNMENT_ACCESS
|
|||
config HIGHMEM
|
||||
bool "High Memory Support"
|
||||
depends on MMU && !CPU_CACHE_ALIASING
|
||||
select KMAP_LOCAL
|
||||
help
|
||||
The address space of Andes processors is only 4 Gigabytes large
|
||||
and it has to accommodate user address space, kernel address
|
||||
|
|
|
@ -6,7 +6,7 @@
|
|||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#endif
|
||||
|
||||
enum fixed_addresses {
|
||||
|
@ -14,7 +14,7 @@ enum fixed_addresses {
|
|||
FIX_KMAP_RESERVED,
|
||||
FIX_KMAP_BEGIN,
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS),
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
#endif
|
||||
FIX_EARLYCON_MEM_BASE,
|
||||
__end_of_fixed_addresses
|
||||
|
|
|
@ -5,7 +5,6 @@
|
|||
#define _ASM_HIGHMEM_H
|
||||
|
||||
#include <asm/proc-fns.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/fixmap.h>
|
||||
|
||||
/*
|
||||
|
@ -45,11 +44,22 @@ extern pte_t *pkmap_page_table;
|
|||
extern void kmap_init(void);
|
||||
|
||||
/*
|
||||
* The following functions are already defined by <linux/highmem.h>
|
||||
* when CONFIG_HIGHMEM is not set.
|
||||
* FIXME: The below looks broken vs. a kmap_atomic() in task context which
|
||||
* is interupted and another kmap_atomic() happens in interrupt context.
|
||||
* But what do I know about nds32. -- tglx
|
||||
*/
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
extern void *kmap_atomic_pfn(unsigned long pfn);
|
||||
#endif
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) \
|
||||
do { \
|
||||
__nds32__tlbop_inv(vaddr); \
|
||||
__nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); \
|
||||
__nds32__tlbop_rwr(pteval); \
|
||||
__nds32__isb(); \
|
||||
} while (0)
|
||||
|
||||
#define arch_kmap_local_pre_unmap(vaddr) \
|
||||
do { \
|
||||
__nds32__tlbop_inv(vaddr); \
|
||||
__nds32__isb(); \
|
||||
} while (0)
|
||||
|
||||
#endif
|
||||
|
|
|
@ -3,7 +3,6 @@ obj-y := extable.o tlb.o fault.o init.o mmap.o \
|
|||
mm-nds32.o cacheflush.o proc.o
|
||||
|
||||
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
|
||||
obj-$(CONFIG_HIGHMEM) += highmem.o
|
||||
|
||||
ifdef CONFIG_FUNCTION_TRACER
|
||||
CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE)
|
||||
|
|
|
@ -1,48 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
// Copyright (C) 2005-2017 Andes Technology Corporation
|
||||
|
||||
#include <linux/export.h>
|
||||
#include <linux/highmem.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/smp.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/memblock.h>
|
||||
#include <asm/fixmap.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned int idx;
|
||||
unsigned long vaddr, pte;
|
||||
int type;
|
||||
pte_t *ptep;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
pte = (page_to_pfn(page) << PAGE_SHIFT) | prot;
|
||||
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
|
||||
set_pte(ptep, pte);
|
||||
|
||||
__nds32__tlbop_inv(vaddr);
|
||||
__nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN);
|
||||
__nds32__tlbop_rwr(pte);
|
||||
__nds32__isb();
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
if (kvaddr >= (void *)FIXADDR_START) {
|
||||
unsigned long vaddr = (unsigned long)kvaddr;
|
||||
pte_t *ptep;
|
||||
kmap_atomic_idx_pop();
|
||||
__nds32__tlbop_inv(vaddr);
|
||||
__nds32__isb();
|
||||
ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
|
||||
set_pte(ptep, 0);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
|
@ -33,7 +33,6 @@
|
|||
#include <asm/io.h>
|
||||
#include <asm/tlb.h>
|
||||
#include <asm/mmu_context.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/fixmap.h>
|
||||
#include <asm/tlbflush.h>
|
||||
#include <asm/sections.h>
|
||||
|
|
|
@ -15,7 +15,6 @@
|
|||
#include <linux/io.h>
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/pgalloc.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/fixmap.h>
|
||||
#include <asm/bug.h>
|
||||
#include <linux/sched.h>
|
||||
|
|
|
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
@ -410,6 +410,7 @@ menu "Kernel options"
|
|||
config HIGHMEM
|
||||
bool "High memory support"
|
||||
depends on PPC32
|
||||
select KMAP_LOCAL
|
||||
|
||||
source "kernel/Kconfig.hz"
|
||||
|
||||
|
|
|
@ -20,7 +20,7 @@
|
|||
#include <asm/page.h>
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_KASAN
|
||||
|
@ -55,7 +55,7 @@ enum fixed_addresses {
|
|||
FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
#endif
|
||||
#ifdef CONFIG_PPC_8xx
|
||||
/* For IMMR we need an aligned 512K area */
|
||||
|
|
|
@ -24,12 +24,10 @@
|
|||
#ifdef __KERNEL__
|
||||
|
||||
#include <linux/interrupt.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/page.h>
|
||||
#include <asm/fixmap.h>
|
||||
|
||||
extern pte_t *kmap_pte;
|
||||
extern pte_t *pkmap_page_table;
|
||||
|
||||
/*
|
||||
|
@ -60,6 +58,11 @@ extern pte_t *pkmap_page_table;
|
|||
|
||||
#define flush_cache_kmaps() flush_cache_all()
|
||||
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) \
|
||||
local_flush_tlb_page(NULL, vaddr)
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
local_flush_tlb_page(NULL, vaddr)
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
|
||||
#endif /* _ASM_HIGHMEM_H */
|
||||
|
|
|
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-#ifndef _ASM_POWERPC_KMAP_TYPES_H
-#define _ASM_POWERPC_KMAP_TYPES_H
-
-#ifdef __KERNEL__
-
-/*
- */
-
-#define KM_TYPE_NR 16
-
-#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_KMAP_TYPES_H */
@ -16,7 +16,6 @@ obj-$(CONFIG_NEED_MULTIPLE_NODES) += numa.o
|
|||
obj-$(CONFIG_PPC_MM_SLICES) += slice.o
|
||||
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
|
||||
obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
|
||||
obj-$(CONFIG_HIGHMEM) += highmem.o
|
||||
obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
|
||||
obj-$(CONFIG_PPC_PTDUMP) += ptdump/
|
||||
obj-$(CONFIG_KASAN) += kasan/
|
||||
|
|
|
@ -1,67 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* highmem.c: virtual kernel memory mappings for high memory
|
||||
*
|
||||
* PowerPC version, stolen from the i386 version.
|
||||
*
|
||||
* Used in CONFIG_HIGHMEM systems for memory pages which
|
||||
* are not addressable by direct kernel virtual addresses.
|
||||
*
|
||||
* Copyright (C) 1999 Gerhard Wichert, Siemens AG
|
||||
* Gerhard.Wichert@pdb.siemens.de
|
||||
*
|
||||
*
|
||||
* Redesigned the x86 32-bit VM architecture to deal with
|
||||
* up to 16 Terrabyte physical memory. With current x86 CPUs
|
||||
* we now support up to 64 Gigabytes physical RAM.
|
||||
*
|
||||
* Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
|
||||
*
|
||||
* Reworked for PowerPC by various contributors. Moved from
|
||||
* highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
|
||||
*/
|
||||
|
||||
#include <linux/highmem.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx)));
|
||||
__set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot), 1);
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
|
||||
return (void*) vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
|
||||
if (vaddr < __fix_to_virt(FIX_KMAP_END))
|
||||
return;
|
||||
|
||||
if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM)) {
|
||||
int type = kmap_atomic_idx();
|
||||
unsigned int idx;
|
||||
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
WARN_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
|
||||
/*
|
||||
* force other mappings to Oops if they'll try to access
|
||||
* this pte without first remap it
|
||||
*/
|
||||
pte_clear(&init_mm, vaddr, kmap_pte-idx);
|
||||
local_flush_tlb_page(NULL, vaddr);
|
||||
}
|
||||
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
|
@ -62,11 +62,6 @@
|
|||
unsigned long long memory_limit;
|
||||
bool init_mem_is_free;
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
pte_t *kmap_pte;
|
||||
EXPORT_SYMBOL(kmap_pte);
|
||||
#endif
|
||||
|
||||
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
|
||||
unsigned long size, pgprot_t vma_prot)
|
||||
{
|
||||
|
@ -236,8 +231,6 @@ void __init paging_init(void)
|
|||
|
||||
map_kernel_page(PKMAP_BASE, 0, __pgprot(0)); /* XXX gross */
|
||||
pkmap_page_table = virt_to_kpte(PKMAP_BASE);
|
||||
|
||||
kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
|
||||
printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",
|
||||
|
|
|
@ -13,9 +13,6 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/threads.h>
|
||||
#include <asm/page.h>
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <asm/kmap_types.h>
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Here we define all the compile-time 'special' virtual
|
||||
|
@ -53,11 +50,6 @@ enum fixed_addresses {
|
|||
FIX_CMAP_BEGIN,
|
||||
FIX_CMAP_END = FIX_CMAP_BEGIN + (FIX_N_COLOURS * NR_CPUS) - 1,
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_IOREMAP_FIXED
|
||||
/*
|
||||
* FIX_IOREMAP entries are useful for mapping physical address
|
||||
|
|
|
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __SH_KMAP_TYPES_H
-#define __SH_KMAP_TYPES_H
-
-/* Dummy header just to define km_type. */
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
@ -362,9 +362,6 @@ void __init mem_init(void)
|
|||
mem_init_print_info(NULL);
|
||||
pr_info("virtual kernel memory layout:\n"
|
||||
" fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
" pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
|
||||
#endif
|
||||
" vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n"
|
||||
" lowmem : 0x%08lx - 0x%08lx (%4ld MB) (cached)\n"
|
||||
#ifdef CONFIG_UNCACHED_MAPPING
|
||||
|
@ -376,11 +373,6 @@ void __init mem_init(void)
|
|||
FIXADDR_START, FIXADDR_TOP,
|
||||
(FIXADDR_TOP - FIXADDR_START) >> 10,
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE,
|
||||
(LAST_PKMAP*PAGE_SIZE) >> 10,
|
||||
#endif
|
||||
|
||||
(unsigned long)VMALLOC_START, VMALLOC_END,
|
||||
(VMALLOC_END - VMALLOC_START) >> 20,
|
||||
|
||||
|
|
|
@ -139,6 +139,7 @@ config MMU
|
|||
config HIGHMEM
|
||||
bool
|
||||
default y if SPARC32
|
||||
select KMAP_LOCAL
|
||||
|
||||
config ZONE_DMA
|
||||
bool
|
||||
|
|
|
@ -24,7 +24,6 @@
|
|||
#include <linux/interrupt.h>
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/vaddrs.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/pgtsrmmu.h>
|
||||
|
||||
/* declarations for highmem.c */
|
||||
|
@ -33,8 +32,6 @@ extern unsigned long highstart_pfn, highend_pfn;
|
|||
#define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE)
|
||||
extern pte_t *pkmap_page_table;
|
||||
|
||||
void kmap_init(void) __init;
|
||||
|
||||
/*
|
||||
* Right now we initialize only a single pte table. It can be extended
|
||||
* easily, subsequent pte tables have to be allocated in one physical
|
||||
|
@ -53,6 +50,11 @@ void kmap_init(void) __init;
|
|||
|
||||
#define flush_cache_kmaps() flush_cache_all()
|
||||
|
||||
/* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) flush_cache_all()
|
||||
#define arch_kmap_local_post_unmap(vaddr) flush_cache_all()
|
||||
|
||||
|
||||
#endif /* __KERNEL__ */
|
||||
|
||||
#endif /* _ASM_HIGHMEM_H */
|
||||
|
|
|
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-/* Dummy header just to define km_type. None of this
- * is actually used on sparc. -DaveM
- */
-
-#include <asm-generic/kmap_types.h>
-
-#endif
@ -32,13 +32,13 @@
|
|||
#define SRMMU_NOCACHE_ALCRATIO 64 /* 256 pages per 64MB of system RAM */
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
enum fixed_addresses {
|
||||
FIX_HOLE,
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
FIX_KMAP_BEGIN,
|
||||
FIX_KMAP_END = (KM_TYPE_NR * NR_CPUS),
|
||||
FIX_KMAP_END = (KM_MAX_IDX * NR_CPUS),
|
||||
#endif
|
||||
__end_of_fixed_addresses
|
||||
};
|
||||
|
|
|
@ -15,6 +15,3 @@ obj-$(CONFIG_SPARC32) += leon_mm.o
|
|||
|
||||
# Only used by sparc64
|
||||
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
|
||||
|
||||
# Only used by sparc32
|
||||
obj-$(CONFIG_HIGHMEM) += highmem.o
|
||||
|
|
|
@ -1,115 +0,0 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* highmem.c: virtual kernel memory mappings for high memory
|
||||
*
|
||||
* Provides kernel-static versions of atomic kmap functions originally
|
||||
* found as inlines in include/asm-sparc/highmem.h. These became
|
||||
* needed as kmap_atomic() and kunmap_atomic() started getting
|
||||
* called from within modules.
|
||||
* -- Tomas Szepe <szepe@pinerecords.com>, September 2002
|
||||
*
|
||||
* But kmap_atomic() and kunmap_atomic() cannot be inlined in
|
||||
* modules because they are loaded with btfixup-ped functions.
|
||||
*/
|
||||
|
||||
/*
|
||||
* The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
|
||||
* gives a more generic (and caching) interface. But kmap_atomic can
|
||||
* be used in IRQ contexts, so in some (very limited) cases we need it.
|
||||
*
|
||||
* XXX This is an old text. Actually, it's good to use atomic kmaps,
|
||||
* provided you remember that they are atomic and not try to sleep
|
||||
* with a kmap taken, much like a spinlock. Non-atomic kmaps are
|
||||
* shared by CPUs, and so precious, and establishing them requires IPI.
|
||||
* Atomic kmaps are lightweight and we may have NCPUS more of them.
|
||||
*/
|
||||
#include <linux/highmem.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/mm.h>
|
||||
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/tlbflush.h>
|
||||
#include <asm/vaddrs.h>
|
||||
|
||||
static pte_t *kmap_pte;
|
||||
|
||||
void __init kmap_init(void)
|
||||
{
|
||||
unsigned long address = __fix_to_virt(FIX_KMAP_BEGIN);
|
||||
|
||||
/* cache the first kmap pte */
|
||||
kmap_pte = virt_to_kpte(address);
|
||||
}
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
long idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
|
||||
/* XXX Fix - Anton */
|
||||
#if 0
|
||||
__flush_cache_one(vaddr);
|
||||
#else
|
||||
flush_cache_all();
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(*(kmap_pte-idx)));
|
||||
#endif
|
||||
set_pte(kmap_pte-idx, mk_pte(page, prot));
|
||||
/* XXX Fix - Anton */
|
||||
#if 0
|
||||
__flush_tlb_one(vaddr);
|
||||
#else
|
||||
flush_tlb_all();
|
||||
#endif
|
||||
|
||||
return (void*) vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
int type;
|
||||
|
||||
if (vaddr < FIXADDR_START)
|
||||
return;
|
||||
|
||||
type = kmap_atomic_idx();
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
{
|
||||
unsigned long idx;
|
||||
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN+idx));
|
||||
|
||||
/* XXX Fix - Anton */
|
||||
#if 0
|
||||
__flush_cache_one(vaddr);
|
||||
#else
|
||||
flush_cache_all();
|
||||
#endif
|
||||
|
||||
/*
|
||||
* force other mappings to Oops if they'll try to access
|
||||
* this pte without first remap it
|
||||
*/
|
||||
pte_clear(&init_mm, vaddr, kmap_pte-idx);
|
||||
/* XXX Fix - Anton */
|
||||
#if 0
|
||||
__flush_tlb_one(vaddr);
|
||||
#else
|
||||
flush_tlb_all();
|
||||
#endif
|
||||
}
|
||||
#endif
|
||||
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
|
@ -971,8 +971,6 @@ void __init srmmu_paging_init(void)
|
|||
|
||||
sparc_context_init(num_contexts);
|
||||
|
||||
kmap_init();
|
||||
|
||||
{
|
||||
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
|
||||
|
||||
|
|
|
@ -3,7 +3,6 @@
|
|||
#define __UM_FIXMAP_H
|
||||
|
||||
#include <asm/processor.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/archparam.h>
|
||||
#include <asm/page.h>
|
||||
#include <linux/threads.h>
|
||||
|
|
|
@@ -1,13 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
- */
-
-#ifndef __UM_KMAP_TYPES_H
-#define __UM_KMAP_TYPES_H
-
-/* No more #include "asm/arch/kmap_types.h" ! */
-
-#define KM_TYPE_NR 14
-
-#endif
@@ -14,10 +14,11 @@ config X86_32
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select CLKSRC_I8253
 	select CLONE_BACKWARDS
+	select GENERIC_VDSO_32
 	select HAVE_DEBUG_STACKOVERFLOW
+	select KMAP_LOCAL
 	select MODULES_USE_ELF_REL
 	select OLD_SIGACTION
-	select GENERIC_VDSO_32
 
 config X86_64
 	def_bool y
@ -92,6 +93,7 @@ config X86
|
|||
select ARCH_SUPPORTS_ACPI
|
||||
select ARCH_SUPPORTS_ATOMIC_RMW
|
||||
select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
|
||||
select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096
|
||||
select ARCH_USE_BUILTIN_BSWAP
|
||||
select ARCH_USE_QUEUED_RWLOCKS
|
||||
select ARCH_USE_QUEUED_SPINLOCKS
|
||||
|
|
|
@ -14,13 +14,20 @@
|
|||
#ifndef _ASM_X86_FIXMAP_H
|
||||
#define _ASM_X86_FIXMAP_H
|
||||
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
/*
|
||||
* Exposed to assembly code for setting up initial page tables. Cannot be
|
||||
* calculated in assembly code (fixmap entries are an enum), but is sanity
|
||||
* checked in the actual fixmap C code to make sure that the fixmap is
|
||||
* covered fully.
|
||||
*/
|
||||
#define FIXMAP_PMD_NUM 2
|
||||
#ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
|
||||
# define FIXMAP_PMD_NUM 2
|
||||
#else
|
||||
# define KM_PMDS (KM_MAX_IDX * ((CONFIG_NR_CPUS + 511) / 512))
|
||||
# define FIXMAP_PMD_NUM (KM_PMDS + 2)
|
||||
#endif
|
||||
/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
|
||||
#define FIXMAP_PMD_TOP 507
|
||||
|
||||
|
@ -31,7 +38,6 @@
|
|||
#include <asm/pgtable_types.h>
|
||||
#ifdef CONFIG_X86_32
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#else
|
||||
#include <uapi/asm/vsyscall.h>
|
||||
#endif
|
||||
|
@ -92,9 +98,9 @@ enum fixed_addresses {
|
|||
FIX_IO_APIC_BASE_0,
|
||||
FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1,
|
||||
#endif
|
||||
#ifdef CONFIG_X86_32
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
|
||||
#ifdef CONFIG_PCI_MMCONFIG
|
||||
FIX_PCIE_MCFG,
|
||||
#endif
|
||||
|
@ -151,7 +157,6 @@ extern void reserve_top_address(unsigned long reserve);
|
|||
|
||||
extern int fixmaps_set;
|
||||
|
||||
extern pte_t *kmap_pte;
|
||||
extern pte_t *pkmap_page_table;
|
||||
|
||||
void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
|
||||
|
|
|
@ -23,7 +23,6 @@
|
|||
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/threads.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/tlbflush.h>
|
||||
#include <asm/paravirt.h>
|
||||
#include <asm/fixmap.h>
|
||||
|
@ -58,11 +57,17 @@ extern unsigned long highstart_pfn, highend_pfn;
|
|||
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
|
||||
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
|
||||
|
||||
void *kmap_atomic_pfn(unsigned long pfn);
|
||||
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
|
||||
|
||||
#define flush_cache_kmaps() do { } while (0)
|
||||
|
||||
#define arch_kmap_local_post_map(vaddr, pteval) \
|
||||
arch_flush_lazy_mmu_mode()
|
||||
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
do { \
|
||||
flush_tlb_one_kernel((vaddr)); \
|
||||
arch_flush_lazy_mmu_mode(); \
|
||||
} while (0)
|
||||
|
||||
extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn,
|
||||
unsigned long end_pfn);
|
||||
|
||||
|
|
|
@ -9,19 +9,14 @@
|
|||
#include <linux/fs.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <linux/highmem.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
void __iomem *
|
||||
iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot);
|
||||
void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
|
||||
|
||||
void
|
||||
iounmap_atomic(void __iomem *kvaddr);
|
||||
int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);
|
||||
|
||||
int
|
||||
iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot);
|
||||
|
||||
void
|
||||
iomap_free(resource_size_t base, unsigned long size);
|
||||
void iomap_free(resource_size_t base, unsigned long size);
|
||||
|
||||
#endif /* _ASM_X86_IOMAP_H */
|
||||
|
|
|
@ -1,13 +0,0 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _ASM_X86_KMAP_TYPES_H
|
||||
#define _ASM_X86_KMAP_TYPES_H
|
||||
|
||||
#if defined(CONFIG_X86_32) && defined(CONFIG_DEBUG_HIGHMEM)
|
||||
#define __WITH_KM_FENCE
|
||||
#endif
|
||||
|
||||
#include <asm-generic/kmap_types.h>
|
||||
|
||||
#undef __WITH_KM_FENCE
|
||||
|
||||
#endif /* _ASM_X86_KMAP_TYPES_H */
|
|
@ -41,7 +41,6 @@
|
|||
#ifndef __ASSEMBLY__
|
||||
|
||||
#include <asm/desc_defs.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/pgtable_types.h>
|
||||
#include <asm/nospec-branch.h>
|
||||
|
||||
|
|
|
@ -143,7 +143,11 @@ extern unsigned int ptrs_per_p4d;
|
|||
|
||||
#define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
|
||||
/* The module section ends with the start of the fixmap */
|
||||
#define MODULES_END _AC(0xffffffffff000000, UL)
|
||||
#ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
|
||||
# define MODULES_END _AC(0xffffffffff000000, UL)
|
||||
#else
|
||||
# define MODULES_END _AC(0xfffffffffe000000, UL)
|
||||
#endif
|
||||
#define MODULES_LEN (MODULES_END - MODULES_VADDR)
|
||||
|
||||
#define ESPFIX_PGD_ENTRY _AC(-2, UL)
|
||||
|
|
|
@ -13,8 +13,6 @@
|
|||
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
static void *kdump_buf_page;
|
||||
|
||||
static inline bool is_crashed_pfn_valid(unsigned long pfn)
|
||||
{
|
||||
#ifndef CONFIG_X86_PAE
|
||||
|
@ -41,15 +39,11 @@ static inline bool is_crashed_pfn_valid(unsigned long pfn)
|
|||
* @userbuf: if set, @buf is in user address space, use copy_to_user(),
|
||||
* otherwise @buf is in kernel address space, use memcpy().
|
||||
*
|
||||
* Copy a page from "oldmem". For this page, there is no pte mapped
|
||||
* in the current kernel. We stitch up a pte, similar to kmap_atomic.
|
||||
*
|
||||
* Calling copy_to_user() in atomic context is not desirable. Hence first
|
||||
* copying the data to a pre-allocated kernel page and then copying to user
|
||||
* space in non-atomic context.
|
||||
* Copy a page from "oldmem". For this page, there might be no pte mapped
|
||||
* in the current kernel.
|
||||
*/
|
||||
ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
|
||||
size_t csize, unsigned long offset, int userbuf)
|
||||
ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
|
||||
unsigned long offset, int userbuf)
|
||||
{
|
||||
void *vaddr;
|
||||
|
||||
|
@ -59,38 +53,16 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
|
|||
if (!is_crashed_pfn_valid(pfn))
|
||||
return -EFAULT;
|
||||
|
||||
vaddr = kmap_atomic_pfn(pfn);
|
||||
vaddr = kmap_local_pfn(pfn);
|
||||
|
||||
if (!userbuf) {
|
||||
memcpy(buf, (vaddr + offset), csize);
|
||||
kunmap_atomic(vaddr);
|
||||
memcpy(buf, vaddr + offset, csize);
|
||||
} else {
|
||||
if (!kdump_buf_page) {
|
||||
printk(KERN_WARNING "Kdump: Kdump buffer page not"
|
||||
" allocated\n");
|
||||
kunmap_atomic(vaddr);
|
||||
return -EFAULT;
|
||||
}
|
||||
copy_page(kdump_buf_page, vaddr);
|
||||
kunmap_atomic(vaddr);
|
||||
if (copy_to_user(buf, (kdump_buf_page + offset), csize))
|
||||
return -EFAULT;
|
||||
if (copy_to_user(buf, vaddr + offset, csize))
|
||||
csize = -EFAULT;
|
||||
}
|
||||
|
||||
kunmap_local(vaddr);
|
||||
|
||||
return csize;
|
||||
}
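/*
 * Aside: the conversion above follows the generic kmap_local pattern of
 * map -> copy -> unmap. A minimal sketch under stated assumptions
 * (read_one_page() and its caller are hypothetical, not part of this patch;
 * requires <linux/highmem.h>):
 */
static int read_one_page(unsigned long pfn, void *dst)
{
        void *src = kmap_local_pfn(pfn);        /* may be preempted, may fault */

        memcpy(dst, src, PAGE_SIZE);
        kunmap_local(src);                      /* unmap in reverse order of map */
        return 0;
}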
|
||||
|
||||
static int __init kdump_buf_page_init(void)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
kdump_buf_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
|
||||
if (!kdump_buf_page) {
|
||||
printk(KERN_WARNING "Kdump: Failed to allocate kdump buffer"
|
||||
" page\n");
|
||||
ret = -ENOMEM;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
arch_initcall(kdump_buf_page_init);
|
||||
|
|
|
@ -4,65 +4,6 @@
|
|||
#include <linux/swap.h> /* for totalram_pages */
|
||||
#include <linux/memblock.h>
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR*smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
BUG_ON(!pte_none(*(kmap_pte-idx)));
|
||||
set_pte(kmap_pte-idx, mk_pte(page, prot));
|
||||
arch_flush_lazy_mmu_mode();
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
/*
|
||||
* This is the same as kmap_atomic() but can map memory that doesn't
|
||||
* have a struct page associated with it.
|
||||
*/
|
||||
void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
return kmap_atomic_prot_pfn(pfn, kmap_prot);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(kmap_atomic_pfn);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
|
||||
if (vaddr >= __fix_to_virt(FIX_KMAP_END) &&
|
||||
vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) {
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx();
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
#endif
|
||||
/*
|
||||
* Force other mappings to Oops if they'll try to access this
|
||||
* pte without first remapping it. Keeping stale mappings around
|
||||
* is a bad idea also, in case the page changes cacheability
|
||||
* attributes or becomes a protected page in a hypervisor.
|
||||
*/
|
||||
kpte_clear_flush(kmap_pte-idx, vaddr);
|
||||
kmap_atomic_idx_pop();
|
||||
arch_flush_lazy_mmu_mode();
|
||||
}
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
else {
|
||||
BUG_ON(vaddr < PAGE_OFFSET);
|
||||
BUG_ON(vaddr >= (unsigned long)high_memory);
|
||||
}
|
||||
#endif
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
void __init set_highmem_pages_init(void)
|
||||
{
|
||||
struct zone *zone;
|
||||
|
|
|
@ -394,19 +394,6 @@ repeat:
|
|||
return last_map_addr;
|
||||
}
|
||||
|
||||
pte_t *kmap_pte;
|
||||
|
||||
static void __init kmap_init(void)
|
||||
{
|
||||
unsigned long kmap_vstart;
|
||||
|
||||
/*
|
||||
* Cache the first kmap pte:
|
||||
*/
|
||||
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
|
||||
kmap_pte = virt_to_kpte(kmap_vstart);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
static void __init permanent_kmaps_init(pgd_t *pgd_base)
|
||||
{
|
||||
|
@ -712,8 +699,6 @@ void __init paging_init(void)
|
|||
|
||||
__flush_tlb_all();
|
||||
|
||||
kmap_init();
|
||||
|
||||
/*
|
||||
* NOTE: at this point the bootmem allocator is fully available.
|
||||
*/
|
||||
|
|
|
@ -44,28 +44,7 @@ void iomap_free(resource_size_t base, unsigned long size)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(iomap_free);
|
||||
|
||||
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
|
||||
{
|
||||
unsigned long vaddr;
|
||||
int idx, type;
|
||||
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
|
||||
type = kmap_atomic_idx_push();
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
set_pte(kmap_pte - idx, pfn_pte(pfn, prot));
|
||||
arch_flush_lazy_mmu_mode();
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
|
||||
/*
|
||||
* Map 'pfn' using protections 'prot'
|
||||
*/
|
||||
void __iomem *
|
||||
iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
|
||||
void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
|
||||
{
|
||||
/*
|
||||
* For non-PAT systems, translate non-WB request to UC- just in
|
||||
|
@ -81,36 +60,6 @@ iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
|
|||
/* Filter out unsupported __PAGE_KERNEL* bits: */
|
||||
pgprot_val(prot) &= __default_kernel_pte_mask;
|
||||
|
||||
return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot);
|
||||
return (void __force __iomem *)__kmap_local_pfn_prot(pfn, prot);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn);
|
||||
|
||||
void
|
||||
iounmap_atomic(void __iomem *kvaddr)
|
||||
{
|
||||
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
|
||||
|
||||
if (vaddr >= __fix_to_virt(FIX_KMAP_END) &&
|
||||
vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) {
|
||||
int idx, type;
|
||||
|
||||
type = kmap_atomic_idx();
|
||||
idx = type + KM_TYPE_NR * smp_processor_id();
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
#endif
|
||||
/*
|
||||
* Force other mappings to Oops if they'll try to access this
|
||||
* pte without first remapping it. Keeping stale mappings around
|
||||
* is a bad idea also, in case the page changes cacheability
|
||||
* attributes or becomes a protected page in a hypervisor.
|
||||
*/
|
||||
kpte_clear_flush(kmap_pte-idx, vaddr);
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
|
||||
pagefault_enable();
|
||||
preempt_enable();
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(iounmap_atomic);
|
||||
EXPORT_SYMBOL_GPL(__iomap_local_pfn_prot);
|
||||
|
|
|
@ -666,6 +666,7 @@ endchoice
|
|||
config HIGHMEM
|
||||
bool "High Memory Support"
|
||||
depends on MMU
|
||||
select KMAP_LOCAL
|
||||
help
|
||||
Linux can use the full amount of RAM in the system by
|
||||
default. However, the default MMUv2 setup only maps the
|
||||
|
|
|
@ -16,64 +16,23 @@
|
|||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/threads.h>
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#endif
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
/*
|
||||
* Here we define all the compile-time 'special' virtual
|
||||
* addresses. The point is to have a constant address at
|
||||
* compile time, but to set the physical address only
|
||||
* in the boot process. We allocate these special addresses
|
||||
* from the start of the consistent memory region upwards.
|
||||
* Also this lets us do fail-safe vmalloc(), we
|
||||
* can guarantee that these special addresses and
|
||||
* vmalloc()-ed addresses never overlap.
|
||||
*
|
||||
* these 'compile-time allocated' memory buffers are
|
||||
* fixed-size 4k pages. (or larger if used with an increment
|
||||
* higher than 1) use fixmap_set(idx,phys) to associate
|
||||
* physical memory with fixmap indices.
|
||||
*/
|
||||
/* The map slots for temporary mappings via kmap_atomic/local(). */
|
||||
enum fixed_addresses {
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
/* reserved pte's for temporary kernel mappings */
|
||||
FIX_KMAP_BEGIN,
|
||||
FIX_KMAP_END = FIX_KMAP_BEGIN +
|
||||
(KM_TYPE_NR * NR_CPUS * DCACHE_N_COLORS) - 1,
|
||||
#endif
|
||||
(KM_MAX_IDX * NR_CPUS * DCACHE_N_COLORS) - 1,
|
||||
__end_of_fixed_addresses
|
||||
};
|
||||
|
||||
#define FIXADDR_TOP (XCHAL_KSEG_CACHED_VADDR - PAGE_SIZE)
|
||||
#define FIXADDR_END (XCHAL_KSEG_CACHED_VADDR - PAGE_SIZE)
|
||||
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
|
||||
#define FIXADDR_START ((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)
|
||||
/* Enforce that FIXADDR_START is PMD aligned to handle cache aliasing */
|
||||
#define FIXADDR_START ((FIXADDR_END - FIXADDR_SIZE) & PMD_MASK)
|
||||
#define FIXADDR_TOP (FIXADDR_START + FIXADDR_SIZE - PAGE_SIZE)
|
||||
|
||||
#define __fix_to_virt(x) (FIXADDR_START + ((x) << PAGE_SHIFT))
|
||||
#define __virt_to_fix(x) (((x) - FIXADDR_START) >> PAGE_SHIFT)
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
/*
|
||||
* 'index to address' translation. If anyone tries to use the idx
|
||||
* directly without translation, we catch the bug with a NULL-dereference
|
||||
* kernel oops. Illegal ranges of incoming indices are caught too.
|
||||
*/
|
||||
static __always_inline unsigned long fix_to_virt(const unsigned int idx)
|
||||
{
|
||||
/* Check if this memory layout is broken because fixmap overlaps page
|
||||
* table.
|
||||
*/
|
||||
BUILD_BUG_ON(FIXADDR_START <
|
||||
TLBTEMP_BASE_1 + TLBTEMP_SIZE);
|
||||
BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
|
||||
return __fix_to_virt(idx);
|
||||
}
|
||||
|
||||
static inline unsigned long virt_to_fix(const unsigned long vaddr)
|
||||
{
|
||||
BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
|
||||
return __virt_to_fix(vaddr);
|
||||
}
|
||||
|
||||
#endif
|
||||
#include <asm-generic/fixmap.h>
|
||||
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
#endif
|
||||
|
|
|
@ -12,13 +12,13 @@
|
|||
#ifndef _XTENSA_HIGHMEM_H
|
||||
#define _XTENSA_HIGHMEM_H
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <linux/wait.h>
|
||||
#include <linux/pgtable.h>
|
||||
#include <asm/cacheflush.h>
|
||||
#include <asm/fixmap.h>
|
||||
#include <asm/kmap_types.h>
|
||||
|
||||
#define PKMAP_BASE ((FIXADDR_START - \
|
||||
#define PKMAP_BASE ((FIXADDR_START - \
|
||||
(LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK)
|
||||
#define LAST_PKMAP (PTRS_PER_PTE * DCACHE_N_COLORS)
|
||||
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
|
||||
|
@ -59,6 +59,13 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
|
|||
{
|
||||
return pkmap_map_wait_arr + color;
|
||||
}
|
||||
|
||||
enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn);
|
||||
#define arch_kmap_local_map_idx kmap_local_map_idx
|
||||
|
||||
enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr);
|
||||
#define arch_kmap_local_unmap_idx kmap_local_unmap_idx
|
||||
|
||||
#endif
|
||||
|
||||
extern pte_t *pkmap_page_table;
|
||||
|
@ -68,6 +75,10 @@ static inline void flush_cache_kmaps(void)
|
|||
flush_cache_all();
|
||||
}
|
||||
|
||||
#define arch_kmap_local_post_unmap(vaddr) \
|
||||
local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
|
||||
|
||||
void kmap_init(void);
|
||||
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
#endif
|
||||
|
|
|
@ -12,8 +12,6 @@
|
|||
#include <linux/highmem.h>
|
||||
#include <asm/tlbflush.h>
|
||||
|
||||
static pte_t *kmap_pte;
|
||||
|
||||
#if DCACHE_WAY_SIZE > PAGE_SIZE
|
||||
unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
|
||||
wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS];
|
||||
|
@ -25,67 +23,37 @@ static void __init kmap_waitqueues_init(void)
|
|||
for (i = 0; i < ARRAY_SIZE(pkmap_map_wait_arr); ++i)
|
||||
init_waitqueue_head(pkmap_map_wait_arr + i);
|
||||
}
|
||||
#else
|
||||
static inline void kmap_waitqueues_init(void)
|
||||
{
|
||||
}
|
||||
#endif
|
||||
|
||||
static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
|
||||
{
|
||||
return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS +
|
||||
color;
|
||||
int idx = (type + KM_MAX_IDX * smp_processor_id()) * DCACHE_N_COLORS;
|
||||
|
||||
/*
|
||||
* The fixmap operates top down, so the color offset needs to be
|
||||
* reversed as well.
|
||||
*/
|
||||
return idx + DCACHE_N_COLORS - 1 - color;
|
||||
}
|
||||
|
||||
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
|
||||
enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn)
|
||||
{
|
||||
enum fixed_addresses idx;
|
||||
unsigned long vaddr;
|
||||
return kmap_idx(type, DCACHE_ALIAS(pfn << PAGE_SHIFT));
|
||||
}
|
||||
|
||||
idx = kmap_idx(kmap_atomic_idx_push(),
|
||||
DCACHE_ALIAS(page_to_phys(page)));
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
BUG_ON(!pte_none(*(kmap_pte + idx)));
|
||||
enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr)
|
||||
{
|
||||
return kmap_idx(type, DCACHE_ALIAS(addr));
|
||||
}
|
||||
|
||||
#else
|
||||
static inline void kmap_waitqueues_init(void) { }
|
||||
#endif
|
||||
set_pte(kmap_pte + idx, mk_pte(page, prot));
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_atomic_high_prot);
|
||||
|
||||
void kunmap_atomic_high(void *kvaddr)
|
||||
{
|
||||
if (kvaddr >= (void *)FIXADDR_START &&
|
||||
kvaddr < (void *)FIXADDR_TOP) {
|
||||
int idx = kmap_idx(kmap_atomic_idx(),
|
||||
DCACHE_ALIAS((unsigned long)kvaddr));
|
||||
|
||||
/*
|
||||
* Force other mappings to Oops if they'll try to access this
|
||||
* pte without first remapping it. Keeping stale mappings around
|
||||
* is a bad idea also, in case the page changes cacheability
|
||||
* attributes or becomes a protected page in a hypervisor.
|
||||
*/
|
||||
pte_clear(&init_mm, kvaddr, kmap_pte + idx);
|
||||
local_flush_tlb_kernel_range((unsigned long)kvaddr,
|
||||
(unsigned long)kvaddr + PAGE_SIZE);
|
||||
|
||||
kmap_atomic_idx_pop();
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_atomic_high);
|
||||
|
||||
void __init kmap_init(void)
|
||||
{
|
||||
unsigned long kmap_vstart;
|
||||
|
||||
/* Check if this memory layout is broken because PKMAP overlaps
|
||||
* page table.
|
||||
*/
|
||||
BUILD_BUG_ON(PKMAP_BASE < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
|
||||
/* cache the first kmap pte */
|
||||
kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
|
||||
kmap_pte = virt_to_kpte(kmap_vstart);
|
||||
kmap_waitqueues_init();
|
||||
}
|
||||
|
|
|
@ -147,8 +147,8 @@ void __init mem_init(void)
|
|||
#ifdef CONFIG_HIGHMEM
|
||||
PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE,
|
||||
(LAST_PKMAP*PAGE_SIZE) >> 10,
|
||||
FIXADDR_START, FIXADDR_TOP,
|
||||
(FIXADDR_TOP - FIXADDR_START) >> 10,
|
||||
FIXADDR_START, FIXADDR_END,
|
||||
(FIXADDR_END - FIXADDR_START) >> 10,
|
||||
#endif
|
||||
PAGE_OFFSET, PAGE_OFFSET +
|
||||
(max_low_pfn - min_low_pfn) * PAGE_SIZE,
|
||||
|
|
|
@ -52,7 +52,8 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
|
|||
|
||||
static void __init fixedrange_init(void)
|
||||
{
|
||||
init_pmd(__fix_to_virt(0), __end_of_fixed_addresses);
|
||||
BUILD_BUG_ON(FIXADDR_START < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
|
||||
init_pmd(FIXADDR_START, __end_of_fixed_addresses);
|
||||
}
|
||||
#endif
|
||||
|
||||
|
|
fs/aio.c
|
@ -43,7 +43,6 @@
|
|||
#include <linux/mount.h>
|
||||
#include <linux/pseudo_fs.h>
|
||||
|
||||
#include <asm/kmap_types.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <linux/nospec.h>
|
||||
|
||||
|
|
|
@ -17,7 +17,6 @@
|
|||
#include <linux/wait.h>
|
||||
#include <linux/slab.h>
|
||||
#include <trace/events/btrfs.h>
|
||||
#include <asm/kmap_types.h>
|
||||
#include <asm/unaligned.h>
|
||||
#include <linux/pagemap.h>
|
||||
#include <linux/btrfs.h>
|
||||
|
|
|
@ -30,7 +30,7 @@ mandatory-y += irq.h
|
|||
mandatory-y += irq_regs.h
|
||||
mandatory-y += irq_work.h
|
||||
mandatory-y += kdebug.h
|
||||
mandatory-y += kmap_types.h
|
||||
mandatory-y += kmap_size.h
|
||||
mandatory-y += kprobes.h
|
||||
mandatory-y += linkage.h
|
||||
mandatory-y += local.h
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _ASM_GENERIC_KMAP_SIZE_H
|
||||
#define _ASM_GENERIC_KMAP_SIZE_H
|
||||
|
||||
/* For debug this provides guard pages between the maps */
|
||||
#ifdef CONFIG_DEBUG_KMAP_LOCAL
|
||||
# define KM_MAX_IDX 33
|
||||
#else
|
||||
# define KM_MAX_IDX 16
|
||||
#endif
|
||||
|
||||
#endif
|
|
@ -1,11 +0,0 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _ASM_GENERIC_KMAP_TYPES_H
|
||||
#define _ASM_GENERIC_KMAP_TYPES_H
|
||||
|
||||
#ifdef __WITH_KM_FENCE
|
||||
# define KM_TYPE_NR 41
|
||||
#else
|
||||
# define KM_TYPE_NR 20
|
||||
#endif
|
||||
|
||||
#endif
|
|
@ -0,0 +1,232 @@
|
|||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _LINUX_HIGHMEM_INTERNAL_H
|
||||
#define _LINUX_HIGHMEM_INTERNAL_H
|
||||
|
||||
/*
|
||||
* Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft.
|
||||
*/
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
|
||||
void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
|
||||
void kunmap_local_indexed(void *vaddr);
|
||||
void kmap_local_fork(struct task_struct *tsk);
|
||||
void __kmap_local_sched_out(void);
|
||||
void __kmap_local_sched_in(void);
|
||||
static inline void kmap_assert_nomap(void)
|
||||
{
|
||||
DEBUG_LOCKS_WARN_ON(current->kmap_ctrl.idx);
|
||||
}
|
||||
#else
|
||||
static inline void kmap_local_fork(struct task_struct *tsk) { }
|
||||
static inline void kmap_assert_nomap(void) { }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
#include <asm/highmem.h>
|
||||
|
||||
#ifndef ARCH_HAS_KMAP_FLUSH_TLB
|
||||
static inline void kmap_flush_tlb(unsigned long addr) { }
|
||||
#endif
|
||||
|
||||
#ifndef kmap_prot
|
||||
#define kmap_prot PAGE_KERNEL
|
||||
#endif
|
||||
|
||||
void *kmap_high(struct page *page);
|
||||
void kunmap_high(struct page *page);
|
||||
void __kmap_flush_unused(void);
|
||||
struct page *__kmap_to_page(void *addr);
|
||||
|
||||
static inline void *kmap(struct page *page)
|
||||
{
|
||||
void *addr;
|
||||
|
||||
might_sleep();
|
||||
if (!PageHighMem(page))
|
||||
addr = page_address(page);
|
||||
else
|
||||
addr = kmap_high(page);
|
||||
kmap_flush_tlb((unsigned long)addr);
|
||||
return addr;
|
||||
}
|
||||
|
||||
static inline void kunmap(struct page *page)
|
||||
{
|
||||
might_sleep();
|
||||
if (!PageHighMem(page))
|
||||
return;
|
||||
kunmap_high(page);
|
||||
}
|
||||
|
||||
static inline struct page *kmap_to_page(void *addr)
|
||||
{
|
||||
return __kmap_to_page(addr);
|
||||
}
|
||||
|
||||
static inline void kmap_flush_unused(void)
|
||||
{
|
||||
__kmap_flush_unused();
|
||||
}
|
||||
|
||||
static inline void *kmap_local_page(struct page *page)
|
||||
{
|
||||
return __kmap_local_page_prot(page, kmap_prot);
|
||||
}
|
||||
|
||||
static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
return __kmap_local_page_prot(page, prot);
|
||||
}
|
||||
|
||||
static inline void *kmap_local_pfn(unsigned long pfn)
|
||||
{
|
||||
return __kmap_local_pfn_prot(pfn, kmap_prot);
|
||||
}
|
||||
|
||||
static inline void __kunmap_local(void *vaddr)
|
||||
{
|
||||
kunmap_local_indexed(vaddr);
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
return __kmap_local_page_prot(page, prot);
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic(struct page *page)
|
||||
{
|
||||
return kmap_atomic_prot(page, kmap_prot);
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
return __kmap_local_pfn_prot(pfn, kmap_prot);
|
||||
}
|
||||
|
||||
static inline void __kunmap_atomic(void *addr)
|
||||
{
|
||||
kunmap_local_indexed(addr);
|
||||
pagefault_enable();
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
unsigned int __nr_free_highpages(void);
|
||||
extern atomic_long_t _totalhigh_pages;
|
||||
|
||||
static inline unsigned int nr_free_highpages(void)
|
||||
{
|
||||
return __nr_free_highpages();
|
||||
}
|
||||
|
||||
static inline unsigned long totalhigh_pages(void)
|
||||
{
|
||||
return (unsigned long)atomic_long_read(&_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_inc(void)
|
||||
{
|
||||
atomic_long_inc(&_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_add(long count)
|
||||
{
|
||||
atomic_long_add(count, &_totalhigh_pages);
|
||||
}
|
||||
|
||||
#else /* CONFIG_HIGHMEM */
|
||||
|
||||
static inline struct page *kmap_to_page(void *addr)
|
||||
{
|
||||
return virt_to_page(addr);
|
||||
}
|
||||
|
||||
static inline void *kmap(struct page *page)
|
||||
{
|
||||
might_sleep();
|
||||
return page_address(page);
|
||||
}
|
||||
|
||||
static inline void kunmap_high(struct page *page) { }
|
||||
static inline void kmap_flush_unused(void) { }
|
||||
|
||||
static inline void kunmap(struct page *page)
|
||||
{
|
||||
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
|
||||
kunmap_flush_on_unmap(page_address(page));
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void *kmap_local_page(struct page *page)
|
||||
{
|
||||
return page_address(page);
|
||||
}
|
||||
|
||||
static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
return kmap_local_page(page);
|
||||
}
|
||||
|
||||
static inline void *kmap_local_pfn(unsigned long pfn)
|
||||
{
|
||||
return kmap_local_page(pfn_to_page(pfn));
|
||||
}
|
||||
|
||||
static inline void __kunmap_local(void *addr)
|
||||
{
|
||||
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
|
||||
kunmap_flush_on_unmap(addr);
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic(struct page *page)
|
||||
{
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
return page_address(page);
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
return kmap_atomic(page);
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic_pfn(unsigned long pfn)
|
||||
{
|
||||
return kmap_atomic(pfn_to_page(pfn));
|
||||
}
|
||||
|
||||
static inline void __kunmap_atomic(void *addr)
|
||||
{
|
||||
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
|
||||
kunmap_flush_on_unmap(addr);
|
||||
#endif
|
||||
pagefault_enable();
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
static inline unsigned int nr_free_highpages(void) { return 0; }
|
||||
static inline unsigned long totalhigh_pages(void) { return 0UL; }
|
||||
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
|
||||
/*
|
||||
* Prevent people trying to call kunmap_atomic() as if it were kunmap()
|
||||
* kunmap_atomic() should get the return value of kmap_atomic, not the page.
|
||||
*/
|
||||
#define kunmap_atomic(__addr) \
|
||||
do { \
|
||||
BUILD_BUG_ON(__same_type((__addr), struct page *)); \
|
||||
__kunmap_atomic(__addr); \
|
||||
} while (0)
|
||||
|
||||
#define kunmap_local(__addr) \
|
||||
do { \
|
||||
BUILD_BUG_ON(__same_type((__addr), struct page *)); \
|
||||
__kunmap_local(__addr); \
|
||||
} while (0)
|
||||
|
||||
#endif
|
|
@ -11,6 +11,119 @@
|
|||
|
||||
#include <asm/cacheflush.h>
|
||||
|
||||
#include "highmem-internal.h"
|
||||
|
||||
/**
|
||||
* kmap - Map a page for long term usage
|
||||
* @page: Pointer to the page to be mapped
|
||||
*
|
||||
* Returns: The virtual address of the mapping
|
||||
*
|
||||
* Can only be invoked from preemptible task context because on 32bit
|
||||
* systems with CONFIG_HIGHMEM enabled this function might sleep.
|
||||
*
|
||||
* For systems with CONFIG_HIGHMEM=n and for pages in the low memory area
|
||||
* this returns the virtual address of the direct kernel mapping.
|
||||
*
|
||||
* The returned virtual address is globally visible and valid up to the
|
||||
* point where it is unmapped via kunmap(). The pointer can be handed to
|
||||
* other contexts.
|
||||
*
|
||||
* For highmem pages on 32bit systems this can be slow as the mapping space
|
||||
* is limited and protected by a global lock. In case that there is no
|
||||
* mapping slot available the function blocks until a slot is released via
|
||||
* kunmap().
|
||||
*/
|
||||
static inline void *kmap(struct page *page);
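/*
 * A brief usage sketch for the long term interface documented above. The
 * helper name and arguments are illustrative only and not part of this
 * patch; requires <linux/highmem.h>:
 */
static void copy_out_of_page(struct page *page, void *dst, size_t len)
{
        void *vaddr = kmap(page);       /* might sleep; valid until kunmap() */

        memcpy(dst, vaddr, len);
        kunmap(page);                   /* takes the page, not the address */
}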
|
||||
|
||||
/**
|
||||
* kunmap - Unmap the virtual address mapped by kmap()
|
||||
* @addr: Virtual address to be unmapped
|
||||
*
|
||||
* Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
|
||||
* pages in the low memory area.
|
||||
*/
|
||||
static inline void kunmap(struct page *page);
|
||||
|
||||
/**
|
||||
* kmap_to_page - Get the page for a kmap'ed address
|
||||
* @addr: The address to look up
|
||||
*
|
||||
* Returns: The page which is mapped to @addr.
|
||||
*/
|
||||
static inline struct page *kmap_to_page(void *addr);
|
||||
|
||||
/**
|
||||
* kmap_flush_unused - Flush all unused kmap mappings in order to
|
||||
* remove stray mappings
|
||||
*/
|
||||
static inline void kmap_flush_unused(void);
|
||||
|
||||
/**
|
||||
* kmap_local_page - Map a page for temporary usage
|
||||
* @page: Pointer to the page to be mapped
|
||||
*
|
||||
* Returns: The virtual address of the mapping
|
||||
*
|
||||
* Can be invoked from any context.
|
||||
*
|
||||
* Requires careful handling when nesting multiple mappings because the map
|
||||
* management is stack based. The unmap has to be in the reverse order of
|
||||
* the map operation:
|
||||
*
|
||||
* addr1 = kmap_local_page(page1);
|
||||
* addr2 = kmap_local_page(page2);
|
||||
* ...
|
||||
* kunmap_local(addr2);
|
||||
* kunmap_local(addr1);
|
||||
*
|
||||
* Unmapping addr1 before addr2 is invalid and causes malfunction.
|
||||
*
|
||||
* Contrary to kmap() mappings the mapping is only valid in the context of
|
||||
* the caller and cannot be handed to other contexts.
|
||||
*
|
||||
* On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the
|
||||
* virtual address of the direct mapping. Only real highmem pages are
|
||||
* temporarily mapped.
|
||||
*
|
||||
* While it is significantly faster than kmap() for the highmem case it
|
||||
* comes with restrictions about the pointer validity. Only use when really
|
||||
* necessary.
|
||||
*
|
||||
* On HIGHMEM enabled systems mapping a highmem page has the side effect of
|
||||
* disabling migration in order to keep the virtual address stable across
|
||||
* preemption. No caller of kmap_local_page() can rely on this side effect.
|
||||
*/
|
||||
static inline void *kmap_local_page(struct page *page);
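/*
 * To make the stack ordering requirement above concrete, a sketch of
 * copying between two (possibly highmem) pages. The function name is made
 * up for illustration and is not part of this patch:
 */
static void copy_page_local_example(struct page *dst_page, struct page *src_page)
{
        void *dst = kmap_local_page(dst_page);
        void *src = kmap_local_page(src_page);

        memcpy(dst, src, PAGE_SIZE);
        /* Unmap strictly in the reverse order of the map operations */
        kunmap_local(src);
        kunmap_local(dst);
}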
|
||||
|
||||
/**
|
||||
* kmap_atomic - Atomically map a page for temporary usage - Deprecated!
|
||||
* @page: Pointer to the page to be mapped
|
||||
*
|
||||
* Returns: The virtual address of the mapping
|
||||
*
|
||||
* Effectively a wrapper around kmap_local_page() which disables pagefaults
|
||||
* and preemption.
|
||||
*
|
||||
* Do not use in new code. Use kmap_local_page() instead.
|
||||
*/
|
||||
static inline void *kmap_atomic(struct page *page);
|
||||
|
||||
/**
|
||||
* kunmap_atomic - Unmap the virtual address mapped by kmap_atomic()
|
||||
* @addr: Virtual address to be unmapped
|
||||
*
|
||||
* Counterpart to kmap_atomic().
|
||||
*
|
||||
* Effectively a wrapper around kunmap_local() which additionally undoes
|
||||
* the side effects of kmap_atomic(), i.e. reenabling pagefaults and
|
||||
* preemption.
|
||||
*/
|
||||
|
||||
/* Highmem related interfaces for management code */
|
||||
static inline unsigned int nr_free_highpages(void);
|
||||
static inline unsigned long totalhigh_pages(void);
|
||||
|
||||
#ifndef ARCH_HAS_FLUSH_ANON_PAGE
|
||||
static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
|
||||
{
|
||||
|
@ -29,199 +142,6 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
|
|||
}
|
||||
#endif
|
||||
|
||||
#include <asm/kmap_types.h>
|
||||
|
||||
#ifdef CONFIG_HIGHMEM
|
||||
extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
|
||||
extern void kunmap_atomic_high(void *kvaddr);
|
||||
#include <asm/highmem.h>
|
||||
|
||||
#ifndef ARCH_HAS_KMAP_FLUSH_TLB
|
||||
static inline void kmap_flush_tlb(unsigned long addr) { }
|
||||
#endif
|
||||
|
||||
#ifndef kmap_prot
|
||||
#define kmap_prot PAGE_KERNEL
|
||||
#endif
|
||||
|
||||
void *kmap_high(struct page *page);
|
||||
static inline void *kmap(struct page *page)
|
||||
{
|
||||
void *addr;
|
||||
|
||||
might_sleep();
|
||||
if (!PageHighMem(page))
|
||||
addr = page_address(page);
|
||||
else
|
||||
addr = kmap_high(page);
|
||||
kmap_flush_tlb((unsigned long)addr);
|
||||
return addr;
|
||||
}
|
||||
|
||||
void kunmap_high(struct page *page);
|
||||
|
||||
static inline void kunmap(struct page *page)
|
||||
{
|
||||
might_sleep();
|
||||
if (!PageHighMem(page))
|
||||
return;
|
||||
kunmap_high(page);
|
||||
}
|
||||
|
||||
/*
|
||||
* kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
|
||||
* no global lock is needed and because the kmap code must perform a global TLB
|
||||
* invalidation when the kmap pool wraps.
|
||||
*
|
||||
* However when holding an atomic kmap it is not legal to sleep, so atomic
|
||||
* kmaps are appropriate for short, tight code paths only.
|
||||
*
|
||||
* The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
|
||||
* gives a more generic (and caching) interface. But kmap_atomic can
|
||||
* be used in IRQ contexts, so in some (very limited) cases we need
|
||||
* it.
|
||||
*/
|
||||
static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
if (!PageHighMem(page))
|
||||
return page_address(page);
|
||||
return kmap_atomic_high_prot(page, prot);
|
||||
}
|
||||
#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
|
||||
|
||||
/* declarations for linux/mm/highmem.c */
|
||||
unsigned int nr_free_highpages(void);
|
||||
extern atomic_long_t _totalhigh_pages;
|
||||
static inline unsigned long totalhigh_pages(void)
|
||||
{
|
||||
return (unsigned long)atomic_long_read(&_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_inc(void)
|
||||
{
|
||||
atomic_long_inc(&_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_dec(void)
|
||||
{
|
||||
atomic_long_dec(&_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_add(long count)
|
||||
{
|
||||
atomic_long_add(count, &_totalhigh_pages);
|
||||
}
|
||||
|
||||
static inline void totalhigh_pages_set(long val)
|
||||
{
|
||||
atomic_long_set(&_totalhigh_pages, val);
|
||||
}
|
||||
|
||||
void kmap_flush_unused(void);
|
||||
|
||||
struct page *kmap_to_page(void *addr);
|
||||
|
||||
#else /* CONFIG_HIGHMEM */
|
||||
|
||||
static inline unsigned int nr_free_highpages(void) { return 0; }
|
||||
|
||||
static inline struct page *kmap_to_page(void *addr)
|
||||
{
|
||||
return virt_to_page(addr);
|
||||
}
|
||||
|
||||
static inline unsigned long totalhigh_pages(void) { return 0UL; }
|
||||
|
||||
static inline void *kmap(struct page *page)
|
||||
{
|
||||
might_sleep();
|
||||
return page_address(page);
|
||||
}
|
||||
|
||||
static inline void kunmap_high(struct page *page)
|
||||
{
|
||||
}
|
||||
|
||||
static inline void kunmap(struct page *page)
|
||||
{
|
||||
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
|
||||
kunmap_flush_on_unmap(page_address(page));
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void *kmap_atomic(struct page *page)
|
||||
{
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
return page_address(page);
|
||||
}
|
||||
#define kmap_atomic_prot(page, prot) kmap_atomic(page)
|
||||
|
||||
static inline void kunmap_atomic_high(void *addr)
|
||||
{
|
||||
/*
|
||||
* Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
|
||||
* handles re-enabling faults + preemption
|
||||
*/
|
||||
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
|
||||
kunmap_flush_on_unmap(addr);
|
||||
#endif
|
||||
}
|
||||
|
||||
#define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn))
|
||||
|
||||
#define kmap_flush_unused() do {} while(0)
|
||||
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
|
||||
#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
|
||||
|
||||
DECLARE_PER_CPU(int, __kmap_atomic_idx);
|
||||
|
||||
static inline int kmap_atomic_idx_push(void)
|
||||
{
|
||||
int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
|
||||
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
WARN_ON_ONCE(in_irq() && !irqs_disabled());
|
||||
BUG_ON(idx >= KM_TYPE_NR);
|
||||
#endif
|
||||
return idx;
|
||||
}
|
||||
|
||||
static inline int kmap_atomic_idx(void)
|
||||
{
|
||||
return __this_cpu_read(__kmap_atomic_idx) - 1;
|
||||
}
|
||||
|
||||
static inline void kmap_atomic_idx_pop(void)
|
||||
{
|
||||
#ifdef CONFIG_DEBUG_HIGHMEM
|
||||
int idx = __this_cpu_dec_return(__kmap_atomic_idx);
|
||||
|
||||
BUG_ON(idx < 0);
|
||||
#else
|
||||
__this_cpu_dec(__kmap_atomic_idx);
|
||||
#endif
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Prevent people trying to call kunmap_atomic() as if it were kunmap()
|
||||
* kunmap_atomic() should get the return value of kmap_atomic, not the page.
|
||||
*/
|
||||
#define kunmap_atomic(addr) \
|
||||
do { \
|
||||
BUILD_BUG_ON(__same_type((addr), struct page *)); \
|
||||
kunmap_atomic_high(addr); \
|
||||
pagefault_enable(); \
|
||||
preempt_enable(); \
|
||||
} while (0)
|
||||
|
||||
|
||||
/* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
|
||||
#ifndef clear_user_highpage
|
||||
static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
|
||||
|
|
|
@ -69,13 +69,32 @@ io_mapping_map_atomic_wc(struct io_mapping *mapping,
|
|||
|
||||
BUG_ON(offset >= mapping->size);
|
||||
phys_addr = mapping->base + offset;
|
||||
return iomap_atomic_prot_pfn(PHYS_PFN(phys_addr), mapping->prot);
|
||||
preempt_disable();
|
||||
pagefault_disable();
|
||||
return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
|
||||
}
|
||||
|
||||
static inline void
|
||||
io_mapping_unmap_atomic(void __iomem *vaddr)
|
||||
{
|
||||
iounmap_atomic(vaddr);
|
||||
kunmap_local_indexed((void __force *)vaddr);
|
||||
pagefault_enable();
|
||||
preempt_enable();
|
||||
}
|
||||
|
||||
static inline void __iomem *
|
||||
io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset)
|
||||
{
|
||||
resource_size_t phys_addr;
|
||||
|
||||
BUG_ON(offset >= mapping->size);
|
||||
phys_addr = mapping->base + offset;
|
||||
return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
|
||||
}
|
||||
|
||||
static inline void io_mapping_unmap_local(void __iomem *vaddr)
|
||||
{
|
||||
kunmap_local_indexed((void __force *)vaddr);
|
||||
}
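/*
 * The local variant pairs like the atomic one but, unlike it, does not
 * leave pagefaults and preemption disabled. A hedged usage sketch; the
 * helper below is hypothetical and 'mapping' is assumed to have been
 * created with io_mapping_create_wc():
 */
static void write_one_reg(struct io_mapping *mapping, unsigned long offset, u32 value)
{
        void __iomem *vaddr;

        vaddr = io_mapping_map_local_wc(mapping, offset);
        writel(value, vaddr);           /* regular MMIO accessors work here */
        io_mapping_unmap_local(vaddr);
}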
|
||||
|
||||
static inline void __iomem *
|
||||
|
@ -97,7 +116,7 @@ io_mapping_unmap(void __iomem *vaddr)
|
|||
iounmap(vaddr);
|
||||
}
|
||||
|
||||
#else
|
||||
#else /* HAVE_ATOMIC_IOMAP */
|
||||
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
|
@ -162,7 +181,18 @@ io_mapping_unmap_atomic(void __iomem *vaddr)
|
|||
preempt_enable();
|
||||
}
|
||||
|
||||
#endif /* HAVE_ATOMIC_IOMAP */
|
||||
static inline void __iomem *
|
||||
io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset)
|
||||
{
|
||||
return io_mapping_map_wc(mapping, offset, PAGE_SIZE);
|
||||
}
|
||||
|
||||
static inline void io_mapping_unmap_local(void __iomem *vaddr)
|
||||
{
|
||||
io_mapping_unmap(vaddr);
|
||||
}
|
||||
|
||||
#endif /* !HAVE_ATOMIC_IOMAP */
|
||||
|
||||
static inline struct io_mapping *
|
||||
io_mapping_create_wc(resource_size_t base,
|
||||
|
|
|
@ -35,6 +35,7 @@
|
|||
#include <linux/rseq.h>
|
||||
#include <linux/seqlock.h>
|
||||
#include <linux/kcsan.h>
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
/* task_struct member predeclarations (sorted alphabetically): */
|
||||
struct audit_context;
|
||||
|
@ -638,6 +639,13 @@ struct wake_q_node {
|
|||
struct wake_q_node *next;
|
||||
};
|
||||
|
||||
struct kmap_ctrl {
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
int idx;
|
||||
pte_t pteval[KM_MAX_IDX];
|
||||
#endif
|
||||
};
|
||||
|
||||
struct task_struct {
|
||||
#ifdef CONFIG_THREAD_INFO_IN_TASK
|
||||
/*
|
||||
|
@ -1318,6 +1326,7 @@ struct task_struct {
|
|||
unsigned int sequential_io;
|
||||
unsigned int sequential_io_avg;
|
||||
#endif
|
||||
struct kmap_ctrl kmap_ctrl;
|
||||
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
|
||||
unsigned long task_state_change;
|
||||
#endif
|
||||
|
|
|
@ -2,6 +2,7 @@
|
|||
|
||||
#include <linux/context_tracking.h>
|
||||
#include <linux/entry-common.h>
|
||||
#include <linux/highmem.h>
|
||||
#include <linux/livepatch.h>
|
||||
#include <linux/audit.h>
|
||||
|
||||
|
@ -203,6 +204,7 @@ static void exit_to_user_mode_prepare(struct pt_regs *regs)
|
|||
|
||||
/* Ensure that the address limit is intact and no locks are held */
|
||||
addr_limit_user_check();
|
||||
kmap_assert_nomap();
|
||||
lockdep_assert_irqs_disabled();
|
||||
lockdep_sys_exit();
|
||||
}
|
||||
|
|
|
@ -931,6 +931,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
|
|||
account_kernel_stack(tsk, 1);
|
||||
|
||||
kcov_task_init(tsk);
|
||||
kmap_local_fork(tsk);
|
||||
|
||||
#ifdef CONFIG_FAULT_INJECTION
|
||||
tsk->fail_nth = 0;
|
||||
|
|
|
@ -4091,6 +4091,22 @@ static inline void finish_lock_switch(struct rq *rq)
|
|||
# define finish_arch_post_lock_switch() do { } while (0)
|
||||
#endif
|
||||
|
||||
static inline void kmap_local_sched_out(void)
|
||||
{
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
if (unlikely(current->kmap_ctrl.idx))
|
||||
__kmap_local_sched_out();
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void kmap_local_sched_in(void)
|
||||
{
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
if (unlikely(current->kmap_ctrl.idx))
|
||||
__kmap_local_sched_in();
|
||||
#endif
|
||||
}
|
||||
|
||||
/**
|
||||
* prepare_task_switch - prepare to switch tasks
|
||||
* @rq: the runqueue preparing to switch
|
||||
|
@ -4113,6 +4129,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
|
|||
perf_event_task_sched_out(prev, next);
|
||||
rseq_preempt(prev);
|
||||
fire_sched_out_preempt_notifiers(prev, next);
|
||||
kmap_local_sched_out();
|
||||
prepare_task(next);
|
||||
prepare_arch_switch(next);
|
||||
}
|
||||
|
@ -4179,6 +4196,14 @@ static struct rq *finish_task_switch(struct task_struct *prev)
|
|||
finish_lock_switch(rq);
|
||||
finish_arch_post_lock_switch();
|
||||
kcov_finish_switch(current);
|
||||
/*
|
||||
* kmap_local_sched_out() is invoked with rq::lock held and
|
||||
* interrupts disabled. There is no requirement for that, but the
|
||||
* sched out code does not have an interrupt enabled section.
|
||||
* Restoring the maps on sched in does not require interrupts being
|
||||
* disabled either.
|
||||
*/
|
||||
kmap_local_sched_in();
|
||||
|
||||
fire_sched_in_preempt_notifiers(current);
|
||||
/*
|
||||
|
|
|
@ -849,9 +849,31 @@ config DEBUG_PER_CPU_MAPS
|
|||
|
||||
Say N if unsure.
|
||||
|
||||
config DEBUG_KMAP_LOCAL
|
||||
bool "Debug kmap_local temporary mappings"
|
||||
depends on DEBUG_KERNEL && KMAP_LOCAL
|
||||
help
|
||||
This option enables additional error checking for the kmap_local
|
||||
infrastructure. Disable for production use.
|
||||
|
||||
config ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
|
||||
bool
|
||||
|
||||
config DEBUG_KMAP_LOCAL_FORCE_MAP
|
||||
bool "Enforce kmap_local temporary mappings"
|
||||
depends on DEBUG_KERNEL && ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
|
||||
select KMAP_LOCAL
|
||||
select DEBUG_KMAP_LOCAL
|
||||
help
|
||||
This option enforces temporary mappings through the kmap_local
|
||||
mechanism for non-highmem pages and on non-highmem systems.
|
||||
Disable this for production systems!
|
||||
|
||||
config DEBUG_HIGHMEM
|
||||
bool "Highmem debugging"
|
||||
depends on DEBUG_KERNEL && HIGHMEM
|
||||
select DEBUG_KMAP_LOCAL_FORCE_MAP if ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
|
||||
select DEBUG_KMAP_LOCAL
|
||||
help
|
||||
This option enables additional error checking for high memory
|
||||
systems. Disable for production systems.
|
||||
|
|
|
@ -859,4 +859,7 @@ config ARCH_HAS_HUGEPD
|
|||
config MAPPING_DIRTY_HELPERS
|
||||
bool
|
||||
|
||||
config KMAP_LOCAL
|
||||
bool
|
||||
|
||||
endmenu
|
||||
|
|
mm/highmem.c
|
@ -31,10 +31,6 @@
|
|||
#include <asm/tlbflush.h>
|
||||
#include <linux/vmalloc.h>
|
||||
|
||||
#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
|
||||
DEFINE_PER_CPU(int, __kmap_atomic_idx);
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Virtual_count is not a pure "count".
|
||||
* 0 means that it is not mapped, and has not been mapped
|
||||
|
@ -108,9 +104,7 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
|
|||
atomic_long_t _totalhigh_pages __read_mostly;
|
||||
EXPORT_SYMBOL(_totalhigh_pages);
|
||||
|
||||
EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx);
|
||||
|
||||
unsigned int nr_free_highpages (void)
|
||||
unsigned int __nr_free_highpages (void)
|
||||
{
|
||||
struct zone *zone;
|
||||
unsigned int pages = 0;
|
||||
|
@ -147,7 +141,7 @@ pte_t * pkmap_page_table;
|
|||
do { spin_unlock(&kmap_lock); (void)(flags); } while (0)
|
||||
#endif
|
||||
|
||||
struct page *kmap_to_page(void *vaddr)
|
||||
struct page *__kmap_to_page(void *vaddr)
|
||||
{
|
||||
unsigned long addr = (unsigned long)vaddr;
|
||||
|
||||
|
@ -158,7 +152,7 @@ struct page *kmap_to_page(void *vaddr)
|
|||
|
||||
return virt_to_page(addr);
|
||||
}
|
||||
EXPORT_SYMBOL(kmap_to_page);
|
||||
EXPORT_SYMBOL(__kmap_to_page);
|
||||
|
||||
static void flush_all_zero_pkmaps(void)
|
||||
{
|
||||
|
@ -200,10 +194,7 @@ static void flush_all_zero_pkmaps(void)
|
|||
flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
|
||||
}
|
||||
|
||||
/**
|
||||
* kmap_flush_unused - flush all unused kmap mappings in order to remove stray mappings
|
||||
*/
|
||||
void kmap_flush_unused(void)
|
||||
void __kmap_flush_unused(void)
|
||||
{
|
||||
lock_kmap();
|
||||
flush_all_zero_pkmaps();
|
||||
|
@ -367,9 +358,260 @@ void kunmap_high(struct page *page)
|
|||
if (need_wakeup)
|
||||
wake_up(pkmap_map_wait);
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL(kunmap_high);
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
#endif /* CONFIG_HIGHMEM */
|
||||
|
||||
#ifdef CONFIG_KMAP_LOCAL
|
||||
|
||||
#include <asm/kmap_size.h>
|
||||
|
||||
/*
|
||||
* With DEBUG_KMAP_LOCAL the stack depth is doubled and every second
|
||||
* slot is unused, which acts as a guard page
|
||||
*/
|
||||
#ifdef CONFIG_DEBUG_KMAP_LOCAL
|
||||
# define KM_INCR 2
|
||||
#else
|
||||
# define KM_INCR 1
|
||||
#endif
|
||||
|
||||
static inline int kmap_local_idx_push(void)
|
||||
{
|
||||
WARN_ON_ONCE(in_irq() && !irqs_disabled());
|
||||
current->kmap_ctrl.idx += KM_INCR;
|
||||
BUG_ON(current->kmap_ctrl.idx >= KM_MAX_IDX);
|
||||
return current->kmap_ctrl.idx - 1;
|
||||
}
|
||||
|
||||
static inline int kmap_local_idx(void)
|
||||
{
|
||||
return current->kmap_ctrl.idx - 1;
|
||||
}
|
||||
|
||||
static inline void kmap_local_idx_pop(void)
|
||||
{
|
||||
current->kmap_ctrl.idx -= KM_INCR;
|
||||
BUG_ON(current->kmap_ctrl.idx < 0);
|
||||
}
|
||||
|
||||
#ifndef arch_kmap_local_post_map
|
||||
# define arch_kmap_local_post_map(vaddr, pteval) do { } while (0)
|
||||
#endif
|
||||
|
||||
#ifndef arch_kmap_local_pre_unmap
|
||||
# define arch_kmap_local_pre_unmap(vaddr) do { } while (0)
|
||||
#endif
|
||||
|
||||
#ifndef arch_kmap_local_post_unmap
|
||||
# define arch_kmap_local_post_unmap(vaddr) do { } while (0)
|
||||
#endif
|
||||
|
||||
#ifndef arch_kmap_local_map_idx
|
||||
#define arch_kmap_local_map_idx(idx, pfn) kmap_local_calc_idx(idx)
|
||||
#endif
|
||||
|
||||
#ifndef arch_kmap_local_unmap_idx
|
||||
#define arch_kmap_local_unmap_idx(idx, vaddr) kmap_local_calc_idx(idx)
|
||||
#endif
|
||||
|
||||
#ifndef arch_kmap_local_high_get
|
||||
static inline void *arch_kmap_local_high_get(struct page *page)
|
||||
{
|
||||
return NULL;
|
||||
}
|
||||
#endif
|
||||
|
||||
/* Unmap a local mapping which was obtained by kmap_high_get() */
|
||||
static inline bool kmap_high_unmap_local(unsigned long vaddr)
|
||||
{
|
||||
#ifdef ARCH_NEEDS_KMAP_HIGH_GET
|
||||
if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) {
|
||||
kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
|
||||
return true;
|
||||
}
|
||||
#endif
|
||||
return false;
|
||||
}
|
||||
|
||||
static inline int kmap_local_calc_idx(int idx)
|
||||
{
|
||||
return idx + KM_MAX_IDX * smp_processor_id();
|
||||
}
|
||||
|
||||
static pte_t *__kmap_pte;
|
||||
|
||||
static pte_t *kmap_get_pte(void)
|
||||
{
|
||||
if (!__kmap_pte)
|
||||
__kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
|
||||
return __kmap_pte;
|
||||
}
|
||||
|
||||
void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
|
||||
{
|
||||
pte_t pteval, *kmap_pte = kmap_get_pte();
|
||||
unsigned long vaddr;
|
||||
int idx;
|
||||
|
||||
/*
|
||||
* Disable migration so resulting virtual address is stable
|
||||
* across preemption.
|
||||
*/
|
||||
migrate_disable();
|
||||
preempt_disable();
|
||||
idx = arch_kmap_local_map_idx(kmap_local_idx_push(), pfn);
|
||||
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
BUG_ON(!pte_none(*(kmap_pte - idx)));
|
||||
pteval = pfn_pte(pfn, prot);
|
||||
set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval);
|
||||
arch_kmap_local_post_map(vaddr, pteval);
|
||||
current->kmap_ctrl.pteval[kmap_local_idx()] = pteval;
|
||||
preempt_enable();
|
||||
|
||||
return (void *)vaddr;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot);
|
||||
|
||||
void *__kmap_local_page_prot(struct page *page, pgprot_t prot)
|
||||
{
|
||||
void *kmap;
|
||||
|
||||
/*
|
||||
* To broaden the usage of the actual kmap_local() machinery always map
|
||||
* pages when debugging is enabled and the architecture has no problems
|
||||
* with alias mappings.
|
||||
*/
|
||||
if (!IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) && !PageHighMem(page))
|
||||
return page_address(page);
|
||||
|
||||
/* Try kmap_high_get() if architecture has it enabled */
|
||||
kmap = arch_kmap_local_high_get(page);
|
||||
if (kmap)
|
||||
return kmap;
|
||||
|
||||
return __kmap_local_pfn_prot(page_to_pfn(page), prot);
|
||||
}
|
||||
EXPORT_SYMBOL(__kmap_local_page_prot);
|
||||
|
||||
void kunmap_local_indexed(void *vaddr)
|
||||
{
|
||||
unsigned long addr = (unsigned long) vaddr & PAGE_MASK;
|
||||
pte_t *kmap_pte = kmap_get_pte();
|
||||
int idx;
|
||||
|
||||
if (addr < __fix_to_virt(FIX_KMAP_END) ||
|
||||
addr > __fix_to_virt(FIX_KMAP_BEGIN)) {
|
||||
if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP)) {
|
||||
/* This _should_ never happen! See above. */
|
||||
WARN_ON_ONCE(1);
|
||||
return;
|
||||
}
|
||||
/*
|
||||
* Handle mappings which were obtained by kmap_high_get()
|
||||
* first as the virtual address of such mappings is below
|
||||
* PAGE_OFFSET. Warn for all other addresses which are in
|
||||
* the user space part of the virtual address space.
|
||||
*/
|
||||
if (!kmap_high_unmap_local(addr))
|
||||
WARN_ON_ONCE(addr < PAGE_OFFSET);
|
||||
return;
|
||||
}
|
||||
|
||||
preempt_disable();
|
||||
idx = arch_kmap_local_unmap_idx(kmap_local_idx(), addr);
|
||||
WARN_ON_ONCE(addr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
|
||||
|
||||
arch_kmap_local_pre_unmap(addr);
|
||||
pte_clear(&init_mm, addr, kmap_pte - idx);
|
||||
arch_kmap_local_post_unmap(addr);
|
||||
current->kmap_ctrl.pteval[kmap_local_idx()] = __pte(0);
|
||||
kmap_local_idx_pop();
|
||||
preempt_enable();
|
||||
migrate_enable();
|
||||
}
|
||||
EXPORT_SYMBOL(kunmap_local_indexed);
|
||||
|
||||
/*
|
||||
* Invoked before switch_to(). This is safe even when during or after
|
||||
* clearing the maps an interrupt which needs a kmap_local happens because
|
||||
* the task::kmap_ctrl.idx is not modified by the unmapping code so a
|
||||
* nested kmap_local will use the next unused index and restore the index
|
||||
* on unmap. The already cleared kmaps of the outgoing task are irrelevant
|
||||
* because the interrupt context does not know about them. The same applies
|
||||
* when scheduling back in for an interrupt which happens before the
|
||||
* restore is complete.
|
||||
*/
|
||||
void __kmap_local_sched_out(void)
|
||||
{
|
||||
struct task_struct *tsk = current;
|
||||
pte_t *kmap_pte = kmap_get_pte();
|
||||
int i;
|
||||
|
||||
/* Clear kmaps */
|
||||
for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
|
||||
pte_t pteval = tsk->kmap_ctrl.pteval[i];
|
||||
unsigned long addr;
|
||||
int idx;
|
||||
|
||||
/* With debug all even slots are unmapped and act as guard */
|
||||
if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
|
||||
WARN_ON_ONCE(!pte_none(pteval));
|
||||
continue;
|
||||
}
|
||||
if (WARN_ON_ONCE(pte_none(pteval)))
|
||||
continue;
|
||||
|
||||
/*
|
||||
* This is a horrible hack for XTENSA to calculate the
|
||||
* coloured PTE index. Uses the PFN encoded into the pteval
|
||||
* and the map index calculation because the actual mapped
|
||||
* virtual address is not stored in task::kmap_ctrl.
|
||||
* For any sane architecture this is optimized out.
|
||||
*/
|
||||
idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
|
||||
|
||||
addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
arch_kmap_local_pre_unmap(addr);
|
||||
pte_clear(&init_mm, addr, kmap_pte - idx);
|
||||
arch_kmap_local_post_unmap(addr);
|
||||
}
|
||||
}
|
||||
|
||||
void __kmap_local_sched_in(void)
|
||||
{
|
||||
struct task_struct *tsk = current;
|
||||
pte_t *kmap_pte = kmap_get_pte();
|
||||
int i;
|
||||
|
||||
/* Restore kmaps */
|
||||
for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
|
||||
pte_t pteval = tsk->kmap_ctrl.pteval[i];
|
||||
unsigned long addr;
|
||||
int idx;
|
||||
|
||||
/* With debug all even slots are unmapped and act as guard */
|
||||
if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
|
||||
WARN_ON_ONCE(!pte_none(pteval));
|
||||
continue;
|
||||
}
|
||||
if (WARN_ON_ONCE(pte_none(pteval)))
|
||||
continue;
|
||||
|
||||
/* See comment in __kmap_local_sched_out() */
|
||||
idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
|
||||
addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
|
||||
set_pte_at(&init_mm, addr, kmap_pte - idx, pteval);
|
||||
arch_kmap_local_post_map(addr, pteval);
|
||||
}
|
||||
}
|
||||
|
||||
void kmap_local_fork(struct task_struct *tsk)
|
||||
{
|
||||
if (WARN_ON_ONCE(tsk->kmap_ctrl.idx))
|
||||
memset(&tsk->kmap_ctrl, 0, sizeof(tsk->kmap_ctrl));
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#if defined(HASHED_PAGE_VIRTUAL)