PTE_PROT_NONE means that a pte is present but does not have any
read/write attributes. However, setting the memory type via helpers
like pgprot_writecombine() is allowed, and those memory-type bits
overlap with PTE_PROT_NONE. This causes mmap/munmap issues in drivers
that change vma->vm_page_prot on PROT_NONE mappings.
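As a rough userspace model of the overlap (bit positions here are
illustrative, not the real arm64 layout; PTE_ATTRINDX stands in for the
memory-type field):

    #include <stdio.h>

    /* Illustrative bit positions only: the point is that the
     * memory-type field and PTE_PROT_NONE share bits. */
    #define PTE_PROT_NONE      (1UL << 2)
    #define PTE_ATTRINDX(t)    ((unsigned long)(t) << 2)
    #define PTE_ATTRINDX_MASK  PTE_ATTRINDX(7)

    int main(void)
    {
        unsigned long pte = PTE_PROT_NONE;  /* pte of a PROT_NONE mapping */

        /* A pgprot_writecombine()-style change of the memory type: */
        pte = (pte & ~PTE_ATTRINDX_MASK) | PTE_ATTRINDX(2);

        printf("PTE_PROT_NONE survived? %s\n",
               (pte & PTE_PROT_NONE) ? "yes" : "no (clobbered)");
        return 0;
    }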
This patch reverts the PTE_FILE/PTE_PROT_NONE shift in commit
59911ca432 ("ARM64: mm: Move PTE_PROT_NONE bit") and moves PTE_PROT_NONE
together with the other software bits.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Cc: <stable@vger.kernel.org> # 3.11+
This provides better performance compared to Device GRE and also allows
unaligned accesses. Such memory is intended to be used with standard RAM
(e.g. framebuffers) and not I/O.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch expands VA_BITS to 42 when the 64K page configuration is
enabled, allowing a 2TB kernel linear mapping. Linux still uses 2 levels
of page tables in this configuration, with the pgd now occupying a full
page.
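A quick arithmetic check of why the pgd grows to a full page in this
configuration (a sketch, assuming 8-byte descriptors):

    #include <stdio.h>

    /* With 64KB pages, each pgd entry maps PAGE_SHIFT + (PAGE_SHIFT - 3)
     * = 29 bits (512MB), so covering a 2^42 byte space needs
     * 2^(42 - 29) = 8192 entries of 8 bytes each: one 64KB page. */
    int main(void)
    {
        int va_bits = 42, page_shift = 16;
        int pgd_shift = page_shift + (page_shift - 3);        /* 29 */
        unsigned long entries = 1UL << (va_bits - pgd_shift); /* 8192 */

        printf("pgd: %lu entries, %lu bytes\n",
               entries, entries * sizeof(unsigned long));
        return 0;
    }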
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull ARM64 updates from Catalin Marinas:
"Main features:
- KVM and Xen ports to AArch64
- Hugetlbfs and transparent huge pages support for arm64
- Applied Micro X-Gene Kconfig entry and dts file
- Cache flushing improvements
For arm64 huge pages support, there are x86 changes moving part of
arch/x86/mm/hugetlbpage.c into mm/hugetlb.c to be re-used by arm64"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (66 commits)
arm64: Add initial DTS for APM X-Gene Storm SOC and APM Mustang board
arm64: Add defines for APM ARMv8 implementation
arm64: Enable APM X-Gene SOC family in the defconfig
arm64: Add Kconfig option for APM X-Gene SOC family
arm64/Makefile: provide vdso_install target
ARM64: mm: THP support.
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
ARM64: mm: HugeTLB support.
ARM64: mm: Move PTE_PROT_NONE bit.
ARM64: mm: Make PAGE_NONE pages read only and no-execute.
ARM64: mm: Restore memblock limit when map_mem finished.
mm: thp: Correct the HPAGE_PMD_ORDER check.
x86: mm: Remove general hugetlb code from x86.
mm: hugetlb: Copy general hugetlb code from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
arm64: KVM: document kernel object mappings in HYP
arm64: KVM: MAINTAINERS update
arm64: KVM: userspace API documentation
arm64: KVM: enable initialization of a 32bit vcpu
...
* 'for-next/hugepages' of git://git.linaro.org/people/stevecapper/linux:
ARM64: mm: THP support.
ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
ARM64: mm: HugeTLB support.
ARM64: mm: Move PTE_PROT_NONE bit.
ARM64: mm: Make PAGE_NONE pages read only and no-execute.
ARM64: mm: Restore memblock limit when map_mem finished.
mm: thp: Correct the HPAGE_PMD_ORDER check.
x86: mm: Remove general hugetlb code from x86.
mm: hugetlb: Copy general hugetlb code from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/pgtable-hwdef.h
arch/arm64/include/asm/pgtable.h
Bring Transparent HugePage support to ARM64. The size of a
transparent huge page depends on the normal page size. A
transparent huge page is always represented as a pmd.
If PAGE_SIZE is 4KB, THPs are 2MB.
If PAGE_SIZE is 64KB, THPs are 512MB.
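The sizes follow from the page-table geometry; a minimal check
(assuming 8-byte descriptors, so each pte table holds PAGE_SIZE / 8
entries):

    #include <stdio.h>

    /* A pmd covers PAGE_SHIFT + (PAGE_SHIFT - 3) address bits. */
    static unsigned long long pmd_size(int page_shift)
    {
        return 1ULL << (page_shift + (page_shift - 3));
    }

    int main(void)
    {
        printf("4KB pages:  THP = %lluMB\n", pmd_size(12) >> 20); /*   2MB */
        printf("64KB pages: THP = %lluMB\n", pmd_size(16) >> 20); /* 512MB */
        return 0;
    }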
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Add huge page support to ARM64. Different huge page sizes are
supported depending on the size of normal pages:
PAGE_SIZE is 4KB:
2MB - (pmds) these can be allocated at any time.
1024MB - (puds) usually allocated at boot via the command line
with something like: hugepagesz=1G hugepages=6
PAGE_SIZE is 64KB:
512MB - (pmds) usually allocated at boot via the command line.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Under ARM64, PTEs can be broadly categorised as follows:
- Present and valid: Bit #0 is set. The PTE is valid and memory
access to the region may fault.
- Present and invalid: Bit #0 is clear and bit #1 is set.
Represents present memory with PROT_NONE protection. The PTE
is an invalid entry, and the user fault handler will raise a
SIGSEGV.
- Not present (file or swap): Bits #0 and #1 are clear.
Memory represented has been paged out. The PTE is an invalid
entry, and the fault handler will try and re-populate the
memory where necessary.
Huge PTEs are block descriptors that have bit #1 clear. If we wish
to represent PROT_NONE huge PTEs, we run into a problem: there is
no way to distinguish between regular and huge PTEs if we set
bit #1.
To resolve this ambiguity, this patch moves PTE_PROT_NONE from
bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The
number of swap/file bits is reduced by 1 as a consequence, leaving
60 bits for file and swap entries.
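A rough model of the resulting layout (macro names mirror the kernel's,
but this is an illustration, not the actual header):

    #include <stdbool.h>
    #include <stdio.h>

    #define PTE_VALID     (1UL << 0)
    #define PTE_TABLE_BIT (1UL << 1)  /* distinguishes table from block */
    #define PTE_PROT_NONE (1UL << 2)  /* moved here from bit #1 */
    #define PTE_FILE      (1UL << 3)  /* moved here from bit #2 */

    static bool pte_present(unsigned long pte)
    {
        return pte & (PTE_VALID | PTE_PROT_NONE);
    }

    int main(void)
    {
        /* A PROT_NONE huge pte can now leave bit #1 clear, so it
         * remains distinguishable from a regular (page) pte: */
        unsigned long prot_none_huge = PTE_PROT_NONE;

        printf("present=%d, bit #1 clear=%d\n",
               pte_present(prot_none_huge),
               !(prot_none_huge & PTE_TABLE_BIT));
        return 0;
    }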
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
If we consider the following code sequence:
    my_pte = pte_modify(entry, myprot);
    x = pte_write(my_pte);
    y = pte_exec(my_pte);
If myprot comes from a PROT_NONE page, then x and y will both be
true, which is undesirable behaviour.
This patch sets the no-execute and read-only bits for PAGE_NONE
such that the code above will return false for both x and y.
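A self-contained model of the effect (PTE_RDONLY and PTE_UXN bit
positions as on arm64, but this is a sketch, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    #define PTE_RDONLY (1UL << 7)
    #define PTE_UXN    (1UL << 54)

    static bool pte_write(unsigned long pte) { return !(pte & PTE_RDONLY); }
    static bool pte_exec(unsigned long pte)  { return !(pte & PTE_UXN); }

    int main(void)
    {
        unsigned long old_none = 0;                     /* before: no bits */
        unsigned long new_none = PTE_RDONLY | PTE_UXN;  /* after the patch */

        printf("old PAGE_NONE: write=%d exec=%d\n",
               pte_write(old_none), pte_exec(old_none));
        printf("new PAGE_NONE: write=%d exec=%d\n",
               pte_write(new_none), pte_exec(new_none));
        return 0;
    }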
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
pte_index is a useful helper outside of arch/arm64, for things like the
ARM SMMU driver, so rename __pte_index to pte_index to be consistent
with both arch/arm/ and also the definitions of pmd_index and pgd_index.
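For reference, a sketch of the helper's shape (shown with assumed
4KB-page constants for illustration):

    #include <stdio.h>

    #define PAGE_SHIFT   12
    #define PTRS_PER_PTE 512   /* 4KB table / 8-byte descriptors */
    #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))

    int main(void)
    {
        unsigned long addr = 0x7f1234567000UL;
        printf("pte_index(%#lx) = %lu\n", addr, pte_index(addr));
        return 0;
    }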
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Add HYP and S2 page flags, for both normal and device memory.
Reviewed-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This is mostly a port of dbf62d5006 ("ARM: mm: introduce L_PTE_VALID
for page table entries") and 26ffd0d43b ("ARM: mm: introduce present,
faulting entries for PAGE_NONE") from ARM, which make use of present,
faulting page table entries for mappings with PROT_NONE protection.
The main difference with this implementation is that we can make use of
the two pte type bits in order to avoid allocating a software bit for
identifying PROT_NONE pages, instead reserving the 10b suffix for these
types of mappings.
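An illustration of the type-field encoding being relied on (bits #1:#0;
a sketch of the scheme, not the actual header):

    #include <stdio.h>

    /* 11b: valid page; 10b: present but faulting (PROT_NONE);
     * 00b: not present (swap/file). */
    int main(void)
    {
        unsigned long prot_none = 0x2;  /* type 10b */

        printf("hardware-valid=%d, software-present=%d\n",
               (int)(prot_none & 0x1), (prot_none & 0x3) == 0x2);
        return 0;
    }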
This is required to prevent users from accessing such pages via syscalls
such as read/write over a pipe.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Marking non-present ptes as read-only can corrupt file ptes, breaking
things like swap and file mappings.
This patch ensures that we only manipulate user pte bits when the pte
is marked present.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull ARM64 updates from Catalin Marinas:
- Generic execve, kernel_thread, fork/vfork/clone.
- Preparatory patches for KVM support (initialising EL2 mode for later
installing KVM support, hypervisor stub).
- Signal handling corner case fix (alternative signal stack set up for
a SEGV handler, which is raised in response to RLIMIT_STACK being
reached).
- Sub-nanosecond timer error fix.
* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (30 commits)
arm64: Update the MAINTAINERS entry
arm64: compat for clock_adjtime(2) is miswired
arm64: move FP-SIMD save/restore code to a macro
arm64: hyp: initialize vttbr_el2 to zero
arm64: add hypervisor stub
arm64: record boot mode when entering the kernel
arm64: move vector entry macro to assembler.h
arm64: add AArch32 execution modes to ptrace.h
arm64: expand register mapping between AArch32 and AArch64
arm64: generic timer: use virtual counter instead of physical at EL0
arm64: vdso: defer shifting of nanosecond component of timespec
arm64: vdso: rework __do_get_tspec register allocation and return shift
arm64: vdso: check sequence counter even for coarse realtime operations
arm64: vdso: fix clocksource mask when extracting bottom 56 bits
ARM64: Remove incorrect Kconfig symbol HAVE_SPARSE_IRQ
Documentation: Fixes a word in Documentation/arm64/memory.txt
arm64: Make !dirty ptes read-only
arm64: Convert empty flush_cache_{mm,page} functions to static inline
arm64: signal: let the compiler inline compat_get_sigframe
arm64: signal: return struct rt_sigframe from get_sigframe
...
Conflicts:
arch/arm64/include/asm/unistd32.h
The AArch64 Linux port relies on the mm code to wrprotect clean ptes.
This, however, is not the case with newly created ptes:
PAGE_SHARED(_EXEC) is writable but !dirty.
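A self-contained model of the fix (assumed helper and bit names; in the
kernel the check sits in the pte-setting path):

    #include <stdio.h>

    #define PTE_RDONLY (1UL << 7)   /* assumed arm64-like bit positions */
    #define PTE_DIRTY  (1UL << 55)

    /* Write-protect any pte installed clean, so the mm code always
     * sees "clean implies read-only". */
    static unsigned long fixup_pte(unsigned long pte)
    {
        if (!(pte & PTE_DIRTY))
            pte |= PTE_RDONLY;      /* pte_wrprotect() equivalent */
        return pte;
    }

    int main(void)
    {
        unsigned long shared = 0;   /* PAGE_SHARED-like: writable, !dirty */

        printf("installed read-only? %s\n",
               (fixup_pte(shared) & PTE_RDONLY) ? "yes" : "no");
        return 0;
    }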
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
On AArch64, the meaning of the XN bit has changed to UXN (user). The PXN
(privileged) bit must be set to prevent kernel execution. Without the
PXN bit set, the CPU may speculatively access device memory. This patch
ensures that all the mappings that the kernel must not execute from
(including user mappings) have the PXN bit set.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The virtual memory layout is described in
Documentation/arm64/memory.txt. This patch adds the MMU definitions for
the 4KB and 64KB translation table configurations. The SECTION_SIZE is
2MB with the 4KB page configuration and 512MB with the 64KB one.
PHYS_OFFSET is calculated at run-time and stored in a variable (no
run-time code patching at this stage).
In the current implementation, the user and kernel address spaces are
512GB (39-bit) each, with a maximum of 256GB for the RAM linear mapping.
Linux uses 3 levels of translation tables with the 4K page configuration
and 2 levels with the 64K configuration. Extending the memory space
beyond 39-bit with the 4K pages or 42-bit with 64K pages requires an
additional level of translation tables.
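The level arithmetic, as a quick check (each level resolves
PAGE_SHIFT - 3 bits, assuming 8-byte descriptors):

    #include <stdio.h>

    static int va_bits(int page_shift, int levels)
    {
        return page_shift + levels * (page_shift - 3);
    }

    int main(void)
    {
        printf("4KB pages,  3 levels: %d-bit VA\n", va_bits(12, 3)); /* 39 */
        printf("64KB pages, 2 levels: %d-bit VA\n", va_bits(16, 2)); /* 42 */
        return 0;
    }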
The SPARSEMEM configuration is global to all AArch64 platforms and
allows for 1GB sections with SPARSEMEM_VMEMMAP enabled by default.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>