linux-sg2042/Documentation/x86/x86_64/mm.txt

Virtual memory map with 4 level page tables:
0000000000000000 - 00007fffffffffff (=47 bits, 128 TB) user space, different per mm
hole caused by [47:63] sign extension
ffff800000000000 - ffff87ffffffffff (=43 bits, 8 TB) guard hole, reserved for hypervisor
ffff880000000000 - ffffc7ffffffffff (=46 bits, 64 TB) direct mapping of all phys. memory (page_offset_base)
ffffc80000000000 - ffffc8ffffffffff (=40 bits, 1 TB) unused hole
ffffc90000000000 - ffffe8ffffffffff (=45 bits, 32 TB) vmalloc/ioremap space (vmalloc_base)
ffffe90000000000 - ffffe9ffffffffff (=40 bits, 1 TB) unused hole
ffffea0000000000 - ffffeaffffffffff (=40 bits, 1 TB) virtual memory map (vmemmap_base)
ffffeb0000000000 - ffffebffffffffff (=40 bits, 1 TB) unused hole
ffffec0000000000 - fffffbffffffffff (=44 bits, 16 TB) kasan shadow memory
fffffc0000000000 - fffffdffffffffff (=41 bits, 2 TB) unused hole
vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits, 512 GB) cpu_entry_area mapping
fffffe8000000000 - fffffeffffffffff (=39 bits, 512 GB) LDT remap for PTI
ffffff0000000000 - ffffff7fffffffff (=39 bits, 512 GB) %esp fixup stacks
ffffff8000000000 - ffffffeeffffffff (~39 bits, 444 GB) unused hole
ffffffef00000000 - fffffffeffffffff (=36 bits, 64 GB) EFI region mapping space
ffffffff00000000 - ffffffff7fffffff (=31 bits, 2 GB) unused hole
ffffffff80000000 - ffffffff9fffffff (=29 bits, 512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - fffffffffeffffff (~31 bits, 1520 MB) module mapping space
[fixmap start] - ffffffffff5fffff kernel-internal fixmap range
ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole

Virtual memory map with 5 level page tables:
0000000000000000 - 00ffffffffffffff (=56 bits, 64 PB) user space, different per mm
hole caused by [56:63] sign extension
ff00000000000000 - ff0fffffffffffff (=52 bits, 4 PB) guard hole, reserved for hypervisor
ff10000000000000 - ff8fffffffffffff (=55 bits, 32 PB) direct mapping of all phys. memory (page_offset_base)
ff90000000000000 - ff9fffffffffffff (=52 bits, 4 PB) LDT remap for PTI
ffa0000000000000 - ffd1ffffffffffff (=53 bits, 12800 TB) vmalloc/ioremap space (vmalloc_base)
ffd2000000000000 - ffd3ffffffffffff (=49 bits, 512 TB) unused hole
ffd4000000000000 - ffd5ffffffffffff (=49 bits, 512 TB) virtual memory map (vmemmap_base)
ffd6000000000000 - ffdeffffffffffff (~51 bits, 2304 TB) unused hole
ffdf000000000000 - fffffbffffffffff (~53 bits, ~8 PB) kasan shadow memory
fffffc0000000000 - fffffdffffffffff (=41 bits, 2 TB) unused hole
vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits, 512 GB) cpu_entry_area mapping
fffffe8000000000 - fffffeffffffffff (=39 bits, 512 GB) unused hole
ffffff0000000000 - ffffff7fffffffff (=39 bits, 512 GB) %esp fixup stacks
ffffff8000000000 - ffffffeeffffffff (~39 bits, 444 GB) unused hole
ffffffef00000000 - fffffffeffffffff (=36 bits, 64 GB) EFI region mapping space
ffffffff00000000 - ffffffff7fffffff (=31 bits, 2 GB) unused hole
ffffffff80000000 - ffffffff9fffffff (=29 bits, 512 MB) kernel text mapping, from phys 0
ffffffffa0000000 - fffffffffeffffff (~31 bits, 1520 MB) module mapping space
[fixmap start] - ffffffffff5fffff kernel-internal fixmap range
ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
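
The size column above is just the inclusive span of each range. A throwaway
user-space sketch (not part of this file; GCC/Clang builtins assumed) that
reproduces the arithmetic for the 4-level direct-mapping entry:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* direct mapping of all phys. memory, 4-level map above */
          uint64_t start = 0xffff880000000000ULL;
          uint64_t end   = 0xffffc7ffffffffffULL;   /* inclusive */
          uint64_t size  = end - start + 1;         /* 0x400000000000 */

          /* prints "64 TB (=46 bits)" */
          printf("%llu TB (=%d bits)\n",
                 (unsigned long long)(size >> 40), __builtin_ctzll(size));
          return 0;
  }
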

The architecture defines a 64-bit virtual address. Implementations can
support less. Currently supported are 48- and 57-bit virtual addresses.
Bits 63 through to the most-significant implemented bit are sign extended.
This causes a hole between user space and kernel addresses if you interpret
them as unsigned.
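
A minimal sketch of what that sign extension means, assuming 48 implemented
bits (4-level paging); the helper name is made up for illustration, and the
right shift relies on arithmetic shift semantics as on GCC/Clang:

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * An address is canonical when bits [47:63] are all copies of bit 47,
   * i.e. sign-extending the low 48 bits reproduces the full value.
   * Everything else falls into the hole between the two halves.
   */
  static bool is_canonical(uint64_t vaddr, int va_bits)
  {
          int64_t sext = (int64_t)(vaddr << (64 - va_bits)) >> (64 - va_bits);

          return (uint64_t)sext == vaddr;
  }

With 5-level paging the same check holds with va_bits = 57, giving the
[56:63] extension shown in the second map.
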
The direct mapping covers all memory in the system up to the highest
memory address (this means in some cases it can also include PCI memory
holes).
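
Since the direct map is one linear window, converting between physical and
direct-map virtual addresses is plain offset arithmetic. A simplified sketch,
assuming the non-randomized 4-level base (page_offset_base above); this is
roughly what __va()/__pa() reduce to for direct-map addresses, and the
helper names here are illustrative:

  #include <stdint.h>

  #define PAGE_OFFSET 0xffff880000000000ULL  /* 4-level, non-randomized */

  static inline uint64_t phys_to_virt_addr(uint64_t paddr)
  {
          return paddr + PAGE_OFFSET;
  }

  static inline uint64_t virt_to_phys_addr(uint64_t vaddr)
  {
          return vaddr - PAGE_OFFSET;
  }

With CONFIG_RANDOMIZE_MEMORY (see below) the constant is replaced by the
boot-time value of page_offset_base.
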
vmalloc space is lazily synchronized into the different PML4/PML5 pages of
the processes using the page fault handler, with init_top_pgt as
reference.
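
A self-contained model of that lazy synchronization (the types and helpers
are simplified stand-ins, not the kernel's): vmalloc mappings are installed
only in the reference table, and a faulting task copies the missing
top-level entry on demand.

  #include <stdbool.h>
  #include <stdint.h>

  #define PTRS_PER_PGD 512

  typedef uint64_t pgd_t;

  /* reference page table; stands in for init_top_pgt */
  static pgd_t init_top_pgt[PTRS_PER_PGD];

  /* top-level index of a virtual address, 4-level case (bits 39-47) */
  static unsigned int pgd_index(uint64_t vaddr)
  {
          return (vaddr >> 39) & (PTRS_PER_PGD - 1);
  }

  /* called from the page fault handler for a vmalloc-range address */
  static bool sync_vmalloc_pgd(pgd_t *task_pgd, uint64_t fault_addr)
  {
          unsigned int i = pgd_index(fault_addr);

          if (init_top_pgt[i] == 0)
                  return false;                  /* genuinely bad access */
          if (task_pgd[i] == 0)
                  task_pgd[i] = init_top_pgt[i]; /* copy missing entry */
          return true;
  }

This is why a new vmalloc area never has to be propagated into every
process's page table eagerly: each task picks it up on first touch.
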
We map EFI runtime services in the 'efi_pgd' PGD in a 64 GB large virtual
memory window (this size is arbitrary; it can be raised later if needed).
The mappings are not part of any other kernel PGD and are only available
during EFI runtime calls.
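
A hypothetical sketch of the resulting calling pattern (names invented for
illustration, not the kernel's API): the efi_pgd mappings are visible only
while that page table is the active one.

  #include <stdint.h>

  static uint64_t active_pgd;    /* models the CR3 register */

  static long efi_runtime_call(long (*service)(void),
                               uint64_t efi_pgd, uint64_t kernel_pgd)
  {
          long ret;

          active_pgd = efi_pgd;     /* EFI mappings appear */
          ret = service();
          active_pgd = kernel_pgd;  /* and vanish again afterwards */
          return ret;
  }
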
Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
physical memory, vmalloc/ioremap space and virtual memory map are randomized.
Their order is preserved but their bases will be offset early at boot time.
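
A hypothetical sketch of that randomization, modeled on the general approach
(names and helpers invented; rand() stands in for real boot-time entropy):
walk the regions from low to high and insert a random, aligned slice of the
remaining slack before each base, so the bases move but their order cannot
change.

  #include <stdint.h>
  #include <stdlib.h>

  #define PUD_ALIGN_DOWN(x) ((x) & ~((1ULL << 30) - 1))  /* 1 GB units */

  struct region {
          uint64_t base;   /* page_offset_base, vmalloc_base, vmemmap_base */
          uint64_t size;
  };

  /* assumes the regions' total size is smaller than the window */
  static void randomize_bases(struct region *r, int n,
                              uint64_t vaddr, uint64_t vaddr_end)
  {
          uint64_t slack = vaddr_end - vaddr;
          int i;

          for (i = 0; i < n; i++)
                  slack -= r[i].size;   /* what is left for random gaps */

          for (i = 0; i < n; i++) {
                  uint64_t rnd = ((uint64_t)rand() << 32) | (uint64_t)rand();
                  uint64_t pad = PUD_ALIGN_DOWN(rnd % slack);

                  vaddr += pad;         /* random gap before this region */
                  slack -= pad;
                  r[i].base = vaddr;
                  vaddr += r[i].size;   /* next region starts above: order kept */
          }
  }
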
Be very careful vs. KASLR when changing anything here. The KASLR address
range must not overlap with anything except the KASAN shadow area, which is
correct as KASAN disables KASLR.