Merge branch 'akpm' (patches from Andrew)

Merge misc updates from Andrew Morton:

 - a few MM hotfixes

 - kthread, tools, scripts, ntfs and ocfs2

 - some of MM

Subsystems affected by this patch series: kthread, tools, scripts, ntfs,
ocfs2 and mm (hotfixes, pagealloc, slab-generic, slab, slub, kcsan,
debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, mincore,
sparsemem, vmalloc, kasan, pagealloc, hugetlb and vmscan).

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (162 commits)
  mm: vmscan: consistent update to pgrefill
  mm/vmscan.c: fix typo
  khugepaged: khugepaged_test_exit() check mmget_still_valid()
  khugepaged: retract_page_tables() remember to test exit
  khugepaged: collapse_pte_mapped_thp() protect the pmd lock
  khugepaged: collapse_pte_mapped_thp() flush the right range
  mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
  mm: thp: replace HTTP links with HTTPS ones
  mm/page_alloc: fix memalloc_nocma_{save/restore} APIs
  mm/page_alloc.c: skip setting nodemask when we are in interrupt
  mm/page_alloc: fallbacks at most has 3 elements
  mm/page_alloc: silence a KASAN false positive
  mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()
  mm/page_alloc.c: simplify pageblock bitmap access
  mm/page_alloc.c: extract the common part in pfn_to_bitidx()
  mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits
  mm/shuffle: remove dynamic reconfiguration
  mm/memory_hotplug: document why shuffle_zone() is relevant
  mm/page_alloc: remove nr_free_pagecache_pages()
  mm: remove vm_total_pages
  ...
Linus Torvalds 2020-08-07 11:39:33 -07:00
commit 81e11336d9
396 changed files with 4906 additions and 3437 deletions


@ -4693,7 +4693,7 @@
fragmentation. Defaults to 1 for systems with
more than 32MB of RAM, 0 otherwise.
slub_debug[=options[,slabs]] [MM, SLUB]
slub_debug[=options[,slabs][;[options[,slabs]]...] [MM, SLUB]
Enabling slub_debug allows one to determine the
culprit if slab objects become corrupted. Enabling
slub_debug can create guard zones around objects and


@ -13,11 +13,8 @@ KASAN uses compile-time instrumentation to insert validity checks before every
memory access, and therefore requires a compiler version that supports that.
Generic KASAN is supported in both GCC and Clang. With GCC it requires version
4.9.2 or later for basic support and version 5.0 or later for detection of
out-of-bounds accesses for stack and global variables and for inline
instrumentation mode (see the Usage section). With Clang it requires version
7.0.0 or later and it doesn't support detection of out-of-bounds accesses for
global variables yet.
8.3.0 or later. With Clang it requires version 7.0.0 or later, but detection of
out-of-bounds accesses for global variables is only supported since Clang 11.
Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
@ -193,6 +190,9 @@ function calls GCC directly inserts the code to check the shadow memory.
This option significantly enlarges kernel but it gives x1.1-x2 performance
boost over outline instrumented kernel.
Generic KASAN prints up to 2 call_rcu() call stacks in reports, the last one
and the second to last.
Software tag-based KASAN
~~~~~~~~~~~~~~~~~~~~~~~~


@ -12,7 +12,7 @@ dlmfs is built with OCFS2 as it requires most of its infrastructure.
:Project web page: http://ocfs2.wiki.kernel.org
:Tools web page: https://github.com/markfasheh/ocfs2-tools
:OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
:OCFS2 mailing lists: https://oss.oracle.com/projects/ocfs2/mailman/
All code copyright 2005 Oracle except when otherwise noted.


@ -14,7 +14,7 @@ get "mount.ocfs2" and "ocfs2_hb_ctl".
Project web page: http://ocfs2.wiki.kernel.org
Tools git tree: https://github.com/markfasheh/ocfs2-tools
OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
OCFS2 mailing lists: https://oss.oracle.com/projects/ocfs2/mailman/
All code copyright 2005 Oracle except when otherwise noted.


@ -150,6 +150,22 @@ These options do not have any effect on remount. You can change these
parameters with chmod(1), chown(1) and chgrp(1) on a mounted filesystem.
tmpfs has a mount option to select whether it will wrap at 32- or 64-bit inode
numbers:
======= ========================
inode64 Use 64-bit inode numbers
inode32 Use 32-bit inode numbers
======= ========================
On a 32-bit kernel, inode32 is implicit, and inode64 is refused at mount time.
On a 64-bit kernel, CONFIG_TMPFS_INODE64 sets the default. inode64 avoids the
possibility of multiple files with the same inode number on a single device;
but risks glibc failing with EOVERFLOW once 33-bit inode numbers are reached -
if a long-lived tmpfs is accessed by 32-bit applications so ancient that
opening a file larger than 2GiB fails with EINVAL.
So 'mount -t tmpfs -o size=10G,nr_inodes=10k,mode=700 tmpfs /mytmpfs'
will give you tmpfs instance on /mytmpfs which can allocate 10GB
RAM/SWAP in 10240 inodes and it is only accessible by root.
@ -161,3 +177,5 @@ RAM/SWAP in 10240 inodes and it is only accessible by root.
Hugh Dickins, 4 June 2007
:Updated:
KOSAKI Motohiro, 16 Mar 2010
:Updated:
Chris Down, 13 July 2020


@ -0,0 +1,258 @@
.. SPDX-License-Identifier: GPL-2.0
.. _arch_page_table_helpers:
===============================
Architecture Page Table Helpers
===============================
Generic MM expects architectures (with MMU) to provide helpers to create, access
and modify page table entries at various levels for different memory functions.
These page table helpers need to conform to common semantics across platforms.
The following tables describe the expected semantics, which can also be tested
during boot via the CONFIG_DEBUG_VM_PGTABLE option. All future changes here or in
the debug test need to be kept in sync.
======================
PTE Page Table Helpers
======================
+---------------------------+--------------------------------------------------+
| pte_same | Tests whether both PTE entries are the same |
+---------------------------+--------------------------------------------------+
| pte_bad | Tests a non-table mapped PTE |
+---------------------------+--------------------------------------------------+
| pte_present | Tests a valid mapped PTE |
+---------------------------+--------------------------------------------------+
| pte_young | Tests a young PTE |
+---------------------------+--------------------------------------------------+
| pte_dirty | Tests a dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_write | Tests a writable PTE |
+---------------------------+--------------------------------------------------+
| pte_special | Tests a special PTE |
+---------------------------+--------------------------------------------------+
| pte_protnone | Tests a PROT_NONE PTE |
+---------------------------+--------------------------------------------------+
| pte_devmap | Tests a ZONE_DEVICE mapped PTE |
+---------------------------+--------------------------------------------------+
| pte_soft_dirty | Tests a soft dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_swp_soft_dirty | Tests a soft dirty swapped PTE |
+---------------------------+--------------------------------------------------+
| pte_mkyoung | Creates a young PTE |
+---------------------------+--------------------------------------------------+
| pte_mkold | Creates an old PTE |
+---------------------------+--------------------------------------------------+
| pte_mkdirty | Creates a dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_mkclean | Creates a clean PTE |
+---------------------------+--------------------------------------------------+
| pte_mkwrite | Creates a writable PTE |
+---------------------------+--------------------------------------------------+
| pte_mkwrprotect | Creates a write protected PTE |
+---------------------------+--------------------------------------------------+
| pte_mkspecial | Creates a special PTE |
+---------------------------+--------------------------------------------------+
| pte_mkdevmap | Creates a ZONE_DEVICE mapped PTE |
+---------------------------+--------------------------------------------------+
| pte_mksoft_dirty | Creates a soft dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_clear_soft_dirty | Clears a soft dirty PTE |
+---------------------------+--------------------------------------------------+
| pte_swp_mksoft_dirty | Creates a soft dirty swapped PTE |
+---------------------------+--------------------------------------------------+
| pte_swp_clear_soft_dirty | Clears a soft dirty swapped PTE |
+---------------------------+--------------------------------------------------+
| pte_mknotpresent | Invalidates a mapped PTE |
+---------------------------+--------------------------------------------------+
| ptep_get_and_clear | Clears a PTE |
+---------------------------+--------------------------------------------------+
| ptep_get_and_clear_full | Clears a PTE |
+---------------------------+--------------------------------------------------+
| ptep_test_and_clear_young | Clears young from a PTE |
+---------------------------+--------------------------------------------------+
| ptep_set_wrprotect | Converts into a write protected PTE |
+---------------------------+--------------------------------------------------+
| ptep_set_access_flags | Converts into a more permissive PTE |
+---------------------------+--------------------------------------------------+
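As an illustration of how these helpers are expected to behave together, a
minimal sketch in the spirit of the CONFIG_DEBUG_VM_PGTABLE checks is shown
below (the function name and the pfn/prot arguments are illustrative, not
taken from the in-tree test)::

	static void __init pte_helper_sketch(unsigned long pfn, pgprot_t prot)
	{
		pte_t pte = pfn_pte(pfn, prot);

		/* A PTE must always compare equal to itself. */
		WARN_ON(!pte_same(pte, pte));

		/* Attribute setters and testers must agree. */
		WARN_ON(!pte_young(pte_mkyoung(pte)));
		WARN_ON(!pte_dirty(pte_mkdirty(pte)));
		WARN_ON(!pte_write(pte_mkwrite(pte)));

		/* Clearing an attribute must be observable as well. */
		WARN_ON(pte_young(pte_mkold(pte_mkyoung(pte))));
		WARN_ON(pte_dirty(pte_mkclean(pte_mkdirty(pte))));
	}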
======================
PMD Page Table Helpers
======================
+---------------------------+--------------------------------------------------+
| pmd_same | Tests whether both PMD entries are the same |
+---------------------------+--------------------------------------------------+
| pmd_bad | Tests a non-table mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_leaf | Tests a leaf mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_huge | Tests a HugeTLB mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_trans_huge | Tests a Transparent Huge Page (THP) at PMD |
+---------------------------+--------------------------------------------------+
| pmd_present | Tests a valid mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_young | Tests a young PMD |
+---------------------------+--------------------------------------------------+
| pmd_dirty | Tests a dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_write | Tests a writable PMD |
+---------------------------+--------------------------------------------------+
| pmd_special | Tests a special PMD |
+---------------------------+--------------------------------------------------+
| pmd_protnone | Tests a PROT_NONE PMD |
+---------------------------+--------------------------------------------------+
| pmd_devmap | Tests a ZONE_DEVICE mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_soft_dirty | Tests a soft dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_swp_soft_dirty | Tests a soft dirty swapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkyoung | Creates a young PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkold | Creates an old PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkdirty | Creates a dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkclean | Creates a clean PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkwrite | Creates a writable PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkwrprotect | Creates a write protected PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkspecial | Creates a special PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkdevmap | Creates a ZONE_DEVICE mapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_mksoft_dirty | Creates a soft dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_clear_soft_dirty | Clears a soft dirty PMD |
+---------------------------+--------------------------------------------------+
| pmd_swp_mksoft_dirty | Creates a soft dirty swapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_swp_clear_soft_dirty | Clears a soft dirty swapped PMD |
+---------------------------+--------------------------------------------------+
| pmd_mkinvalid | Invalidates a mapped PMD [1] |
+---------------------------+--------------------------------------------------+
| pmd_set_huge | Creates a PMD huge mapping |
+---------------------------+--------------------------------------------------+
| pmd_clear_huge | Clears a PMD huge mapping |
+---------------------------+--------------------------------------------------+
| pmdp_get_and_clear | Clears a PMD |
+---------------------------+--------------------------------------------------+
| pmdp_get_and_clear_full | Clears a PMD |
+---------------------------+--------------------------------------------------+
| pmdp_test_and_clear_young | Clears young from a PMD |
+---------------------------+--------------------------------------------------+
| pmdp_set_wrprotect | Converts into a write protected PMD |
+---------------------------+--------------------------------------------------+
| pmdp_set_access_flags | Converts into a more permissive PMD |
+---------------------------+--------------------------------------------------+
======================
PUD Page Table Helpers
======================
+---------------------------+--------------------------------------------------+
| pud_same | Tests whether both PUD entries are the same |
+---------------------------+--------------------------------------------------+
| pud_bad | Tests a non-table mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_leaf | Tests a leaf mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_huge | Tests a HugeTLB mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_trans_huge | Tests a Transparent Huge Page (THP) at PUD |
+---------------------------+--------------------------------------------------+
| pud_present | Tests a valid mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_young | Tests a young PUD |
+---------------------------+--------------------------------------------------+
| pud_dirty | Tests a dirty PUD |
+---------------------------+--------------------------------------------------+
| pud_write | Tests a writable PUD |
+---------------------------+--------------------------------------------------+
| pud_devmap | Tests a ZONE_DEVICE mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_mkyoung | Creates a young PUD |
+---------------------------+--------------------------------------------------+
| pud_mkold | Creates an old PUD |
+---------------------------+--------------------------------------------------+
| pud_mkdirty | Creates a dirty PUD |
+---------------------------+--------------------------------------------------+
| pud_mkclean | Creates a clean PUD |
+---------------------------+--------------------------------------------------+
| pud_mkwrite | Creates a writable PUD |
+---------------------------+--------------------------------------------------+
| pud_mkwrprotect | Creates a write protected PUD |
+---------------------------+--------------------------------------------------+
| pud_mkdevmap | Creates a ZONE_DEVICE mapped PUD |
+---------------------------+--------------------------------------------------+
| pud_mkinvalid | Invalidates a mapped PUD [1] |
+---------------------------+--------------------------------------------------+
| pud_set_huge | Creates a PUD huge mapping |
+---------------------------+--------------------------------------------------+
| pud_clear_huge | Clears a PUD huge mapping |
+---------------------------+--------------------------------------------------+
| pudp_get_and_clear | Clears a PUD |
+---------------------------+--------------------------------------------------+
| pudp_get_and_clear_full | Clears a PUD |
+---------------------------+--------------------------------------------------+
| pudp_test_and_clear_young | Clears young from a PUD |
+---------------------------+--------------------------------------------------+
| pudp_set_wrprotect | Converts into a write protected PUD |
+---------------------------+--------------------------------------------------+
| pudp_set_access_flags | Converts into a more permissive PUD |
+---------------------------+--------------------------------------------------+
==========================
HugeTLB Page Table Helpers
==========================
+---------------------------+--------------------------------------------------+
| pte_huge | Tests a HugeTLB |
+---------------------------+--------------------------------------------------+
| pte_mkhuge | Creates a HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_pte_dirty | Tests a dirty HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_pte_write | Tests a writable HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_pte_mkdirty | Creates a dirty HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_pte_mkwrite | Creates a writable HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_pte_mkwrprotect | Creates a write protected HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_ptep_get_and_clear | Clears a HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_ptep_set_wrprotect | Converts into a write protected HugeTLB |
+---------------------------+--------------------------------------------------+
| huge_ptep_set_access_flags | Converts into a more permissive HugeTLB |
+---------------------------+--------------------------------------------------+
========================
SWAP Page Table Helpers
========================
+---------------------------+--------------------------------------------------+
| __pte_to_swp_entry | Creates a swapped entry (arch) from a mapped PTE |
+---------------------------+--------------------------------------------------+
| __swp_to_pte_entry | Creates a mapped PTE from a swapped entry (arch) |
+---------------------------+--------------------------------------------------+
| __pmd_to_swp_entry | Creates a swapped entry (arch) from a mapped PMD |
+---------------------------+--------------------------------------------------+
| __swp_to_pmd_entry | Creates a mapped PMD from a swapped entry (arch) |
+---------------------------+--------------------------------------------------+
| is_migration_entry | Tests a migration (read or write) swapped entry |
+---------------------------+--------------------------------------------------+
| is_write_migration_entry | Tests a write migration swapped entry |
+---------------------------+--------------------------------------------------+
| make_migration_entry_read | Converts into read migration swapped entry |
+---------------------------+--------------------------------------------------+
| make_migration_entry | Creates a migration swapped entry (read or write)|
+---------------------------+--------------------------------------------------+
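A short, illustrative sequence using the migration entry helpers above (the
page pointer is assumed to be supplied by the caller; this is a sketch, not
the actual migration code)::

	swp_entry_t entry = make_migration_entry(page, 1);	/* write entry */

	WARN_ON(!is_migration_entry(entry));
	WARN_ON(!is_write_migration_entry(entry));

	/* Downgrade to a read migration entry. */
	make_migration_entry_read(&entry);
	WARN_ON(is_write_migration_entry(entry));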
[1] https://lore.kernel.org/linux-mm/20181017020930.GN30832@redhat.com/


@ -141,11 +141,8 @@ sections:
`mem_section` objects and the number of rows is calculated to fit
all the memory sections.
The architecture setup code should call :c:func:`memory_present` for
each active memory range or use :c:func:`memblocks_present` or
:c:func:`sparse_memory_present_with_active_regions` wrappers to
initialize the memory sections. Next, the actual memory maps should be
set up using :c:func:`sparse_init`.
The architecture setup code should call sparse_init() to
initialize the memory sections and the memory maps.
With SPARSEMEM there are two possible ways to convert a PFN to the
corresponding `struct page` - a "classic sparse" and "sparse
@ -178,7 +175,7 @@ for persistent memory devices in pre-allocated storage on those
devices. This storage is represented with :c:type:`struct vmem_altmap`
that is eventually passed to vmemmap_populate() through a long chain
of function calls. The vmemmap_populate() implementation may use the
`vmem_altmap` along with :c:func:`altmap_alloc_block_buf` helper to
`vmem_altmap` along with :c:func:`vmemmap_alloc_block_buf` helper to
allocate memory map on the persistent memory device.
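A rough sketch of that allocation step inside an architecture's
vmemmap_populate() implementation, mirroring the fallback the powerpc version
uses (the node and altmap locals are assumed to come from the caller)::

	/* Try the device-provided altmap first, then fall back to system RAM. */
	void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

	if (!p)
		p = vmemmap_alloc_block_buf(PMD_SIZE, node, NULL);
	if (!p)
		return -ENOMEM;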
ZONE_DEVICE


@ -41,6 +41,11 @@ slub_debug=<Debug-Options>,<slab name1>,<slab name2>,...
Enable options only for select slabs (no spaces
after a comma)
Multiple blocks of options for all slabs or selected slabs can be given, with
blocks of options delimited by ';'. The last of the "all slabs" blocks is applied
to all slabs except those that match one of the "select slabs" blocks. Options
from the first "select slabs" block that matches the slab's name are applied.
Possible debug options are::
F Sanity checks on (enables SLAB_DEBUG_CONSISTENCY_CHECKS
@ -83,17 +88,33 @@ switch off debugging for such caches by default, use::
slub_debug=O
In case you forgot to enable debugging on the kernel command line: It is
possible to enable debugging manually when the kernel is up. Look at the
contents of::
You can apply different options to different lists of slab names, using blocks
of options. For example, the following enables red zoning for dentry and user
tracking for kmalloc. All other slabs will not get any debugging enabled::
slub_debug=Z,dentry;U,kmalloc-*
You can also enable options (e.g. sanity checks and poisoning) for all caches
except some that are deemed too performance critical and don't need to be
debugged by specifying global debug options followed by a list of slab names
with "-" as options::
slub_debug=FZ;-,zs_handle,zspage
The state of each debug option for a slab can be found in the respective files
under::
/sys/kernel/slab/<slab name>/
Look at the writable files. Writing 1 to them will enable the
corresponding debug option. All options can be set on a slab that does
not contain objects. If the slab already contains objects then sanity checks
and tracing may only be enabled. The other options may cause the realignment
of objects.
If the file contains 1, the option is enabled, 0 means disabled. The debug
options from the ``slub_debug`` parameter translate to the following files::
F sanity_checks
Z red_zone
P poison
U store_user
T trace
A failslab
Careful with tracing: It may spew out lots of information and never stop if
used on the wrong slab.


@ -5,7 +5,7 @@
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#include <asm-generic/pgalloc.h>
/*
* Allocate and free page tables. The xxx_kernel() versions are
@ -34,23 +34,4 @@ pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
extern pgd_t *pgd_alloc(struct mm_struct *mm);
static inline void
pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long)pgd);
}
static inline pmd_t *
pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
pmd_t *ret = (pmd_t *)__get_free_page(GFP_PGTABLE_USER);
return ret;
}
static inline void
pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
free_page((unsigned long)pmd);
}
#endif /* _ALPHA_PGALLOC_H */
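The <asm/pgalloc.h> changes across the rest of this series follow one pattern:
drop the open-coded page table allocators and inherit the generic ones from
<asm-generic/pgalloc.h>, defining a __HAVE_ARCH_* guard only for the hooks an
architecture still overrides. A minimal sketch of that convention, for a
hypothetical architecture that only keeps a custom pgd_free():

	#define __HAVE_ARCH_PGD_FREE	/* we provide pgd_free() ourselves */
	#include <asm-generic/pgalloc.h>

	static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
	{
		/* arch-specific teardown would go here before the free */
		free_page((unsigned long)pgd);
	}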


@ -5,7 +5,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/compiler.h>
#include <asm/pgalloc.h>
#ifndef __EXTERN_INLINE
#define __EXTERN_INLINE extern inline


@ -302,7 +302,6 @@ irongate_init_arch(void)
#include <linux/agp_backend.h>
#include <linux/agpgart.h>
#include <linux/export.h>
#include <asm/pgalloc.h>
#define GET_PAGE_DIR_OFF(addr) (addr >> 22)
#define GET_PAGE_DIR_IDX(addr) (GET_PAGE_DIR_OFF(addr))


@ -23,7 +23,6 @@
#include <asm/ptrace.h>
#include <asm/smp.h>
#include <asm/gct.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/vga.h>


@ -20,7 +20,6 @@
#include <asm/ptrace.h>
#include <asm/smp.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/vga.h>


@ -7,8 +7,6 @@
* This file has goodies to help simplify instantiation of machine vectors.
*/
#include <asm/pgalloc.h>
/* Whee. These systems don't have an HAE:
IRONGATE, MARVEL, POLARIS, TSUNAMI, TITAN, WILDFIRE
Fix things up for the GENERIC kernel by defining the HAE address


@ -36,7 +36,6 @@
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>


@ -17,7 +17,6 @@
#include <linux/module.h>
#include <asm/hwrpb.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
pg_data_t node_data[MAX_NUMNODES];


@ -13,7 +13,6 @@
#include <linux/kdebug.h>
#include <linux/perf_event.h>
#include <linux/mm_types.h>
#include <asm/pgalloc.h>
#include <asm/mmu.h>
/*


@ -14,7 +14,6 @@
#include <linux/module.h>
#include <linux/highmem.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
#include <asm/arcregs.h>


@ -22,17 +22,6 @@
#ifdef CONFIG_ARM_LPAE
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return (pmd_t *)get_zeroed_page(GFP_KERNEL);
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
free_page((unsigned long)pmd);
}
static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
{
set_pud(pud, __pud(__pa(pmd) | PMD_TYPE_TABLE));
@ -76,6 +65,7 @@ static inline void clean_pte_table(pte_t *pte)
#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
#define __HAVE_ARCH_PTE_ALLOC_ONE
#define __HAVE_ARCH_PGD_FREE
#include <asm-generic/pgalloc.h>
static inline pte_t *


@ -27,7 +27,6 @@
#else /* !CONFIG_MMU */
#include <linux/swap.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
static inline void __tlb_remove_table(void *_table)


@ -11,7 +11,6 @@
#include <linux/irq.h>
#include <linux/memblock.h>
#include <linux/of_fdt.h>
#include <asm/pgalloc.h>
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
#include <asm/fncpy.h>


@ -37,7 +37,6 @@
#include <asm/idmap.h>
#include <asm/topology.h>
#include <asm/mmu_context.h>
#include <asm/pgalloc.h>
#include <asm/procinfo.h>
#include <asm/processor.h>
#include <asm/sections.h>


@ -7,7 +7,6 @@
#include <asm/bugs.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include <asm/pgalloc.h>
#include <asm/memory.h>
#include <asm/smp_plat.h>
#include <asm/suspend.h>


@ -42,7 +42,6 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/smp_scu.h>
#include <asm/pgalloc.h>
#include <asm/suspend.h>
#include <asm/virt.h>
#include <asm/hardware/cache-l2x0.h>


@ -17,7 +17,6 @@
#include <asm/mman.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
/*
* On ARM, huge pages are backed by pmd's rather than pte's, so we do a lot


@ -243,13 +243,8 @@ void __init bootmem_init(void)
(phys_addr_t)max_low_pfn << PAGE_SHIFT);
/*
* Sparsemem tries to allocate bootmem in memory_present(),
* so must be done after the fixed reservations
*/
memblocks_present();
/*
* sparse_init() needs the bootmem allocator up and running.
* sparse_init() tries to allocate memory from memblock, so must be
* done after the fixed reservations
*/
sparse_init();


@ -29,6 +29,7 @@
#include <asm/traps.h>
#include <asm/procinfo.h>
#include <asm/memory.h>
#include <asm/pgalloc.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>


@ -13,37 +13,13 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#define __HAVE_ARCH_PGD_FREE
#include <asm-generic/pgalloc.h>
#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))
#if CONFIG_PGTABLE_LEVELS > 2
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
gfp_t gfp = GFP_PGTABLE_USER;
struct page *page;
if (mm == &init_mm)
gfp = GFP_PGTABLE_KERNEL;
page = alloc_page(gfp);
if (!page)
return NULL;
if (!pgtable_pmd_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page_address(page);
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmdp)
{
BUG_ON((unsigned long)pmdp & (PAGE_SIZE-1));
pgtable_pmd_page_dtor(virt_to_page(pmdp));
free_page((unsigned long)pmdp);
}
static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
{
set_pud(pudp, __pud(__phys_to_pud_val(pmdp) | prot));
@ -62,17 +38,6 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
#if CONFIG_PGTABLE_LEVELS > 3
static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return (pud_t *)__get_free_page(GFP_PGTABLE_USER);
}
static inline void pud_free(struct mm_struct *mm, pud_t *pudp)
{
BUG_ON((unsigned long)pudp & (PAGE_SIZE-1));
free_page((unsigned long)pudp);
}
static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
{
set_p4d(p4dp, __p4d(__phys_to_p4d_val(pudp) | prot));


@ -276,7 +276,7 @@ arch_initcall(reserve_memblock_reserved_regions);
u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
void __init setup_arch(char **cmdline_p)
void __init __no_sanitize_address setup_arch(char **cmdline_p)
{
init_mm.start_code = (unsigned long) _text;
init_mm.end_code = (unsigned long) _etext;


@ -43,7 +43,6 @@
#include <asm/kvm_mmu.h>
#include <asm/mmu_context.h>
#include <asm/numa.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/smp_plat.h>
#include <asm/sections.h>


@ -17,7 +17,6 @@
#include <asm/mman.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
/*
* HugeTLB Support Matrix


@ -430,11 +430,9 @@ void __init bootmem_init(void)
#endif
/*
* Sparsemem tries to allocate bootmem in memory_present(), so must be
* done after the fixed reservations.
* sparse_init() tries to allocate memory from memblock, so must be
* done after the fixed reservations
*/
memblocks_present();
sparse_init();
zone_sizes_init(min, max);


@ -16,7 +16,6 @@
#include <asm/fixmap.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
pgprot_t prot, void *caller)


@ -35,6 +35,7 @@
#include <asm/mmu_context.h>
#include <asm/ptdump.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
@ -760,15 +761,20 @@ int kern_addr_valid(unsigned long addr)
}
#ifdef CONFIG_MEMORY_HOTPLUG
static void free_hotplug_page_range(struct page *page, size_t size)
static void free_hotplug_page_range(struct page *page, size_t size,
struct vmem_altmap *altmap)
{
if (altmap) {
vmem_altmap_free(altmap, size >> PAGE_SHIFT);
} else {
WARN_ON(PageReserved(page));
free_pages((unsigned long)page_address(page), get_order(size));
}
}
static void free_hotplug_pgtable_page(struct page *page)
{
free_hotplug_page_range(page, PAGE_SIZE);
free_hotplug_page_range(page, PAGE_SIZE, NULL);
}
static bool pgtable_range_aligned(unsigned long start, unsigned long end,
@ -791,7 +797,8 @@ static bool pgtable_range_aligned(unsigned long start, unsigned long end,
}
static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
unsigned long end, bool free_mapped)
unsigned long end, bool free_mapped,
struct vmem_altmap *altmap)
{
pte_t *ptep, pte;
@ -805,12 +812,14 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
pte_clear(&init_mm, addr, ptep);
flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
if (free_mapped)
free_hotplug_page_range(pte_page(pte), PAGE_SIZE);
free_hotplug_page_range(pte_page(pte),
PAGE_SIZE, altmap);
} while (addr += PAGE_SIZE, addr < end);
}
static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
unsigned long end, bool free_mapped)
unsigned long end, bool free_mapped,
struct vmem_altmap *altmap)
{
unsigned long next;
pmd_t *pmdp, pmd;
@ -833,16 +842,17 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
if (free_mapped)
free_hotplug_page_range(pmd_page(pmd),
PMD_SIZE);
PMD_SIZE, altmap);
continue;
}
WARN_ON(!pmd_table(pmd));
unmap_hotplug_pte_range(pmdp, addr, next, free_mapped);
unmap_hotplug_pte_range(pmdp, addr, next, free_mapped, altmap);
} while (addr = next, addr < end);
}
static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
unsigned long end, bool free_mapped)
unsigned long end, bool free_mapped,
struct vmem_altmap *altmap)
{
unsigned long next;
pud_t *pudp, pud;
@ -865,16 +875,17 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
if (free_mapped)
free_hotplug_page_range(pud_page(pud),
PUD_SIZE);
PUD_SIZE, altmap);
continue;
}
WARN_ON(!pud_table(pud));
unmap_hotplug_pmd_range(pudp, addr, next, free_mapped);
unmap_hotplug_pmd_range(pudp, addr, next, free_mapped, altmap);
} while (addr = next, addr < end);
}
static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
unsigned long end, bool free_mapped)
unsigned long end, bool free_mapped,
struct vmem_altmap *altmap)
{
unsigned long next;
p4d_t *p4dp, p4d;
@ -887,16 +898,24 @@ static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
continue;
WARN_ON(!p4d_present(p4d));
unmap_hotplug_pud_range(p4dp, addr, next, free_mapped);
unmap_hotplug_pud_range(p4dp, addr, next, free_mapped, altmap);
} while (addr = next, addr < end);
}
static void unmap_hotplug_range(unsigned long addr, unsigned long end,
bool free_mapped)
bool free_mapped, struct vmem_altmap *altmap)
{
unsigned long next;
pgd_t *pgdp, pgd;
/*
* altmap can only be used as vmemmap mapping backing memory.
* In case the backing memory itself is not being freed, then
* altmap is irrelevant. Warn about this inconsistency when
* encountered.
*/
WARN_ON(!free_mapped && altmap);
do {
next = pgd_addr_end(addr, end);
pgdp = pgd_offset_k(addr);
@ -905,7 +924,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
continue;
WARN_ON(!pgd_present(pgd));
unmap_hotplug_p4d_range(pgdp, addr, next, free_mapped);
unmap_hotplug_p4d_range(pgdp, addr, next, free_mapped, altmap);
} while (addr = next, addr < end);
}
@ -1069,7 +1088,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
struct vmem_altmap *altmap)
{
return vmemmap_populate_basepages(start, end, node);
return vmemmap_populate_basepages(start, end, node, altmap);
}
#else /* !ARM64_SWAPPER_USES_SECTION_MAPS */
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
@ -1101,7 +1120,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
if (pmd_none(READ_ONCE(*pmdp))) {
void *p = NULL;
p = vmemmap_alloc_block_buf(PMD_SIZE, node);
p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
if (!p)
return -ENOMEM;
@ -1119,7 +1138,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
#ifdef CONFIG_MEMORY_HOTPLUG
WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
unmap_hotplug_range(start, end, true);
unmap_hotplug_range(start, end, true, altmap);
free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
#endif
}
@ -1410,7 +1429,7 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
WARN_ON(pgdir != init_mm.pgd);
WARN_ON((start < PAGE_OFFSET) || (end > PAGE_END));
unmap_hotplug_range(start, end, false);
unmap_hotplug_range(start, end, false, NULL);
free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
}


@ -9,7 +9,7 @@
#include <linux/sched.h>
#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#include <asm-generic/pgalloc.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
@ -42,11 +42,6 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
return pte;
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_pages((unsigned long)pgd, PGD_ORDER);
}
static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
pgd_t *ret;


@ -23,7 +23,6 @@
#include <asm/traps.h>
#include <asm/sections.h>
#include <asm/mmu_context.h>
#include <asm/pgalloc.h>
#ifdef CONFIG_CPU_HAS_FPU
#include <abi/fpu.h>
#endif


@ -11,7 +11,7 @@
#include <asm/mem-layout.h>
#include <asm/atomic.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#include <asm-generic/pgalloc.h>
extern unsigned long long kmap_generation;
@ -41,11 +41,6 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
return pgd;
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long) pgd);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte)
{


@ -29,11 +29,6 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
return (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long)pgd);
}
#if CONFIG_PGTABLE_LEVELS == 4
static inline void
p4d_populate(struct mm_struct *mm, p4d_t * p4d_entry, pud_t * pud)
@ -41,15 +36,6 @@ p4d_populate(struct mm_struct *mm, p4d_t * p4d_entry, pud_t * pud)
p4d_val(*p4d_entry) = __pa(pud);
}
static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return (pud_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
}
static inline void pud_free(struct mm_struct *mm, pud_t *pud)
{
free_page((unsigned long)pud);
}
#define __pud_free_tlb(tlb, pud, address) pud_free((tlb)->mm, pud)
#endif /* CONFIG_PGTABLE_LEVELS == 4 */
@ -59,16 +45,6 @@ pud_populate(struct mm_struct *mm, pud_t * pud_entry, pmd_t * pmd)
pud_val(*pud_entry) = __pa(pmd);
}
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return (pmd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
free_page((unsigned long)pmd);
}
#define __pmd_free_tlb(tlb, pmd, address) pmd_free((tlb)->mm, pmd)
static inline void


@ -42,7 +42,6 @@
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/tlbflush.h>


@ -40,7 +40,6 @@
#include <asm/elf.h>
#include <asm/irq.h>
#include <asm/kexec.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/sal.h>
#include <asm/switch_to.h>


@ -39,7 +39,6 @@
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/sal.h>


@ -49,7 +49,6 @@
#include <asm/irq.h>
#include <asm/mca.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/sal.h>


@ -21,7 +21,6 @@
#include <linux/swap.h>
#include <asm/meminit.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
#include <asm/mca.h>


@ -24,7 +24,6 @@
#include <linux/efi.h>
#include <linux/nodemask.h>
#include <linux/slab.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/meminit.h>
#include <asm/numa.h>
@ -601,7 +600,6 @@ void __init paging_init(void)
max_dma = virt_to_phys((void *) MAX_DMA_ADDRESS) >> PAGE_SHIFT;
sparse_memory_present_with_active_regions(MAX_NUMNODES);
sparse_init();
#ifdef CONFIG_VIRTUAL_MEM_MAP
@ -656,7 +654,7 @@ void arch_refresh_nodedata(int update_node, pg_data_t *update_pgdat)
int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
struct vmem_altmap *altmap)
{
return vmemmap_populate_basepages(start, end, node);
return vmemmap_populate_basepages(start, end, node, NULL);
}
void vmemmap_free(unsigned long start, unsigned long end,


@ -18,7 +18,6 @@
#include <linux/sysctl.h>
#include <linux/log2.h>
#include <asm/mman.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>


@ -27,7 +27,6 @@
#include <asm/delay.h>
#include <asm/mmu_context.h>
#include <asm/pgalloc.h>
#include <asm/pal.h>
#include <asm/tlbflush.h>
#include <asm/dma.h>


@ -222,7 +222,7 @@ static inline void activate_mm(struct mm_struct *prev_mm,
#include <asm/setup.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
static inline int init_new_context(struct task_struct *tsk,
struct mm_struct *mm)


@ -13,7 +13,7 @@
#include <asm/tlb.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#include <asm-generic/pgalloc.h>
extern const char bad_pmd_string[];
@ -40,11 +40,6 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t page
*/
#define pmd_free(mm, x) do { } while (0)
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long) pgd);
}
static inline pgd_t * pgd_alloc(struct mm_struct *mm)
{
pgd_t *new_pgd;


@ -15,7 +15,7 @@
#include <linux/vmalloc.h>
#include <linux/export.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE)
void arch_dma_prep_coherent(struct page *page, size_t size)


@ -35,10 +35,9 @@
#include <asm/fpu.h>
#include <linux/uaccess.h>
#include <asm/traps.h>
#include <asm/pgalloc.h>
#include <asm/machdep.h>
#include <asm/siginfo.h>
#include <asm/tlbflush.h>
static const char *vec_names[] = {
[VEC_RESETSP] = "RESET SP",


@ -8,7 +8,7 @@
*/
#include <linux/module.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/traps.h>


@ -15,7 +15,6 @@
#include <asm/setup.h>
#include <asm/traps.h>
#include <asm/pgalloc.h>
extern void die_if_kernel(char *, struct pt_regs *, long);


@ -19,8 +19,8 @@
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/io.h>
#include <asm/tlbflush.h>
#undef DEBUG


@ -20,6 +20,7 @@
#include <asm/mmu_context.h>
#include <asm/mcf_pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#define KMAPAREA(x) ((x >= VMALLOC_START) && (x < KMAP_END))


@ -17,7 +17,6 @@
#include <asm/setup.h>
#include <asm/segment.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/traps.h>
#include <asm/machdep.h>


@ -22,7 +22,7 @@
#include <asm/dvma.h>
#include <asm/io.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
/* IOMMU support */


@ -28,12 +28,6 @@ static inline pgd_t *get_pgd(void)
return (pgd_t *)__get_free_pages(GFP_KERNEL|__GFP_ZERO, 0);
}
static inline void free_pgd(pgd_t *pgd)
{
free_page((unsigned long)pgd);
}
#define pgd_free(mm, pgd) free_pgd(pgd)
#define pgd_alloc(mm) get_pgd()
#define pmd_pgtable(pmd) pmd_page(pmd)


@ -15,7 +15,6 @@
#include <asm/processor.h> /* For TASK_SIZE */
#include <asm/mmu.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
extern void _tlbie(unsigned long address);
extern void _tlbia(void);


@ -18,7 +18,6 @@
#include <linux/tick.h>
#include <linux/bitops.h>
#include <linux/ptrace.h>
#include <asm/pgalloc.h>
#include <linux/uaccess.h> /* for USER_DS macros */
#include <asm/cacheflush.h>


@ -35,7 +35,6 @@
#include <asm/entry.h>
#include <asm/ucontext.h>
#include <linux/uaccess.h>
#include <asm/pgalloc.h>
#include <linux/syscalls.h>
#include <asm/cacheflush.h>
#include <asm/syscalls.h>


@ -172,9 +172,6 @@ void __init setup_memory(void)
&memblock.memory, 0);
}
/* XXX need to clip this if using highmem? */
sparse_memory_present_with_active_regions(0);
paging_init();
}


@ -13,7 +13,9 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#define __HAVE_ARCH_PMD_ALLOC_ONE
#define __HAVE_ARCH_PUD_ALLOC_ONE
#include <asm-generic/pgalloc.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
@ -47,11 +49,6 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
extern void pgd_init(unsigned long page);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_pages((unsigned long)pgd, PGD_ORDER);
}
#define __pte_free_tlb(tlb,pte,address) \
do { \
pgtable_pte_page_dtor(pte); \
@ -70,11 +67,6 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
return pmd;
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
free_pages((unsigned long)pmd, PMD_ORDER);
}
#define __pmd_free_tlb(tlb, x, addr) pmd_free((tlb)->mm, x)
#endif
@ -91,11 +83,6 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
return pud;
}
static inline void pud_free(struct mm_struct *mm, pud_t *pud)
{
free_pages((unsigned long)pud, PUD_ORDER);
}
static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
{
set_p4d(p4d, __p4d((unsigned long)pud));


@ -371,14 +371,6 @@ static void __init bootmem_init(void)
#endif
}
/*
* In any case the added to the memblock memory regions
* (highmem/lowmem, available/reserved, etc) are considered
* as present, so inform sparsemem about them.
*/
memblocks_present();
/*
* Reserve initrd memory if needed.
*/


@ -220,7 +220,6 @@ static __init void prom_meminit(void)
cpumask_clear(&__node_cpumask[node]);
}
}
memblocks_present();
max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
for (cpu = 0; cpu < loongson_sysconf.nr_cpus; cpu++) {


@ -402,8 +402,6 @@ void __init prom_meminit(void)
}
__node_data[node] = &null_node;
}
memblocks_present();
}
void __init prom_free_prom_memory(void)


@ -14,7 +14,6 @@
#include <asm/ip32/crime.h>
#include <asm/bootinfo.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
extern void crime_init(void);


@ -2,6 +2,8 @@
// Copyright (C) 2005-2017 Andes Technology Corporation
#include <linux/init_task.h>
#define __HAVE_ARCH_PGD_FREE
#include <asm/pgalloc.h>
#define FIRST_KERNEL_PGD_NR (USER_PTRS_PER_PGD)


@ -12,7 +12,7 @@
#include <linux/mm.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#include <asm-generic/pgalloc.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
@ -34,11 +34,6 @@ extern void pmd_init(unsigned long page, unsigned long pagetable);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_pages((unsigned long)pgd, PGD_ORDER);
}
#define __pte_free_tlb(tlb, pte, addr) \
do { \
pgtable_pte_page_dtor(pte); \


@ -20,6 +20,9 @@
#include <linux/mm.h>
#include <linux/memblock.h>
#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
#include <asm-generic/pgalloc.h>
extern int mem_init_done;
#define pmd_populate_kernel(mm, pmd, pte) \
@ -61,38 +64,8 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm)
}
#endif
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_page((unsigned long)pgd);
}
extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
static inline struct page *pte_alloc_one(struct mm_struct *mm)
{
struct page *pte;
pte = alloc_pages(GFP_KERNEL, 0);
if (!pte)
return NULL;
clear_page(page_address(pte));
if (!pgtable_pte_page_ctor(pte)) {
__free_page(pte);
return NULL;
}
return pte;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
pgtable_pte_page_dtor(pte);
__free_page(pte);
}
#define __pte_free_tlb(tlb, pte, addr) \
do { \
pgtable_pte_page_dtor(pte); \


@ -17,7 +17,6 @@
#include <linux/mm.h>
#include <asm/processor.h>
#include <asm/pgalloc.h>
#include <asm/current.h>
#include <linux/sched.h>


@ -26,7 +26,6 @@
#include <asm/io.h>
#include <asm/hardirq.h>
#include <asm/delay.h>
#include <asm/pgalloc.h>
#define DECLARE_EXPORT(name) extern void name(void); EXPORT_SYMBOL(name)


@ -5,7 +5,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/atomic.h>
#include <asm/pgalloc.h>
#include <asm-generic/mm_hooks.h>
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)


@ -10,7 +10,9 @@
#include <asm/cache.h>
#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */
#define __HAVE_ARCH_PMD_FREE
#define __HAVE_ARCH_PGD_FREE
#include <asm-generic/pgalloc.h>
/* Allocate the top level pgd (page directory)
*
@ -65,14 +67,6 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
(__u32)(__pa((unsigned long)pmd) >> PxD_VALUE_SHIFT)));
}
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
{
pmd_t *pmd = (pmd_t *)__get_free_pages(GFP_KERNEL, PMD_ORDER);
if (pmd)
memset(pmd, 0, PAGE_SIZE<<PMD_ORDER);
return pmd;
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED) {


@ -24,7 +24,6 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/sections.h>
#include <asm/shmparam.h>


@ -32,7 +32,6 @@
#include <asm/dma.h> /* for DMA_CHUNK_SIZE */
#include <asm/io.h>
#include <asm/page.h> /* get_order */
#include <asm/pgalloc.h>
#include <linux/uaccess.h>
#include <asm/tlbflush.h> /* for purge_tlb_*() macros */


@ -47,7 +47,6 @@
#include <asm/assembly.h>
#include <asm/pdc.h>
#include <asm/pdc_chassis.h>
#include <asm/pgalloc.h>
#include <asm/unwind.h>
#include <asm/sections.h>


@ -30,7 +30,6 @@
#include <asm/ucontext.h>
#include <asm/rt_sigframe.h>
#include <linux/uaccess.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/asm-offsets.h>


@ -39,7 +39,6 @@
#include <asm/irq.h> /* for CPU_IRQ_REGION and friends */
#include <asm/mmu_context.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
#include <asm/unistd.h>


@ -15,7 +15,6 @@
#include <linux/sysctl.h>
#include <asm/mman.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>


@ -689,11 +689,6 @@ void __init paging_init(void)
flush_cache_all_local(); /* start with known state */
flush_tlb_all_local(NULL);
/*
* Mark all memblocks as present for sparsemem using
* memory_present() and then initialize sparsemem.
*/
memblocks_present();
sparse_init();
parisc_bootmem_free();
}


@ -11,7 +11,7 @@
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/io.h>
#include <asm/pgalloc.h>
#include <linux/mm.h>
/*
* Generic mapping function (not visible outside):


@ -12,7 +12,6 @@
#ifndef __powerpc64__
#include <linux/pgtable.h>
#endif
#include <asm/pgalloc.h>
#ifndef __powerpc64__
#include <asm/page.h>
#include <asm/mmu.h>


@ -10,7 +10,6 @@
#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/machdep.h>


@ -9,7 +9,6 @@
#include <linux/mm_types.h>
#include <linux/mm.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
#include <asm/mmu.h>
#include <asm/tlb.h>


@ -21,7 +21,6 @@
#include <linux/mm.h>
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include <asm/bug.h>


@ -2,7 +2,6 @@
#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/security.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/machdep.h>
#include <asm/mman.h>


@ -29,7 +29,6 @@
#include <linux/slab.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/io.h>
#include <asm/mmu.h>


@ -225,12 +225,12 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
* fall back to system memory if the altmap allocation fail.
*/
if (altmap && !altmap_cross_boundary(altmap, start, page_size)) {
p = altmap_alloc_block_buf(page_size, altmap);
p = vmemmap_alloc_block_buf(page_size, node, altmap);
if (!p)
pr_debug("altmap block allocation failed, falling back to system memory");
}
if (!p)
p = vmemmap_alloc_block_buf(page_size, node);
p = vmemmap_alloc_block_buf(page_size, node, NULL);
if (!p)
return -ENOMEM;


@ -5,7 +5,6 @@
#include <linux/kasan.h>
#include <linux/memblock.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
static int __init
kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)


@ -4,7 +4,6 @@
#include <linux/kasan.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h>
#include <mm/mmu_decl.h>
int __init kasan_init_region(void *start, size_t size)


@ -34,7 +34,6 @@
#include <linux/dma-direct.h>
#include <linux/kprobes.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/io.h>
#include <asm/mmu_context.h>
@ -179,8 +178,6 @@ void __init mem_topology_setup(void)
void __init initmem_init(void)
{
/* XXX need to clip this if using highmem? */
sparse_memory_present_with_active_regions(0);
sparse_init();
}


@ -32,7 +32,6 @@
#include <linux/highmem.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/io.h>
#include <asm/mmu_context.h>


@ -13,7 +13,6 @@
#include <asm/fixmap.h>
#include <asm/code-patching.h>
#include <asm/inst.h>
#include <asm/pgalloc.h>
#include <mm/mmu_decl.h>


@ -37,7 +37,6 @@
#include <linux/highmem.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/io.h>
#include <asm/mmu_context.h>


@ -15,7 +15,6 @@
#include <linux/libfdt.h>
#include <linux/crash_core.h>
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
#include <asm/prom.h>
#include <asm/kdump.h>
#include <mm/mmu_decl.h>


@ -34,6 +34,7 @@
#include <linux/of_fdt.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include <asm/code-patching.h>


@ -953,7 +953,6 @@ void __init initmem_init(void)
get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
setup_node_data(nid, start_pfn, end_pfn);
sparse_memory_present_with_active_regions(nid);
}
sparse_init();


@ -23,7 +23,6 @@
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include <asm/hugetlb.h>


@ -31,7 +31,6 @@
#include <linux/slab.h>
#include <linux/hugetlb.h>
#include <asm/pgalloc.h>
#include <asm/page.h>
#include <asm/prom.h>
#include <asm/mmu_context.h>


@ -17,10 +17,10 @@
#include <linux/seq_file.h>
#include <linux/const.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/plpar_wrappers.h>
#include <linux/memblock.h>
#include <asm/firmware.h>
#include <asm/pgalloc.h>
struct pg_state {
struct seq_file *seq;

Some files were not shown because too many files have changed in this diff.