License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them were
  included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license applied.
For non */uapi/* files that summary was:
   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0                                              11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                        930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file, or if it had no licensing
  in it (per the prior point). Results summary:
   SPDX license identifier                             # files
   ---------------------------------------------------|------
   GPL-2.0 WITH Linux-syscall-note                         270
   GPL-2.0+ WITH Linux-syscall-note                        169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
   LGPL-2.1+ WITH Linux-syscall-note                        15
   GPL-1.0+ WITH Linux-syscall-note                         14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
   LGPL-2.0+ WITH Linux-syscall-note                         4
   LGPL-2.1 WITH Linux-syscall-note                          3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and revisited later.
In total, Kate, Philippe and Thomas logged over 70 hours of manual review
on the spreadsheet to determine the SPDX license identifiers to apply to
the source files, in some cases with confirmation by lawyers working with
the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, three files were found to
have copy/paste license identifier errors and have been fixed to reflect
the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
sent early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
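
For illustration only (made-up file names, not files from this series),
the tag the script emits differs by file type: C sources get a C++-style
comment on the first line, headers get a C-style comment, and exported
*/uapi/* headers carry the Linux-syscall-note variant:

// SPDX-License-Identifier: GPL-2.0
/* first line of a source file such as a hypothetical example.c */

/* SPDX-License-Identifier: GPL-2.0 */
/* first line of a header such as a hypothetical example.h */

/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/* first line of a hypothetical exported uapi header */
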
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LINUX_MM_INLINE_H
#define LINUX_MM_INLINE_H

#include <linux/atomic.h>
#include <linux/huge_mm.h>
#include <linux/swap.h>
#include <linux/string.h>

/**
 * folio_is_file_lru - Should the folio be on a file LRU or anon LRU?
 * @folio: The folio to test.
 *
 * We would like to get this info without a page flag, but the state
 * needs to survive until the folio is last deleted from the LRU, which
 * could be as far down as __page_cache_release.
 *
 * Return: An integer (not a boolean!) used to sort a folio onto the
 * right LRU list and to account folios correctly.
 * 1 if @folio is a regular filesystem backed page cache folio
 * or a lazily freed anonymous folio (e.g. via MADV_FREE).
 * 0 if @folio is a normal anonymous folio, a tmpfs folio or otherwise
 * ram or swap backed folio.
 */
static inline int folio_is_file_lru(struct folio *folio)
{
	return !folio_test_swapbacked(folio);
}

static inline int page_is_file_lru(struct page *page)
{
	return folio_is_file_lru(page_folio(page));
}

mm: update_lru_size do the __mod_zone_page_state
Konstantin Khlebnikov pointed out (nearly four years ago, when lumpy
reclaim was removed) that lru_size can be updated by -nr_taken once per
call to isolate_lru_pages(), instead of page by page.
Update it inside isolate_lru_pages(), or at its two callsites? I chose
to update it at the callsites, rearranging and grouping the updates by
nr_taken and nr_scanned together in both.
With one exception, mem_cgroup_update_lru_size(,lru,) is then used where
__mod_zone_page_state(,NR_LRU_BASE+lru,) is used; and we shall be adding
some more calls in a future commit. Make the code a little smaller and
simpler by incorporating the stat update in the lru_size update.
The exception was move_active_pages_to_lru(), which aggregated the
pgmoved stat update separately from the individual lru_size updates; but
I still think this is a simplification worth making.
However, the __mod_zone_page_state is not peculiar to mem_cgroups: so it
is better to use the name update_lru_size, which calls
mem_cgroup_update_lru_size when CONFIG_MEMCG is enabled.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm: add per-zone lru list stat
When I did stress test with hackbench, I got OOM message frequently
which didn't ever happen in zone-lru.
gfp_mask=0x26004c0(GFP_KERNEL|__GFP_REPEAT|__GFP_NOTRACK), order=0
..
..
__alloc_pages_nodemask+0xe52/0xe60
? new_slab+0x39c/0x3b0
new_slab+0x39c/0x3b0
___slab_alloc.constprop.87+0x6da/0x840
? __alloc_skb+0x3c/0x260
? _raw_spin_unlock_irq+0x27/0x60
? trace_hardirqs_on_caller+0xec/0x1b0
? finish_task_switch+0xa6/0x220
? poll_select_copy_remaining+0x140/0x140
__slab_alloc.isra.81.constprop.86+0x40/0x6d
? __alloc_skb+0x3c/0x260
kmem_cache_alloc+0x22c/0x260
? __alloc_skb+0x3c/0x260
__alloc_skb+0x3c/0x260
alloc_skb_with_frags+0x4e/0x1a0
sock_alloc_send_pskb+0x16a/0x1b0
? wait_for_unix_gc+0x31/0x90
? alloc_set_pte+0x2ad/0x310
unix_stream_sendmsg+0x28d/0x340
sock_sendmsg+0x2d/0x40
sock_write_iter+0x6c/0xc0
__vfs_write+0xc0/0x120
vfs_write+0x9b/0x1a0
? __might_fault+0x49/0xa0
SyS_write+0x44/0x90
do_fast_syscall_32+0xa6/0x1e0
sysenter_past_esp+0x45/0x74
Mem-Info:
active_anon:104698 inactive_anon:105791 isolated_anon:192
active_file:433 inactive_file:283 isolated_file:22
unevictable:0 dirty:0 writeback:296 unstable:0
slab_reclaimable:6389 slab_unreclaimable:78927
mapped:474 shmem:0 pagetables:101426 bounce:0
free:10518 free_pcp:334 free_cma:0
Node 0 active_anon:418792kB inactive_anon:423164kB active_file:1732kB inactive_file:1132kB unevictable:0kB isolated(anon):768kB isolated(file):88kB mapped:1896kB dirty:0kB writeback:1184kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:1478632 all_unreclaimable? yes
DMA free:3304kB min:68kB low:84kB high:100kB present:15992kB managed:15916kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:4088kB kernel_stack:0kB pagetables:2480kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 809 1965 1965
Normal free:3436kB min:3604kB low:4504kB high:5404kB present:897016kB managed:858460kB mlocked:0kB slab_reclaimable:25556kB slab_unreclaimable:311712kB kernel_stack:164608kB pagetables:30844kB bounce:0kB free_pcp:620kB local_pcp:104kB free_cma:0kB
lowmem_reserve[]: 0 0 9247 9247
HighMem free:33808kB min:512kB low:1796kB high:3080kB present:1183736kB managed:1183736kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:372252kB bounce:0kB free_pcp:428kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 2*4kB (UM) 2*8kB (UM) 0*16kB 1*32kB (U) 1*64kB (U) 2*128kB (UM) 1*256kB (U) 1*512kB (M) 0*1024kB 1*2048kB (U) 0*4096kB = 3192kB
Normal: 33*4kB (MH) 79*8kB (ME) 11*16kB (M) 4*32kB (M) 2*64kB (ME) 2*128kB (EH) 7*256kB (EH) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3244kB
HighMem: 2590*4kB (UM) 1568*8kB (UM) 491*16kB (UM) 60*32kB (UM) 6*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 33064kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
25121 total pagecache pages
24160 pages in swap cache
Swap cache stats: add 86371, delete 62211, find 42865/60187
Free swap = 4015560kB
Total swap = 4192252kB
524186 pages RAM
295934 pages HighMem/MovableOnly
9658 pages reserved
0 pages cma reserved
The order-0 allocation for the normal zone failed while there was a lot
of reclaimable memory (i.e., anonymous memory with free swap). I wanted
to analyze the problem, but it was hard because we removed the per-zone
lru stat, so I couldn't tell how much anonymous memory there was in the
normal/dma zones.
When we investigate an OOM problem, the reclaimable memory count is a
crucial stat for finding the problem. Without it, it's hard to parse the
OOM message, so I believe we should keep it.
With per-zone lru stat,
gfp_mask=0x26004c0(GFP_KERNEL|__GFP_REPEAT|__GFP_NOTRACK), order=0
Mem-Info:
active_anon:101103 inactive_anon:102219 isolated_anon:0
active_file:503 inactive_file:544 isolated_file:0
unevictable:0 dirty:0 writeback:34 unstable:0
slab_reclaimable:6298 slab_unreclaimable:74669
mapped:863 shmem:0 pagetables:100998 bounce:0
free:23573 free_pcp:1861 free_cma:0
Node 0 active_anon:404412kB inactive_anon:409040kB active_file:2012kB inactive_file:2176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:3452kB dirty:0kB writeback:136kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:1320845 all_unreclaimable? yes
DMA free:3296kB min:68kB low:84kB high:100kB active_anon:5540kB inactive_anon:0kB active_file:0kB inactive_file:0kB present:15992kB managed:15916kB mlocked:0kB slab_reclaimable:248kB slab_unreclaimable:2628kB kernel_stack:792kB pagetables:2316kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 809 1965 1965
Normal free:3600kB min:3604kB low:4504kB high:5404kB active_anon:86304kB inactive_anon:0kB active_file:160kB inactive_file:376kB present:897016kB managed:858524kB mlocked:0kB slab_reclaimable:24944kB slab_unreclaimable:296048kB kernel_stack:163832kB pagetables:35892kB bounce:0kB free_pcp:3076kB local_pcp:656kB free_cma:0kB
lowmem_reserve[]: 0 0 9247 9247
HighMem free:86156kB min:512kB low:1796kB high:3080kB active_anon:312852kB inactive_anon:410024kB active_file:1924kB inactive_file:2012kB present:1183736kB managed:1183736kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:365784kB bounce:0kB free_pcp:3868kB local_pcp:720kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 8*4kB (UM) 8*8kB (UM) 4*16kB (M) 2*32kB (UM) 2*64kB (UM) 1*128kB (M) 3*256kB (UME) 2*512kB (UE) 1*1024kB (E) 0*2048kB 0*4096kB = 3296kB
Normal: 240*4kB (UME) 160*8kB (UME) 23*16kB (ME) 3*32kB (UE) 3*64kB (UME) 2*128kB (ME) 1*256kB (U) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3408kB
HighMem: 10942*4kB (UM) 3102*8kB (UM) 866*16kB (UM) 76*32kB (UM) 11*64kB (UM) 4*128kB (UM) 1*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 86344kB
Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
54409 total pagecache pages
53215 pages in swap cache
Swap cache stats: add 300982, delete 247765, find 157978/226539
Free swap = 3803244kB
Total swap = 4192252kB
524186 pages RAM
295934 pages HighMem/MovableOnly
9642 pages reserved
0 pages cma reserved
With that, we can see the normal zone has 86M of reclaimable memory, so
we can tell that something goes wrong in reclaim (I will fix the problem
in the next patch).
[mgorman@techsingularity.net: rename zone LRU stats in /proc/vmstat]
Link: http://lkml.kernel.org/r/20160725072300.GK10438@techsingularity.net
Link: http://lkml.kernel.org/r/1469110261-7365-2-git-send-email-mgorman@techsingularity.net
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

static __always_inline void update_lru_size(struct lruvec *lruvec,
				enum lru_list lru, enum zone_type zid,
				long nr_pages)
{
	struct pglist_data *pgdat = lruvec_pgdat(lruvec);

	__mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
	__mod_zone_page_state(&pgdat->node_zones[zid],
				NR_ZONE_LRU_BASE + lru, nr_pages);
#ifdef CONFIG_MEMCG
	mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
#endif
}
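
As a minimal sketch of the batching the update_lru_size commit message
above describes (a hypothetical reclaim-side helper, not part of this
header), a caller can account a whole batch of isolated pages in one
call instead of page by page:

/* hypothetical helper, for illustration only */
static void account_isolated_batch(struct lruvec *lruvec, enum lru_list lru,
				   enum zone_type zid, long nr_taken)
{
	/* one -nr_taken update for the whole batch, not one per page */
	update_lru_size(lruvec, lru, zid, -nr_taken);
}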

/**
 * __folio_clear_lru_flags - Clear page lru flags before releasing a page.
 * @folio: The folio that was on lru and now has a zero reference.
 */
static __always_inline void __folio_clear_lru_flags(struct folio *folio)
{
	VM_BUG_ON_FOLIO(!folio_test_lru(folio), folio);

	__folio_clear_lru(folio);

	/* this shouldn't happen, so leave the flags to bad_page() */
	if (folio_test_active(folio) && folio_test_unevictable(folio))
		return;

	__folio_clear_active(folio);
	__folio_clear_unevictable(folio);
}

static __always_inline void __clear_page_lru_flags(struct page *page)
{
	__folio_clear_lru_flags(page_folio(page));
}

/**
 * folio_lru_list - Which LRU list should a folio be on?
 * @folio: The folio to test.
 *
 * Return: The LRU list a folio should be on, as an index
 * into the array of LRU lists.
 */
static __always_inline enum lru_list folio_lru_list(struct folio *folio)
{
	enum lru_list lru;

	VM_BUG_ON_FOLIO(folio_test_active(folio) && folio_test_unevictable(folio), folio);

	if (folio_test_unevictable(folio))
		return LRU_UNEVICTABLE;

	lru = folio_is_file_lru(folio) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
	if (folio_test_active(folio))
		lru += LRU_ACTIVE;

	return lru;
}

static __always_inline
void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
	enum lru_list lru = folio_lru_list(folio);

	update_lru_size(lruvec, lru, folio_zonenum(folio),
			folio_nr_pages(folio));
	list_add(&folio->lru, &lruvec->lists[lru]);
}

static __always_inline void add_page_to_lru_list(struct page *page,
				struct lruvec *lruvec)
{
	lruvec_add_folio(lruvec, page_folio(page));
}

static __always_inline
void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
{
	enum lru_list lru = folio_lru_list(folio);

	update_lru_size(lruvec, lru, folio_zonenum(folio),
			folio_nr_pages(folio));
	list_add_tail(&folio->lru, &lruvec->lists[lru]);
}

static __always_inline void add_page_to_lru_list_tail(struct page *page,
				struct lruvec *lruvec)
{
	lruvec_add_folio_tail(lruvec, page_folio(page));
}

static __always_inline
void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
{
	list_del(&folio->lru);
	update_lru_size(lruvec, folio_lru_list(folio), folio_zonenum(folio),
			-folio_nr_pages(folio));
}

static __always_inline void del_page_from_lru_list(struct page *page,
				struct lruvec *lruvec)
{
	lruvec_del_folio(lruvec, page_folio(page));
}
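
The add/del helpers above pair the list manipulation with the size and
stat accounting; as a minimal sketch (hypothetical helper, not part of
this header), moving a folio to another LRU list is a delete plus a
re-add, with update_lru_size() keeping the counters consistent on both
sides:

/* hypothetical helper, for illustration only */
static void example_activate_folio(struct lruvec *lruvec, struct folio *folio)
{
	lruvec_del_folio(lruvec, folio);	/* accounts -folio_nr_pages() */
	folio_set_active(folio);		/* folio_lru_list() now picks an active list */
	lruvec_add_folio(lruvec, folio);	/* accounts +folio_nr_pages() */
}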
|
2022-01-15 06:06:07 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_ANON_VMA_NAME
|
|
|
|
/*
|
|
|
|
* mmap_lock should be read-locked when calling vma_anon_name() and while using
|
|
|
|
* the returned pointer.
|
|
|
|
*/
|
|
|
|
extern const char *vma_anon_name(struct vm_area_struct *vma);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* mmap_lock should be read-locked for orig_vma->vm_mm.
|
|
|
|
* mmap_lock should be write-locked for new_vma->vm_mm or new_vma should be
|
|
|
|
* isolated.
|
|
|
|
*/
|
|
|
|
extern void dup_vma_anon_name(struct vm_area_struct *orig_vma,
|
|
|
|
struct vm_area_struct *new_vma);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* mmap_lock should be write-locked or vma should have been isolated under
|
|
|
|
* write-locked mmap_lock protection.
|
|
|
|
*/
|
|
|
|
extern void free_vma_anon_name(struct vm_area_struct *vma);
|
|
|
|
|
|
|
|
/* mmap_lock should be read-locked */
|
|
|
|
static inline bool is_same_vma_anon_name(struct vm_area_struct *vma,
|
|
|
|
const char *name)
|
|
|
|
{
|
|
|
|
const char *vma_name = vma_anon_name(vma);
|
|
|
|
|
|
|
|
/* either both NULL, or pointers to same string */
|
|
|
|
if (vma_name == name)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
return name && vma_name && !strcmp(name, vma_name);
|
|
|
|
}
|
|
|
|
#else /* CONFIG_ANON_VMA_NAME */
|
|
|
|
static inline const char *vma_anon_name(struct vm_area_struct *vma)
|
|
|
|
{
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
static inline void dup_vma_anon_name(struct vm_area_struct *orig_vma,
|
|
|
|
struct vm_area_struct *new_vma) {}
|
|
|
|
static inline void free_vma_anon_name(struct vm_area_struct *vma) {}
|
|
|
|
static inline bool is_same_vma_anon_name(struct vm_area_struct *vma,
|
|
|
|
const char *name)
|
|
|
|
{
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
#endif /* CONFIG_ANON_VMA_NAME */
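
Per the locking comments above, mmap_lock must be held for read around
vma_anon_name() and while its result is used; a minimal usage sketch
(hypothetical caller, not part of this header):

/* hypothetical helper, for illustration only */
static inline bool vma_has_anon_name(struct vm_area_struct *vma,
				     const char *name)
{
	bool same;

	mmap_read_lock(vma->vm_mm);	/* required by the comments above */
	same = is_same_vma_anon_name(vma, name);
	mmap_read_unlock(vma->vm_mm);

	return same;
}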

static inline void init_tlb_flush_pending(struct mm_struct *mm)
{
	atomic_set(&mm->tlb_flush_pending, 0);
}

static inline void inc_tlb_flush_pending(struct mm_struct *mm)
{
	atomic_inc(&mm->tlb_flush_pending);

	/*
	 * The only time this value is relevant is when there are indeed pages
	 * to flush. And we'll only flush pages after changing them, which
	 * requires the PTL.
	 *
	 * So the ordering here is:
	 *
	 *	atomic_inc(&mm->tlb_flush_pending);
	 *	spin_lock(&ptl);
	 *	...
	 *	set_pte_at();
	 *	spin_unlock(&ptl);
	 *
	 *				spin_lock(&ptl)
	 *				mm_tlb_flush_pending();
	 *				....
	 *				spin_unlock(&ptl);
	 *
	 *	flush_tlb_range();
	 *	atomic_dec(&mm->tlb_flush_pending);
	 *
	 * Where the increment is constrained by the PTL unlock, it thus
	 * ensures that the increment is visible if the PTE modification is
	 * visible. After all, if there is no PTE modification, nobody cares
	 * about TLB flushes either.
	 *
	 * This very much relies on users (mm_tlb_flush_pending() and
	 * mm_tlb_flush_nested()) only caring about _specific_ PTEs (and
	 * therefore specific PTLs), because with SPLIT_PTE_PTLOCKS and RCpc
	 * locks (PPC) the unlock of one doesn't order against the lock of
	 * another PTL.
	 *
	 * The decrement is ordered by the flush_tlb_range(), such that
	 * mm_tlb_flush_pending() will not return false unless all flushes have
	 * completed.
	 */
}

static inline void dec_tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * See inc_tlb_flush_pending().
	 *
	 * This cannot be smp_mb__before_atomic() because smp_mb() simply does
	 * not order against TLB invalidate completion, which is what we need.
	 *
	 * Therefore we must rely on tlb_flush_*() to guarantee order.
	 */
	atomic_dec(&mm->tlb_flush_pending);
}

static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * Must be called after having acquired the PTL; orders against that
	 * PTL's release and therefore ensures that if we observe the modified
	 * PTE we must also observe the increment from inc_tlb_flush_pending().
	 *
	 * That is, it only guarantees to return true if there is a flush
	 * pending for _this_ PTL.
	 */
	return atomic_read(&mm->tlb_flush_pending);
}

static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
{
	/*
	 * Similar to mm_tlb_flush_pending(), we must have acquired the PTL
	 * for which there is a TLB flush pending in order to guarantee
	 * we've seen both that PTE modification and the increment.
	 *
	 * (no requirement on actually still holding the PTL, that is irrelevant)
	 */
	return atomic_read(&mm->tlb_flush_pending) > 1;
}
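
A minimal sketch of the protocol the ordering comment in
inc_tlb_flush_pending() lays out (a hypothetical invalidation path, not
part of this header): raise the pending count before taking the PTL and
changing PTEs, flush, then drop it:

/* hypothetical helper, for illustration only */
static void example_invalidate_range(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;

	inc_tlb_flush_pending(mm);	/* made visible by the later PTL unlock */

	/* ... modify PTEs under the page-table lock here ... */

	flush_tlb_range(vma, start, end);
	dec_tlb_flush_pending(mm);	/* ordered after the flush has completed */
}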

#endif