License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information fall under the
kernel's default license, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even
if <5 lines) were included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license IDs and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_SCHED_TOPOLOGY_H
#define _LINUX_SCHED_TOPOLOGY_H

#include <linux/topology.h>

#include <linux/sched/idle.h>

/*
 * sched-domains (multiprocessor balancing) declarations:
 */
#ifdef CONFIG_SMP

/* Generate SD flag indexes */
#define SD_FLAG(name, mflags) __##name,
enum {
	#include <linux/sched/sd_flags.h>
	__SD_FLAG_CNT,
};
#undef SD_FLAG
/* Generate SD flag bits */
#define SD_FLAG(name, mflags) name = 1 << __##name,
enum {
	#include <linux/sched/sd_flags.h>
};
#undef SD_FLAG
#ifdef CONFIG_SCHED_DEBUG

struct sd_flag_debug {
	unsigned int meta_flags;
	char *name;
};
extern const struct sd_flag_debug sd_flag_debug[];
#endif

#ifdef CONFIG_SCHED_SMT
static inline int cpu_smt_flags(void)
{
	return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;
}
#endif
sched: Add cluster scheduler level in core and related Kconfig for ARM64
This patch adds a scheduler level for clusters and automatically enables
load balancing among clusters. It will directly benefit many workloads
which want more resources, such as memory bandwidth and caches.
Testing has been done widely in two different hardware configurations of
Kunpeng920:
24 cores in one NUMA node (6 clusters in each NUMA node);
32 cores in one NUMA node (8 clusters in each NUMA node)
Workload is run on either one NUMA node or four NUMA nodes; thus, this
can estimate the effect of cluster spreading w/ and w/o NUMA load
balance.
* Stream benchmark:
4threads stream (on 1NUMA * 24cores = 24cores)
stream stream
w/o patch w/ patch
MB/sec copy 29929.64 ( 0.00%) 32932.68 ( 10.03%)
MB/sec scale 29861.10 ( 0.00%) 32710.58 ( 9.54%)
MB/sec add 27034.42 ( 0.00%) 32400.68 ( 19.85%)
MB/sec triad 27225.26 ( 0.00%) 31965.36 ( 17.41%)
6threads stream (on 1NUMA * 24cores = 24cores)
stream stream
w/o patch w/ patch
MB/sec copy 40330.24 ( 0.00%) 42377.68 ( 5.08%)
MB/sec scale 40196.42 ( 0.00%) 42197.90 ( 4.98%)
MB/sec add 37427.00 ( 0.00%) 41960.78 ( 12.11%)
MB/sec triad 37841.36 ( 0.00%) 42513.64 ( 12.35%)
12threads stream (on 1NUMA * 24cores = 24cores)
stream stream
w/o patch w/ patch
MB/sec copy 52639.82 ( 0.00%) 53818.04 ( 2.24%)
MB/sec scale 52350.30 ( 0.00%) 53253.38 ( 1.73%)
MB/sec add 53607.68 ( 0.00%) 55198.82 ( 2.97%)
MB/sec triad 54776.66 ( 0.00%) 56360.40 ( 2.89%)
Thus, it could help memory-bound workloads, especially under medium load.
Similar improvement is also seen in lkp-pbzip2:
* lkp-pbzip2 benchmark
2-96 threads (on 4NUMA * 24cores = 96cores)
lkp-pbzip2 lkp-pbzip2
w/o patch w/ patch
Hmean tput-2 11062841.57 ( 0.00%) 11341817.51 * 2.52%*
Hmean tput-5 26815503.70 ( 0.00%) 27412872.65 * 2.23%*
Hmean tput-8 41873782.21 ( 0.00%) 43326212.92 * 3.47%*
Hmean tput-12 61875980.48 ( 0.00%) 64578337.51 * 4.37%*
Hmean tput-21 105814963.07 ( 0.00%) 111381851.01 * 5.26%*
Hmean tput-30 150349470.98 ( 0.00%) 156507070.73 * 4.10%*
Hmean tput-48 237195937.69 ( 0.00%) 242353597.17 * 2.17%*
Hmean tput-79 360252509.37 ( 0.00%) 362635169.23 * 0.66%*
Hmean tput-96 394571737.90 ( 0.00%) 400952978.48 * 1.62%*
2-24 threads (on 1NUMA * 24cores = 24cores)
lkp-pbzip2 lkp-pbzip2
w/o patch w/ patch
Hmean tput-2 11071705.49 ( 0.00%) 11296869.10 * 2.03%*
Hmean tput-4 20782165.19 ( 0.00%) 21949232.15 * 5.62%*
Hmean tput-6 30489565.14 ( 0.00%) 33023026.96 * 8.31%*
Hmean tput-8 40376495.80 ( 0.00%) 42779286.27 * 5.95%*
Hmean tput-12 61264033.85 ( 0.00%) 62995632.78 * 2.83%*
Hmean tput-18 86697139.39 ( 0.00%) 86461545.74 ( -0.27%)
Hmean tput-24 104854637.04 ( 0.00%) 104522649.46 * -0.32%*
In the case of 6 threads and 8 threads, we see the greatest performance
improvement.
Similar improvement can be seen on lkp-pixz though the improvement is
smaller:
* lkp-pixz benchmark
2-24 threads lkp-pixz (on 1NUMA * 24cores = 24cores)
lkp-pixz lkp-pixz
w/o patch w/ patch
Hmean tput-2 6486981.16 ( 0.00%) 6561515.98 * 1.15%*
Hmean tput-4 11645766.38 ( 0.00%) 11614628.43 ( -0.27%)
Hmean tput-6 15429943.96 ( 0.00%) 15957350.76 * 3.42%*
Hmean tput-8 19974087.63 ( 0.00%) 20413746.98 * 2.20%*
Hmean tput-12 28172068.18 ( 0.00%) 28751997.06 * 2.06%*
Hmean tput-18 39413409.54 ( 0.00%) 39896830.55 * 1.23%*
Hmean tput-24 49101815.85 ( 0.00%) 49418141.47 * 0.64%*
* SPECrate benchmark
4,8,16 copies mcf_r(on 1NUMA * 32cores = 32cores)
Base Base
Run Time Rate
------- ---------
4 Copies w/o 580 (w/ 570) w/o 11.1 (w/ 11.3)
8 Copies w/o 647 (w/ 605) w/o 20.0 (w/ 21.4, +7%)
16 Copies w/o 844 (w/ 844) w/o 30.6 (w/ 30.6)
32 Copies(on 4NUMA * 32 cores = 128cores)
[w/o patch]
Base Base Base
Benchmarks Copies Run Time Rate
--------------- ------- --------- ---------
500.perlbench_r 32 584 87.2 *
502.gcc_r 32 503 90.2 *
505.mcf_r 32 745 69.4 *
520.omnetpp_r 32 1031 40.7 *
523.xalancbmk_r 32 597 56.6 *
525.x264_r 1 -- CE
531.deepsjeng_r 32 336 109 *
541.leela_r 32 556 95.4 *
548.exchange2_r 32 513 163 *
557.xz_r 32 530 65.2 *
Est. SPECrate2017_int_base 80.3
[w/ patch]
Base Base Base
Benchmarks Copies Run Time Rate
--------------- ------- --------- ---------
500.perlbench_r 32 580 87.8 (+0.688%) *
502.gcc_r 32 477 95.1 (+5.432%) *
505.mcf_r 32 644 80.3 (+13.574%) *
520.omnetpp_r 32 942 44.6 (+9.58%) *
523.xalancbmk_r 32 560 60.4 (+6.714%) *
525.x264_r 1 -- CE
531.deepsjeng_r 32 337 109 (+0.000%) *
541.leela_r 32 554 95.6 (+0.210%) *
548.exchange2_r 32 515 163 (+0.000%) *
557.xz_r 32 524 66.0 (+1.227%) *
Est. SPECrate2017_int_base 83.7 (+4.062%)
On the other hand, it is slightly helpful to CPU-bound tasks like
kernbench:
* 24-96 threads kernbench (on 4NUMA * 24cores = 96cores)
kernbench kernbench
w/o cluster w/ cluster
Min user-24 12054.67 ( 0.00%) 12024.19 ( 0.25%)
Min syst-24 1751.51 ( 0.00%) 1731.68 ( 1.13%)
Min elsp-24 600.46 ( 0.00%) 598.64 ( 0.30%)
Min user-48 12361.93 ( 0.00%) 12315.32 ( 0.38%)
Min syst-48 1917.66 ( 0.00%) 1892.73 ( 1.30%)
Min elsp-48 333.96 ( 0.00%) 332.57 ( 0.42%)
Min user-96 12922.40 ( 0.00%) 12921.17 ( 0.01%)
Min syst-96 2143.94 ( 0.00%) 2110.39 ( 1.56%)
Min elsp-96 211.22 ( 0.00%) 210.47 ( 0.36%)
Amean user-24 12063.99 ( 0.00%) 12030.78 * 0.28%*
Amean syst-24 1755.20 ( 0.00%) 1735.53 * 1.12%*
Amean elsp-24 601.60 ( 0.00%) 600.19 ( 0.23%)
Amean user-48 12362.62 ( 0.00%) 12315.56 * 0.38%*
Amean syst-48 1921.59 ( 0.00%) 1894.95 * 1.39%*
Amean elsp-48 334.10 ( 0.00%) 332.82 * 0.38%*
Amean user-96 12925.27 ( 0.00%) 12922.63 ( 0.02%)
Amean syst-96 2146.66 ( 0.00%) 2122.20 * 1.14%*
Amean elsp-96 211.96 ( 0.00%) 211.79 ( 0.08%)
Note this patch isn't a universal win; it might hurt workloads which
can benefit from packing. Tasks which want to take advantage of the
lower communication latency within one cluster won't necessarily be
packed into one cluster while the kernel is not aware of clusters;
they only have some chance of being randomly packed. This patch will
make them more likely to spread.
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Tested-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
#ifdef CONFIG_SCHED_CLUSTER
static inline int cpu_cluster_flags(void)
{
	return SD_SHARE_PKG_RESOURCES;
}
#endif

#ifdef CONFIG_SCHED_MC
static inline int cpu_core_flags(void)
{
	return SD_SHARE_PKG_RESOURCES;
}
#endif

#ifdef CONFIG_NUMA
static inline int cpu_numa_flags(void)
{
	return SD_NUMA;
}
#endif

extern int arch_asym_cpu_priority(int cpu);

struct sched_domain_attr {
	int relax_domain_level;
};

#define SD_ATTR_INIT (struct sched_domain_attr) {	\
	.relax_domain_level = -1,			\
}

extern int sched_domain_level_max;

struct sched_group;

struct sched_domain_shared {
	atomic_t ref;
	atomic_t nr_busy_cpus;
	int has_idle_cores;
};

struct sched_domain {
	/* These fields must be setup */
	struct sched_domain __rcu *parent;	/* top domain must be null terminated */
	struct sched_domain __rcu *child;	/* bottom domain must be null terminated */
	struct sched_group *groups;		/* the balancing groups of the domain */
	unsigned long min_interval;		/* Minimum balance interval ms */
	unsigned long max_interval;		/* Maximum balance interval ms */
	unsigned int busy_factor;		/* less balancing by factor if busy */
	unsigned int imbalance_pct;		/* No balance until over watermark */
	unsigned int cache_nice_tries;		/* Leave cache hot tasks for # tries */
	unsigned int imb_numa_nr;		/* Nr running tasks that allows a NUMA imbalance */

	int nohz_idle;				/* NOHZ IDLE status */
	int flags;				/* See SD_* */
	int level;

	/* Runtime fields. */
	unsigned long last_balance;		/* init to jiffies. units in jiffies */
	unsigned int balance_interval;		/* initialise to 1. units in ms. */
	unsigned int nr_balance_failed;		/* initialise to 0 */

	/* idle_balance() stats */
	u64 max_newidle_lb_cost;
	unsigned long last_decay_max_lb_cost;

	u64 avg_scan_cost;			/* select_idle_sibling */

#ifdef CONFIG_SCHEDSTATS
	/* load_balance() stats */
	unsigned int lb_count[CPU_MAX_IDLE_TYPES];
	unsigned int lb_failed[CPU_MAX_IDLE_TYPES];
	unsigned int lb_balanced[CPU_MAX_IDLE_TYPES];
	unsigned int lb_imbalance[CPU_MAX_IDLE_TYPES];
	unsigned int lb_gained[CPU_MAX_IDLE_TYPES];
	unsigned int lb_hot_gained[CPU_MAX_IDLE_TYPES];
	unsigned int lb_nobusyg[CPU_MAX_IDLE_TYPES];
	unsigned int lb_nobusyq[CPU_MAX_IDLE_TYPES];

	/* Active load balancing */
	unsigned int alb_count;
	unsigned int alb_failed;
	unsigned int alb_pushed;

	/* SD_BALANCE_EXEC stats */
	unsigned int sbe_count;
	unsigned int sbe_balanced;
	unsigned int sbe_pushed;

	/* SD_BALANCE_FORK stats */
	unsigned int sbf_count;
	unsigned int sbf_balanced;
	unsigned int sbf_pushed;

	/* try_to_wake_up() stats */
	unsigned int ttwu_wake_remote;
	unsigned int ttwu_move_affine;
	unsigned int ttwu_move_balance;
#endif
#ifdef CONFIG_SCHED_DEBUG
	char *name;
#endif
	union {
		void *private;			/* used during construction */
		struct rcu_head rcu;		/* used during destruction */
	};
	struct sched_domain_shared *shared;

	unsigned int span_weight;
	/*
	 * Span of all CPUs in this domain.
	 *
	 * NOTE: this field is variable length. (Allocated dynamically
	 * by attaching extra space to the end of the structure,
	 * depending on how many CPUs the kernel has booted up with)
	 */
	unsigned long span[];
};

static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
{
	return to_cpumask(sd->span);
}
extern void partition_sched_domains_locked(int ndoms_new,
					   cpumask_var_t doms_new[],
					   struct sched_domain_attr *dattr_new);

extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
				    struct sched_domain_attr *dattr_new);

/* Allocate an array of sched domains, for partition_sched_domains(). */
cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);

bool cpus_share_cache(int this_cpu, int that_cpu);

typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);
typedef int (*sched_domain_flags_f)(void);

#define SDTL_OVERLAP		0x01

struct sd_data {
	struct sched_domain *__percpu *sd;
	struct sched_domain_shared *__percpu *sds;
	struct sched_group *__percpu *sg;
	struct sched_group_capacity *__percpu *sgc;
};

struct sched_domain_topology_level {
	sched_domain_mask_f mask;
	sched_domain_flags_f sd_flags;
	int flags;
	int numa_level;
	struct sd_data data;
#ifdef CONFIG_SCHED_DEBUG
	char *name;
#endif
};

extern void set_sched_topology(struct sched_domain_topology_level *tl);

#ifdef CONFIG_SCHED_DEBUG
# define SD_INIT_NAME(type)		.name = #type
#else
# define SD_INIT_NAME(type)
#endif

#else /* CONFIG_SMP */

struct sched_domain_attr;

static inline void
partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
			       struct sched_domain_attr *dattr_new)
{
}

static inline void
partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
			struct sched_domain_attr *dattr_new)
{
}

static inline bool cpus_share_cache(int this_cpu, int that_cpu)
{
	return true;
}

#endif	/* !CONFIG_SMP */

#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
extern void rebuild_sched_domains_energy(void);
#else
static inline void rebuild_sched_domains_energy(void)
{
}
#endif

#ifndef arch_scale_cpu_capacity
/**
 * arch_scale_cpu_capacity - get the capacity scale factor of a given CPU.
 * @cpu: the CPU in question.
 *
 * Return: the CPU scale factor normalized against SCHED_CAPACITY_SCALE, i.e.
 *
 *             max_perf(cpu)
 *      ----------------------------- * SCHED_CAPACITY_SCALE
 *      max(max_perf(c) : c \in CPUs)
 */
static __always_inline
unsigned long arch_scale_cpu_capacity(int cpu)
{
	return SCHED_CAPACITY_SCALE;
}
#endif
#ifndef arch_scale_thermal_pressure
static __always_inline
unsigned long arch_scale_thermal_pressure(int cpu)
{
	return 0;
}
#endif

#ifndef arch_update_thermal_pressure
static __always_inline
void arch_update_thermal_pressure(const struct cpumask *cpus,
				  unsigned long capped_frequency)
{ }
#endif

static inline int task_node(const struct task_struct *p)
{
	return cpu_to_node(task_cpu(p));
}

#endif /* _LINUX_SCHED_TOPOLOGY_H */