License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand that can be used instead of the full boilerplate text.
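As an illustration, the single machine-readable tag line takes the place
of license boilerplate such as the classic GPL notice (the boilerplate
wording shown here is just one common variant):

  /*
   * Instead of boilerplate along these lines:
   *
   *   This program is free software; you can redistribute it and/or
   *   modify it under the terms of the GNU General Public License
   *   version 2 as published by the Free Software Foundation.
   *
   * a file now carries a single tag line:
   */
  /* SPDX-License-Identifier: GPL-2.0 */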
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- files considered eligible had to be source code files;
- Make and config files were included as candidates if they contained >5
  lines of source;
- the file already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non-*/uapi/* files, that summary was:

  SPDX license identifier                            # files
  ---------------------------------------------------|-------
  GPL-2.0                                              11139

and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was tagged "GPL-2.0". The results of
that were:

  SPDX license identifier                            # files
  ---------------------------------------------------|-------
  GPL-2.0 WITH Linux-syscall-note                        930

and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file or if it had no licensing
  in it (per the prior point). Results summary:

  SPDX license identifier                              # files
  ---------------------------------------------------|------
  GPL-2.0 WITH Linux-syscall-note                          270
  GPL-2.0+ WITH Linux-syscall-note                         169
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
  LGPL-2.1+ WITH Linux-syscall-note                         15
  GPL-1.0+ WITH Linux-syscall-note                          14
  ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
  LGPL-2.0+ WITH Linux-syscall-note                          4
  LGPL-2.1 WITH Linux-syscall-note                           3
  ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
  ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and revisited later.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial version earlier
this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
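For illustration, the two comment forms the script has to distinguish
look like this (following the kernel convention of C block comments in
headers and C++-style line comments in .c sources):

  /* First line of a header file (foo.h): */
  /* SPDX-License-Identifier: GPL-2.0 */

  /* First line of a C source file (foo.c): */
  // SPDX-License-Identifier: GPL-2.0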
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */

/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, true)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched; increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt), as that will likely touch the same data; increases
 * cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)

/*
 * Consider buddies to be cache hot; decreases the likelihood of a
 * cache buddy being migrated away, increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)

/*
 * Allow wakeup-time preemption of the current task:
 */
SCHED_FEAT(WAKEUP_PREEMPTION, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
SCHED_FEAT(LB_BIAS, true)

/*
 * Decrement CPU capacity based on time not spent running tasks.
 */
SCHED_FEAT(NONTASK_CAPACITY, true)

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)

/*
 * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
 */
SCHED_FEAT(SIS_AVG_CPU, false)
SCHED_FEAT(SIS_PROP, true)

/*
 * Issue a WARN when we do multiple update_rq_clock() calls
 * in a single rq->lock section. Default disabled because the
 * annotations are not complete.
 */
SCHED_FEAT(WARN_DOUBLE_CLOCK, false)
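For context, each SCHED_FEAT(name, enabled) line in the file above is an
X-macro entry: the scheduler includes features.h more than once, with
different definitions of SCHED_FEAT, to build a feature enum and a
default bitmask. A condensed sketch of that consumption (simplified from
kernel/sched/sched.h, not verbatim kernel code):

  /* First expansion: enumerate the feature bits. */
  #define SCHED_FEAT(name, enabled) __SCHED_FEAT_##name,
  enum {
  #include "features.h"
  	__SCHED_FEAT_NR,
  };
  #undef SCHED_FEAT

  /* Second expansion: fold the default values into one bitmask. */
  #define SCHED_FEAT(name, enabled) (1UL << __SCHED_FEAT_##name) * enabled |
  static const unsigned int sysctl_sched_features =
  #include "features.h"
  	0;
  #undef SCHED_FEAT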
sched/rt: Use IPI to trigger RT task push migration instead of pulling
When debugging the latencies on a 40 core box, where we hit 300 to
500 microsecond latencies, I found there was a huge contention on the
runqueue locks.
Investigating it further, running ftrace, I found that it was due to
the pulling of RT tasks.
The test that was run was the following:
cyclictest --numa -p95 -m -d0 -i100
This created a thread on each CPU, that would set its wakeup in iterations
of 100 microseconds. The -d0 means that all the threads had the same
interval (100us). Each thread sleeps for 100us and wakes up and measures
its latencies.
cyclictest is maintained at:
git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
What happened was that another RT task would be scheduled on one of the
CPUs running our test while the test threads on the other CPUs went to
sleep and those CPUs scheduled idle. This caused the "pull" operation to
execute on all of those CPUs. Each of them saw the RT task that was
overloaded on the CPU still running its test, and each tried to grab that
task in a thundering-herd way.
To grab the task, each thread would do a double rq lock grab, grabbing
its own lock as well as the rq lock of the overloaded CPU. As the sched
domains on this box were rather flat for its size, I saw up to 12 CPUs
block on this lock at once. This caused a ripple effect with the
rq locks, especially since the taking was done via a double rq lock, which
means that several of the CPUs had their own rq locks held while trying
to take this rq lock. As these locks were blocked, any wakeups or load
balancing on these CPUs would also block on these locks, and the wait
time escalated.
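A minimal userspace sketch of that double rq lock pattern (types and
names are illustrative, not the kernel's; taking the locks in a fixed
address order is what avoids ABBA deadlock between two CPUs doing the
same dance):

  #include <pthread.h>

  struct rq {
  	pthread_mutex_t lock;
  	/* ... per-CPU runqueue state ... */
  };

  /* To pull a task, a CPU must hold its own rq lock and the overloaded
   * CPU's rq lock at the same time; ordering by address prevents two
   * CPUs from deadlocking while each holds one of the two locks. */
  static void double_rq_lock(struct rq *a, struct rq *b)
  {
  	if (a == b) {
  		pthread_mutex_lock(&a->lock);
  		return;
  	}
  	if (a > b) {
  		struct rq *tmp = a;
  		a = b;
  		b = tmp;
  	}
  	pthread_mutex_lock(&a->lock);	/* lower address first */
  	pthread_mutex_lock(&b->lock);
  }

When a dozen CPUs funnel through this against the same overloaded rq,
each already holding its own lock, the waits compound exactly as
described above.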
I've tried various methods to lessen the load, but things like an
atomic counter to only let one CPU grab the task won't work, because
the task may have a limited affinity, and we may pick the wrong
CPU to take that lock and do the pull, only to find out that the
CPU we picked isn't in the task's affinity.
Instead of doing the PULL, I now have the CPUs that want the pull to
send over an IPI to the overloaded CPU, and let that CPU pick what
CPU to push the task to. No more need to grab the rq lock, and the
push/pull algorithm still works fine.
With this patch, the latency dropped to just 150us over a 20 hour run.
Without the patch, the huge latencies would trigger in seconds.
I've created a new sched feature called RT_PUSH_IPI, which is enabled
by default.
When RT_PUSH_IPI is not enabled, the old method of grabbing the rq locks
and having the pulling CPU do the work is used. When RT_PUSH_IPI is
enabled, an IPI is sent to the overloaded CPU to do a push.
To enable or disable this at run time:
# mount -t debugfs nodev /sys/kernel/debug
# echo RT_PUSH_IPI > /sys/kernel/debug/sched_features
or
# echo NO_RT_PUSH_IPI > /sys/kernel/debug/sched_features
Update: The original version of this patch would send an IPI to all CPUs
in the RT overload list. But that could theoretically cause the reverse
issue: there could be lots of overloaded RT queues when one CPU lowers
its priority. It would then send an IPI to all the overloaded RT queues,
and they could then all try to grab the rq lock of the CPU lowering its
priority, and then we would have the same problem.
The latest design sends out only one IPI, to the first overloaded CPU.
That CPU tries to push any tasks that it can, and then looks for the next
overloaded CPU that can push to the source CPU. The IPIs stop when all
overloaded CPUs that have pushable tasks with priorities greater than the
source CPU's are covered. In case the source CPU lowers its priority
again, a flag is set to tell the IPI traversal to restart with the first
RT overloaded CPU after the source CPU.
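A toy model of that chained traversal (all data and names here are
illustrative; the real implementation walks the root domain's
overloaded-CPU mask under proper locking, not a plain array):

  #include <stdbool.h>
  #include <stdio.h>

  #define NR_CPUS 8

  /* Illustrative state: which CPUs are RT-overloaded, and the priority
   * of the highest-priority pushable task on each. */
  static bool overloaded[NR_CPUS]    = { false, true, false, true,
  				         false, false, true, false };
  static int  pushable_prio[NR_CPUS] = { 0, 90, 0, 70, 0, 0, 95, 0 };

  /* Next RT-overloaded CPU after @cpu, wrapping around; -1 once the
   * traversal arrives back at the source CPU. */
  static int next_overloaded(int cpu, int src)
  {
  	for (int i = 1; i <= NR_CPUS; i++) {
  		int c = (cpu + i) % NR_CPUS;
  		if (c == src)
  			return -1;
  		if (overloaded[c])
  			return c;
  	}
  	return -1;
  }

  int main(void)
  {
  	int src = 0;		/* CPU that just lowered its priority */
  	int src_prio = 50;	/* its new, lower priority */

  	/* A single IPI hops from one overloaded CPU to the next; each
  	 * recipient pushes what it can toward the source CPU and then
  	 * forwards the IPI, instead of every CPU pulling at once. */
  	for (int cpu = next_overloaded(src, src); cpu >= 0;
  	     cpu = next_overloaded(cpu, src)) {
  		printf("IPI -> CPU %d\n", cpu);
  		if (pushable_prio[cpu] > src_prio)
  			printf("  CPU %d pushes a prio-%d task to CPU %d\n",
  			       cpu, pushable_prio[cpu], src);
  	}
  	return 0;
  }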
Parts-suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Joern Engel <joern@purestorage.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150318144946.2f3cc982@gandalf.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
#ifdef HAVE_RT_PUSH_IPI
/*
 * In order to avoid a thundering herd attack of CPUs that are
 * lowering their priorities at the same time, and there being
 * a single CPU that has an RT task that can migrate and is waiting
 * to run, where the other CPUs will try to take that CPU's
 * rq lock and possibly create a large contention, sending an
 * IPI to that CPU and letting that CPU push the RT task to where
 * it should go may be a better scenario.
 */
SCHED_FEAT(RT_PUSH_IPI, true)
#endif

SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)
SCHED_FEAT(ATTACH_AGE_LOAD, true)

SCHED_FEAT(WA_IDLE, true)
SCHED_FEAT(WA_WEIGHT, true)
SCHED_FEAT(WA_BIAS, true)
sched/fair: Add util_est on top of PELT
The util_avg signal computed by PELT is too variable for some use-cases.
For example, a big task waking up after a long sleep period will have its
utilization almost completely decayed. This introduces some latency before
schedutil will be able to pick the best frequency to run a task.
The same issue can affect task placement. Indeed, since the task
utilization is already decayed at wakeup, when the task is enqueued on a
CPU, the CPU running a big task can be temporarily represented as almost
empty. This leads to a race condition where other tasks can potentially
be allocated on a CPU that just started to run a big task that slept for
a relatively long period.
Moreover, the PELT utilization of a task can be updated every [ms], thus
making it a continuously changing value for certain longer-running
tasks. This means that the instantaneous PELT utilization of a RUNNING
task is not really meaningful to properly support scheduler decisions.
For all these reasons, a more stable signal can do a better job of
representing the expected/estimated utilization of a task/cfs_rq.
Such a signal can be easily created on top of PELT by still using it as
an estimator which produces values to be aggregated on meaningful
events.
This patch adds a simple implementation of util_est, a new signal built on
top of PELT's util_avg where:
util_est(task) = max(task::util_avg, f(task::util_avg@dequeue))
This allows us to remember how big a task has been reported by PELT in
its previous activations via f(task::util_avg@dequeue), which is the new
_task_util_est(struct task_struct*) function added by this patch.
If a task should change its behavior and it runs longer in a new
activation, after a certain time its util_est will just track the
original PELT signal (i.e. task::util_avg).
The estimated utilization of a cfs_rq is defined only for root ones.
That's because the only sensible consumers of this signal are the
scheduler and schedutil when looking for the overall CPU utilization
due to FAIR tasks.
For this reason, the estimated utilization of a root cfs_rq is simply
defined as:
util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued)
where:
cfs_rq::util_est::enqueued = sum(_task_util_est(task))
for each RUNNABLE task on that root cfs_rq
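A toy C sketch of the two max() definitions above (names are
illustrative, and f() is reduced to a plain snapshot taken at dequeue
time rather than the filtered estimate the real signal can use):

  /* Illustrative model of the util_est signals; not kernel code. */
  struct task_model {
  	unsigned long util_avg;		/* live PELT utilization */
  	unsigned long util_est;		/* util_avg sampled at dequeue */
  };

  struct cfs_rq_model {
  	unsigned long util_avg;			/* live PELT utilization */
  	unsigned long util_est_enqueued;	/* sum of task estimates */
  };

  static unsigned long max_ul(unsigned long a, unsigned long b)
  {
  	return a > b ? a : b;
  }

  /* util_est(task) = max(task::util_avg, f(task::util_avg@dequeue)) */
  static unsigned long _task_util_est(const struct task_model *p)
  {
  	return max_ul(p->util_avg, p->util_est);
  }

  /* At dequeue, remember how big this activation turned out to be. */
  static void util_est_dequeue(struct task_model *p)
  {
  	p->util_est = p->util_avg;
  }

  /* util_est(cfs_rq) = max(cfs_rq::util_avg, cfs_rq::util_est::enqueued),
   * where util_est_enqueued sums _task_util_est() over RUNNABLE tasks. */
  static unsigned long cfs_rq_util_est(const struct cfs_rq_model *cfs)
  {
  	return max_ul(cfs->util_avg, cfs->util_est_enqueued);
  }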
It's worth noting that the estimated utilization is tracked only for
objects of interest, specifically:
- tasks: to better support task placement decisions;
- root cfs_rqs: to better support both task placement decisions and
  frequency selection.
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Rafael J . Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Todd Kjos <tkjos@android.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: http://lkml.kernel.org/r/20180309095245.11071-2-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
/*
 * UtilEstimation. Use estimated CPU utilization.
 */
SCHED_FEAT(UTIL_EST, true)