linux-sg2042/kernel/sched

commit faa5993736
Author: Juri Lelli
Subject: sched/deadline: Prevent rt_time growth to infinity

Kirill Tkhai noted:

  Since deadline tasks share rt bandwidth, we must take care that the
  bandwidth timer is set. Otherwise rt_time may grow up to infinity in
  update_curr_dl(), if there are no other available RT tasks on the
  top-level bandwidth.

RT tasks were in fact throttled right after they got enqueued, and
never executed again (rt_time never went below rt_runtime again).
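
The root cause was the unconditional accrual in update_curr_dl(): DL
runtime gets charged to the root rt_rq no matter what. Roughly, in a
paraphrased sketch of the pre-patch accounting (surrounding code
elided):

  if (rt_bandwidth_enabled()) {
      struct rt_rq *rt_rq = &rq->rt;

      raw_spin_lock(&rt_rq->rt_runtime_lock);
      /*
       * Charged even when no RT task is runnable and no replenishment
       * timer is armed: rt_time only ever grows.
       */
      rt_rq->rt_time += delta_exec;
      raw_spin_unlock(&rt_rq->rt_runtime_lock);
  }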

Peter then proposed to accrue DL execution on rt_time only while the
rt timer is active, and proposed a patch (this patch is a slight
modification of that) to implement that behavior. While this solves
Kirill's problem, it has a drawback.
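
In rough C, Peter's proposal amounts to guarding that accrual with the
timer state (a sketch of the idea, not the literal patch; rt_b stands
for the rt_bandwidth the root rt_rq belongs to):

  /*
   * Charge DL execution to rt_time only while the rt period timer is
   * armed, i.e. while an RT task is waiting for replenishment.
   */
  if (hrtimer_active(&rt_b->rt_period_timer))
      rt_rq->rt_time += delta_exec;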

Indeed, Kirill noted again:

  It looks we may get into a situation, when all CPU time is shared
  between RT and DL tasks:

  rt_runtime = n
  rt_period  = 2n

  | RT working, DL sleeping  | DL working, RT sleeping      |
  -----------------------------------------------------------
  | (1)     duration = n     | (2)     duration = n         | (repeat)
  |--------------------------|------------------------------|
  | (rt_bw timer is running) | (rt_bw timer is not running) |

  No time for fair tasks at all.

While this can happen during the first period, if the rq is always
backlogged RT tasks won't get the opportunity to execute ever again:
rt_time reached rt_runtime during (1); suppose an RT task is enqueued
back after (2): it gets throttled, since the rt timer didn't fire, and
its replenishment is from now on eaten up by DL tasks, which keep
accruing their execution on rt_time (the rt timer is active, as we now
have an RT task waiting for replenishment). FAIR tasks are never
touched after this first period. This alone is not ideal, and the
situation is even worse!

The above (the nice case) practically never happens in reality, where
the rt timer is not aligned to task periods, tasks are in general not
periodic, and so on. Long story short, you always risk overloading
your system.

This patch is based on Peter's idea, but exploits an additional fact:
if there are no RT tasks enqueued, it makes little sense to keep
incrementing rt_time once it has reached the upper limit (DL tasks
have their own throttling mechanism).
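
Concretely, the check boils down to something like the sketch below: a
small helper on the rt side decides whether DL execution is still
worth accounting, and update_curr_dl() guards the accrual with it:

  /* kernel/sched/rt.c */
  bool sched_rt_bandwidth_account(struct rt_rq *rt_rq)
  {
      struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);

      /*
       * Keep accounting while the period timer is armed (an RT task
       * waits for replenishment) or while rt_time is still below
       * rt_runtime; past that, with no RT tasks around, stop - DL
       * has its own CBS throttling.
       */
      return (hrtimer_active(&rt_b->rt_period_timer) ||
              rt_rq->rt_time < rt_b->rt_runtime);
  }

  /* kernel/sched/deadline.c, in update_curr_dl() */
  if (rt_bandwidth_enabled()) {
      struct rt_rq *rt_rq = &rq->rt;

      raw_spin_lock(&rt_rq->rt_runtime_lock);
      if (sched_rt_bandwidth_account(rt_rq))
          rt_rq->rt_time += delta_exec;
      raw_spin_unlock(&rt_rq->rt_runtime_lock);
  }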

This cures both problems:

 - no matter how many DL instances ran in the past, rt_time will be
   only slightly above rt_runtime when an RT task is enqueued, and from
   that point on (after the first replenishment) the task will execute
   normally;

 - you can still eat up all the bandwidth during the first period, but
   not anymore after that; remember that DL execution increments
   rt_time only until the upper limit is reached.

The situation is still not perfect! But we have a simple solution for
now that limits how much you can jeopardize your system, while we keep
working towards the right answer: RT groups scheduled using deadline
servers.

Reported-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20140225151515.617714e2f2cd6c558531ba61@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Date: 2014-02-27 12:29:41 +01:00
Makefile sched/deadline: speed up SCHED_DEADLINE pushes with a push-heap 2014-01-13 13:46:46 +01:00
auto_group.c sched/autogroup: Fix race with task_groups list 2013-05-28 09:40:22 +02:00
auto_group.h Revert "sched/autogroup: Fix crash on reboot when autogroup is disabled" 2012-12-11 10:23:45 +01:00
clock.c sched/clock: Fixup early initialization 2014-01-23 14:48:36 +01:00
completion.c sched: Move completion code from core.c to completion.c 2013-11-06 07:49:19 +01:00
core.c sched: Add 'flags' argument to sched_{set,get}attr() syscalls 2014-02-21 21:27:10 +01:00
cpuacct.c cgroup: replace cftype->read_seq_string() with cftype->seq_show() 2013-12-05 12:28:04 -05:00
cpuacct.h sched/cpuacct: Initialize root cpuacct earlier 2013-04-10 13:54:20 +02:00
cpudeadline.c sched/deadline: Switch CPU's presence test order 2014-02-27 12:29:40 +01:00
cpudeadline.h sched/deadline: speed up SCHED_DEADLINE pushes with a push-heap 2014-01-13 13:46:46 +01:00
cpupri.c sched: Fix some kernel-doc warnings 2013-07-18 09:58:21 +02:00
cpupri.h sched: Move all scheduler bits into kernel/sched/ 2011-11-17 12:20:22 +01:00
cputime.c Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2013-09-05 12:36:46 -07:00
deadline.c sched/deadline: Prevent rt_time growth to infinity 2014-02-27 12:29:41 +01:00
debug.c sched/clock, x86: Use a static_key for sched_clock_stable 2014-01-13 15:13:13 +01:00
fair.c sched: Fix double normalization of vruntime 2014-02-27 12:29:38 +01:00
features.h sched/numa: Resist moving tasks towards nodes with fewer hinting faults 2013-10-09 12:40:27 +02:00
idle_task.c sched/numa: Introduce migrate_swap() 2013-10-09 12:40:46 +02:00
proc.c sched: Change get_rq_runnable_load() to static and inline 2013-06-27 10:07:44 +02:00
rt.c sched/deadline: Prevent rt_time growth to infinity 2014-02-27 12:29:41 +01:00
sched.h sched/deadline: Remove useless dl_nr_total 2014-02-21 21:27:10 +01:00
stats.c fix a leak in /proc/schedstats 2013-04-29 15:41:45 -04:00
stats.h sched: Micro-optimize by dropping unnecessary task_rq() calls 2013-09-25 13:51:06 +02:00
stop_task.c sched/deadline: Add SCHED_DEADLINE structures & implementation 2014-01-13 13:41:06 +01:00
wait.c sched: Move wait code from core.c to wait.c 2013-11-06 07:49:18 +01:00