linux-sg2042/kernel/sched
Peter Zijlstra e44bc5c5d0 sched/fair: Improve the ->group_imb logic
Group imbalance is meant to deal with situations where affinity masks
and sched domains don't align well, such as 3 cpus from one group and
6 from another. In this case the domain-based balancer will want to
put an equal number of tasks on each side even though the two sides
don't have an equal number of cpus.

Currently group_imb is set whenever two cpus of a group have a weight
difference of at least one avg task and the heaviest cpu has at least
two tasks. A group with imbalance set will always be picked as busiest
and a balance pass will be forced.
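
For reference, the pre-patch check in update_sg_lb_stats() boils down to
roughly the following. This is a hedged, standalone userspace sketch, not
the kernel code itself: struct cpu_sample and old_group_imb() are made-up
names for illustration, while max_cpu_load, min_cpu_load,
avg_load_per_task and max_nr_running mirror the per-group statistics
fair.c computed at the time.

#include <stdio.h>      /* for the test driver further below */

/* Toy per-cpu sample: weighted load contribution and runnable task count. */
struct cpu_sample {
        unsigned long load;
        unsigned int nr_running;
};

/*
 * Sketch of the pre-patch rule: flag an imbalance when the spread in
 * per-cpu load is at least one average task *and* the heaviest cpu
 * (the one holding max_cpu_load) runs more than one task.
 */
static int old_group_imb(const struct cpu_sample *cpu, int n)
{
        unsigned long max_cpu_load = 0, min_cpu_load = ~0UL, sum_load = 0;
        unsigned int max_nr_running = 0, sum_nr_running = 0;
        unsigned long avg_load_per_task;
        int i;

        for (i = 0; i < n; i++) {
                unsigned long load = cpu[i].load;

                if (load > max_cpu_load) {
                        max_cpu_load = load;
                        max_nr_running = cpu[i].nr_running;
                }
                if (min_cpu_load > load)
                        min_cpu_load = load;

                sum_load += load;
                sum_nr_running += cpu[i].nr_running;
        }

        avg_load_per_task = sum_nr_running ? sum_load / sum_nr_running : 0;

        return (max_cpu_load - min_cpu_load) >= avg_load_per_task &&
               max_nr_running > 1;
}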

The problem is that this logic can trigger even when no affinity masks
are involved, causing weird balancing decisions. The observed behaviour
was that, of 6 cpus, 5 had 2 tasks and 1 had 3 tasks; because the load
spread was 1 avg task (all tasks had the same weight) and nr_running on
the heaviest cpu was > 1, the group_imbalance logic triggered and did
the weird thing of pulling more load instead of trying to move the
single excess task to the other domain of 6 cpus, which had 5 cpus with
2 tasks and 1 cpu with 1 task.
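
Feeding the observed workload into the sketch above shows why the old
rule fires; the 2048/3072 load figures assume equal-weight (nice-0,
weight 1024) tasks, which is an assumption made for illustration.

int main(void)
{
        /*
         * Observed case: 5 cpus with 2 equal-weight tasks each, 1 cpu
         * with 3.  With a task weight of 1024 the load spread is
         * 3072 - 2048 = 1024, exactly one avg task, and the heaviest
         * cpu runs 3 > 1 tasks, so the old rule flags an imbalance.
         */
        struct cpu_sample grp[6] = {
                { 2048, 2 }, { 2048, 2 }, { 2048, 2 },
                { 2048, 2 }, { 2048, 2 }, { 3072, 3 },
        };

        printf("old group_imb = %d\n", old_group_imb(grp, 6)); /* prints 1 */
        return 0;
}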

Curb the group_imbalance logic by weakening the nr_running condition:
also track min_nr_running and use the difference in nr_running over
the set instead of the absolute max nr_running.
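
Concretely, a hedged sketch of the post-patch rule, continuing the toy
model above (only the min/max nr_running tracking and the final
comparison mirror the actual fair.c change; the rest is scaffolding):

/*
 * Sketch of the post-patch rule: track both the max and min nr_running
 * across the group and only report an imbalance when the task counts
 * themselves are skewed by more than one.
 */
static int new_group_imb(const struct cpu_sample *cpu, int n)
{
        unsigned long max_cpu_load = 0, min_cpu_load = ~0UL, sum_load = 0;
        unsigned int max_nr_running = 0, min_nr_running = ~0U;
        unsigned int sum_nr_running = 0;
        unsigned long avg_load_per_task;
        int i;

        for (i = 0; i < n; i++) {
                unsigned long load = cpu[i].load;
                unsigned int nr_running = cpu[i].nr_running;

                if (load > max_cpu_load)
                        max_cpu_load = load;
                if (min_cpu_load > load)
                        min_cpu_load = load;

                if (nr_running > max_nr_running)
                        max_nr_running = nr_running;
                if (min_nr_running > nr_running)
                        min_nr_running = nr_running;

                sum_load += load;
                sum_nr_running += nr_running;
        }

        avg_load_per_task = sum_nr_running ? sum_load / sum_nr_running : 0;

        return (max_cpu_load - min_cpu_load) >= avg_load_per_task &&
               (max_nr_running - min_nr_running) > 1;
}

For the 6-cpu case above the difference is 3 - 2 = 1, so
new_group_imb() returns 0 and the group is no longer forced busiest,
while a group where affinity masks really do pile tasks up (say 4
tasks on one cpu and 1 on another) still trips the check.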

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-9s7dedozxo8kjsb9kqlrukkf@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-14 15:05:28 +02:00
Makefile
auto_group.c sched: Clean up parameter passing of proc_sched_autogroup_set_nice() 2012-03-02 12:23:49 +01:00
auto_group.h
clock.c
core.c sched/nohz: Fix rq->cpu_load[] calculations 2012-05-14 15:05:27 +02:00
cpupri.c kernel-doc: fix kernel-doc warnings in sched 2012-01-23 08:44:54 -08:00
cpupri.h
debug.c sched: Change rq->nr_running to unsigned int 2012-05-09 15:00:49 +02:00
fair.c sched/fair: Improve the ->group_imb logic 2012-05-14 15:05:28 +02:00
features.h sched: Fix more load-balancing fallout 2012-04-26 12:54:52 +02:00
idle_task.c sched: Update documentation and comments 2012-05-07 15:04:18 +02:00
rt.c Merge branch 'tip/sched/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into sched/core 2012-04-14 15:12:04 +02:00
sched.h sched/nohz: Fix rq->cpu_load[] calculations 2012-05-14 15:05:27 +02:00
stats.c sched: Remove sched_switch 2012-01-27 13:28:53 +01:00
stats.h
stop_task.c