sched/fair: Reduce long-tail newly idle balance cost
A long-tail load balance cost is observed on the newly idle path, caused by a race window between the first nr_running check of the busiest runqueue and its nr_running recheck in detach_tasks(). Before the busiest runqueue is locked, its tasks can be pulled by other CPUs, so its nr_running drops to 1, or even to 0 if the running task becomes idle. detach_tasks() then breaks out with the LBF_ALL_PINNED flag still set, which triggers a load_balance() redo at the same sched_domain level. To find the new busiest sched_group and CPU, load balance recomputes and updates the various load statistics, which eventually leads to the long-tail load balance cost.

This patch clears the LBF_ALL_PINNED flag for this race condition, and hence reduces the long-tail cost of newly idle balance.

Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/1614154549-116078-1-git-send-email-aubrey.li@intel.com
parent c8987ae5af
commit acb4decc1e
@@ -7687,6 +7687,15 @@ static int detach_tasks(struct lb_env *env)
 
 	lockdep_assert_held(&env->src_rq->lock);
 
+	/*
+	 * Source run queue has been emptied by another CPU, clear
+	 * LBF_ALL_PINNED flag as we will not test any task.
+	 */
+	if (env->src_rq->nr_running <= 1) {
+		env->flags &= ~LBF_ALL_PINNED;
+		return 0;
+	}
+
 	if (env->imbalance <= 0)
 		return 0;
 