sched/fair: Return early from update_tg_cfs_load() if delta == 0

In case the _avg delta is 0, there is no need to update se's _avg
(level n) or cfs_rq's _avg (level n-1); these values stay the same.

Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
cfs_rq's _sum should stay the same as well.

So bail out after se's _sum has been updated.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20210601083616.804229-1-dietmar.eggemann@arm.com
commit 83c5e9d573 (parent 9e077b52d8)
Authored by Dietmar Eggemann on 2021-06-01 10:36:16 +02:00; committed by Peter Zijlstra
1 changed file with 5 additions and 2 deletions

@@ -3502,9 +3502,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
                 load_sum = (s64)se_weight(se) * runnable_sum;
                 load_avg = div_s64(load_sum, divider);
 
-        delta = load_avg - se->avg.load_avg;
-
         se->avg.load_sum = runnable_sum;
+
+        delta = load_avg - se->avg.load_avg;
+        if (!delta)
+                return;
+
         se->avg.load_avg = load_avg;
 
         add_positive(&cfs_rq->avg.load_avg, delta);
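
For illustration only, below is a minimal user-space sketch of the resulting
control flow. struct toy_avg, toy_propagate() and all numbers are invented for
this example; it is not the kernel implementation and omits se_weight(),
div_s64() and the rest of the PELT bookkeeping. It only shows that when the
recomputed load_avg equals the stored se _avg (delta == 0), se's _sum is still
synced but nothing is propagated to the parent cfs_rq (level n-1):

#include <stdio.h>

/* Toy stand-in for the load tracking fields used above. */
struct toy_avg {
	long load_sum;
	long load_avg;
};

static void toy_propagate(struct toy_avg *se, struct toy_avg *cfs_rq,
			  long runnable_sum, long divider)
{
	long load_avg = runnable_sum / divider;	/* weight factor omitted */
	long delta = load_avg - se->load_avg;

	se->load_sum = runnable_sum;	/* se's _sum is synced unconditionally */
	if (!delta)
		return;			/* nothing to propagate to level n-1 */

	se->load_avg = load_avg;
	cfs_rq->load_avg += delta;
	cfs_rq->load_sum = cfs_rq->load_avg * divider;
}

int main(void)
{
	struct toy_avg se     = { .load_sum =  9000, .load_avg = 10 };
	struct toy_avg cfs_rq = { .load_sum = 47104, .load_avg = 46 };

	/* 10240 / 1024 == 10 == se.load_avg, so delta == 0: early return */
	toy_propagate(&se, &cfs_rq, 10240, 1024);

	printf("se:     load_sum=%ld load_avg=%ld\n", se.load_sum, se.load_avg);
	printf("cfs_rq: load_sum=%ld load_avg=%ld (unchanged)\n",
	       cfs_rq.load_sum, cfs_rq.load_avg);
	return 0;
}

Running the toy program shows the cfs_rq fields untouched in the delta == 0
case, which is exactly the work the early return skips.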