perf: Use task_ctx_sched_out()
We have a function that does exactly what we want here, use it. This
reduces the amount of cpuctx->task_ctx muckery.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 8833d0e286
parent 3e349507d1
@@ -2545,8 +2545,7 @@ unlock:
 
 	if (do_switch) {
 		raw_spin_lock(&ctx->lock);
-		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-		cpuctx->task_ctx = NULL;
+		task_ctx_sched_out(cpuctx, ctx);
 		raw_spin_unlock(&ctx->lock);
 	}
 }
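For reference, a rough sketch of the helper the hunk switches to, as it looks in kernel/events/core.c around this point in the tree (after the parent commit changed its signature). This is an approximation for illustration, not the verbatim kernel source; the point is that it folds the open-coded ctx_sched_out() call and the cpuctx->task_ctx clear behind a couple of sanity checks, which is why the caller no longer needs to touch cpuctx->task_ctx itself.

/*
 * Approximate body of task_ctx_sched_out() (kernel/events/core.c);
 * the in-tree version may differ in detail.
 */
static void task_ctx_sched_out(struct perf_cpu_context *cpuctx,
			       struct perf_event_context *ctx)
{
	/* Nothing scheduled in for this CPU, nothing to do. */
	if (!cpuctx->task_ctx)
		return;

	/* The caller's ctx must be the one currently installed. */
	if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
		return;

	/* The two steps the caller used to open-code: */
	ctx_sched_out(ctx, cpuctx, EVENT_ALL);
	cpuctx->task_ctx = NULL;
}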