arm64, riscv, perf: Remove RCU_NONIDLE() usage

The PM notifiers are no longer run with RCU disabled (per the
previous patches); as such, this hack is no longer required either.
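
For context, RCU_NONIDLE() makes RCU watch the CPU for the duration of the
wrapped statement. The sketch below is a paraphrase of the macro as found in
<linux/rcupdate.h> around this kernel era; the exact helpers and body may
differ, so treat it as illustrative only:

	/*
	 * Approximate paraphrase of RCU_NONIDLE() -- shown for context only.
	 * It temporarily leaves the RCU extended quiescent state so the
	 * wrapped statement may use RCU read-side primitives from idle.
	 */
	#define RCU_NONIDLE(a) \
		do { \
			ct_irq_enter_irqson();	/* tell RCU this CPU is not idle */ \
			do { a; } while (0); \
			ct_irq_exit_irqson();	/* return to the idle/EQS state */ \
		} while (0)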

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Tested-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20230112195542.151174682@infradead.org
commit 1c38b0615f (parent f176d4ccb3)
Authored by Peter Zijlstra on 2023-01-12 20:44:00 +01:00; committed by Ingo Molnar
2 changed files with 2 additions and 17 deletions

@@ -758,17 +758,8 @@ static void cpu_pm_pmu_setup(struct arm_pmu *armpmu, unsigned long cmd)
 	case CPU_PM_ENTER_FAILED:
 		/*
 		 * Restore and enable the counter.
-		 * armpmu_start() indirectly calls
-		 *
-		 * perf_event_update_userpage()
-		 *
-		 * that requires RCU read locking to be functional,
-		 * wrap the call within RCU_NONIDLE to make the
-		 * RCU subsystem aware this cpu is not idle from
-		 * an RCU perspective for the armpmu_start() call
-		 * duration.
 		 */
-		RCU_NONIDLE(armpmu_start(event, PERF_EF_RELOAD));
+		armpmu_start(event, PERF_EF_RELOAD);
 		break;
 	default:
 		break;

@@ -771,14 +771,8 @@ static int riscv_pm_pmu_notify(struct notifier_block *b, unsigned long cmd,
 	case CPU_PM_ENTER_FAILED:
 		/*
 		 * Restore and enable the counter.
-		 *
-		 * Requires RCU read locking to be functional,
-		 * wrap the call within RCU_NONIDLE to make the
-		 * RCU subsystem aware this cpu is not idle from
-		 * an RCU perspective for the riscv_pmu_start() call
-		 * duration.
 		 */
-		RCU_NONIDLE(riscv_pmu_start(event, PERF_EF_RELOAD));
+		riscv_pmu_start(event, PERF_EF_RELOAD);
 		break;
 	default:
 		break;
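
For readers unfamiliar with the CPU PM notifier chain these two hunks live in,
the sketch below shows the general shape of such a callback and its
registration. It is illustrative only: my_pmu_cpu_pm_notify(), my_pmu_cpu_pm_nb
and my_pmu_pm_init() are made-up names, while cpu_pm_register_notifier(), the
CPU_PM_* events and NOTIFY_OK are the real <linux/cpu_pm.h> / <linux/notifier.h>
API. The real callbacks are cpu_pm_pmu_setup()/riscv_pm_pmu_notify() shown
above.

	#include <linux/cpu_pm.h>
	#include <linux/notifier.h>

	/* Hypothetical driver callback, modelled on the hunks above. */
	static int my_pmu_cpu_pm_notify(struct notifier_block *b,
					unsigned long cmd, void *v)
	{
		switch (cmd) {
		case CPU_PM_ENTER:
			/* stop/save counters before the CPU power-down attempt */
			break;
		case CPU_PM_EXIT:
		case CPU_PM_ENTER_FAILED:
			/*
			 * Restore and restart counters.  After the preceding
			 * patches the chain runs while RCU is watching, so no
			 * RCU_NONIDLE() wrapper is needed around the restart.
			 */
			break;
		default:
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block my_pmu_cpu_pm_nb = {
		.notifier_call = my_pmu_cpu_pm_notify,
	};

	/* Registered once at driver init, e.g.: */
	static int my_pmu_pm_init(void)
	{
		return cpu_pm_register_notifier(&my_pmu_cpu_pm_nb);
	}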