License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
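
For reference, the tag the script adds looks like this at the top of a
file; header files take the block-comment form and .c source files take
the line-comment form (a representative example, not a specific patched
file):

	/* SPDX-License-Identifier: GPL-2.0 */	/* form used in header files */
	// SPDX-License-Identifier: GPL-2.0	// form used in .c files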
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 1999 Cort Dougan <cort@cs.nmt.edu>
 */
#ifndef _ASM_POWERPC_HW_IRQ_H
#define _ASM_POWERPC_HW_IRQ_H

#ifdef __KERNEL__

#include <linux/errno.h>

[POWERPC] Lazy interrupt disabling for 64-bit machines
This implements a lazy strategy for disabling interrupts. This means
that local_irq_disable() et al. just clear the 'interrupts are
enabled' flag in the paca. If an interrupt comes along, the interrupt
entry code notices that interrupts are supposed to be disabled, and
clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
flag in the paca, and returns. This means that interrupts only
actually get disabled in the processor when an interrupt comes along.
When interrupts are enabled by local_irq_enable() et al., the code
sets the interrupts-enabled flag in the paca, and then checks whether
interrupts got hard-disabled. If so, it also sets the EE bit in the
MSR to hard-enable the interrupts.
This has the potential to improve performance, and also makes it
easier to make a kernel that can boot on iSeries and on other 64-bit
machines, since this lazy-disable strategy is very similar to the
soft-disable strategy that iSeries already uses.
This version renames paca->proc_enabled to paca->soft_enabled, and
changes a couple of soft-disables in the kexec code to hard-disables,
which should fix the crash that Michael Ellerman saw. This doesn't
yet use a reserved CR field for the soft_enabled and hard_enabled
flags. This applies on top of Stephen Rothwell's patches to make it
possible to build a combined iSeries/other kernel.
Signed-off-by: Paul Mackerras <paulus@samba.org>
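
In rough C pseudocode, the lazy strategy described above amounts to
something like the following sketch (field names are taken from the
changelog; the helper names are illustrative, not the exact kernel code):

	/* Illustrative sketch of lazy disabling; not the exact kernel code. */
	static inline void lazy_irq_disable(void)
	{
		get_paca()->soft_enabled = 0;	/* only record the intent */
	}

	static inline void lazy_irq_enable(void)
	{
		get_paca()->soft_enabled = 1;
		if (!get_paca()->hard_enabled)	/* an interrupt arrived meanwhile */
			hard_irq_enable();	/* hypothetical helper: set MSR[EE] again */
	}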
#include <linux/compiler.h>
#include <asm/ptrace.h>
#include <asm/processor.h>

powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupt handling has some
issues that this tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases such as when returning from an interrupt
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with a
"irq_happened" field in which we store a bit mask of what interrupt
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among others.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on BookS) or leave it stopped (on BookE) and continue with hard interrupts
enabled, which means that we'll potentially get better sample quality from
performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appears to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
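
A sketch of the replay decision on re-enable, using the irq_happened
bits defined below (the replay helpers here are hypothetical names, not
the exact kernel code):

	/* Illustrative sketch; replay helpers are hypothetical names. */
	static inline void replay_pending_interrupts(void)
	{
		unsigned char happened = get_paca()->irq_happened;

		/* keep only the hard-disable marker while replaying */
		get_paca()->irq_happened = PACA_IRQ_HARD_DIS;
		if (happened & PACA_IRQ_DEC)
			replay_decrementer_interrupt();
		if (happened & PACA_IRQ_DBELL)
			replay_doorbell_interrupt();
		if (happened & PACA_IRQ_EE)
			replay_external_interrupt();
	}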

#ifdef CONFIG_PPC64

/*
 * PACA flags in paca->irq_happened.
 *
 * These bits are set when interrupts occur while soft-disabled
 * and allow a proper replay.
 *
 * The PACA_IRQ_HARD_DIS is set whenever we hard disable. It is almost
 * always in synch with the MSR[EE] state, except:
 * - A window in interrupt entry, where hardware disables MSR[EE] and that
 *   must be "reconciled" with the soft mask state.
 * - NMI interrupts that hit in awkward places, until they fix the state.
 * - When local irqs are being enabled and state is being fixed up.
 * - When returning from an interrupt there are some windows where this
 *   can become out of synch, but gets fixed before the RFI or before
 *   executing the next user instruction (see arch/powerpc/kernel/interrupt.c).
 */
#define PACA_IRQ_HARD_DIS	0x01
#define PACA_IRQ_DBELL		0x02
#define PACA_IRQ_EE		0x04
#define PACA_IRQ_DEC		0x08 /* Or FIT */
#define PACA_IRQ_HMI		0x10
#define PACA_IRQ_PMI		0x20

/*
 * Some soft-masked interrupts must be hard masked until they are replayed
 * (e.g., because the soft-masked handler does not clear the exception).
 */
#ifdef CONFIG_PPC_BOOK3S
#define PACA_IRQ_MUST_HARD_MASK	(PACA_IRQ_EE|PACA_IRQ_PMI)
#else
#define PACA_IRQ_MUST_HARD_MASK	(PACA_IRQ_EE)
#endif

#endif /* CONFIG_PPC64 */

powerpc/64: Change soft_enabled from flag to bitmask
"paca->soft_enabled" is used as a flag to mask some of the interrupts.
Currently supported flag values and their details:
soft_enabled    MSR[EE]
0               0        Disabled (PMI and HMI not masked)
1               1        Enabled
"paca->soft_enabled" is initialized to 1 to mark the interrupts as
enabled. arch_local_irq_disable() will toggle the value when interrupts
need to be disabled. At this point, the interrupts are not actually
disabled; instead, the interrupt vector has code to check for the flag
and mask an interrupt when it occurs. By "mask it", it updates
paca->irq_happened and returns. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occurred.
Now, as mentioned, the current logic does not mask "performance
monitoring interrupts", and PMIs are implemented as NMIs. But this
patchset depends on local_irq_* for a successful local_* update;
meaning, mask all possible interrupts during a local_* update and
replay them after the update.
So the idea here is to reverse the "paca->soft_enabled" logic. New
values and details:
soft_enabled    MSR[EE]
1               0        Disabled (PMI and HMI not masked)
0               1        Enabled
The reason for this change is to create the foundation for a third mask
value "0x2" for "soft_enabled", to add support for masking PMIs. When
->soft_enabled is set to a value of "3", PMI interrupts are masked, and
when set to a value of "1", PMIs are not masked. This patch also extends
soft_enabled into an interrupt disable mask.
Current flags are renamed from IRQ_{EN,DIS}ABLED to
IRQS_ENABLED and IRQS_DISABLED.
The patch also fixes the ptrace call to force the user to see the softe
value as always 1. The reason being, even though userspace has no
business knowing about softe, it is part of pt_regs. Likewise in
signal context.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
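
With the reversed encoding, masking PMIs on top of the standard disable
could be expressed roughly as follows, using the IRQS_* flags and the
helpers defined below (an illustrative sketch, not the exact patch):

	/* Illustrative: soft-mask PMIs as well during a local_* update. */
	unsigned long flags;

	flags = irq_soft_mask_or_return(IRQS_DISABLED | IRQS_PMI_DISABLED);
	/* ... local_* update runs with all interrupts soft-masked ... */
	irq_soft_mask_set(flags);	/* restore the previous mask */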

/*
 * flags for paca->irq_soft_mask
 */
#define IRQS_ENABLED		0
#define IRQS_DISABLED		1 /* local_irq_disable() interrupts */
#define IRQS_PMI_DISABLED	2
#define IRQS_ALL_DISABLED	(IRQS_DISABLED | IRQS_PMI_DISABLED)

#ifndef __ASSEMBLY__

static inline void __hard_irq_enable(void)
{
	if (IS_ENABLED(CONFIG_BOOKE_OR_40x))
		wrtee(MSR_EE);
	else if (IS_ENABLED(CONFIG_PPC_8xx))
		wrtspr(SPRN_EIE);
	else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
		__mtmsrd(MSR_EE | MSR_RI, 1);
	else
		mtmsr(mfmsr() | MSR_EE);
}

static inline void __hard_irq_disable(void)
{
	if (IS_ENABLED(CONFIG_BOOKE_OR_40x))
		wrtee(0);
	else if (IS_ENABLED(CONFIG_PPC_8xx))
		wrtspr(SPRN_EID);
	else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
		__mtmsrd(MSR_RI, 1);
	else
		mtmsr(mfmsr() & ~MSR_EE);
}

static inline void __hard_EE_RI_disable(void)
{
	if (IS_ENABLED(CONFIG_BOOKE_OR_40x))
		wrtee(0);
	else if (IS_ENABLED(CONFIG_PPC_8xx))
		wrtspr(SPRN_NRI);
	else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
		__mtmsrd(0, 1);
	else
		mtmsr(mfmsr() & ~(MSR_EE | MSR_RI));
}

static inline void __hard_RI_enable(void)
{
	if (IS_ENABLED(CONFIG_BOOKE_OR_40x))
		return;

	if (IS_ENABLED(CONFIG_PPC_8xx))
		wrtspr(SPRN_EID);
	else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
		__mtmsrd(MSR_RI, 1);
	else
		mtmsr(mfmsr() | MSR_RI);
}

#ifdef CONFIG_PPC64

#include <asm/paca.h>

Revert "powerpc/irq: Don't open code irq_soft_mask helpers"
This reverts commit ef5b570d3700fbb8628a58da0487486ceeb713cd.
Zhouyi reported that the commit is causing crashes when running rcutorture
with KASAN enabled:
BUG: using smp_processor_id() in preemptible [00000000] code: rcu_torture_rea/100
caller is rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
CPU: 4 PID: 100 Comm: rcu_torture_rea Tainted: G W 5.19.0-rc5-next-20220708-dirty #253
Call Trace:
dump_stack_lvl+0xbc/0x108 (unreliable)
check_preemption_disabled+0x154/0x160
rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
__rcu_read_unlock+0x290/0x3b0
rcu_torture_read_unlock+0x30/0xb0
rcutorture_one_extend+0x198/0x810
rcu_torture_one_read+0x58c/0xc90
rcu_torture_reader+0x12c/0x360
kthread+0x1e8/0x220
ret_from_kernel_thread+0x5c/0x64
KASAN will generate instrumentation instructions around the
WRITE_ONCE(local_paca->irq_soft_mask, mask):
0xc000000000295cb0 <+0>: addis r2,r12,774
0xc000000000295cb4 <+4>: addi r2,r2,16464
0xc000000000295cb8 <+8>: mflr r0
0xc000000000295cbc <+12>: bl 0xc00000000008bb4c <mcount>
0xc000000000295cc0 <+16>: mflr r0
0xc000000000295cc4 <+20>: std r31,-8(r1)
0xc000000000295cc8 <+24>: addi r3,r13,2354
0xc000000000295ccc <+28>: mr r31,r13
0xc000000000295cd0 <+32>: std r0,16(r1)
0xc000000000295cd4 <+36>: stdu r1,-48(r1)
0xc000000000295cd8 <+40>: bl 0xc000000000609b98 <__asan_store1+8>
0xc000000000295cdc <+44>: nop
0xc000000000295ce0 <+48>: li r9,1
0xc000000000295ce4 <+52>: stb r9,2354(r31)
0xc000000000295ce8 <+56>: addi r1,r1,48
0xc000000000295cec <+60>: ld r0,16(r1)
0xc000000000295cf0 <+64>: ld r31,-8(r1)
0xc000000000295cf4 <+68>: mtlr r0
If there is a context switch before "stb r9,2354(r31)", r31 may
not be equal to r13; in such a case, the irq soft mask will not work.
The usual solution of marking the code ineligible for instrumentation
forces the code out-of-line, which we would prefer to avoid. Christophe
proposed a partial revert, but Nick raised some concerns with that. So
for now do a full revert.
Reported-by: Zhouyi Zhou <zhouzhouyi@gmail.com>
[mpe: Construct change log based on Zhouyi's original report]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220831131052.42250-1-mpe@ellerman.id.au
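
For contrast, the reverted helper was essentially the WRITE_ONCE() form
quoted in the log above; a sketch of what commit ef5b570d3700 had used:

	/* Reverted form; unsafe under KASAN instrumentation (see above). */
	static inline void irq_soft_mask_set(unsigned long mask)
	{
		WRITE_ONCE(local_paca->irq_soft_mask, mask);
	}

The open-coded asm below keeps the paca access pinned to r13, avoiding
the instrumented store.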

static inline notrace unsigned long irq_soft_mask_return(void)
{
	unsigned long flags;

	asm volatile(
		"lbz %0,%1(13)"
		: "=r" (flags)
		: "i" (offsetof(struct paca_struct, irq_soft_mask)));

	return flags;
}

/*
 * The "memory" clobber acts as both a compiler barrier
 * for the critical section and as a clobber because
 * we changed paca->irq_soft_mask
 */
static inline notrace void irq_soft_mask_set(unsigned long mask)
{
	/*
	 * The irq mask must always include the STD bit if any are set,
	 * and interrupts don't get replayed until the standard
	 * interrupt (local_irq_disable()) is unmasked.
	 *
	 * Other masks must only provide additional masking beyond
	 * the standard, and they are also not replayed until the
	 * standard interrupt becomes unmasked.
	 *
	 * This could be changed, but it will require partial
	 * unmasks to be replayed, among other things. For now, take
	 * the simple approach.
	 */
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
		WARN_ON(mask && !(mask & IRQS_DISABLED));

	asm volatile(
		"stb %0,%1(13)"
		:
		: "r" (mask),
		  "i" (offsetof(struct paca_struct, irq_soft_mask))
		: "memory");
}

static inline notrace unsigned long irq_soft_mask_set_return(unsigned long mask)
{
	unsigned long flags = irq_soft_mask_return();
Revert "powerpc/irq: Don't open code irq_soft_mask helpers"
This reverts commit ef5b570d3700fbb8628a58da0487486ceeb713cd.
Zhouyi reported that commit is causing crashes when running rcutorture
with KASAN enabled:
BUG: using smp_processor_id() in preemptible [00000000] code: rcu_torture_rea/100
caller is rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
CPU: 4 PID: 100 Comm: rcu_torture_rea Tainted: G W 5.19.0-rc5-next-20220708-dirty #253
Call Trace:
dump_stack_lvl+0xbc/0x108 (unreliable)
check_preemption_disabled+0x154/0x160
rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
__rcu_read_unlock+0x290/0x3b0
rcu_torture_read_unlock+0x30/0xb0
rcutorture_one_extend+0x198/0x810
rcu_torture_one_read+0x58c/0xc90
rcu_torture_reader+0x12c/0x360
kthread+0x1e8/0x220
ret_from_kernel_thread+0x5c/0x64
KASAN will generate instrumentation instructions around the
WRITE_ONCE(local_paca->irq_soft_mask, mask):
0xc000000000295cb0 <+0>: addis r2,r12,774
0xc000000000295cb4 <+4>: addi r2,r2,16464
0xc000000000295cb8 <+8>: mflr r0
0xc000000000295cbc <+12>: bl 0xc00000000008bb4c <mcount>
0xc000000000295cc0 <+16>: mflr r0
0xc000000000295cc4 <+20>: std r31,-8(r1)
0xc000000000295cc8 <+24>: addi r3,r13,2354
0xc000000000295ccc <+28>: mr r31,r13
0xc000000000295cd0 <+32>: std r0,16(r1)
0xc000000000295cd4 <+36>: stdu r1,-48(r1)
0xc000000000295cd8 <+40>: bl 0xc000000000609b98 <__asan_store1+8>
0xc000000000295cdc <+44>: nop
0xc000000000295ce0 <+48>: li r9,1
0xc000000000295ce4 <+52>: stb r9,2354(r31)
0xc000000000295ce8 <+56>: addi r1,r1,48
0xc000000000295cec <+60>: ld r0,16(r1)
0xc000000000295cf0 <+64>: ld r31,-8(r1)
0xc000000000295cf4 <+68>: mtlr r0
If there is a context switch before "stb r9,2354(r31)", r31 may
not equal to r13, in such case, irq soft mask will not work.
The usual solution of marking the code ineligible for instrumentation
forces the code out-of-line, which we would prefer to avoid. Christophe
proposed a partial revert, but Nick raised some concerns with that. So
for now do a full revert.
Reported-by: Zhouyi Zhou <zhouzhouyi@gmail.com>
[mpe: Construct change log based on Zhouyi's original report]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220831131052.42250-1-mpe@ellerman.id.au
2022-08-31 21:10:52 +08:00
|
|
|
|
2022-09-20 14:41:08 +08:00
|
|
|

	irq_soft_mask_set(mask);

	return flags;
}

static inline notrace unsigned long irq_soft_mask_or_return(unsigned long mask)
{
	unsigned long flags = irq_soft_mask_return();
Revert "powerpc/irq: Don't open code irq_soft_mask helpers"
This reverts commit ef5b570d3700fbb8628a58da0487486ceeb713cd.
Zhouyi reported that commit is causing crashes when running rcutorture
with KASAN enabled:
BUG: using smp_processor_id() in preemptible [00000000] code: rcu_torture_rea/100
caller is rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
CPU: 4 PID: 100 Comm: rcu_torture_rea Tainted: G W 5.19.0-rc5-next-20220708-dirty #253
Call Trace:
dump_stack_lvl+0xbc/0x108 (unreliable)
check_preemption_disabled+0x154/0x160
rcu_preempt_deferred_qs_irqrestore+0x74/0xed0
__rcu_read_unlock+0x290/0x3b0
rcu_torture_read_unlock+0x30/0xb0
rcutorture_one_extend+0x198/0x810
rcu_torture_one_read+0x58c/0xc90
rcu_torture_reader+0x12c/0x360
kthread+0x1e8/0x220
ret_from_kernel_thread+0x5c/0x64
KASAN will generate instrumentation instructions around the
WRITE_ONCE(local_paca->irq_soft_mask, mask):
0xc000000000295cb0 <+0>: addis r2,r12,774
0xc000000000295cb4 <+4>: addi r2,r2,16464
0xc000000000295cb8 <+8>: mflr r0
0xc000000000295cbc <+12>: bl 0xc00000000008bb4c <mcount>
0xc000000000295cc0 <+16>: mflr r0
0xc000000000295cc4 <+20>: std r31,-8(r1)
0xc000000000295cc8 <+24>: addi r3,r13,2354
0xc000000000295ccc <+28>: mr r31,r13
0xc000000000295cd0 <+32>: std r0,16(r1)
0xc000000000295cd4 <+36>: stdu r1,-48(r1)
0xc000000000295cd8 <+40>: bl 0xc000000000609b98 <__asan_store1+8>
0xc000000000295cdc <+44>: nop
0xc000000000295ce0 <+48>: li r9,1
0xc000000000295ce4 <+52>: stb r9,2354(r31)
0xc000000000295ce8 <+56>: addi r1,r1,48
0xc000000000295cec <+60>: ld r0,16(r1)
0xc000000000295cf0 <+64>: ld r31,-8(r1)
0xc000000000295cf4 <+68>: mtlr r0
If there is a context switch before "stb r9,2354(r31)", r31 may
not equal to r13, in such case, irq soft mask will not work.
The usual solution of marking the code ineligible for instrumentation
forces the code out-of-line, which we would prefer to avoid. Christophe
proposed a partial revert, but Nick raised some concerns with that. So
for now do a full revert.
Reported-by: Zhouyi Zhou <zhouzhouyi@gmail.com>
[mpe: Construct change log based on Zhouyi's original report]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220831131052.42250-1-mpe@ellerman.id.au
2022-08-31 21:10:52 +08:00
|
|
|
|
2022-09-20 14:41:08 +08:00
|
|
|

	irq_soft_mask_set(flags | mask);

	return flags;
}

static inline unsigned long arch_local_save_flags(void)
{
	return irq_soft_mask_return();
}

static inline void arch_local_irq_disable(void)
{
	irq_soft_mask_set(IRQS_DISABLED);
}

extern void arch_local_irq_restore(unsigned long);

static inline void arch_local_irq_enable(void)
{
	arch_local_irq_restore(IRQS_ENABLED);
}

static inline unsigned long arch_local_irq_save(void)
{
	return irq_soft_mask_set_return(IRQS_DISABLED);
}

static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
return flags & IRQS_DISABLED;
|
2010-10-07 21:08:55 +08:00
|
|
|
}

static inline bool arch_irqs_disabled(void)
{
	return arch_irqs_disabled_flags(arch_local_save_flags());
}
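
A typical caller pairs these as in the usual save/restore pattern (a
usage sketch, not code from this file):

	unsigned long flags;

	flags = arch_local_irq_save();	/* soft-disable and remember old mask */
	/* ... critical section; interrupts are only soft-masked here ... */
	arch_local_irq_restore(flags);	/* re-enable, replaying anything pending */
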
powerpc/perf: Fix PMU callbacks to clear pending PMI before resetting an overflown PMC
Running the perf fuzzer showed the following in the dmesg logs:
"Can't find PMC that caused IRQ"
This means a PMU exception happened, but none of the PMCs (Performance
Monitor Counter) were found to be overflown. There are some corner cases
that clear the PMCs after the PMI gets masked. In such cases, the perf
interrupt handler will not find the active PMC values that had caused
the overflow, and this leads to the above message while replaying.
Case 1: A PMU interrupt happens during replay of other interrupts, and
counter values get cleared by PMU callbacks before the replay:
During replay of interrupts like timer, __do_irq() and doorbell
exception, we conditionally enable interrupts via may_hard_irq_enable().
This could potentially create a window to generate a PMI. Since the irq
soft mask is set to ALL_DISABLED, the PMI will get masked here. We could
get IPIs run before the perf interrupt is replayed, and the PMU events
could be deleted or stopped. This will change the PMU SPR values and
reset the counters. A snippet of the ftrace log showing PMU callbacks
invoked in __do_irq():
<idle>-0 [051] dns. 132025441306354: __do_irq <-call_do_irq
<idle>-0 [051] dns. 132025441306430: irq_enter <-__do_irq
<idle>-0 [051] dns. 132025441306503: irq_enter_rcu <-__do_irq
<idle>-0 [051] dnH. 132025441306599: xive_get_irq <-__do_irq
<<>>
<idle>-0 [051] dnH. 132025441307770: generic_smp_call_function_single_interrupt <-smp_ipi_demux_relaxed
<idle>-0 [051] dnH. 132025441307839: flush_smp_call_function_queue <-smp_ipi_demux_relaxed
<idle>-0 [051] dnH. 132025441308057: _raw_spin_lock <-event_function
<idle>-0 [051] dnH. 132025441308206: power_pmu_disable <-perf_pmu_disable
<idle>-0 [051] dnH. 132025441308337: power_pmu_del <-event_sched_out
<idle>-0 [051] dnH. 132025441308407: power_pmu_read <-power_pmu_del
<idle>-0 [051] dnH. 132025441308477: read_pmc <-power_pmu_read
<idle>-0 [051] dnH. 132025441308590: isa207_disable_pmc <-power_pmu_del
<idle>-0 [051] dnH. 132025441308663: write_pmc <-power_pmu_del
<idle>-0 [051] dnH. 132025441308787: power_pmu_event_idx <-perf_event_update_userpage
<idle>-0 [051] dnH. 132025441308859: rcu_read_unlock_strict <-perf_event_update_userpage
<idle>-0 [051] dnH. 132025441308975: power_pmu_enable <-perf_pmu_enable
<<>>
<idle>-0 [051] dnH. 132025441311108: irq_exit <-__do_irq
<idle>-0 [051] dns. 132025441311319: performance_monitor_exception <-replay_soft_interrupts
Case 2: PMI's masked during local_* operations, example local_add(). If
the local_add() operation happens within a local_irq_save(), replay of
PMI will be during local_irq_restore(). Similar to case 1, this could
also create a window before replay where PMU events gets deleted or
stopped.
Fix it by updating the PMU callback function power_pmu_disable() to
check for pending perf interrupt. If there is an overflown PMC and
pending perf interrupt indicated in paca, clear the PMI bit in paca to
drop that sample. Clearing of PMI bit is done in power_pmu_disable()
since disable is invoked before any event gets deleted/stopped. With
this fix, if there are more than one event running in the PMU, there is
a chance that we clear the PMI bit for the event which is not getting
deleted/stopped. The other events may still remain active. Hence to make
sure we don't drop valid sample in such cases, another check is added in
power_pmu_enable. This checks if there is an overflown PMC found among
the active events and if so enable back the PMI bit. Two new helper
functions are introduced to clear/set the PMI, ie
clear_pmi_irq_pending() and set_pmi_irq_pending(). Helper function
pmi_irq_pending() is introduced to give a warning if there is pending
PMI bit in paca, but no PMC is overflown.
Also there are corner cases which result in performance monitor
interrupts being triggered during power_pmu_disable(). This happens
since PMXE bit is not cleared along with disabling of other MMCR0 bits
in the pmu_disable. Such PMI's could leave the PMU running and could
trigger PMI again which will set MMCR0 PMAO bit. This could lead to
spurious interrupts in some corner cases. Example, a timer after
power_pmu_del() which will re-enable interrupts and triggers a PMI again
since PMAO bit is still set. But fails to find valid overflow since PMC
was cleared in power_pmu_del(). Fix that by disabling PMXE along with
disabling of other MMCR0 bits in power_pmu_disable().
We can't just replay PMI any time. Hence this approach is preferred
rather than replaying PMI before resetting overflown PMC. Patch also
documents core-book3s on a race condition which can trigger these PMC
messages during idle path in PowerNV.
Fixes: f442d004806e ("powerpc/64s: Add support to mask perf interrupts and replay them")
Reported-by: Nageswara R Sastry <nasastry@in.ibm.com>
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Suggested-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Tested-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Make pmi_irq_pending() return bool, reflow/reword some comments]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1626846509-1350-2-git-send-email-atrajeev@linux.vnet.ibm.com
2021-07-21 13:48:29 +08:00

static inline void set_pmi_irq_pending(void)
{
	/*
	 * Invoked from PMU callback functions to set PMI bit in the paca.
	 * This has to be called with irq's disabled (via hard_irq_disable()).
	 */
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
		WARN_ON_ONCE(mfmsr() & MSR_EE);

	get_paca()->irq_happened |= PACA_IRQ_PMI;
}

static inline void clear_pmi_irq_pending(void)
{
	/*
	 * Invoked from PMU callback functions to clear the pending PMI bit
	 * in the paca.
	 */
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
		WARN_ON_ONCE(mfmsr() & MSR_EE);

	get_paca()->irq_happened &= ~PACA_IRQ_PMI;
}

static inline bool pmi_irq_pending(void)
{
	/*
	 * Invoked from PMU callback functions to check if there is a pending
	 * PMI bit in the paca.
	 */
	if (get_paca()->irq_happened & PACA_IRQ_PMI)
		return true;

	return false;
}
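
As a usage illustration of the helpers above (a sketch, not the in-tree
power_pmu_disable() code): a PMC counts up and sets its most significant
bit on overflow, so a negative signed value identifies an overflown
counter whose pending PMI should be dropped before the counter is reset.
The example_* names are hypothetical.

static inline bool example_pmc_overflown(u32 val)
{
	return (int)val < 0;	/* MSB set => counter overflowed */
}

static inline void example_quiesce_pmc(u32 pmc_val)
{
	/* The PMC is about to be reset; its pending PMI would find
	 * nothing to attribute the sample to, so drop it. */
	if (example_pmc_overflown(pmc_val) && pmi_irq_pending())
		clear_pmi_irq_pending();
}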

#ifdef CONFIG_PPC_BOOK3S
/*
 * To support disabling and enabling of irq with PMI, a set of
 * new powerpc_local_irq_pmu_save() and powerpc_local_irq_pmu_restore()
 * functions are added. These macros are implemented using generic
 * linux local_irq_* code from include/linux/irqflags.h.
 */
#define raw_local_irq_pmu_save(flags)				\
	do {							\
		typecheck(unsigned long, flags);		\
		flags = irq_soft_mask_or_return(IRQS_DISABLED | \
				IRQS_PMI_DISABLED);		\
	} while(0)

#define raw_local_irq_pmu_restore(flags)			\
	do {							\
		typecheck(unsigned long, flags);		\
		arch_local_irq_restore(flags);			\
	} while(0)

#ifdef CONFIG_TRACE_IRQFLAGS
#define powerpc_local_irq_pmu_save(flags)			\
	do {							\
		raw_local_irq_pmu_save(flags);			\
		if (!raw_irqs_disabled_flags(flags))		\
			trace_hardirqs_off();			\
	} while(0)
#define powerpc_local_irq_pmu_restore(flags)			\
	do {							\
		if (!raw_irqs_disabled_flags(flags))		\
			trace_hardirqs_on();			\
		raw_local_irq_pmu_restore(flags);		\
	} while(0)
#else
#define powerpc_local_irq_pmu_save(flags)			\
	do {							\
		raw_local_irq_pmu_save(flags);			\
	} while(0)
#define powerpc_local_irq_pmu_restore(flags)			\
	do {							\
		raw_local_irq_pmu_restore(flags);		\
	} while (0)
#endif /* CONFIG_TRACE_IRQFLAGS */

#endif /* CONFIG_PPC_BOOK3S */
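
A usage sketch of these macros (illustrative only, assuming
CONFIG_PPC_BOOK3S): protect a read-modify-write that must not be torn
even by a PMI, which is the local_* use case motivating the bitmask
commit message above. example_counter_add() is a hypothetical name.

static inline void example_counter_add(unsigned long *ctr, unsigned long n)
{
	unsigned long flags;

	powerpc_local_irq_pmu_save(flags);	/* masks PMIs too */
	*ctr += n;				/* safe from PMI replay */
	powerpc_local_irq_pmu_restore(flags);	/* replays pending irqs */
}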

#define hard_irq_disable()	do {					\
	unsigned long flags;						\
	__hard_irq_disable();						\
	flags = irq_soft_mask_set_return(IRQS_ALL_DISABLED);		\
	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;			\
	if (!arch_irqs_disabled_flags(flags)) {				\
		asm volatile("std%X0 %1,%0" : "=m" (local_paca->saved_r1) \
			     : "r" (current_stack_pointer));		\
		trace_hardirqs_off();					\
	}								\
} while(0)

static inline bool __lazy_irq_pending(u8 irq_happened)
{
	return !!(irq_happened & ~PACA_IRQ_HARD_DIS);
}

/*
 * Check if a lazy IRQ is pending. Should be called with IRQs hard disabled.
 */
static inline bool lazy_irq_pending(void)
{
	return __lazy_irq_pending(get_paca()->irq_happened);
}

/*
 * Check if a lazy IRQ is pending, with no debugging checks.
 * Should be called with IRQs hard disabled.
 * For use in RI disabled code or other constrained situations.
 */
static inline bool lazy_irq_pending_nocheck(void)
{
	return __lazy_irq_pending(local_paca->irq_happened);
}
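
A sketch of the kind of check these helpers enable (hypothetical, in the
spirit of prep_irq_for_idle() declared later): refuse to enter a
low-power state if an interrupt was already soft-masked while we were
getting there.

static inline bool example_ok_to_idle(void)
{
	hard_irq_disable();
	return !lazy_irq_pending();	/* pending irq => don't sleep */
}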

bool power_pmu_wants_prompt_pmi(void);

powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupt handling has some
issues that this tries to address.
We don't do the various workarounds needed when re-enabling
interrupts in some cases, such as when returning from an interrupt,
and thus we may still lose or get delayed decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The base idea is to replace the "hard_enabled" field with an
"irq_happened" field in which we store a bit mask of which interrupts
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in the exception exit case) or via
the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among other things.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on Book3S) or leave it stopped (on BookE) and continue with hard
interrupts enabled, which means that we'll potentially get better sample
quality from performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appears to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
2012-03-06 15:27:59 +08:00
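
A simplified sketch of the bookkeeping this message describes (not the
real masked-interrupt or replay code): a soft-masked interrupt records
itself in paca->irq_happened and returns; re-enabling later walks the
recorded bits. PACA_IRQ_DEC/PACA_IRQ_DBELL are real bit names; the
replay bodies and the example_* helpers are illustrative only.

static inline void example_record_masked_irq(u8 bit)
{
	local_paca->irq_happened |= bit;	/* remember, don't handle */
}

static inline void example_replay_pending(void)
{
	if (local_paca->irq_happened & PACA_IRQ_DEC)
		;	/* replay the decrementer handler here */
	if (local_paca->irq_happened & PACA_IRQ_DBELL)
		;	/* replay the doorbell handler here */
	local_paca->irq_happened &= ~(PACA_IRQ_DEC | PACA_IRQ_DBELL);
}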

/*
 * This is called by asynchronous interrupts to check whether to
 * conditionally re-enable hard interrupts after having cleared
 * the source of the interrupt. They are kept disabled if there
 * is a different soft-masked interrupt pending that requires hard
 * masking.
 */
static inline bool should_hard_irq_enable(void)
{
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
		WARN_ON(irq_soft_mask_return() == IRQS_ENABLED);
		WARN_ON(mfmsr() & MSR_EE);
	}

	if (!IS_ENABLED(CONFIG_PERF_EVENTS))
		return false;
	/*
	 * If the PMU is not running, there is not much reason to enable
	 * MSR[EE] in irq handlers because any interrupts would just be
	 * soft-masked.
	 *
	 * TODO: Add test for 64e
	 */
	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !power_pmu_wants_prompt_pmi())
		return false;

	if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK)
		return false;

	return true;
}

/*
 * Do the hard enabling, only call this if should_hard_irq_enable is true.
 */
static inline void do_hard_irq_enable(void)
{
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
		WARN_ON(irq_soft_mask_return() == IRQS_ENABLED);
		WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK);
		WARN_ON(mfmsr() & MSR_EE);
	}
	/*
	 * This allows PMI interrupts (and watchdog soft-NMIs) through.
	 * There is no other reason to enable this way.
	 */
	get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS;
	__hard_irq_enable();
}
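
A usage sketch of this pair (hypothetical handler, not in-tree code):
an asynchronous interrupt handler clears its source, then conditionally
hard-enables so PMIs and soft-NMIs can be taken promptly while the bulk
of the handler runs.

static inline void example_async_handler(void)
{
	/* ... acknowledge/clear the interrupt source here ... */
	if (should_hard_irq_enable())
		do_hard_irq_enable();
	/* ... run the handler body, possibly with MSR[EE] set ... */
}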
[POWERPC] Lazy interrupt disabling for 64-bit machines
This implements a lazy strategy for disabling interrupts. This means
that local_irq_disable() et al. just clear the 'interrupts are
enabled' flag in the paca. If an interrupt comes along, the interrupt
entry code notices that interrupts are supposed to be disabled, and
clears the EE bit in SRR1, clears the 'interrupts are hard-enabled'
flag in the paca, and returns. This means that interrupts only
actually get disabled in the processor when an interrupt comes along.
When interrupts are enabled by local_irq_enable() et al., the code
sets the interrupts-enabled flag in the paca, and then checks whether
interrupts got hard-disabled. If so, it also sets the EE bit in the
MSR to hard-enable the interrupts.
This has the potential to improve performance, and also makes it
easier to make a kernel that can boot on iSeries and on other 64-bit
machines, since this lazy-disable strategy is very similar to the
soft-disable strategy that iSeries already uses.
This version renames paca->proc_enabled to paca->soft_enabled, and
changes a couple of soft-disables in the kexec code to hard-disables,
which should fix the crash that Michael Ellerman saw. This doesn't
yet use a reserved CR field for the soft_enabled and hard_enabled
flags. This applies on top of Stephen Rothwell's patches to make it
possible to build a combined iSeries/other kernel.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-10-04 14:47:49 +08:00

static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
{
	return (regs->softe & IRQS_DISABLED);
}

extern bool prep_irq_for_idle(void);
extern bool prep_irq_for_idle_irqsoff(void);
extern void irq_set_pending_from_srr1(unsigned long srr1);

#define fini_irq_for_idle_irqsoff() trace_hardirqs_off();

extern void force_external_irq_replay(void);

static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
{
	regs->softe = val;
}

#else /* CONFIG_PPC64 */

static inline notrace unsigned long irq_soft_mask_return(void)
{
	return 0;
}

static inline unsigned long arch_local_save_flags(void)
{
	return mfmsr();
}

static inline void arch_local_irq_restore(unsigned long flags)
{
	if (IS_ENABLED(CONFIG_BOOKE))
		wrtee(flags);
	else
		mtmsr(flags);
}

static inline unsigned long arch_local_irq_save(void)
{
	unsigned long flags = arch_local_save_flags();

	if (IS_ENABLED(CONFIG_BOOKE))
		wrtee(0);
	else if (IS_ENABLED(CONFIG_PPC_8xx))
		wrtspr(SPRN_EID);
	else
		mtmsr(flags & ~MSR_EE);

	return flags;
}

static inline void arch_local_irq_disable(void)
{
	__hard_irq_disable();
}

static inline void arch_local_irq_enable(void)
{
	__hard_irq_enable();
}

static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
	return (flags & MSR_EE) == 0;
}

static inline bool arch_irqs_disabled(void)
{
	return arch_irqs_disabled_flags(arch_local_save_flags());
}

#define hard_irq_disable()		arch_local_irq_disable()

static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
{
	return !(regs->msr & MSR_EE);
}

static __always_inline bool should_hard_irq_enable(void)
{
	return false;
}

static inline void do_hard_irq_enable(void)
{
	BUILD_BUG();
}

static inline void clear_pmi_irq_pending(void) { }
static inline void set_pmi_irq_pending(void) { }
static inline bool pmi_irq_pending(void) { return false; }

static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs, unsigned long val)
{
}

#endif /* CONFIG_PPC64 */

static inline unsigned long mtmsr_isync_irqsafe(unsigned long msr)
{
#ifdef CONFIG_PPC64
	if (arch_irqs_disabled()) {
		/*
		 * With soft-masking, MSR[EE] can change from 1 to 0
		 * asynchronously when irqs are disabled, and we don't want to
		 * set MSR[EE] back to 1 here if that has happened. A race-free
		 * way to do this is ensure EE is already 0. Another way it
		 * could be done is with a RESTART_TABLE handler, but that's
		 * probably overkill here.
		 */
		msr &= ~MSR_EE;
		mtmsr_isync(msr);
		irq_soft_mask_set(IRQS_ALL_DISABLED);
		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
	} else
#endif
		mtmsr_isync(msr);

	return msr;
}
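
A usage sketch (hypothetical caller): set a facility bit in the MSR
without racing with the soft-mask machinery described in the comment
above. MSR_FP is only an illustrative choice of bit, and
example_msr_set_fp() is not an in-tree function.

static inline void example_msr_set_fp(void)
{
	unsigned long msr = mfmsr();

	mtmsr_isync_irqsafe(msr | MSR_FP);
}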

#define ARCH_IRQ_INIT_FLAGS	IRQ_NOREQUEST

#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */

#endif /* _ASM_POWERPC_HW_IRQ_H */