/*
 * Local APIC handling, local APIC timers
 *
 * (c) 1999, 2000, 2009 Ingo Molnar <mingo@redhat.com>
 *
 * Fixes
 * Maciej W. Rozycki	:	Bits for genuine 82489DX APICs;
 *					thanks to Eric Gilmore
 *					and Rolf G. Tews
 *					for testing these extensively.
 * Maciej W. Rozycki	:	Various updates and fixes.
 * Mikael Pettersson	:	Power Management for UP-APIC.
 * Pavel Machek and
 * Mikael Pettersson	:	PM converted to driver model.
 */
#include <linux/perf_event.h>
#include <linux/kernel_stat.h>
#include <linux/mc146818rtc.h>
#include <linux/acpi_pmtmr.h>
#include <linux/clockchips.h>
#include <linux/interrupt.h>
#include <linux/bootmem.h>
#include <linux/ftrace.h>
#include <linux/ioport.h>
#include <linux/export.h>
#include <linux/syscore_ops.h>
#include <linux/delay.h>
#include <linux/timex.h>
#include <linux/i8253.h>
#include <linux/dmar.h>
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/dmi.h>
#include <linux/smp.h>
#include <linux/mm.h>

#include <asm/trace/irq_vectors.h>
#include <asm/irq_remapping.h>
#include <asm/perf_event.h>
#include <asm/x86_init.h>
#include <asm/pgalloc.h>
#include <linux/atomic.h>
#include <asm/mpspec.h>
#include <asm/i8259.h>
#include <asm/proto.h>
#include <asm/apic.h>
#include <asm/io_apic.h>
#include <asm/desc.h>
#include <asm/hpet.h>
#include <asm/idle.h>
#include <asm/mtrr.h>
#include <asm/time.h>
#include <asm/smp.h>
#include <asm/mce.h>
#include <asm/tsc.h>
#include <asm/hypervisor.h>

unsigned int num_processors;

unsigned disabled_cpus;

/* Processor that is doing the boot up */
unsigned int boot_cpu_physical_apicid = -1U;
EXPORT_SYMBOL_GPL(boot_cpu_physical_apicid);

u8 boot_cpu_apic_version;

/*
 * The highest APIC ID seen during enumeration.
 */
static unsigned int max_physical_apicid;

/*
 * Bitmask of physically existing CPUs:
 */
physid_mask_t phys_cpu_present_map;

/*
 * Processor to be disabled specified by kernel parameter
 * disable_cpu_apicid=<int>, mostly used for the kdump 2nd kernel to
 * avoid undefined behaviour caused by sending INIT from AP to BSP.
 */
static unsigned int disabled_cpu_apicid __read_mostly = BAD_APICID;

/*
 * This variable controls which CPUs receive external NMIs. By default,
 * external NMIs are delivered only to the BSP.
 */
static int apic_extnmi = APIC_EXTNMI_BSP;

/*
 * Map cpu index to physical APIC ID
 */
DEFINE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid, BAD_APICID);
DEFINE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_bios_cpu_apicid, BAD_APICID);
DEFINE_EARLY_PER_CPU_READ_MOSTLY(u32, x86_cpu_to_acpiid, U32_MAX);
EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_apicid);
EXPORT_EARLY_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_acpiid);

#ifdef CONFIG_X86_32

/*
 * On x86_32, the mapping between cpu and logical apicid may vary
 * depending on apic in use. The following early percpu variable is
 * used for the mapping. This is where the behaviors of x86_64 and 32
 * actually diverge. Let's keep it ugly for now.
 */
DEFINE_EARLY_PER_CPU_READ_MOSTLY(int, x86_cpu_to_logical_apicid, BAD_APICID);

/* Local APIC was disabled by the BIOS and enabled by the kernel */
static int enabled_via_apicbase;

/*
 * Handle interrupt mode configuration register (IMCR).
 * This register controls whether the interrupt signals
 * that reach the BSP come from the master PIC or from the
 * local APIC. Before entering Symmetric I/O Mode, either
 * the BIOS or the operating system must switch out of
 * PIC Mode by changing the IMCR.
 */
static inline void imcr_pic_to_apic(void)
{
	/* select IMCR register */
	outb(0x70, 0x22);
	/* NMI and 8259 INTR go through APIC */
	outb(0x01, 0x23);
}

static inline void imcr_apic_to_pic(void)
{
	/* select IMCR register */
	outb(0x70, 0x22);
	/* NMI and 8259 INTR go directly to BSP */
	outb(0x00, 0x23);
}
#endif

/*
 * Knob to control our willingness to enable the local APIC.
 *
 * +1=force-enable
 */
static int force_enable_local_apic __initdata;

/*
 * APIC command line parameters
 */
static int __init parse_lapic(char *arg)
{
	if (IS_ENABLED(CONFIG_X86_32) && !arg)
		force_enable_local_apic = 1;
	else if (arg && !strncmp(arg, "notscdeadline", 13))
		setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
	return 0;
}
early_param("lapic", parse_lapic);

#ifdef CONFIG_X86_64
static int apic_calibrate_pmtmr __initdata;
static __init int setup_apicpmtimer(char *s)
{
	apic_calibrate_pmtmr = 1;
	notsc_setup(NULL);
	return 0;
}
__setup("apicpmtimer", setup_apicpmtimer);
#endif

unsigned long mp_lapic_addr;
int disable_apic;

/* Disable local APIC timer from the kernel commandline or via dmi quirk */
static int disable_apic_timer __initdata;

/* Local APIC timer works in C2 */
int local_apic_timer_c2_ok;
EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);

int first_system_vector = FIRST_SYSTEM_VECTOR;

/*
 * Debug level, exported for io_apic.c
 */
unsigned int apic_verbosity;

int pic_mode;

/* Have we found an MP table */
int smp_found_config;

static struct resource lapic_resource = {
	.name = "Local APIC",
	.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
};

unsigned int lapic_timer_frequency = 0;

static void apic_pm_activate(void);

static unsigned long apic_phys;

/*
 * Get the LAPIC version
 */
static inline int lapic_get_version(void)
{
	return GET_APIC_VERSION(apic_read(APIC_LVR));
}

/*
 * Check if the APIC is integrated or a separate chip
 */
static inline int lapic_is_integrated(void)
{
#ifdef CONFIG_X86_64
	return 1;
#else
	return APIC_INTEGRATED(lapic_get_version());
#endif
}

/*
 * Check whether this is a modern or a first-generation APIC
 */
static int modern_apic(void)
{
	/* AMD systems use old APIC versions, so check the CPU */
	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
	    boot_cpu_data.x86 >= 0xf)
		return 1;
	return lapic_get_version() >= 0x14;
}

/*
 * Right after this call the APIC becomes NOOP driven,
 * so apic->write/read doesn't do anything
 */
static void __init apic_disable(void)
{
	pr_info("APIC: switched to apic NOOP\n");
	apic = &apic_noop;
}

void native_apic_wait_icr_idle(void)
{
	while (apic_read(APIC_ICR) & APIC_ICR_BUSY)
		cpu_relax();
}

u32 native_safe_apic_wait_icr_idle(void)
{
	u32 send_status;
	int timeout;

	timeout = 0;
	do {
		send_status = apic_read(APIC_ICR) & APIC_ICR_BUSY;
		if (!send_status)
			break;
		inc_irq_stat(icr_read_retry_count);
|
2007-05-03 01:27:17 +08:00
|
|
|
udelay(100);
|
|
|
|
} while (timeout++ < 1000);
|
|
|
|
|
|
|
|
return send_status;
|
|
|
|
}
|
|
|
|
|
2009-02-17 15:02:14 +08:00
|
|
|
void native_apic_icr_write(u32 low, u32 id)
|
2008-07-11 02:16:49 +08:00
|
|
|
{
|
2014-01-28 03:14:06 +08:00
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
local_irq_save(flags);
|
2008-08-15 19:51:20 +08:00
|
|
|
apic_write(APIC_ICR2, SET_APIC_DEST_FIELD(id));
|
2008-07-11 02:16:49 +08:00
|
|
|
apic_write(APIC_ICR, low);
|
2014-01-28 03:14:06 +08:00
|
|
|
local_irq_restore(flags);
|
2008-07-11 02:16:49 +08:00
|
|
|
}
|
|
|
|
|
2009-02-17 15:02:14 +08:00
|
|
|
u64 native_apic_icr_read(void)
|
2008-07-11 02:16:49 +08:00
|
|
|
{
|
|
|
|
u32 icr1, icr2;
|
|
|
|
|
|
|
|
icr2 = apic_read(APIC_ICR2);
|
|
|
|
icr1 = apic_read(APIC_ICR);
|
|
|
|
|
2008-08-17 03:21:55 +08:00
|
|
|
return icr1 | ((u64)icr2 << 32);
|
2008-07-11 02:16:49 +08:00
|
|
|
}
|
|
|
|
|
2008-08-24 17:01:40 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/**
|
|
|
|
* get_physical_broadcast - Get number of physical broadcast IDs
|
|
|
|
*/
|
|
|
|
int get_physical_broadcast(void)
|
|
|
|
{
|
|
|
|
return modern_apic() ? 0xff : 0xf;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/**
|
|
|
|
* lapic_get_maxlvt - get the maximum number of local vector table entries
|
|
|
|
*/
|
2008-01-30 20:30:14 +08:00
|
|
|
int lapic_get_maxlvt(void)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-07-24 19:52:28 +08:00
|
|
|
unsigned int v;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
v = apic_read(APIC_LVR);
|
2008-07-24 19:52:28 +08:00
|
|
|
/*
|
|
|
|
* - we always have an integrated APIC in 64-bit mode
|
|
|
|
* - 82489DXs do not report # of LVT entries
|
|
|
|
*/
|
|
|
|
return APIC_INTEGRATED(GET_APIC_VERSION(v)) ? GET_APIC_MAXLVT(v) : 2;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-08-17 03:21:53 +08:00
|
|
|
/*
|
|
|
|
* Local APIC timer
|
|
|
|
*/
|
|
|
|
|
2008-08-19 00:45:55 +08:00
|
|
|
/* Clock divisor */
|
|
|
|
#define APIC_DIVISOR 16
|
x86/timers/apic: Fix imprecise timer interrupts by eliminating TSC clockevents frequency roundoff error
I noticed the following bug/misbehavior on certain Intel systems: with a
single task running on a NOHZ CPU on an Intel Haswell, I found
that I got not only the one expected local_timer APIC interrupt, but
at least two per second. (!)
Further tracing showed that the first one precedes the programmed deadline
by up to ~50us and hence it did nothing except reprogram the TSC
deadline clockevent device to trigger shortly thereafter again.
The reason for this is imprecise calibration, the timeout we program into
the APIC results in 'too short' timer interrupts. The core (hr)timer code
notices this (because it has a precise ktime source and sees the short
interrupt) and fixes it up by programming an additional very short
interrupt period.
This is obviously suboptimal.
The reason for the imprecise calibration is twofold, and this patch
fixes the first reason:
In setup_APIC_timer(), the registered clockevent device's frequency
is calculated by first dividing tsc_khz by TSC_DIVISOR and multiplying
it with 1000 afterwards:
(tsc_khz / TSC_DIVISOR) * 1000
The multiplication with 1000 is done for converting from kHz to Hz and the
division by TSC_DIVISOR is carried out in order to make sure that the final
result fits into an u32.
However, with the order given in this calculation, the roundoff error
introduced by the division gets magnified by a factor of 1000 by the
following multiplication.
To fix it, reversing the order of the division and the multiplication a la:
(tsc_khz * 1000) / TSC_DIVISOR
... reduces the roundoff error already.
Furthermore, if TSC_DIVISOR divides 1000, associativity holds:
(tsc_khz * 1000) / TSC_DIVISOR = tsc_khz * (1000 / TSC_DIVISOR)
and thus, the roundoff error even vanishes and the whole operation can be
carried out within 32 bits.
The powers of two that divide 1000 are 2, 4 and 8. A value of 8 for
TSC_DIVISOR still allows for TSC frequencies up to
2^32 / 10^9ns * 8 = 34.4GHz which is way larger than anything to expect
in the next years.
Thus we also replace the current TSC_DIVISOR value of 32 by 8. Reverse
the order of the division and the multiplication in the calculation of
the registered clockevent device's frequency.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Christopher S. Hall <christopher.s.hall@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: http://lkml.kernel.org/r/20160714152255.18295-2-nicstange@gmail.com
[ Improved changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-07-14 23:22:54 +08:00
|
|
|
#define TSC_DIVISOR 8
|
2008-08-15 19:51:21 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
|
|
|
* This function sets up the local APIC timer, with a timeout of
|
|
|
|
* 'clocks' APIC bus clock. During calibration we actually call
|
|
|
|
* this function twice on the boot CPU, once with a bogus timeout
|
|
|
|
* value, second time for real. The other (noncalibrating) CPUs
|
|
|
|
* call this function only once, with the real, calibrated value.
|
|
|
|
*
|
|
|
|
* We do reads before writes even if unnecessary, to get around the
|
|
|
|
* P5 APIC double write bug.
|
|
|
|
*/
|
|
|
|
static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:30:20 +08:00
|
|
|
unsigned int lvtt_value, tmp_value;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
lvtt_value = LOCAL_TIMER_VECTOR;
|
|
|
|
if (!oneshot)
|
|
|
|
lvtt_value |= APIC_LVT_TIMER_PERIODIC;
|
2012-10-23 05:37:58 +08:00
|
|
|
else if (boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
|
|
|
|
lvtt_value |= APIC_LVT_TIMER_TSCDEADLINE;
|
|
|
|
|
2008-08-15 19:51:21 +08:00
|
|
|
if (!lapic_is_integrated())
|
|
|
|
lvtt_value |= SET_APIC_TIMER_BASE(APIC_TIMER_BASE_DIV);
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
if (!irqen)
|
|
|
|
lvtt_value |= APIC_LVT_MASKED;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
apic_write(APIC_LVTT, lvtt_value);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-10-23 05:37:58 +08:00
|
|
|
if (lvtt_value & APIC_LVT_TIMER_TSCDEADLINE) {
|
2015-07-31 07:24:43 +08:00
|
|
|
/*
|
|
|
|
* See Intel SDM: TSC-Deadline Mode chapter. In xAPIC mode,
|
|
|
|
* writing to the APIC LVTT and TSC_DEADLINE MSR isn't serialized.
|
|
|
|
* According to Intel, MFENCE can do the serialization here.
|
|
|
|
*/
|
|
|
|
asm volatile("mfence" : : : "memory");
|
|
|
|
|
2012-10-23 05:37:58 +08:00
|
|
|
printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2008-01-30 20:30:20 +08:00
|
|
|
* Divide PICLK by 16
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2008-01-30 20:30:20 +08:00
|
|
|
tmp_value = apic_read(APIC_TDCR);
|
2008-08-19 00:45:55 +08:00
|
|
|
apic_write(APIC_TDCR,
|
|
|
|
(tmp_value & ~(APIC_TDR_DIV_1 | APIC_TDR_DIV_TMBASE)) |
|
|
|
|
APIC_TDR_DIV_16);
|
2008-01-30 20:30:20 +08:00
|
|
|
|
|
|
|
if (!oneshot)
|
2008-08-15 19:51:21 +08:00
|
|
|
apic_write(APIC_TMICT, clocks / APIC_DIVISOR);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
2010-10-06 18:27:53 +08:00
|
|
|
* Setup extended LVT, AMD specific
|
2008-01-30 20:30:40 +08:00
|
|
|
*
|
2010-10-06 18:27:53 +08:00
|
|
|
* Software should use the LVT offsets the BIOS provides. The offsets
|
|
|
|
* are determined by the subsystems using them, such as those for MCE
|
|
|
|
* threshold or IBS. On K8 only offset 0 (APIC500) and MCE interrupts
|
|
|
|
* are supported. Beginning with family 10h at least 4 offsets are
|
|
|
|
* available.
|
2008-07-23 03:08:46 +08:00
|
|
|
*
|
2010-10-06 18:27:53 +08:00
|
|
|
* Since the offsets must be consistent for all cores, we keep track
|
|
|
|
* of the LVT offsets in software and reserve the offset for the same
|
|
|
|
* vector also to be used on other cores. An offset is freed by
|
|
|
|
* setting the entry to APIC_EILVT_MASKED.
|
|
|
|
*
|
|
|
|
* If the BIOS is right, there should be no conflicts. Otherwise a
|
|
|
|
* "[Firmware Bug]: ..." error message is generated. However, if
|
|
|
|
* software does not properly determine the offsets, it is not
|
|
|
|
* necessarily a BIOS bug.
|
2008-01-30 20:30:20 +08:00
|
|
|
*/
|
2008-01-30 20:30:40 +08:00
|
|
|
|
2010-10-06 18:27:53 +08:00
|
|
|
static atomic_t eilvt_offsets[APIC_EILVT_NR_MAX];
|
|
|
|
|
|
|
|
static inline int eilvt_entry_is_changeable(unsigned int old, unsigned int new)
|
|
|
|
{
|
|
|
|
return (old & APIC_EILVT_MASKED)
|
|
|
|
|| (new == APIC_EILVT_MASKED)
|
|
|
|
|| ((new & ~APIC_EILVT_MASKED) == old);
|
|
|
|
}
|
|
|
|
|
|
|
|
static unsigned int reserve_eilvt_offset(int offset, unsigned int new)
|
|
|
|
{
|
2012-03-28 02:04:02 +08:00
|
|
|
unsigned int rsvd, vector;
|
2010-10-06 18:27:53 +08:00
|
|
|
|
|
|
|
if (offset >= APIC_EILVT_NR_MAX)
|
|
|
|
return ~0;
|
|
|
|
|
2012-03-28 02:04:02 +08:00
|
|
|
rsvd = atomic_read(&eilvt_offsets[offset]);
|
2010-10-06 18:27:53 +08:00
|
|
|
do {
|
2012-03-28 02:04:02 +08:00
|
|
|
vector = rsvd & ~APIC_EILVT_MASKED; /* 0: unassigned */
|
|
|
|
if (vector && !eilvt_entry_is_changeable(vector, new))
|
2010-10-06 18:27:53 +08:00
|
|
|
/* may not change if vectors are different */
|
|
|
|
return rsvd;
|
|
|
|
rsvd = atomic_cmpxchg(&eilvt_offsets[offset], rsvd, new);
|
|
|
|
} while (rsvd != new);
|
|
|
|
|
2012-03-28 02:04:02 +08:00
|
|
|
rsvd &= ~APIC_EILVT_MASKED;
|
|
|
|
if (rsvd && rsvd != vector)
|
|
|
|
pr_info("LVT offset %d assigned for vector 0x%02x\n",
|
|
|
|
offset, rsvd);
|
|
|
|
|
2010-10-06 18:27:53 +08:00
|
|
|
return new;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If mask=1, the LVT entry does not generate interrupts while mask=0
|
2011-05-30 22:31:11 +08:00
|
|
|
* enables the vector. See also the BKDGs. Must be called with
|
|
|
|
* preemption disabled.
|
2010-10-06 18:27:53 +08:00
|
|
|
*/
|
|
|
|
|
2010-10-06 18:27:54 +08:00
|
|
|
int setup_APIC_eilvt(u8 offset, u8 vector, u8 msg_type, u8 mask)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-10-06 18:27:53 +08:00
|
|
|
unsigned long reg = APIC_EILVTn(offset);
|
|
|
|
unsigned int new, old, reserved;
|
|
|
|
|
|
|
|
new = (mask << 16) | (msg_type << 8) | vector;
|
|
|
|
old = apic_read(reg);
|
|
|
|
reserved = reserve_eilvt_offset(offset, new);
|
|
|
|
|
|
|
|
if (reserved != new) {
|
2010-10-25 22:03:39 +08:00
|
|
|
pr_err(FW_BUG "cpu %d, try to use APIC%lX (LVT offset %d) for "
|
|
|
|
"vector 0x%x, but the register is already in use for "
|
|
|
|
"vector 0x%x on another cpu\n",
|
|
|
|
smp_processor_id(), reg, offset, new, reserved);
|
2010-10-06 18:27:53 +08:00
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!eilvt_entry_is_changeable(old, new)) {
|
2010-10-25 22:03:39 +08:00
|
|
|
pr_err(FW_BUG "cpu %d, try to use APIC%lX (LVT offset %d) for "
|
|
|
|
"vector 0x%x, but the register is already in use for "
|
|
|
|
"vector 0x%x on this cpu\n",
|
|
|
|
smp_processor_id(), reg, offset, new, old);
|
2010-10-06 18:27:53 +08:00
|
|
|
return -EBUSY;
|
|
|
|
}
|
|
|
|
|
|
|
|
apic_write(reg, new);
|
2006-09-26 16:52:30 +08:00
|
|
|
|
2010-10-06 18:27:53 +08:00
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2010-10-06 18:27:54 +08:00
|
|
|
EXPORT_SYMBOL_GPL(setup_APIC_eilvt);
|
2008-01-30 20:30:40 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
|
|
|
* Program the next event, relative to now
|
|
|
|
*/
|
|
|
|
static int lapic_next_event(unsigned long delta,
|
|
|
|
struct clock_event_device *evt)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:30:20 +08:00
|
|
|
apic_write(APIC_TMICT, delta);
|
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2012-10-23 05:37:58 +08:00
|
|
|
static int lapic_next_deadline(unsigned long delta,
|
|
|
|
struct clock_event_device *evt)
|
|
|
|
{
|
|
|
|
u64 tsc;
|
|
|
|
|
2015-06-26 00:44:07 +08:00
|
|
|
tsc = rdtsc();
|
2012-10-23 05:37:58 +08:00
|
|
|
wrmsrl(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2015-07-16 18:58:44 +08:00
|
|
|
static int lapic_timer_shutdown(struct clock_event_device *evt)
|
2007-10-20 09:21:11 +08:00
|
|
|
{
|
2008-01-30 20:30:20 +08:00
|
|
|
unsigned int v;
|
2007-10-20 09:21:11 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/* Lapic used as dummy for broadcast ? */
|
|
|
|
if (evt->features & CLOCK_EVT_FEAT_DUMMY)
|
2015-07-16 18:58:44 +08:00
|
|
|
return 0;
|
2007-10-20 09:21:11 +08:00
|
|
|
|
2015-07-16 18:58:44 +08:00
|
|
|
v = apic_read(APIC_LVTT);
|
|
|
|
v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
|
|
|
|
apic_write(APIC_LVTT, v);
|
|
|
|
apic_write(APIC_TMICT, 0);
|
|
|
|
return 0;
|
|
|
|
}
|
2007-10-20 09:21:11 +08:00
|
|
|
|
2015-07-16 18:58:44 +08:00
|
|
|
static inline int
|
|
|
|
lapic_timer_set_periodic_oneshot(struct clock_event_device *evt, bool oneshot)
|
|
|
|
{
|
|
|
|
/* Lapic used as dummy for broadcast ? */
|
|
|
|
if (evt->features & CLOCK_EVT_FEAT_DUMMY)
|
|
|
|
return 0;
|
2007-10-20 09:21:11 +08:00
|
|
|
|
2015-07-16 18:58:44 +08:00
|
|
|
__setup_APIC_LVTT(lapic_timer_frequency, oneshot, 1);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int lapic_timer_set_periodic(struct clock_event_device *evt)
|
|
|
|
{
|
|
|
|
return lapic_timer_set_periodic_oneshot(evt, false);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int lapic_timer_set_oneshot(struct clock_event_device *evt)
|
|
|
|
{
|
|
|
|
return lapic_timer_set_periodic_oneshot(evt, true);
|
2007-10-20 09:21:11 +08:00
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2008-01-30 20:30:20 +08:00
|
|
|
* Local APIC timer broadcast function
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2009-01-01 10:08:46 +08:00
|
|
|
static void lapic_timer_broadcast(const struct cpumask *mask)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:30:20 +08:00
|
|
|
#ifdef CONFIG_SMP
|
2009-01-28 22:42:24 +08:00
|
|
|
apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
|
2008-01-30 20:30:20 +08:00
|
|
|
#endif
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-03-11 15:02:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The local apic timer can be used for any function which is CPU local.
|
|
|
|
*/
|
|
|
|
static struct clock_event_device lapic_clockevent = {
|
2015-07-16 18:58:44 +08:00
|
|
|
.name = "lapic",
|
|
|
|
.features = CLOCK_EVT_FEAT_PERIODIC |
|
|
|
|
CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP
|
|
|
|
| CLOCK_EVT_FEAT_DUMMY,
|
|
|
|
.shift = 32,
|
|
|
|
.set_state_shutdown = lapic_timer_shutdown,
|
|
|
|
.set_state_periodic = lapic_timer_set_periodic,
|
|
|
|
.set_state_oneshot = lapic_timer_set_oneshot,
|
|
|
|
.set_next_event = lapic_next_event,
|
|
|
|
.broadcast = lapic_timer_broadcast,
|
|
|
|
.rating = 100,
|
|
|
|
.irq = -1,
|
2011-03-11 15:02:36 +08:00
|
|
|
};
|
|
|
|
static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
2010-06-11 18:17:00 +08:00
|
|
|
* Set up the local APIC timer for this CPU. Copy the initialized values
|
2008-01-30 20:30:20 +08:00
|
|
|
* of the boot CPU and register the clock event in the framework.
|
|
|
|
*/
|
x86: delete __cpuinit usage from all x86 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/x86 uses of the __cpuinit macros from
all C files. x86 only had the one __CPUINIT used in assembly files,
and it wasn't paired off with a .previous or a __FINIT, so we can
delete it directly w/o any corresponding additional change there.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-19 06:23:59 +08:00
|
|
|
static void setup_APIC_timer(void)
|
2008-01-30 20:30:20 +08:00
|
|
|
{
|
x86: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x). This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.
Other use cases are for storing and retrieving data from the current
processors percpu area. __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.
__get_cpu_var() is defined as :
#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
__get_cpu_var() always only does an address determination. However, store
and retrieve operations could use a segment prefix (or global register on
other platforms) to avoid the address calculation.
this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.
This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset. Thereby address calculations are avoided and less registers
are used when code is generated.
Transformations done to __get_cpu_var()
1. Determine the address of the percpu instance of the current processor.
DEFINE_PER_CPU(int, y);
int *x = &__get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(&y);
2. Same as #1 but this time an array structure is involved.
DEFINE_PER_CPU(int, y[20]);
int *x = __get_cpu_var(y);
Converts to
int *x = this_cpu_ptr(y);
3. Retrieve the content of the current processors instance of a per cpu
variable.
DEFINE_PER_CPU(int, y);
int x = __get_cpu_var(y)
Converts to
int x = __this_cpu_read(y);
4. Retrieve the content of a percpu struct
DEFINE_PER_CPU(struct mystruct, y);
struct mystruct x = __get_cpu_var(y);
Converts to
memcpy(&x, this_cpu_ptr(&y), sizeof(x));
5. Assignment to a per cpu variable
DEFINE_PER_CPU(int, y)
__get_cpu_var(y) = x;
Converts to
__this_cpu_write(y, x);
6. Increment/Decrement etc of a per cpu variable
DEFINE_PER_CPU(int, y);
__get_cpu_var(y)++
Converts to
__this_cpu_inc(y)
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2014-08-18 01:30:40 +08:00
|
|
|
struct clock_event_device *levt = this_cpu_ptr(&lapic_events);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-03-12 19:50:10 +08:00
|
|
|
if (this_cpu_has(X86_FEATURE_ARAT)) {
|
2009-04-07 09:51:29 +08:00
|
|
|
lapic_clockevent.features &= ~CLOCK_EVT_FEAT_C3STOP;
|
|
|
|
/* Make LAPIC timer preferable over percpu HPET */
|
|
|
|
lapic_clockevent.rating = 150;
|
|
|
|
}
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
memcpy(levt, &lapic_clockevent, sizeof(*levt));
|
2008-12-13 18:50:26 +08:00
|
|
|
levt->cpumask = cpumask_of(smp_processor_id());
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-10-23 05:37:58 +08:00
|
|
|
if (this_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER)) {
|
|
|
|
levt->features &= ~(CLOCK_EVT_FEAT_PERIODIC |
|
|
|
|
CLOCK_EVT_FEAT_DUMMY);
|
|
|
|
levt->set_next_event = lapic_next_deadline;
|
|
|
|
clockevents_config_and_register(levt,
|
2016-07-14 23:22:54 +08:00
|
|
|
tsc_khz * (1000 / TSC_DIVISOR),
|
2012-10-23 05:37:58 +08:00
|
|
|
0xF, ~0UL);
|
|
|
|
} else
|
|
|
|
clockevents_register_device(levt);
|
2008-01-30 20:30:20 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2016-07-14 23:22:55 +08:00
|
|
|
/*
|
|
|
|
* Install the updated TSC frequency from recalibration at the TSC
|
|
|
|
* deadline clockevent devices.
|
|
|
|
*/
|
|
|
|
static void __lapic_update_tsc_freq(void *info)
|
|
|
|
{
|
|
|
|
struct clock_event_device *levt = this_cpu_ptr(&lapic_events);
|
|
|
|
|
|
|
|
if (!this_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
|
|
|
|
return;
|
|
|
|
|
|
|
|
clockevents_update_freq(levt, tsc_khz * (1000 / TSC_DIVISOR));
|
|
|
|
}
|
|
|
|
|
|
|
|
void lapic_update_tsc_freq(void)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* The clockevent device's ->mult and ->shift can both be
|
|
|
|
* changed. In order to avoid races, schedule the frequency
|
|
|
|
* update code on each CPU.
|
|
|
|
*/
|
|
|
|
on_each_cpu(__lapic_update_tsc_freq, NULL, 0);
|
|
|
|
}
|
|
|
|
|
2008-08-24 17:01:54 +08:00
|
|
|
/*
|
|
|
|
* In this function we calibrate the APIC bus clocks against the external timer.
|
|
|
|
*
|
|
|
|
* We want to do the calibration only once since we want to have local timer
|
|
|
|
* irqs synchronous. CPUs connected to the same APIC bus have the very same bus
|
|
|
|
* frequency.
|
|
|
|
*
|
|
|
|
* This was previously done by reading the PIT/HPET and waiting for a wrap
|
|
|
|
* around to find out, that a tick has elapsed. I have a box, where the PIT
|
|
|
|
* readout is broken, so it never gets out of the wait loop again. This was
|
|
|
|
* also reported by others.
|
|
|
|
*
|
|
|
|
* Monitoring the jiffies value is inaccurate and the clockevents
|
|
|
|
* infrastructure allows us to do a simple substitution of the interrupt
|
|
|
|
* handler.
|
|
|
|
*
|
|
|
|
* The calibration routine also uses the pm_timer when possible, as the PIT
|
|
|
|
* happens to run way too slow (factor 2.3 on my VAIO CoreDuo, which goes
|
|
|
|
* back to normal later in the boot process).
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define LAPIC_CAL_LOOPS (HZ/10)
|
|
|
|
|
|
|
|
static __initdata int lapic_cal_loops = -1;
|
|
|
|
static __initdata long lapic_cal_t1, lapic_cal_t2;
|
|
|
|
static __initdata unsigned long long lapic_cal_tsc1, lapic_cal_tsc2;
|
|
|
|
static __initdata unsigned long lapic_cal_pm1, lapic_cal_pm2;
|
|
|
|
static __initdata unsigned long lapic_cal_j1, lapic_cal_j2;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Temporary interrupt handler.
|
|
|
|
*/
|
|
|
|
static void __init lapic_cal_handler(struct clock_event_device *dev)
|
|
|
|
{
|
|
|
|
unsigned long long tsc = 0;
|
|
|
|
long tapic = apic_read(APIC_TMCCT);
|
|
|
|
unsigned long pm = acpi_pm_read_early();
|
|
|
|
|
2016-04-05 04:24:59 +08:00
|
|
|
if (boot_cpu_has(X86_FEATURE_TSC))
|
2015-06-26 00:44:07 +08:00
|
|
|
tsc = rdtsc();
|
2008-08-24 17:01:54 +08:00
|
|
|
|
|
|
|
switch (lapic_cal_loops++) {
|
|
|
|
case 0:
|
|
|
|
lapic_cal_t1 = tapic;
|
|
|
|
lapic_cal_tsc1 = tsc;
|
|
|
|
lapic_cal_pm1 = pm;
|
|
|
|
lapic_cal_j1 = jiffies;
|
|
|
|
break;
|
|
|
|
|
|
|
|
case LAPIC_CAL_LOOPS:
|
|
|
|
lapic_cal_t2 = tapic;
|
|
|
|
lapic_cal_tsc2 = tsc;
|
|
|
|
if (pm < lapic_cal_pm1)
|
|
|
|
pm += ACPI_PM_OVRRUN;
|
|
|
|
lapic_cal_pm2 = pm;
|
|
|
|
lapic_cal_j2 = jiffies;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2009-01-28 11:51:09 +08:00
|
|
|
static int __init
|
|
|
|
calibrate_by_pmtimer(long deltapm, long *delta, long *deltatsc)
|
2008-09-13 03:58:24 +08:00
|
|
|
{
|
|
|
|
const long pm_100ms = PMTMR_TICKS_PER_SEC / 10;
|
|
|
|
const long pm_thresh = pm_100ms / 100;
|
|
|
|
unsigned long mult;
|
|
|
|
u64 res;
|
|
|
|
|
|
|
|
#ifndef CONFIG_X86_PM_TIMER
|
|
|
|
return -1;
|
|
|
|
#endif
|
|
|
|
|
2009-01-28 11:52:24 +08:00
|
|
|
apic_printk(APIC_VERBOSE, "... PM-Timer delta = %ld\n", deltapm);
|
2008-09-13 03:58:24 +08:00
|
|
|
|
|
|
|
/* Check, if the PM timer is available */
|
|
|
|
if (!deltapm)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
mult = clocksource_hz2mult(PMTMR_TICKS_PER_SEC, 22);
|
|
|
|
|
|
|
|
if (deltapm > (pm_100ms - pm_thresh) &&
|
|
|
|
deltapm < (pm_100ms + pm_thresh)) {
|
2009-01-28 11:52:24 +08:00
|
|
|
apic_printk(APIC_VERBOSE, "... PM-Timer result ok\n");
|
2009-01-28 11:51:09 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
res = (((u64)deltapm) * mult) >> 22;
|
|
|
|
do_div(res, 1000000);
|
|
|
|
pr_warning("APIC calibration not consistent "
|
2009-01-28 11:52:24 +08:00
|
|
|
"with PM-Timer: %ldms instead of 100ms\n",(long)res);
|
2009-01-28 11:51:09 +08:00
|
|
|
|
|
|
|
/* Correct the lapic counter value */
|
|
|
|
res = (((u64)(*delta)) * pm_100ms);
|
|
|
|
do_div(res, deltapm);
|
|
|
|
pr_info("APIC delta adjusted to PM-Timer: "
|
|
|
|
"%lu (%ld)\n", (unsigned long)res, *delta);
|
|
|
|
*delta = (long)res;
|
|
|
|
|
|
|
|
/* Correct the tsc counter value */
|
2016-04-05 04:24:59 +08:00
|
|
|
if (boot_cpu_has(X86_FEATURE_TSC)) {
|
2009-01-28 11:51:09 +08:00
|
|
|
res = (((u64)(*deltatsc)) * pm_100ms);
|
2008-09-13 03:58:24 +08:00
|
|
|
do_div(res, deltapm);
|
2009-01-28 11:51:09 +08:00
|
|
|
apic_printk(APIC_VERBOSE, "TSC delta adjusted to "
|
2010-02-07 01:47:17 +08:00
|
|
|
"PM-Timer: %lu (%ld)\n",
|
2009-01-28 11:51:09 +08:00
|
|
|
(unsigned long)res, *deltatsc);
|
|
|
|
*deltatsc = (long)res;
|
2008-09-13 03:58:24 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}

static int __init calibrate_APIC_clock(void)
{
	struct clock_event_device *levt = this_cpu_ptr(&lapic_events);
	void (*real_handler)(struct clock_event_device *dev);
	unsigned long deltaj;
	long delta, deltatsc;
	int pm_referenced = 0;

	/*
	 * Check, if the lapic timer has already been calibrated by a
	 * platform specific routine, such as the tsc calibration code.
	 * If so, just fill in the clockevent structure and return.
	 */
	if (boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER)) {
		return 0;
	} else if (lapic_timer_frequency) {
		apic_printk(APIC_VERBOSE, "lapic timer already calibrated %d\n",
			    lapic_timer_frequency);
		lapic_clockevent.mult = div_sc(lapic_timer_frequency/APIC_DIVISOR,
					       TICK_NSEC, lapic_clockevent.shift);
		lapic_clockevent.max_delta_ns =
			clockevent_delta2ns(0x7FFFFF, &lapic_clockevent);
		lapic_clockevent.min_delta_ns =
			clockevent_delta2ns(0xF, &lapic_clockevent);
		lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
		return 0;
	}

	apic_printk(APIC_VERBOSE, "Using local APIC timer interrupts.\n"
		    "calibrating APIC timer ...\n");

	local_irq_disable();

	/* Replace the global interrupt handler */
	real_handler = global_clock_event->event_handler;
	global_clock_event->event_handler = lapic_cal_handler;

	/*
	 * Setup the APIC counter to maximum. There is no way the lapic
	 * can underflow in the 100ms detection time frame
	 */
	__setup_APIC_LVTT(0xffffffff, 0, 0);

	/* Let the interrupts run */
	local_irq_enable();

	while (lapic_cal_loops <= LAPIC_CAL_LOOPS)
		cpu_relax();

	local_irq_disable();

	/* Restore the real event handler */
	global_clock_event->event_handler = real_handler;

	/* Build delta t1-t2 as apic timer counts down */
	delta = lapic_cal_t1 - lapic_cal_t2;
	apic_printk(APIC_VERBOSE, "... lapic delta = %ld\n", delta);

	deltatsc = (long)(lapic_cal_tsc2 - lapic_cal_tsc1);

	/* we trust the PM based calibration if possible */
	pm_referenced = !calibrate_by_pmtimer(lapic_cal_pm2 - lapic_cal_pm1,
					      &delta, &deltatsc);

	/* Calculate the scaled math multiplication factor */
	lapic_clockevent.mult = div_sc(delta, TICK_NSEC * LAPIC_CAL_LOOPS,
				       lapic_clockevent.shift);
	lapic_clockevent.max_delta_ns =
		clockevent_delta2ns(0x7FFFFFFF, &lapic_clockevent);
	lapic_clockevent.min_delta_ns =
		clockevent_delta2ns(0xF, &lapic_clockevent);

	lapic_timer_frequency = (delta * APIC_DIVISOR) / LAPIC_CAL_LOOPS;

	apic_printk(APIC_VERBOSE, "..... delta %ld\n", delta);
	apic_printk(APIC_VERBOSE, "..... mult: %u\n", lapic_clockevent.mult);
	apic_printk(APIC_VERBOSE, "..... calibration result: %u\n",
		    lapic_timer_frequency);

	if (boot_cpu_has(X86_FEATURE_TSC)) {
		apic_printk(APIC_VERBOSE, "..... CPU clock speed is "
			    "%ld.%04ld MHz.\n",
			    (deltatsc / LAPIC_CAL_LOOPS) / (1000000 / HZ),
			    (deltatsc / LAPIC_CAL_LOOPS) % (1000000 / HZ));
	}

	apic_printk(APIC_VERBOSE, "..... host bus clock speed is "
		    "%u.%04u MHz.\n",
		    lapic_timer_frequency / (1000000 / HZ),
		    lapic_timer_frequency % (1000000 / HZ));

	/*
	 * Do a sanity check on the APIC calibration result
	 */
	if (lapic_timer_frequency < (1000000 / HZ)) {
		local_irq_enable();
		pr_warning("APIC frequency too slow, disabling apic timer\n");
		return -1;
	}

	levt->features &= ~CLOCK_EVT_FEAT_DUMMY;

	/*
	 * PM timer calibration failed or not turned on,
	 * so let's try APIC timer based calibration
	 */
	if (!pm_referenced) {
		apic_printk(APIC_VERBOSE, "... verify APIC timer\n");

		/*
		 * Setup the apic timer manually
		 */
		levt->event_handler = lapic_cal_handler;
		lapic_timer_set_periodic(levt);
		lapic_cal_loops = -1;

		/* Let the interrupts run */
		local_irq_enable();

		while (lapic_cal_loops <= LAPIC_CAL_LOOPS)
			cpu_relax();

		/* Stop the lapic timer */
		local_irq_disable();
		lapic_timer_shutdown(levt);

		/* Jiffies delta */
		deltaj = lapic_cal_j2 - lapic_cal_j1;
		apic_printk(APIC_VERBOSE, "... jiffies delta = %lu\n", deltaj);

		/* Check, if the jiffies result is consistent */
		if (deltaj >= LAPIC_CAL_LOOPS-2 && deltaj <= LAPIC_CAL_LOOPS+2)
			apic_printk(APIC_VERBOSE, "... jiffies result ok\n");
		else
			levt->features |= CLOCK_EVT_FEAT_DUMMY;
	}
	local_irq_enable();

	if (levt->features & CLOCK_EVT_FEAT_DUMMY) {
		pr_warning("APIC timer disabled due to verification failure\n");
		return -1;
	}

	return 0;
}

/*
 * Setup the boot APIC
 *
 * Calibrate and verify the result.
 */
void __init setup_boot_APIC_clock(void)
{
	/*
	 * The local apic timer can be disabled via the kernel
	 * commandline or from the CPU detection code. Register the lapic
	 * timer as a dummy clock event source on SMP systems, so the
	 * broadcast mechanism is used. On UP systems simply ignore it.
	 */
	if (disable_apic_timer) {
		pr_info("Disabling APIC timer\n");
		/* No broadcast on UP ! */
		if (num_possible_cpus() > 1) {
			lapic_clockevent.mult = 1;
			setup_APIC_timer();
		}
		return;
	}

	if (calibrate_APIC_clock()) {
		/* No broadcast on UP ! */
		if (num_possible_cpus() > 1)
			setup_APIC_timer();
		return;
	}

	/*
	 * If nmi_watchdog is set to IO_APIC, we need the
	 * PIT/HPET going. Otherwise register lapic as a dummy
	 * device.
	 */
	lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;

	/* Setup the lapic or request the broadcast */
	setup_APIC_timer();
}

void setup_secondary_APIC_clock(void)
{
	setup_APIC_timer();
}

/*
 * The guts of the apic timer interrupt
 */
static void local_apic_timer_interrupt(void)
{
	int cpu = smp_processor_id();
	struct clock_event_device *evt = &per_cpu(lapic_events, cpu);

	/*
	 * Normally we should not be here till LAPIC has been initialized but
	 * in some cases like kdump, it's possible that there is a pending
	 * LAPIC timer interrupt from the previous kernel's context which is
	 * delivered in the new kernel the moment interrupts are enabled.
	 *
	 * Interrupts are enabled early and LAPIC is setup much later, hence
	 * it's possible that when we get here evt->event_handler is NULL.
	 * Check for event_handler being NULL and discard the interrupt as
	 * spurious.
	 */
	if (!evt->event_handler) {
		pr_warning("Spurious LAPIC timer interrupt on cpu %d\n", cpu);
		/* Switch it off */
		lapic_timer_shutdown(evt);
		return;
	}

	/*
	 * the NMI deadlock-detector uses this.
	 */
	inc_irq_stat(apic_timer_irqs);

	evt->event_handler(evt);
}

/*
 * Local APIC timer interrupt. This is the most natural way for doing
 * local interrupts, but local timer interrupts can be emulated by
 * broadcast interrupts too. [in case the hw doesn't support APIC timers]
 *
 * [ if a single-CPU system runs an SMP kernel then we call the local
 *   interrupt as well. Thus we cannot inline the local irq ... ]
 */
__visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	/*
	 * NOTE! We'd better ACK the irq immediately,
	 * because timer handling can be slow.
	 *
	 * update_process_times() expects us to have done irq_enter().
	 * Besides, if we don't, timer interrupts ignore the global
	 * interrupt lock, which is the WrongThing (tm) to do.
	 */
	entering_ack_irq();
	local_apic_timer_interrupt();
	exiting_irq();

	set_irq_regs(old_regs);
}

__visible void __irq_entry smp_trace_apic_timer_interrupt(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	/*
	 * NOTE! We'd better ACK the irq immediately,
	 * because timer handling can be slow.
	 *
	 * update_process_times() expects us to have done irq_enter().
	 * Besides, if we don't, timer interrupts ignore the global
	 * interrupt lock, which is the WrongThing (tm) to do.
	 */
	entering_ack_irq();
	trace_local_timer_entry(LOCAL_TIMER_VECTOR);
	local_apic_timer_interrupt();
x86, trace: Add irq vector tracepoints
[Purpose of this patch]
As Vaibhav explained in the thread below, tracepoints for irq vectors
are useful.
http://www.spinics.net/lists/mm-commits/msg85707.html
<snip>
The current interrupt traces from irq_handler_entry and irq_handler_exit
provide when an interrupt is handled. They provide good data about when
the system has switched to kernel space and how it affects the currently
running processes.
There are some IRQ vectors which trigger the system into kernel space,
which are not handled in generic IRQ handlers. Tracing such events gives
us the information about IRQ interaction with other system events.
The trace also tells where the system is spending its time. We want to
know which cores are handling interrupts and how they are affecting other
processes in the system. Also, the trace provides information about when
the cores are idle and which interrupts are changing that state.
<snip>
On the other hand, my use case is tracing just the local timer event and
getting the value of the instruction pointer.
I previously suggested adding an argument to the local timer event to get the instruction pointer.
But there is another way to get it with an external module like systemtap.
So, I don't need to add any argument to the irq vector tracepoints now.
[Patch Description]
Vaibhav's patch shared a single tracepoint, irq_vector_entry/irq_vector_exit, across all events.
But the use case above calls for tracing a specific irq_vector rather than all events.
In that case, we are concerned about overhead due to unwanted events.
So, add the following tracepoints instead of introducing irq_vector_entry/exit,
so that we can enable them independently.
- local_timer_vector
- reschedule_vector
- call_function_vector
- call_function_single_vector
- irq_work_entry_vector
- error_apic_vector
- thermal_apic_vector
- threshold_apic_vector
- spurious_apic_vector
- x86_platform_ipi_vector
Also, introduce logic that switches the IDT at enable/disable time so that the time penalty
is zero when tracepoints are disabled. Detailed explanations follow.
- Create trace irq handlers with entering_irq()/exiting_irq().
- Create a new IDT, trace_idt_table, at boot time by adding a logic to
_set_gate(). It is just a copy of original idt table.
- Register the new handlers for tracepoints to the new IDT by introducing
macros to alloc_intr_gate() called when the irq_vector handlers are registered.
- Add a check of whether irq vector tracing is on/off to load_current_idt().
This has to be done after the debug check, for these reasons:
- Switching to the debug IDT may be triggered while tracing is enabled.
- On the other hand, switching to the trace IDT is triggered only when debugging
is disabled.
In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
used for other purposes.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 23:46:53 +08:00
|
|
|
trace_local_timer_exit(LOCAL_TIMER_VECTOR);
|
|
|
|
exiting_irq();
|
2008-08-17 03:21:53 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
set_irq_regs(old_regs);
|
|
|
|
}
|
|
|
|
|
|
|
|
int setup_profiling_timer(unsigned int multiplier)
|
|
|
|
{
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Local APIC start and shutdown
|
|
|
|
*/
|
|
|
|
|
|
|
|
/**
|
|
|
|
* clear_local_APIC - shutdown the local APIC
|
|
|
|
*
|
|
|
|
* This is called when a CPU is disabled and before rebooting, so the state of
|
|
|
|
* the local APIC has no dangling leftovers. Also used to clean out any BIOS
|
|
|
|
* leftovers during boot.
|
|
|
|
*/
|
|
|
|
void clear_local_APIC(void)
|
|
|
|
{
|
2008-05-21 06:18:12 +08:00
|
|
|
int maxlvt;
|
2008-01-30 20:30:20 +08:00
|
|
|
u32 v;
|
|
|
|
|
2008-01-30 20:33:17 +08:00
|
|
|
/* APIC hasn't been mapped yet */
|
2009-04-21 04:02:27 +08:00
|
|
|
if (!x2apic_mode && !apic_phys)
|
2008-01-30 20:33:17 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
maxlvt = lapic_get_maxlvt();
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
|
|
|
* Masking an LVT entry can trigger a local APIC error
|
|
|
|
* if the vector is zero. Mask LVTERR first to prevent this.
|
|
|
|
*/
|
|
|
|
if (maxlvt >= 3) {
|
|
|
|
v = ERROR_APIC_VECTOR; /* any non-zero vector will do */
|
|
|
|
apic_write(APIC_LVTERR, v | APIC_LVT_MASKED);
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Careful: we have to set masks only first to deassert
|
|
|
|
* any level-triggered sources.
|
|
|
|
*/
|
|
|
|
v = apic_read(APIC_LVTT);
|
|
|
|
apic_write(APIC_LVTT, v | APIC_LVT_MASKED);
|
|
|
|
v = apic_read(APIC_LVT0);
|
|
|
|
apic_write(APIC_LVT0, v | APIC_LVT_MASKED);
|
|
|
|
v = apic_read(APIC_LVT1);
|
|
|
|
apic_write(APIC_LVT1, v | APIC_LVT_MASKED);
|
|
|
|
if (maxlvt >= 4) {
|
|
|
|
v = apic_read(APIC_LVTPC);
|
|
|
|
apic_write(APIC_LVTPC, v | APIC_LVT_MASKED);
|
|
|
|
}
|
|
|
|
|
2008-08-17 03:21:50 +08:00
|
|
|
/* let's not touch this if we didn't frob it */
|
x86, mce: use 64bit machine check code on 32bit
The 64bit machine check code is in many ways much better than
the 32bit machine check code: it is more specification compliant,
is cleaner, only has a single code base versus one per CPU,
has better infrastructure for recovery, has a cleaner way to communicate
with user space etc. etc.
Use the 64bit code for 32bit too.
This is the second attempt to do this. There was one a couple of years
ago to unify this code for 32bit and 64bit. Back then this ran into some
trouble with K7s and was reverted.
I believe this time the K7 problems (and some others) are addressed.
I went over the old handlers and was very careful to retain
all quirks.
But of course this needs a lot of testing on old systems. On newer
64bit capable systems I don't expect much problems because they have been
already tested with the 64bit kernel.
I made this a CONFIG option for now that still allows selecting the old
machine check code. This is mostly to make testing easier;
if someone runs into a problem we can ask them to try
with the CONFIG switched.
The new code is default y for more coverage.
Once there is confidence the 64bit code works well on older hardware
too the CONFIG_X86_OLD_MCE and the associated code can be easily
removed.
This causes a behaviour change for 32bit installations. They now
have to install the mcelog package to be able to log
corrected machine checks.
The 64bit machine check code only handles CPUs which support the
standard Intel machine check architecture described in the IA32 SDM.
The 32bit code has special support for some older CPUs which
have non-standard machine check architectures, in particular
WinChip C3 and Intel P5. I made those a separate CONFIG option
and kept them for now. The WinChip variant could probably be
removed without too much pain; it doesn't really do anything
interesting. P5 is also disabled by default (like it
was before) because many motherboards have it miswired, but
according to Alan Cox a few embedded setups use that one.
Forward ported/heavily changed version of old patch, original patch
included review/fixes from Thomas Gleixner, Bert Wesarg.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-04-29 01:07:31 +08:00
|
|
|
#ifdef CONFIG_X86_THERMAL_VECTOR
|
2008-08-17 03:21:50 +08:00
|
|
|
if (maxlvt >= 5) {
|
|
|
|
v = apic_read(APIC_LVTTHMR);
|
|
|
|
apic_write(APIC_LVTTHMR, v | APIC_LVT_MASKED);
|
|
|
|
}
|
|
|
|
#endif
|
2009-02-12 20:49:37 +08:00
|
|
|
#ifdef CONFIG_X86_MCE_INTEL
|
|
|
|
if (maxlvt >= 6) {
|
|
|
|
v = apic_read(APIC_LVTCMCI);
|
|
|
|
if (!(v & APIC_LVT_MASKED))
|
|
|
|
apic_write(APIC_LVTCMCI, v | APIC_LVT_MASKED);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
|
|
|
* Clean APIC state for other OSs:
|
|
|
|
*/
|
|
|
|
apic_write(APIC_LVTT, APIC_LVT_MASKED);
|
|
|
|
apic_write(APIC_LVT0, APIC_LVT_MASKED);
|
|
|
|
apic_write(APIC_LVT1, APIC_LVT_MASKED);
|
|
|
|
if (maxlvt >= 3)
|
|
|
|
apic_write(APIC_LVTERR, APIC_LVT_MASKED);
|
|
|
|
if (maxlvt >= 4)
|
|
|
|
apic_write(APIC_LVTPC, APIC_LVT_MASKED);
|
2008-08-17 03:21:50 +08:00
|
|
|
|
|
|
|
/* Integrated APIC (!82489DX) ? */
|
|
|
|
if (lapic_is_integrated()) {
|
|
|
|
if (maxlvt > 3)
|
|
|
|
/* Clear ESR due to Pentium errata 3AP and 11AP */
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_read(APIC_ESR);
|
|
|
|
}
|
2008-01-30 20:30:20 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* disable_local_APIC - clear and disable the local APIC
|
|
|
|
*/
|
|
|
|
void disable_local_APIC(void)
|
|
|
|
{
|
|
|
|
unsigned int value;
|
|
|
|
|
2009-01-14 20:28:51 +08:00
|
|
|
/* APIC hasn't been mapped yet */
|
2010-07-15 15:00:59 +08:00
|
|
|
if (!x2apic_mode && !apic_phys)
|
2009-01-14 20:28:51 +08:00
|
|
|
return;
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
clear_local_APIC();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Disable APIC (implies clearing of registers
|
|
|
|
* for 82489DX!).
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_SPIV);
|
|
|
|
value &= ~APIC_SPIV_APIC_ENABLED;
|
|
|
|
apic_write(APIC_SPIV, value);
|
2008-08-19 00:45:51 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/*
|
|
|
|
* When LAPIC was disabled by the BIOS and enabled by the kernel,
|
|
|
|
* restore the disabled state.
|
|
|
|
*/
|
|
|
|
if (enabled_via_apicbase) {
|
|
|
|
unsigned int l, h;
|
|
|
|
|
|
|
|
rdmsr(MSR_IA32_APICBASE, l, h);
|
|
|
|
l &= ~MSR_IA32_APICBASE_ENABLE;
|
|
|
|
wrmsr(MSR_IA32_APICBASE, l, h);
|
|
|
|
}
|
|
|
|
#endif
|
2008-01-30 20:30:20 +08:00
|
|
|
}
|
|
|
|
|
2008-08-19 00:45:52 +08:00
|
|
|
/*
|
|
|
|
* If Linux enabled the LAPIC against the BIOS default, shut it down before
|
|
|
|
* re-entering the BIOS on shutdown. Otherwise the BIOS may get confused and
|
|
|
|
* not power-off. Additionally clear all LVT entries before disable_local_APIC
|
|
|
|
* for the case where Linux didn't enable the LAPIC.
|
|
|
|
*/
|
2008-01-30 20:30:20 +08:00
|
|
|
void lapic_shutdown(void)
|
|
|
|
{
|
|
|
|
unsigned long flags;
|
|
|
|
|
2016-04-05 04:25:00 +08:00
|
|
|
if (!boot_cpu_has(X86_FEATURE_APIC) && !apic_from_smp_config())
|
2008-01-30 20:30:20 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
local_irq_save(flags);
|
|
|
|
|
2008-08-19 00:45:52 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
if (!enabled_via_apicbase)
|
|
|
|
clear_local_APIC();
|
|
|
|
else
|
|
|
|
#endif
|
|
|
|
disable_local_APIC();
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
|
|
|
|
local_irq_restore(flags);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* sync_Arb_IDs - synchronize APIC bus arbitration IDs
|
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
void __init sync_Arb_IDs(void)
|
|
|
|
{
|
2008-08-15 19:51:23 +08:00
|
|
|
/*
|
|
|
|
* Unsupported on P4 - see Intel Dev. Manual Vol. 3, Ch. 8.6.1, and not
|
|
|
|
* needed on AMD.
|
|
|
|
*/
|
|
|
|
if (modern_apic() || boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wait for idle.
|
|
|
|
*/
|
|
|
|
apic_wait_icr_idle();
|
|
|
|
|
|
|
|
apic_printk(APIC_DEBUG, "Synchronizing Arb IDs.\n");
|
2008-08-16 03:05:19 +08:00
|
|
|
apic_write(APIC_ICR, APIC_DEST_ALLINC |
|
|
|
|
APIC_INT_LEVELTRIG | APIC_DM_INIT);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* An initial setup of the virtual wire mode.
|
|
|
|
*/
|
|
|
|
void __init init_bsp_APIC(void)
|
|
|
|
{
|
2006-01-12 05:46:51 +08:00
|
|
|
unsigned int value;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't do the setup now if we have an SMP BIOS as the
|
|
|
|
* through-I/O-APIC virtual wire mode might be active.
|
|
|
|
*/
|
2016-04-05 04:25:00 +08:00
|
|
|
if (smp_found_config || !boot_cpu_has(X86_FEATURE_APIC))
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Do not trust the local APIC being empty at bootup.
|
|
|
|
*/
|
|
|
|
clear_local_APIC();
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Enable APIC.
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_SPIV);
|
|
|
|
value &= ~APIC_VECTOR_MASK;
|
|
|
|
value |= APIC_SPIV_APIC_ENABLED;
|
2008-08-16 03:05:18 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/* This bit is reserved on P4/Xeon and should be cleared */
|
|
|
|
if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
|
|
|
|
(boot_cpu_data.x86 == 15))
|
|
|
|
value &= ~APIC_SPIV_FOCUS_DISABLED;
|
|
|
|
else
|
|
|
|
#endif
|
|
|
|
value |= APIC_SPIV_FOCUS_DISABLED;
|
2005-04-17 06:20:36 +08:00
|
|
|
value |= SPURIOUS_APIC_VECTOR;
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_SPIV, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Set up the virtual wire mode.
|
|
|
|
*/
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_LVT0, APIC_DM_EXTINT);
|
2005-04-17 06:20:36 +08:00
|
|
|
value = APIC_DM_NMI;
|
2008-08-16 03:05:18 +08:00
|
|
|
if (!lapic_is_integrated()) /* 82489DX */
|
|
|
|
value |= APIC_LVT_LEVEL_TRIGGER;
|
2015-12-14 18:19:12 +08:00
|
|
|
if (apic_extnmi == APIC_EXTNMI_NONE)
|
|
|
|
value |= APIC_LVT_MASKED;
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_LVT1, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
x86: delete __cpuinit usage from all x86 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/x86 uses of the __cpuinit macros from
all C files. x86 only had the one __CPUINIT used in assembly files,
and it wasn't paired off with a .previous or a __FINIT, so we can
delete it directly w/o any corresponding additional change there.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-19 06:23:59 +08:00
|
|
|
static void lapic_setup_esr(void)
|
2008-08-19 00:45:54 +08:00
|
|
|
{
|
2008-09-14 15:55:37 +08:00
|
|
|
unsigned int oldvalue, value, maxlvt;
|
|
|
|
|
|
|
|
if (!lapic_is_integrated()) {
|
2008-11-10 16:16:41 +08:00
|
|
|
pr_info("No ESR for 82489DX.\n");
|
2008-09-14 15:55:37 +08:00
|
|
|
return;
|
|
|
|
}
|
2008-08-19 00:45:54 +08:00
|
|
|
|
2009-01-28 12:08:44 +08:00
|
|
|
if (apic->disable_esr) {
|
2008-08-19 00:45:54 +08:00
|
|
|
/*
|
2008-09-14 15:55:37 +08:00
|
|
|
* Something untraceable is creating bad interrupts on
|
|
|
|
* secondary quads ... for the moment, just leave the
|
|
|
|
* ESR disabled - we can't do anything useful with the
|
|
|
|
* errors anyway - mbligh
|
2008-08-19 00:45:54 +08:00
|
|
|
*/
|
2008-11-10 16:16:41 +08:00
|
|
|
pr_info("Leaving ESR disabled.\n");
|
2008-09-14 15:55:37 +08:00
|
|
|
return;
|
2008-08-19 00:45:54 +08:00
|
|
|
}
|
2008-09-14 15:55:37 +08:00
|
|
|
|
|
|
|
maxlvt = lapic_get_maxlvt();
|
|
|
|
if (maxlvt > 3) /* Due to the Pentium erratum 3AP. */
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
oldvalue = apic_read(APIC_ESR);
|
|
|
|
|
|
|
|
/* enables sending errors */
|
|
|
|
value = ERROR_APIC_VECTOR;
|
|
|
|
apic_write(APIC_LVTERR, value);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* spec says clear errors after enabling vector.
|
|
|
|
*/
|
|
|
|
if (maxlvt > 3)
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
value = apic_read(APIC_ESR);
|
|
|
|
if (value != oldvalue)
|
|
|
|
apic_printk(APIC_VERBOSE, "ESR value before enabling "
|
|
|
|
"vector: 0x%08x after: 0x%08x\n",
|
|
|
|
oldvalue, value);
|
2008-08-19 00:45:54 +08:00
|
|
|
}
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/**
|
|
|
|
* setup_local_APIC - setup the local APIC
|
2010-12-09 18:47:21 +08:00
|
|
|
*
|
|
|
|
* Used to set up the local APIC while initializing the BSP or bringing up APs.
|
|
|
|
* Always called with preemption disabled.
|
2008-01-30 20:30:20 +08:00
|
|
|
*/
|
x86: delete __cpuinit usage from all x86 files
2013-06-19 06:23:59 +08:00
|
|
|
void setup_local_APIC(void)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-12-09 18:47:21 +08:00
|
|
|
int cpu = smp_processor_id();
|
2010-05-25 03:13:15 +08:00
|
|
|
unsigned int value, queued;
|
|
|
|
int i, j, acked = 0;
|
|
|
|
unsigned long long tsc = 0, ntsc;
|
2014-10-16 01:12:07 +08:00
|
|
|
long long max_loops = cpu_khz ? cpu_khz : 1000000;
|
2010-05-25 03:13:15 +08:00
|
|
|
|
2016-04-05 04:24:59 +08:00
|
|
|
if (boot_cpu_has(X86_FEATURE_TSC))
|
2015-06-26 00:44:07 +08:00
|
|
|
tsc = rdtsc();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-01-14 20:27:35 +08:00
|
|
|
if (disable_apic) {
|
2011-02-22 22:38:05 +08:00
|
|
|
disable_ioapic_support();
|
2009-01-14 20:27:35 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2008-08-24 17:01:43 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/* Pound the ESR really hard over the head with a big hammer - mbligh */
|
2009-01-28 12:08:44 +08:00
|
|
|
if (lapic_is_integrated() && apic->disable_esr) {
|
2008-08-24 17:01:43 +08:00
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
}
|
|
|
|
#endif
|
perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!
In the past few months the perfcounters subsystem has grown out its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, analysis facility.
Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.
All in one, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names. (in an ABI compatible fashion)
The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.
Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.
User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)
This patch has been generated via the following script:
FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
sed -i \
-e 's/PERF_EVENT_/PERF_RECORD_/g' \
-e 's/PERF_COUNTER/PERF_EVENT/g' \
-e 's/perf_counter/perf_event/g' \
-e 's/nb_counters/nb_events/g' \
-e 's/swcounter/swevent/g' \
-e 's/tpcounter_event/tp_event/g' \
$FILES
for N in $(find . -name perf_counter.[ch]); do
M=$(echo $N | sed 's/perf_counter/perf_event/g')
mv $N $M
done
FILES=$(find . -name perf_event.*)
sed -i \
-e 's/COUNTER_MASK/REG_MASK/g' \
-e 's/COUNTER/EVENT/g' \
-e 's/\<event\>/event_id/g' \
-e 's/counter/event/g' \
-e 's/Counter/Event/g' \
$FILES
... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.
Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.
( NOTE: 'counters' are still the proper terminology when we deal
with hardware registers - and these sed scripts are a bit
over-eager in renaming them. I've undone some of that, but
in case there's something left where 'counter' would be
better than 'event' we can undo that on an individual basis
instead of touching an otherwise nicely automated patch. )
Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 18:02:48 +08:00
|
|
|
perf_events_lapic_init();
|
2008-08-24 17:01:43 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Double-check whether this APIC is really registered.
|
|
|
|
* This is meaningless in clustered apic mode, so we skip it.
|
|
|
|
*/
|
2009-09-13 01:40:20 +08:00
|
|
|
BUG_ON(!apic->apic_id_registered());
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Intel recommends to set DFR, LDR and TPR before enabling
|
|
|
|
* an APIC. See e.g. "AP-388 82489DX User's Manual" (Intel
|
|
|
|
* document number 292116). So here it goes...
|
|
|
|
*/
|
2009-01-28 13:50:47 +08:00
|
|
|
apic->init_apic_ldr();
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-01-23 21:37:31 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/*
|
2011-01-23 21:37:33 +08:00
|
|
|
* APIC LDR is initialized. If logical_apicid mapping was
|
|
|
|
* initialized during get_smp_config(), make sure it matches the
|
|
|
|
* actual value.
|
2011-01-23 21:37:31 +08:00
|
|
|
*/
|
2011-01-23 21:37:33 +08:00
|
|
|
i = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
|
|
|
|
WARN_ON(i != BAD_APICID && i != logical_smp_processor_id());
|
|
|
|
/* always use the value from LDR */
|
2011-01-23 21:37:31 +08:00
|
|
|
early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
|
|
|
|
logical_smp_processor_id();
|
|
|
|
#endif
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Set Task Priority to 'accept all'. We never change this
|
|
|
|
* later on.
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_TASKPRI);
|
|
|
|
value &= ~APIC_TPRI_MASK;
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_TASKPRI, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-03-25 23:31:16 +08:00
|
|
|
/*
|
|
|
|
* After a crash, we no longer service the interrupts and a pending
|
|
|
|
* interrupt from previous kernel might still have ISR bit set.
|
|
|
|
*
|
|
|
|
* Most probably by now CPU has serviced that pending interrupt and
|
|
|
|
* it might not have done the ack_APIC_irq() because it thought the
|
|
|
|
* interrupt came from i8259 as ExtInt. LAPIC did not get EOI so it
|
|
|
|
* does not clear the ISR bit and the cpu thinks it has already serviced
|
|
|
|
* the interrupt. Hence a vector might get locked. It was noticed
|
|
|
|
* for timer irq (vector 0x31). Issue an extra EOI to clear ISR.
|
|
|
|
*/
|
2010-05-25 03:13:15 +08:00
|
|
|
do {
|
|
|
|
queued = 0;
|
|
|
|
for (i = APIC_ISR_NR - 1; i >= 0; i--)
|
|
|
|
queued |= apic_read(APIC_IRR + i*0x10);
|
|
|
|
|
|
|
|
for (i = APIC_ISR_NR - 1; i >= 0; i--) {
|
|
|
|
value = apic_read(APIC_ISR + i*0x10);
|
|
|
|
for (j = 31; j >= 0; j--) {
|
|
|
|
if (value & (1<<j)) {
|
|
|
|
ack_APIC_irq();
|
|
|
|
acked++;
|
|
|
|
}
|
|
|
|
}
|
2006-03-25 23:31:16 +08:00
|
|
|
}
|
2010-05-25 03:13:15 +08:00
|
|
|
if (acked > 256) {
|
|
|
|
printk(KERN_ERR "LAPIC pending interrupts after %d EOI\n",
|
|
|
|
acked);
|
|
|
|
break;
|
|
|
|
}
|
2012-04-20 06:12:32 +08:00
|
|
|
if (queued) {
|
2016-04-05 04:24:59 +08:00
|
|
|
if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) {
|
2015-06-26 00:44:07 +08:00
|
|
|
ntsc = rdtsc();
|
2012-04-20 06:12:32 +08:00
|
|
|
max_loops = (cpu_khz << 10) - (ntsc - tsc);
|
|
|
|
} else
|
|
|
|
max_loops--;
|
|
|
|
}
|
2010-05-25 03:13:15 +08:00
|
|
|
} while (queued && max_loops > 0);
|
|
|
|
WARN_ON(max_loops <= 0);
|
2006-03-25 23:31:16 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Now that we are all set up, enable the APIC
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_SPIV);
|
|
|
|
value &= ~APIC_VECTOR_MASK;
|
|
|
|
/*
|
|
|
|
* Enable APIC
|
|
|
|
*/
|
|
|
|
value |= APIC_SPIV_APIC_ENABLED;
|
|
|
|
|
2008-08-24 17:01:43 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
/*
|
|
|
|
* Some unknown Intel IO/APIC (or APIC) errata is biting us with
|
|
|
|
* certain networking cards. If high frequency interrupts are
|
|
|
|
* happening on a particular IOAPIC pin, plus the IOAPIC routing
|
|
|
|
* entry is masked/unmasked at a high rate as well then sooner or
|
|
|
|
* later IOAPIC line gets 'stuck', no more interrupts are received
|
|
|
|
* from the device. If focus CPU is disabled then the hang goes
|
|
|
|
* away, oh well :-(
|
|
|
|
*
|
|
|
|
* [ This bug can be reproduced easily with a level-triggered
|
|
|
|
* PCI Ne2000 networking cards and PII/PIII processors, dual
|
|
|
|
* BX chipset. ]
|
|
|
|
*/
|
|
|
|
/*
|
|
|
|
* Actually disabling the focus CPU check just makes the hang less
|
|
|
|
* frequent as it makes the interrupt distribution model more
|
|
|
|
* like LRU than MRU (the short-term load is more even across CPUs).
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* - enable focus processor (bit==0)
|
|
|
|
* - 64bit mode always use processor focus
|
|
|
|
* so no need to set it
|
|
|
|
*/
|
|
|
|
value &= ~APIC_SPIV_FOCUS_DISABLED;
|
|
|
|
#endif
|
2006-09-26 16:52:29 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Set spurious IRQ vector
|
|
|
|
*/
|
|
|
|
value |= SPURIOUS_APIC_VECTOR;
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_SPIV, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Set up LVT0, LVT1:
|
|
|
|
*
|
|
|
|
* set up through-local-APIC on the BP's LINT0. This is not
|
|
|
|
* strictly necessary in pure symmetric-IO mode, but sometimes
|
|
|
|
* we delegate interrupts to the 8259A.
|
|
|
|
*/
|
|
|
|
/*
|
|
|
|
* TODO: set up through-local-APIC from through-I/O-APIC? --macro
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
|
2010-12-09 18:47:21 +08:00
|
|
|
if (!cpu && (pic_mode || !value)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
value = APIC_DM_EXTINT;
|
2010-12-09 18:47:21 +08:00
|
|
|
apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n", cpu);
|
2005-04-17 06:20:36 +08:00
|
|
|
} else {
|
|
|
|
value = APIC_DM_EXTINT | APIC_LVT_MASKED;
|
2010-12-09 18:47:21 +08:00
|
|
|
apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n", cpu);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_LVT0, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
2015-12-14 18:19:12 +08:00
|
|
|
* Only the BSP sees the LINT1 NMI signal by default. This can be
|
|
|
|
* modified by apic_extnmi= boot option.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2015-12-14 18:19:12 +08:00
|
|
|
if ((!cpu && apic_extnmi != APIC_EXTNMI_NONE) ||
|
|
|
|
apic_extnmi == APIC_EXTNMI_ALL)
|
2005-04-17 06:20:36 +08:00
|
|
|
value = APIC_DM_NMI;
|
|
|
|
else
|
|
|
|
value = APIC_DM_NMI | APIC_LVT_MASKED;
|
2008-08-24 17:01:43 +08:00
|
|
|
if (!lapic_is_integrated()) /* 82489DX */
|
|
|
|
value |= APIC_LVT_LEVEL_TRIGGER;
|
2006-01-12 05:46:51 +08:00
|
|
|
apic_write(APIC_LVT1, value);
|
2008-08-24 17:01:43 +08:00
|
|
|
|
2009-02-12 20:49:38 +08:00
|
|
|
#ifdef CONFIG_X86_MCE_INTEL
|
|
|
|
/* Recheck CMCI information after local APIC is up on CPU #0 */
|
2010-12-09 18:47:21 +08:00
|
|
|
if (!cpu)
|
2009-02-12 20:49:38 +08:00
|
|
|
cmci_recheck();
|
|
|
|
#endif
|
2008-01-30 20:30:40 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2015-01-16 05:22:40 +08:00
|
|
|
static void end_local_APIC_setup(void)
|
2008-01-30 20:30:40 +08:00
|
|
|
{
|
|
|
|
lapic_setup_esr();
|
2008-08-19 00:45:58 +08:00
|
|
|
|
|
|
|
#ifdef CONFIG_X86_32
|
2008-08-19 03:12:33 +08:00
|
|
|
{
|
|
|
|
unsigned int value;
|
|
|
|
/* Disable the local apic timer */
|
|
|
|
value = apic_read(APIC_LVTT);
|
|
|
|
value |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
|
|
|
|
apic_write(APIC_LVTT, value);
|
|
|
|
}
|
2008-08-19 00:45:58 +08:00
|
|
|
#endif
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
apic_pm_activate();
|
2011-02-09 16:21:02 +08:00
|
|
|
}
|
|
|
|
|
2015-01-16 05:22:40 +08:00
|
|
|
/*
|
|
|
|
* APIC setup function for application processors. Called from smpboot.c
|
|
|
|
*/
|
|
|
|
void apic_ap_setup(void)
|
2011-02-09 16:21:02 +08:00
|
|
|
{
|
2015-01-16 05:22:40 +08:00
|
|
|
setup_local_APIC();
|
2011-02-09 16:21:02 +08:00
|
|
|
end_local_APIC_setup();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2009-02-17 09:29:58 +08:00
|
|
|
#ifdef CONFIG_X86_X2APIC
|
2015-01-16 05:22:12 +08:00
|
|
|
int x2apic_mode;
|
2015-01-16 05:22:22 +08:00
|
|
|
|
|
|
|
enum {
|
|
|
|
X2APIC_OFF,
|
|
|
|
X2APIC_ON,
|
|
|
|
X2APIC_DISABLED,
|
|
|
|
};
|
|
|
|
static int x2apic_state;
|
|
|
|
|
2015-09-30 04:37:02 +08:00
|
|
|
static void __x2apic_disable(void)
|
2015-01-16 05:22:24 +08:00
|
|
|
{
|
|
|
|
u64 msr;
|
|
|
|
|
2016-04-05 04:25:00 +08:00
|
|
|
if (!boot_cpu_has(X86_FEATURE_APIC))
|
2015-01-16 05:22:26 +08:00
|
|
|
return;
|
|
|
|
|
2015-01-16 05:22:24 +08:00
|
|
|
rdmsrl(MSR_IA32_APICBASE, msr);
|
|
|
|
if (!(msr & X2APIC_ENABLE))
|
|
|
|
return;
|
|
|
|
/* Disable xapic and x2apic first and then reenable xapic mode */
|
|
|
|
wrmsrl(MSR_IA32_APICBASE, msr & ~(X2APIC_ENABLE | XAPIC_ENABLE));
|
|
|
|
wrmsrl(MSR_IA32_APICBASE, msr & ~X2APIC_ENABLE);
|
|
|
|
printk_once(KERN_INFO "x2apic disabled\n");
|
|
|
|
}
|
|
|
|
|
2015-09-30 04:37:02 +08:00
|
|
|
static void __x2apic_enable(void)
|
2015-01-16 05:22:26 +08:00
|
|
|
{
|
|
|
|
u64 msr;
|
|
|
|
|
|
|
|
rdmsrl(MSR_IA32_APICBASE, msr);
|
|
|
|
if (msr & X2APIC_ENABLE)
|
|
|
|
return;
|
|
|
|
wrmsrl(MSR_IA32_APICBASE, msr | X2APIC_ENABLE);
|
|
|
|
printk_once(KERN_INFO "x2apic enabled\n");
|
|
|
|
}
|
|
|
|
|
2015-01-16 05:22:12 +08:00
|
|
|
static int __init setup_nox2apic(char *str)
|
|
|
|
{
|
|
|
|
if (x2apic_enabled()) {
|
|
|
|
int apicid = native_apic_msr_read(APIC_ID);
|
|
|
|
|
|
|
|
if (apicid >= 255) {
|
|
|
|
pr_warning("Apicid: %08x, cannot enforce nox2apic\n",
|
|
|
|
apicid);
|
|
|
|
return 0;
|
|
|
|
}
|
2015-01-16 05:22:24 +08:00
|
|
|
pr_warning("x2apic already enabled.\n");
|
|
|
|
__x2apic_disable();
|
|
|
|
}
|
|
|
|
setup_clear_cpu_cap(X86_FEATURE_X2APIC);
|
2015-01-16 05:22:22 +08:00
|
|
|
x2apic_state = X2APIC_DISABLED;
|
2015-01-16 05:22:24 +08:00
|
|
|
x2apic_mode = 0;
|
2015-01-16 05:22:12 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("nox2apic", setup_nox2apic);

/* Called from cpu_init() to enable x2apic on (secondary) cpus */
void x2apic_setup(void)
{
	/*
	 * If x2apic is not in ON state, disable it if already enabled
	 * from BIOS.
	 */
	if (x2apic_state != X2APIC_ON) {
		__x2apic_disable();
		return;
	}
	__x2apic_enable();
}

static __init void x2apic_disable(void)
{
	u32 x2apic_id, state = x2apic_state;

	x2apic_mode = 0;
	x2apic_state = X2APIC_DISABLED;

	if (state != X2APIC_ON)
		return;

	x2apic_id = read_apic_id();
	if (x2apic_id >= 255)
		panic("Cannot disable x2apic, id: %08x\n", x2apic_id);

	__x2apic_disable();
	register_lapic_address(mp_lapic_addr);
}

static __init void x2apic_enable(void)
{
	if (x2apic_state != X2APIC_OFF)
		return;

	x2apic_mode = 1;
	x2apic_state = X2APIC_ON;
	__x2apic_enable();
}

static __init void try_to_enable_x2apic(int remap_mode)
{
	if (x2apic_state == X2APIC_DISABLED)
		return;

	if (remap_mode != IRQ_REMAP_X2APIC_MODE) {
		/* IR is required if there is APIC ID > 255 even when running
		 * under KVM
		 */
		if (max_physical_apicid > 255 ||
		    !hypervisor_x2apic_available()) {
			pr_info("x2apic: IRQ remapping doesn't support X2APIC mode\n");
			x2apic_disable();
			return;
		}

		/*
		 * without IR all CPUs can be addressed by IOAPIC/MSI
		 * only in physical mode
		 */
		x2apic_phys = 1;
	}
	x2apic_enable();
}
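The branch structure above boils down to a small decision table: with x2APIC-capable interrupt remapping, x2APIC can stay on as-is; without it, x2APIC survives only in physical mode, and only if a hypervisor exposes x2APIC and every APIC ID fits in 8 bits. A hypothetical standalone model of that policy (names and return encoding are illustrative, not kernel API):

```c
#include <assert.h>

enum { IRQ_REMAP_XAPIC_MODE = 0, IRQ_REMAP_X2APIC_MODE = 1 };

/*
 * Returns -1 to disable x2apic, 0 to keep it in the current (logical)
 * mode, 1 to keep it but force physical destination mode.
 */
static int x2apic_policy(int remap_mode, int max_apicid, int hv_x2apic)
{
	if (remap_mode == IRQ_REMAP_X2APIC_MODE)
		return 0;			/* IR handles >8-bit IDs */
	if (max_apicid > 255 || !hv_x2apic)
		return -1;			/* cannot address all CPUs */
	return 1;				/* physical mode only */
}
```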

void __init check_x2apic(void)
{
	if (x2apic_enabled()) {
		pr_info("x2apic: enabled by BIOS, switching to x2apic ops\n");
		x2apic_mode = 1;
		x2apic_state = X2APIC_ON;
	} else if (!boot_cpu_has(X86_FEATURE_X2APIC)) {
		x2apic_state = X2APIC_DISABLED;
	}
}

#else /* CONFIG_X86_X2APIC */

static int __init validate_x2apic(void)
{
	if (!apic_is_x2apic_enabled())
		return 0;
	/*
	 * Checkme: Can we simply turn off x2apic here instead of panic?
	 */
	panic("BIOS has enabled x2apic but kernel doesn't support x2apic, please disable x2apic in BIOS.\n");
}
early_initcall(validate_x2apic);

static inline void try_to_enable_x2apic(int remap_mode) { }
static inline void __x2apic_enable(void) { }

#endif /* !CONFIG_X86_X2APIC */

static int __init try_to_enable_IR(void)
{
#ifdef CONFIG_X86_IO_APIC
	if (!x2apic_enabled() && skip_ioapic_setup) {
		pr_info("Not enabling interrupt remapping due to skipped IO-APIC setup\n");
		return -1;
	}
#endif
	return irq_remapping_enable();
}

void __init enable_IR_x2apic(void)
{
	unsigned long flags;
	int ret, ir_stat;

	if (skip_ioapic_setup)
		return;

	ir_stat = irq_remapping_prepare();
	if (ir_stat < 0 && !x2apic_supported())
		return;

	ret = save_ioapic_entries();
	if (ret) {
		pr_info("Saving IO-APIC state failed: %d\n", ret);
		return;
	}

	local_irq_save(flags);
	legacy_pic->mask_all();
	mask_ioapic_entries();

	/* If irq_remapping_prepare() succeeded, try to enable it */
	if (ir_stat >= 0)
		ir_stat = try_to_enable_IR();
	/* ir_stat contains the remap mode or an error code */
	try_to_enable_x2apic(ir_stat);

	if (ir_stat < 0)
		restore_ioapic_entries();
	legacy_pic->restore_mask();
	local_irq_restore(flags);
}

#ifdef CONFIG_X86_64
/*
 * Detect and enable local APICs on non-SMP boards.
 * Original code written by Keir Fraser.
 * On AMD64 we trust the BIOS - if it says no APIC it is likely
 * not correctly set up (usually the APIC timer won't work etc.)
 */
static int __init detect_init_APIC(void)
{
	if (!boot_cpu_has(X86_FEATURE_APIC)) {
		pr_info("No local APIC present\n");
		return -1;
	}

	mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;
	return 0;
}
#else

static int __init apic_verify(void)
{
	u32 features, h, l;

	/*
	 * The APIC feature bit should now be enabled
	 * in `cpuid'
	 */
	features = cpuid_edx(1);
	if (!(features & (1 << X86_FEATURE_APIC))) {
		pr_warning("Could not enable APIC!\n");
		return -1;
	}
	set_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
	mp_lapic_addr = APIC_DEFAULT_PHYS_BASE;

	/* The BIOS may have set up the APIC at some other address */
	if (boot_cpu_data.x86 >= 6) {
		rdmsr(MSR_IA32_APICBASE, l, h);
		if (l & MSR_IA32_APICBASE_ENABLE)
			mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
	}

	pr_info("Found and enabled local APIC!\n");
	return 0;
}

int __init apic_force_enable(unsigned long addr)
{
	u32 h, l;

	if (disable_apic)
		return -1;

	/*
	 * Some BIOSes disable the local APIC in the APIC_BASE
	 * MSR. This can only be done in software for Intel P6 or later
	 * and AMD K7 (Model > 1) or later.
	 */
	if (boot_cpu_data.x86 >= 6) {
		rdmsr(MSR_IA32_APICBASE, l, h);
		if (!(l & MSR_IA32_APICBASE_ENABLE)) {
			pr_info("Local APIC disabled by BIOS -- reenabling.\n");
			l &= ~MSR_IA32_APICBASE_BASE;
			l |= MSR_IA32_APICBASE_ENABLE | addr;
			wrmsr(MSR_IA32_APICBASE, l, h);
			enabled_via_apicbase = 1;
		}
	}
	return apic_verify();
}
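The MSR rewrite in the enable path masks out the old base-address field, then ORs in the enable bit together with the new base. A minimal sketch of that arithmetic on the low MSR word (macro values follow the architectural layout: bits 31:12 hold the base in the low word, bit 11 is the global enable; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_APICBASE_BASE   0xfffff000u	/* base-address field, low word */
#define MSR_IA32_APICBASE_ENABLE (1u << 11)	/* global APIC enable */

/* Low word of IA32_APIC_BASE after forcing the APIC on at 'addr'. */
static uint32_t force_enable_low(uint32_t l, uint32_t addr)
{
	l &= ~MSR_IA32_APICBASE_BASE;		/* drop the stale base */
	l |= MSR_IA32_APICBASE_ENABLE | addr;	/* set enable + new base */
	return l;
}
```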

/*
 * Detect and initialize APIC
 */
static int __init detect_init_APIC(void)
{
	/* Disabled by kernel option? */
	if (disable_apic)
		return -1;

	switch (boot_cpu_data.x86_vendor) {
	case X86_VENDOR_AMD:
		if ((boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model > 1) ||
		    (boot_cpu_data.x86 >= 15))
			break;
		goto no_apic;
	case X86_VENDOR_INTEL:
		if (boot_cpu_data.x86 == 6 || boot_cpu_data.x86 == 15 ||
		    (boot_cpu_data.x86 == 5 && boot_cpu_has(X86_FEATURE_APIC)))
			break;
		goto no_apic;
	default:
		goto no_apic;
	}

	if (!boot_cpu_has(X86_FEATURE_APIC)) {
		/*
		 * Over-ride BIOS and try to enable the local APIC only if
		 * "lapic" specified.
		 */
		if (!force_enable_local_apic) {
			pr_info("Local APIC disabled by BIOS -- "
				"you can enable it with \"lapic\"\n");
			return -1;
		}
		if (apic_force_enable(APIC_DEFAULT_PHYS_BASE))
			return -1;
	} else {
		if (apic_verify())
			return -1;
	}

	apic_pm_activate();

	return 0;

no_apic:
	pr_info("No local APIC present or hardware disabled\n");
	return -1;
}
#endif

/**
 * init_apic_mappings - initialize APIC mappings
 */
void __init init_apic_mappings(void)
{
	unsigned int new_apicid;

	if (x2apic_mode) {
		boot_cpu_physical_apicid = read_apic_id();
		return;
	}

	/* If no local APIC can be found return early */
	if (!smp_found_config && detect_init_APIC()) {
		/* lets NOP'ify apic operations */
		pr_info("APIC: disable apic facility\n");
		apic_disable();
	} else {
		apic_phys = mp_lapic_addr;

		/*
		 * acpi lapic path already maps that address in
		 * acpi_register_lapic_address()
		 */
		if (!acpi_lapic && !smp_found_config)
			register_lapic_address(apic_phys);
	}

	/*
	 * Fetch the APIC ID of the BSP in case we have a
	 * default configuration (or the MP table is broken).
	 */
	new_apicid = read_apic_id();
	if (boot_cpu_physical_apicid != new_apicid) {
		boot_cpu_physical_apicid = new_apicid;
		/*
		 * yeah -- we lie about apic_version
		 * in case if apic was disabled via boot option
		 * but it's not a problem for SMP compiled kernel
		 * since smp_sanity_check is prepared for such a case
		 * and disable smp mode
		 */
		boot_cpu_apic_version = GET_APIC_VERSION(apic_read(APIC_LVR));
	}
}

void __init register_lapic_address(unsigned long address)
{
	mp_lapic_addr = address;

	if (!x2apic_mode) {
		set_fixmap_nocache(FIX_APIC_BASE, address);
		apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
			    APIC_BASE, address);
	}
	if (boot_cpu_physical_apicid == -1U) {
		boot_cpu_physical_apicid = read_apic_id();
		boot_cpu_apic_version = GET_APIC_VERSION(apic_read(APIC_LVR));
	}
}

/*
 * Local APIC interrupts
 */

/*
 * This interrupt should _never_ happen with our APIC/SMP architecture
 */
static void __smp_spurious_interrupt(u8 vector)
{
	u32 v;

	/*
	 * Check if this really is a spurious interrupt and ACK it
	 * if it is a vectored one. Just in case...
	 * Spurious interrupts should not be ACKed.
	 */
	v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1));
	if (v & (1 << (vector & 0x1f)))
		ack_APIC_irq();

	inc_irq_stat(irq_spurious_count);

	/* see sw-dev-man vol 3, chapter 7.4.13.5 */
	pr_info("spurious APIC interrupt through vector %02x on CPU#%d, "
		"should never happen.\n", vector, smp_processor_id());
}
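The ISR lookup above relies on the local APIC spreading its 256 in-service bits across eight 32-bit registers placed 16 bytes apart (ISR0 at offset 0x100). A small model of that indexing, with the `APIC_ISR` value taken from the architectural register map and the helper names purely illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define APIC_ISR 0x100	/* offset of ISR(0) in the local APIC register page */

/*
 * (vector & ~0x1f) selects the 32-vector group; registers are 32 bits
 * wide but spaced 16 bytes apart, hence the >> 1 instead of a plain /8.
 */
static unsigned int isr_reg_offset(uint8_t vector)
{
	return APIC_ISR + ((vector & ~0x1f) >> 1);
}

/* Bit position of the vector inside its 32-bit ISR register. */
static unsigned int isr_bit(uint8_t vector)
{
	return vector & 0x1f;
}
```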

__visible void smp_spurious_interrupt(struct pt_regs *regs)
{
	entering_irq();
	__smp_spurious_interrupt(~regs->orig_ax);
	exiting_irq();
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2013-08-06 06:02:37 +08:00
|
|
|
__visible void smp_trace_spurious_interrupt(struct pt_regs *regs)
|
x86, trace: Add irq vector tracepoints
[Purpose of this patch]
As Vaibhav explained in the thread below, tracepoints for irq vectors
are useful.
http://www.spinics.net/lists/mm-commits/msg85707.html
<snip>
The current interrupt traces from irq_handler_entry and irq_handler_exit
provide when an interrupt is handled. They provide good data about when
the system has switched to kernel space and how it affects the currently
running processes.
There are some IRQ vectors which trigger the system into kernel space,
which are not handled in generic IRQ handlers. Tracing such events gives
us the information about IRQ interaction with other system events.
The trace also tells where the system is spending its time. We want to
know which cores are handling interrupts and how they are affecting other
processes in the system. Also, the trace provides information about when
the cores are idle and which interrupts are changing that state.
<snip>
On the other hand, my usecase is tracing just local timer event and
getting a value of instruction pointer.
I suggested to add an argument local timer event to get instruction pointer before.
But there is another way to get it with external module like systemtap.
So, I don't need to add any argument to irq vector tracepoints now.
[Patch Description]
Vaibhav's patch shared a trace point ,irq_vector_entry/irq_vector_exit, in all events.
But there is an above use case to trace specific irq_vector rather than tracing all events.
In this case, we are concerned about overhead due to unwanted events.
So, add following tracepoints instead of introducing irq_vector_entry/exit.
so that we can enable them independently.
- local_timer_vector
- reschedule_vector
- call_function_vector
- call_function_single_vector
- irq_work_entry_vector
- error_apic_vector
- thermal_apic_vector
- threshold_apic_vector
- spurious_apic_vector
- x86_platform_ipi_vector
Also, introduce a logic switching IDT at enabling/disabling time so that a time penalty
makes a zero when tracepoints are disabled. Detailed explanations are as follows.
- Create trace irq handlers with entering_irq()/exiting_irq().
- Create a new IDT, trace_idt_table, at boot time by adding a logic to
_set_gate(). It is just a copy of original idt table.
- Register the new handlers for tracpoints to the new IDT by introducing
macros to alloc_intr_gate() called at registering time of irq_vector handlers.
- Add checking, whether irq vector tracing is on/off, into load_current_idt().
This has to be done below debug checking for these reasons.
- Switching to debug IDT may be kicked while tracing is enabled.
- On the other hands, switching to trace IDT is kicked only when debugging
is disabled.
In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
used for other purposes.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 23:46:53 +08:00
|
|
|
{
|
2014-11-03 16:39:43 +08:00
|
|
|
u8 vector = ~regs->orig_ax;
|
|
|
|
|
x86, trace: Add irq vector tracepoints
[Purpose of this patch]
As Vaibhav explained in the thread below, tracepoints for irq vectors
are useful.
http://www.spinics.net/lists/mm-commits/msg85707.html
<snip>
The current interrupt traces from irq_handler_entry and irq_handler_exit
provide when an interrupt is handled. They provide good data about when
the system has switched to kernel space and how it affects the currently
running processes.
There are some IRQ vectors which trigger the system into kernel space,
which are not handled in generic IRQ handlers. Tracing such events gives
us the information about IRQ interaction with other system events.
The trace also tells where the system is spending its time. We want to
know which cores are handling interrupts and how they are affecting other
processes in the system. Also, the trace provides information about when
the cores are idle and which interrupts are changing that state.
<snip>
On the other hand, my usecase is tracing just local timer event and
getting a value of instruction pointer.
I suggested to add an argument local timer event to get instruction pointer before.
But there is another way to get it with external module like systemtap.
So, I don't need to add any argument to irq vector tracepoints now.
[Patch Description]
Vaibhav's patch shared a trace point ,irq_vector_entry/irq_vector_exit, in all events.
But there is an above use case to trace specific irq_vector rather than tracing all events.
In this case, we are concerned about overhead due to unwanted events.
So, add following tracepoints instead of introducing irq_vector_entry/exit.
so that we can enable them independently.
- local_timer_vector
- reschedule_vector
- call_function_vector
- call_function_single_vector
- irq_work_entry_vector
- error_apic_vector
- thermal_apic_vector
- threshold_apic_vector
- spurious_apic_vector
- x86_platform_ipi_vector
Also, introduce logic that switches the IDT at enable/disable time so that the time
penalty drops to zero when tracepoints are disabled. Detailed explanations are as follows.
- Create trace irq handlers with entering_irq()/exiting_irq().
- Create a new IDT, trace_idt_table, at boot time by adding logic to
_set_gate(). It is just a copy of the original IDT.
- Register the new handlers for tracepoints in the new IDT by introducing
macros around alloc_intr_gate(), called when the irq_vector handlers are registered.
- Add a check of whether irq vector tracing is on/off to load_current_idt().
This has to be done after the debug check, for these reasons:
- Switching to the debug IDT may be kicked while tracing is enabled.
- On the other hand, switching to the trace IDT is kicked only when debugging
is disabled.
In addition, the new IDT is created only when CONFIG_TRACING is enabled to avoid being
used for other purposes.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C323ED.5050708@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 23:46:53 +08:00
|
|
|
entering_irq();
|
2014-11-03 16:39:43 +08:00
|
|
|
trace_spurious_apic_entry(vector);
|
|
|
|
__smp_spurious_interrupt(vector);
|
|
|
|
trace_spurious_apic_exit(vector);
|
|
|
|
exiting_irq();
|
2008-01-30 20:30:20 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/*
|
|
|
|
* This interrupt should never happen with our APIC/SMP architecture
|
|
|
|
*/
|
2015-09-30 04:37:02 +08:00
|
|
|
static void __smp_error_interrupt(struct pt_regs *regs)
|
2008-01-30 20:30:20 +08:00
|
|
|
{
|
2014-01-14 15:44:47 +08:00
|
|
|
u32 v;
|
2011-04-14 14:36:08 +08:00
|
|
|
u32 i = 0;
|
|
|
|
static const char * const error_interrupt_reason[] = {
|
|
|
|
"Send CS error", /* APIC Error Bit 0 */
|
|
|
|
"Receive CS error", /* APIC Error Bit 1 */
|
|
|
|
"Send accept error", /* APIC Error Bit 2 */
|
|
|
|
"Receive accept error", /* APIC Error Bit 3 */
|
|
|
|
"Redirectable IPI", /* APIC Error Bit 4 */
|
|
|
|
"Send illegal vector", /* APIC Error Bit 5 */
|
|
|
|
"Received illegal vector", /* APIC Error Bit 6 */
|
|
|
|
"Illegal register address", /* APIC Error Bit 7 */
|
|
|
|
};
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/* First tickle the hardware, only then report what went on. -- REW */
|
x86/apic: Reinstate error IRQ Pentium erratum 3AP workaround
A change introduced with commit 60283df7ac26a4fe2d56631ca2946e04725e7eaf
("x86/apic: Read Error Status Register correctly") removed a read from the
APIC ESR register made before writing to that same register, required to retrieve the
correct error status on Pentium systems affected by the 3AP erratum[1]:
"3AP. Writes to Error Register Clears Register
PROBLEM: The APIC Error register is intended to only be read.
If there is a write to this register the data in the APIC Error
register will be cleared and lost.
IMPLICATION: There is a possibility of clearing the Error
register status since the write to the register is not
specifically blocked.
WORKAROUND: Writes should not occur to the Pentium processor
APIC Error register.
STATUS: For the steppings affected see the Summary Table of
Changes at the beginning of this section."
The steppings affected are actually: B1, B3 and B5.
To avoid this information loss, this change avoids the write to the
ESR on all Pentium systems, where it is actually never needed;
Pentium processor documentation notes the ESR as read-only, with
the write required only for future architectural
compatibility[2].
The approach taken is the same as in lapic_setup_esr().
References:
[1] "Pentium Processor Family Developer's Manual", Intel Corporation,
1997, order number 241428-005, Appendix A "Errata and S-Specs for the
Pentium Processor Family", p. A-92,
[2] "Pentium Processor Family Developer's Manual, Volume 3: Architecture
and Programming Manual", Intel Corporation, 1995, order number
241430-004, Section 19.3.3. "Error Handling In APIC", p. 19-33.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Richard Weinberger <richard@nod.at>
Link: http://lkml.kernel.org/r/alpine.LFD.2.11.1404011300010.27402@eddie.linux-mips.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-01 20:30:21 +08:00
|
|
|
if (lapic_get_maxlvt() > 3) /* Due to the Pentium erratum 3AP. */
|
|
|
|
apic_write(APIC_ESR, 0);
|
2014-01-14 15:44:47 +08:00
|
|
|
v = apic_read(APIC_ESR);
|
2008-01-30 20:30:20 +08:00
|
|
|
ack_APIC_irq();
|
|
|
|
atomic_inc(&irq_err_count);
|
2007-10-13 05:04:07 +08:00
|
|
|
|
2014-01-14 15:44:47 +08:00
|
|
|
apic_printk(APIC_DEBUG, KERN_DEBUG "APIC error on CPU%d: %02x",
|
|
|
|
smp_processor_id(), v);
|
2011-04-14 14:36:08 +08:00
|
|
|
|
2014-01-14 15:44:47 +08:00
|
|
|
v &= 0xff;
|
|
|
|
while (v) {
|
|
|
|
if (v & 0x1)
|
2011-04-14 14:36:08 +08:00
|
|
|
apic_printk(APIC_DEBUG, KERN_CONT " : %s", error_interrupt_reason[i]);
|
|
|
|
i++;
|
2014-01-14 15:44:47 +08:00
|
|
|
v >>= 1;
|
2012-09-19 00:36:14 +08:00
|
|
|
}
|
2011-04-14 14:36:08 +08:00
|
|
|
|
|
|
|
apic_printk(APIC_DEBUG, KERN_CONT "\n");
|
|
|
|
|
x86, trace: Introduce entering/exiting_irq()
When implementing tracepoints in interrupt handlers, if the tracepoints are
simply added in the performance-sensitive path of the interrupt handlers,
they may cause a performance problem due to the time penalty.
To solve this, the idea is to prepare non-trace and trace irq handlers and
switch between their IDTs at enabling/disabling time.
So, let's introduce entering_irq()/exiting_irq() for pre/post-
processing of each irq handler.
A way to use them is as follows.
Non-trace irq handler:
smp_irq_handler()
{
entering_irq(); /* pre-processing of this handler */
__smp_irq_handler(); /*
* common logic between non-trace and trace handlers
* in a vector.
*/
exiting_irq(); /* post-processing of this handler */
}
Trace irq_handler:
smp_trace_irq_handler()
{
entering_irq(); /* pre-processing of this handler */
trace_irq_entry(); /* tracepoint for irq entry */
__smp_irq_handler(); /*
* common logic between non-trace and trace handlers
* in a vector.
*/
trace_irq_exit(); /* tracepoint for irq exit */
exiting_irq(); /* post-processing of this handler */
}
If the tracepoints could be placed outside entering_irq()/exiting_irq() as follows,
it would look cleaner.
smp_trace_irq_handler()
{
trace_irq_entry();
smp_irq_handler();
trace_irq_exit();
}
But it doesn't work.
The problem is with irq_enter/exit() being called: they must be called before
trace_irq_entry/exit(), because rcu_irq_enter() must be called before
any tracepoints are used, as tracepoints use RCU to synchronize.
As a possible alternative, we may be able to call irq_enter() first as follows
if irq_enter() can nest.
smp_trace_irq_handler()
{
irq_entry();
trace_irq_entry();
smp_irq_handler();
trace_irq_exit();
irq_exit();
}
But it doesn't work, either.
If irq_enter() can nest, it incurs a time penalty because it has to check whether it
was already called. The time penalty is not desired in performance-sensitive
paths, even if it is tiny.
Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
Link: http://lkml.kernel.org/r/51C3238D.9040706@hds.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
2013-06-20 23:45:17 +08:00
|
|
|
}
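The bit-walk above, which turns the ESR value into reason strings, can be exercised on its own. The sketch below mirrors __smp_error_interrupt()'s decode loop in plain userspace C; decode_esr() and its printf output are illustrative stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stdio.h>

/* Userspace sketch of the ESR decode loop in __smp_error_interrupt():
 * walk the low 8 bits of the status word and map each set bit to its
 * reason string. */
static const char * const esr_reason[] = {
	"Send CS error",		/* APIC Error Bit 0 */
	"Receive CS error",		/* APIC Error Bit 1 */
	"Send accept error",		/* APIC Error Bit 2 */
	"Receive accept error",		/* APIC Error Bit 3 */
	"Redirectable IPI",		/* APIC Error Bit 4 */
	"Send illegal vector",		/* APIC Error Bit 5 */
	"Received illegal vector",	/* APIC Error Bit 6 */
	"Illegal register address",	/* APIC Error Bit 7 */
};

/* Print the reason for every set bit; return how many bits were set. */
static int decode_esr(unsigned int v)
{
	int i = 0, n = 0;

	v &= 0xff;			/* only bits 0-7 carry error reasons */
	while (v) {
		if (v & 0x1) {
			printf(" : %s", esr_reason[i]);
			n++;
		}
		i++;
		v >>= 1;
	}
	printf("\n");
	return n;
}
```

Note how the `i++`/`v >>= 1` pair advances unconditionally, exactly as in the kernel loop, so the reason index stays in step with the shifted value.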
|
|
|
|
|
2013-08-06 06:02:37 +08:00
|
|
|
__visible void smp_error_interrupt(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
entering_irq();
|
|
|
|
__smp_error_interrupt(regs);
|
|
|
|
exiting_irq();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2013-08-06 06:02:37 +08:00
|
|
|
__visible void smp_trace_error_interrupt(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
entering_irq();
|
|
|
|
trace_error_apic_entry(ERROR_APIC_VECTOR);
|
|
|
|
__smp_error_interrupt(regs);
|
|
|
|
trace_error_apic_exit(ERROR_APIC_VECTOR);
|
|
|
|
exiting_irq();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
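The pair of wrappers above (smp_error_interrupt() and smp_trace_error_interrupt() sharing __smp_error_interrupt()) follows the shared-body pattern from the entering_irq()/exiting_irq() commit message. A minimal userspace sketch of that pattern, with all *_demo names invented for illustration:

```c
#include <assert.h>

/* Userspace sketch of the non-trace/trace handler pattern: both wrappers
 * share the same body, and the trace variant adds tracepoint hooks inside
 * the entering/exiting pre/post-processing. Each hook bumps a counter so
 * the call pattern is observable. */
static int events;

static void entering_irq_demo(void)	{ events++; }
static void exiting_irq_demo(void)	{ events++; }
static void trace_irq_entry_demo(void)	{ events++; }
static void trace_irq_exit_demo(void)	{ events++; }
static void __smp_irq_handler_demo(void) { events++; }	/* shared body */

/* Non-trace wrapper: pre-processing, shared body, post-processing. */
static void smp_irq_handler_demo(void)
{
	entering_irq_demo();
	__smp_irq_handler_demo();
	exiting_irq_demo();
}

/* Trace wrapper: same shape, with tracepoints around the shared body. */
static void smp_trace_irq_handler_demo(void)
{
	entering_irq_demo();
	trace_irq_entry_demo();
	__smp_irq_handler_demo();
	trace_irq_exit_demo();
	exiting_irq_demo();
}
```

Because the tracepoints sit inside entering/exiting, rcu_irq_enter() (part of the real entering path) has already run before any tracepoint fires, which is the constraint the commit message explains.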
|
|
|
|
|
2008-05-29 00:38:28 +08:00
|
|
|
/**
|
2008-08-19 00:45:53 +08:00
|
|
|
* connect_bsp_APIC - attach the APIC to the interrupt system
|
|
|
|
*/
|
2015-01-16 05:22:40 +08:00
|
|
|
static void __init connect_bsp_APIC(void)
|
2008-05-29 00:38:28 +08:00
|
|
|
{
|
2008-08-19 00:45:53 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
if (pic_mode) {
|
|
|
|
/*
|
|
|
|
* Do not trust the local APIC being empty at bootup.
|
|
|
|
*/
|
|
|
|
clear_local_APIC();
|
|
|
|
/*
|
|
|
|
* PIC mode, enable APIC mode in the IMCR, i.e. connect BSP's
|
|
|
|
* local APIC to INT and NMI lines.
|
|
|
|
*/
|
|
|
|
apic_printk(APIC_VERBOSE, "leaving PIC mode, "
|
|
|
|
"enabling APIC mode.\n");
|
2009-04-13 00:47:40 +08:00
|
|
|
imcr_pic_to_apic();
|
2008-08-19 00:45:53 +08:00
|
|
|
}
|
|
|
|
#endif
|
2008-05-29 00:38:28 +08:00
|
|
|
}
|
|
|
|
|
2008-08-17 03:21:53 +08:00
|
|
|
/**
|
|
|
|
* disconnect_bsp_APIC - detach the APIC from the interrupt system
|
|
|
|
* @virt_wire_setup: indicates, whether virtual wire mode is selected
|
|
|
|
*
|
|
|
|
* Virtual wire mode is necessary to deliver legacy interrupts even when the
|
|
|
|
* APIC is disabled.
|
|
|
|
*/
|
2008-01-30 20:30:20 +08:00
|
|
|
void disconnect_bsp_APIC(int virt_wire_setup)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-08-19 03:12:33 +08:00
|
|
|
unsigned int value;
|
|
|
|
|
2008-08-19 00:45:56 +08:00
|
|
|
#ifdef CONFIG_X86_32
|
|
|
|
if (pic_mode) {
|
|
|
|
/*
|
|
|
|
* Put the board back into PIC mode (has an effect only on
|
|
|
|
* certain older boards). Note that APIC interrupts, including
|
|
|
|
* IPIs, won't work beyond this point! The only exception are
|
|
|
|
* INIT IPIs.
|
|
|
|
*/
|
|
|
|
apic_printk(APIC_VERBOSE, "disabling APIC mode, "
|
|
|
|
"entering PIC mode.\n");
|
2009-04-13 00:47:40 +08:00
|
|
|
imcr_apic_to_pic();
|
2008-08-19 00:45:56 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/* Go back to Virtual Wire compatibility mode */
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
/* For the spurious interrupt use vector F, and enable it */
|
|
|
|
value = apic_read(APIC_SPIV);
|
|
|
|
value &= ~APIC_VECTOR_MASK;
|
|
|
|
value |= APIC_SPIV_APIC_ENABLED;
|
|
|
|
value |= 0xf;
|
|
|
|
apic_write(APIC_SPIV, value);
|
2007-10-13 05:04:07 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
if (!virt_wire_setup) {
|
|
|
|
/*
|
|
|
|
* For LVT0 make it edge triggered, active high,
|
|
|
|
* external and enabled
|
|
|
|
*/
|
|
|
|
value = apic_read(APIC_LVT0);
|
|
|
|
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
|
|
|
|
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
|
|
|
|
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
|
|
|
|
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
|
|
|
|
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
|
|
|
|
apic_write(APIC_LVT0, value);
|
|
|
|
} else {
|
|
|
|
/* Disable LVT0 */
|
|
|
|
apic_write(APIC_LVT0, APIC_LVT_MASKED);
|
|
|
|
}
|
2007-10-13 05:04:07 +08:00
|
|
|
|
2008-08-19 00:45:56 +08:00
|
|
|
/*
|
|
|
|
* For LVT1 make it edge triggered, active high,
|
|
|
|
* nmi and enabled
|
|
|
|
*/
|
2008-01-30 20:30:20 +08:00
|
|
|
value = apic_read(APIC_LVT1);
|
|
|
|
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
|
|
|
|
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
|
|
|
|
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
|
|
|
|
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
|
|
|
|
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_NMI);
|
|
|
|
apic_write(APIC_LVT1, value);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
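The LVT0/LVT1 reprogramming above is a read-modify-write of the delivery-mode field (bits 8-10 of the register image). A minimal sketch of that step, assuming the kernel's SET_APIC_DELIVERY_MODE() semantics of clearing the 0x700 mask and shifting the mode into bits 8-10 (assumed here, not quoted from apicdef.h):

```c
#include <assert.h>

/* Sketch of the LVT delivery-mode update used in disconnect_bsp_APIC():
 * clear bits 8-10 of the register image, then shift the new 3-bit mode in.
 * DEMO_APIC_MODE_MASK mirrors the assumed APIC_MODE_MASK value. */
#define DEMO_APIC_MODE_MASK	0x00000700u

static unsigned int set_apic_delivery_mode(unsigned int reg, unsigned int mode)
{
	return (reg & ~DEMO_APIC_MODE_MASK) | (mode << 8);
}
```

The surrounding flag manipulation (masking off trigger/polarity bits, OR-ing the desired ones back in) follows the same clear-then-set discipline, so the write never carries stale field values.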
|
|
|
|
|
x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping
The whole patch-set aims at making cpuid <-> nodeid mapping persistent. So that,
when node online/offline happens, cache based on cpuid <-> nodeid mapping such as
wq_numa_possible_cpumask will not cause any problem.
It contains 4 steps:
1. Enable the apic registration flow to handle both enabled and disabled cpus.
2. Introduce a new array storing all possible cpuid <-> apicid mappings.
3. Enable the _MAT and MADT related APIs to return non-present or disabled cpus' apicids.
4. Establish all possible cpuid <-> nodeid mapping.
This patch finishes step 2.
In this patch, we introduce a new static array named cpuid_to_apicid[],
which is large enough to store info for all possible cpus.
And then, we modify the cpuid calculation. In generic_processor_info(),
it simply finds the next unused cpuid, which is also why the cpuid <-> nodeid
mapping changes with node hotplug.
After this patch, we find the next unused cpuid, map it to an apicid,
and store the mapping in cpuid_to_apicid[], so that cpuid <-> apicid
mapping will be persistent.
And finally we will use this array to make cpuid <-> nodeid persistent.
cpuid <-> apicid mapping is established at local apic registration time.
But non-present or disabled cpus are ignored.
In this patch, we establish all possible cpuid <-> apicid mapping when
registering local apic.
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: mika.j.penttila@gmail.com
Cc: len.brown@intel.com
Cc: rafael@kernel.org
Cc: rjw@rjwysocki.net
Cc: yasu.isimatu@gmail.com
Cc: linux-mm@kvack.org
Cc: linux-acpi@vger.kernel.org
Cc: isimatu.yasuaki@jp.fujitsu.com
Cc: gongzhaogang@inspur.com
Cc: tj@kernel.org
Cc: izumi.taku@jp.fujitsu.com
Cc: cl@linux.com
Cc: chen.tang@easystack.cn
Cc: akpm@linux-foundation.org
Cc: kamezawa.hiroyu@jp.fujitsu.com
Cc: lenb@kernel.org
Link: http://lkml.kernel.org/r/1472114120-3281-4-git-send-email-douly.fnst@cn.fujitsu.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-08-25 16:35:16 +08:00
|
|
|
/*
|
|
|
|
* The number of allocated logical CPU IDs. Since logical CPU IDs are allocated
|
|
|
|
 * contiguously, it equals the currently allocated maximum logical CPU ID plus 1.
|
|
|
|
 * All allocated CPU IDs should be in [0, nr_logical_cpuids), so the maximum of
|
|
|
|
* nr_logical_cpuids is nr_cpu_ids.
|
|
|
|
*
|
|
|
|
* NOTE: Reserve 0 for BSP.
|
|
|
|
*/
|
|
|
|
static int nr_logical_cpuids = 1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Used to store mapping between logical CPU IDs and APIC IDs.
|
|
|
|
*/
|
|
|
|
static int cpuid_to_apicid[] = {
|
|
|
|
[0 ... NR_CPUS - 1] = -1,
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Should use this API to allocate logical CPU IDs to keep nr_logical_cpuids
|
|
|
|
* and cpuid_to_apicid[] synchronized.
|
|
|
|
*/
|
|
|
|
static int allocate_logical_cpuid(int apicid)
|
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* cpuid <-> apicid mapping is persistent, so when a cpu is up,
|
|
|
|
* check if the kernel has allocated a cpuid for it.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < nr_logical_cpuids; i++) {
|
|
|
|
if (cpuid_to_apicid[i] == apicid)
|
|
|
|
return i;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Allocate a new cpuid. */
|
|
|
|
if (nr_logical_cpuids >= nr_cpu_ids) {
|
|
|
|
WARN_ONCE(1, "Only %d processors supported. "
|
|
|
|
"Processor %d/0x%x and the rest are ignored.\n",
|
|
|
|
nr_cpu_ids - 1, nr_logical_cpuids, apicid);
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
cpuid_to_apicid[nr_logical_cpuids] = apicid;
|
|
|
|
return nr_logical_cpuids++;
|
|
|
|
}
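allocate_logical_cpuid()'s persistence guarantee, handing the same cpuid back for an apicid it has seen before, can be checked with a small userspace model. The *_demo names and the 4-CPU limit below are invented for illustration; DEMO_NR_CPUS stands in for nr_cpu_ids.

```c
#include <assert.h>

/* Userspace model of allocate_logical_cpuid(): slot 0 is reserved for the
 * BSP, a known apicid gets its old cpuid back, a new apicid gets the next
 * free slot, and allocation fails once the table is full. */
#define DEMO_NR_CPUS 4

static int demo_cpuid_to_apicid[DEMO_NR_CPUS] = { -1, -1, -1, -1 };
static int demo_nr_logical_cpuids = 1;	/* reserve 0 for the BSP */

static int demo_allocate_logical_cpuid(int apicid)
{
	int i;

	/* Persistent mapping: reuse the cpuid if this apicid was seen. */
	for (i = 0; i < demo_nr_logical_cpuids; i++) {
		if (demo_cpuid_to_apicid[i] == apicid)
			return i;
	}

	/* Allocate a new cpuid. */
	if (demo_nr_logical_cpuids >= DEMO_NR_CPUS)
		return -1;

	demo_cpuid_to_apicid[demo_nr_logical_cpuids] = apicid;
	return demo_nr_logical_cpuids++;
}
```

Because cpuids are handed out contiguously and never reclaimed, a cpu that goes offline and comes back with the same apicid lands on the same cpuid, which is exactly the property the node-hotplug caches rely on.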
|
|
|
|
|
|
|
|
int __generic_processor_info(int apicid, int version, bool enabled)
|
2008-03-28 04:56:19 +08:00
|
|
|
{
|
2011-07-09 01:19:26 +08:00
|
|
|
int cpu, max = nr_cpu_ids;
|
|
|
|
bool boot_cpu_detected = physid_isset(boot_cpu_physical_apicid,
|
|
|
|
phys_cpu_present_map);
|
|
|
|
|
x86, apic, kexec: Add disable_cpu_apicid kernel parameter
Add disable_cpu_apicid kernel parameter. To use this kernel parameter,
specify an initial APIC ID of the corresponding CPU you want to
disable.
This is mostly used for the kdump 2nd kernel to disable BSP to wake up
multiple CPUs without causing system reset or hang due to sending INIT
from AP to BSP.
Kdump users first figure out initial APIC ID of the BSP, CPU0 in the
1st kernel, for example from /proc/cpuinfo and then set up this kernel
parameter for the 2nd kernel using the obtained APIC ID.
However, doing this procedure at each boot time manually is awkward,
which should be automatically done by user-land service scripts, for
example, kexec-tools on fedora/RHEL distributions.
This design is more flexible than disabling BSP in kernel boot time
automatically in that in kernel boot time we have no choice but
referring to ACPI/MP table to obtain initial APIC ID for BSP, meaning
that the method is not applicable to the systems without such BIOS
tables.
One assumption behind this design is that users get initial APIC ID of
the BSP in still healthy state and so BSP is uniquely kept in
CPU0. Thus, through the kernel parameter, only one initial APIC ID can
be specified.
In a comparison with disabled_cpu_apicid, we use read_apic_id(), not
boot_cpu_physical_apicid, because on some platforms, the variable is
modified to the apicid reported as BSP through MP table and this
function is executed with the temporarily modified
boot_cpu_physical_apicid. As a result, disabled_cpu_apicid kernel
parameter doesn't work well for apicids of APs.
Fixing the wrong handling of boot_cpu_physical_apicid requires some
reviews and tests beyond some platforms and it could take some
time. The fix here is a kind of workaround to focus on the main topic
of this patch.
Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Link: http://lkml.kernel.org/r/20140115064458.1545.38775.stgit@localhost6.localdomain6
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-01-15 14:44:58 +08:00
|
|
|
/*
|
|
|
|
* boot_cpu_physical_apicid is designed to have the apicid
|
|
|
|
* returned by read_apic_id(), i.e, the apicid of the
|
|
|
|
* currently booting-up processor. However, on some platforms,
|
2014-01-16 05:02:08 +08:00
|
|
|
* it is temporarily modified by the apicid reported as BSP
|
|
|
|
* through MP table. Concretely:
|
|
|
|
*
|
|
|
|
* - arch/x86/kernel/mpparse.c: MP_processor_info()
|
|
|
|
* - arch/x86/mm/amdtopology.c: amd_numa_init()
|
|
|
|
*
|
|
|
|
* This function is executed with the modified
|
|
|
|
* boot_cpu_physical_apicid. So, disabled_cpu_apicid kernel
|
|
|
|
* parameter doesn't work to disable APs on kdump 2nd kernel.
|
|
|
|
*
|
|
|
|
* Since fixing handling of boot_cpu_physical_apicid requires
|
|
|
|
* another discussion and tests on each platform, we leave it
|
|
|
|
* for now and here we use read_apic_id() directly in this
|
|
|
|
* function, generic_processor_info().
|
|
|
|
*/
|
|
|
|
if (disabled_cpu_apicid != BAD_APICID &&
|
|
|
|
disabled_cpu_apicid != read_apic_id() &&
|
|
|
|
disabled_cpu_apicid == apicid) {
|
|
|
|
int thiscpu = num_processors + disabled_cpus;
|
|
|
|
|
2014-01-16 05:02:08 +08:00
|
|
|
pr_warning("APIC: Disabling requested cpu."
|
|
|
|
" Processor %d/0x%x ignored.\n",
|
|
|
|
thiscpu, apicid);
|
|
|
|
|
|
|
|
disabled_cpus++;
|
|
|
|
return -ENODEV;
|
|
|
|
}
|
|
|
|
|
2011-07-09 01:19:26 +08:00
|
|
|
/*
|
|
|
|
 * If boot cpu has not been detected yet, then only allow up to
|
|
|
|
* nr_cpu_ids - 1 processors and keep one slot free for boot cpu
|
|
|
|
*/
|
|
|
|
if (!boot_cpu_detected && num_processors >= nr_cpu_ids - 1 &&
|
|
|
|
apicid != boot_cpu_physical_apicid) {
|
|
|
|
int thiscpu = max + disabled_cpus - 1;
|
|
|
|
|
|
|
|
pr_warning(
|
2016-06-09 18:31:58 +08:00
|
|
|
"APIC: NR_CPUS/possible_cpus limit of %i almost"
|
2011-07-09 01:19:26 +08:00
|
|
|
" reached. Keeping one slot for boot cpu."
|
|
|
|
" Processor %d/0x%x ignored.\n", max, thiscpu, apicid);
|
|
|
|
|
|
|
|
disabled_cpus++;
|
2013-09-02 11:57:36 +08:00
|
|
|
return -ENODEV;
|
2011-07-09 01:19:26 +08:00
|
|
|
}

	if (num_processors >= nr_cpu_ids) {
		int thiscpu = max + disabled_cpus;

		if (enabled) {
			pr_warning("APIC: NR_CPUS/possible_cpus limit of %i "
				   "reached. Processor %d/0x%x ignored.\n",
				   max, thiscpu, apicid);
		}

		disabled_cpus++;
		return -EINVAL;
	}
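As a hedged illustration, the two NR_CPUS/possible_cpus limit checks above can be condensed into a small userspace sketch. Everything here is an assumption for the sake of the example: `reject_processor` is a hypothetical helper, and `-1`/`-2` stand in for `-ENODEV`/`-EINVAL`; it is not the kernel's actual code path.

```c
#include <assert.h>

/* Hypothetical condensation of the limit checks above. Parameter names
 * mirror the kernel variables, but this is a standalone userspace model. */
static int reject_processor(int num_processors, int nr_cpu_ids,
			    int boot_cpu_detected, int apicid,
			    int boot_cpu_apicid)
{
	/* keep the last slot free for the boot cpu until it shows up */
	if (!boot_cpu_detected && num_processors >= nr_cpu_ids - 1 &&
	    apicid != boot_cpu_apicid)
		return -1;	/* stand-in for -ENODEV */

	/* hard limit: no slots left at all */
	if (num_processors >= nr_cpu_ids)
		return -2;	/* stand-in for -EINVAL */

	return 0;
}
```

Note the ordering: the "almost full" check runs first so that the reserved slot is refused to APs even before the hard limit trips.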

	if (apicid == boot_cpu_physical_apicid) {
		/*
		 * x86_bios_cpu_apicid is required to have processors listed
		 * in the same order as logical cpu numbers. Hence the first
		 * entry is the BSP, and so on.
		 * boot_cpu_init() already holds bit 0 in cpu_present_mask
		 * for the BSP.
		 */
		cpu = 0;
x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping
The whole patch set aims at making the cpuid <-> nodeid mapping persistent, so that
when node online/offline happens, caches based on the cpuid <-> nodeid mapping such as
wq_numa_possible_cpumask will not cause any problem.
It contains 4 steps:
1. Enable the apic registration flow to handle both enabled and disabled cpus.
2. Introduce a new array storing all possible cpuid <-> apicid mappings.
3. Enable the _MAT and MADT related APIs to return non-present or disabled cpus' apicids.
4. Establish all possible cpuid <-> nodeid mappings.
This patch finishes step 2.
In this patch, we introduce a new static array named cpuid_to_apicid[],
which is large enough to store info for all possible cpus.
We then modify the cpuid calculation: previously, generic_processor_info()
simply found the next unused cpuid, which is also why the cpuid <-> nodeid
mapping changed with node hotplug.
After this patch, we find the next unused cpuid, map it to an apicid,
and store the mapping in cpuid_to_apicid[], so that the cpuid <-> apicid
mapping becomes persistent.
Finally, we will use this array to make the cpuid <-> nodeid mapping persistent.
The cpuid <-> apicid mapping is established at local apic registration time,
but non-present or disabled cpus are ignored.
In this patch, we establish all possible cpuid <-> apicid mappings when
registering the local apic.
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: mika.j.penttila@gmail.com
Cc: len.brown@intel.com
Cc: rafael@kernel.org
Cc: rjw@rjwysocki.net
Cc: yasu.isimatu@gmail.com
Cc: linux-mm@kvack.org
Cc: linux-acpi@vger.kernel.org
Cc: isimatu.yasuaki@jp.fujitsu.com
Cc: gongzhaogang@inspur.com
Cc: tj@kernel.org
Cc: izumi.taku@jp.fujitsu.com
Cc: cl@linux.com
Cc: chen.tang@easystack.cn
Cc: akpm@linux-foundation.org
Cc: kamezawa.hiroyu@jp.fujitsu.com
Cc: lenb@kernel.org
Link: http://lkml.kernel.org/r/1472114120-3281-4-git-send-email-douly.fnst@cn.fujitsu.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-08-25 16:35:16 +08:00

		/* Logical cpuid 0 is reserved for the BSP. */
		cpuid_to_apicid[0] = apicid;
	} else {
		cpu = allocate_logical_cpuid(apicid);
		if (cpu < 0) {
			disabled_cpus++;
			return -EINVAL;
		}
	}
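The persistent-mapping commit above describes the allocation policy: reuse the cpuid of an apicid that was seen before, otherwise hand out the next unused cpuid. A minimal userspace sketch of that policy, under assumptions (a fixed `NR_CPUS`, plain `-1` instead of `-EINVAL`, no locking), not the kernel's exact `allocate_logical_cpuid()`:

```c
#include <assert.h>

#define NR_CPUS 8

static int cpuid_to_apicid[NR_CPUS];	/* slot i: apicid of logical cpu i */
static int nr_logical_cpuids = 1;	/* cpuid 0 is reserved for the BSP */

static int allocate_logical_cpuid(int apicid)
{
	int i;

	/* The mapping is persistent: a re-registered apicid keeps its cpuid */
	for (i = 0; i < nr_logical_cpuids; i++) {
		if (cpuid_to_apicid[i] == apicid)
			return i;
	}

	/* Otherwise allocate the next unused cpuid */
	if (nr_logical_cpuids >= NR_CPUS)
		return -1;	/* stand-in for -EINVAL */

	cpuid_to_apicid[nr_logical_cpuids] = apicid;
	return nr_logical_cpuids++;
}
```

Because previously assigned slots are scanned first, hot-removing and re-adding a CPU with the same apicid yields the same logical cpuid, which is exactly what makes the cpuid <-> nodeid mapping stable across node hotplug.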

	/*
	 * This can happen on physical hotplug. The sanity check at boot time
	 * is done from native_smp_prepare_cpus() after num_possible_cpus() is
	 * established.
	 */
	if (topology_update_package_map(apicid, cpu) < 0) {
		int thiscpu = max + disabled_cpus;

		pr_warning("APIC: Package limit reached. Processor %d/0x%x ignored.\n",
			   thiscpu, apicid);
x86/acpi: Enable acpi to register all possible cpus at boot time
The cpuid <-> nodeid mapping is first established at boot time, and the
workqueue code caches the mapping in wq_numa_possible_cpumask in
wq_numa_init() at boot time.
When doing node online/offline, the cpuid <-> nodeid mapping is
established/destroyed, which means the mapping will change if node hotplug
happens. But the workqueue code does not update wq_numa_possible_cpumask.
So here is the problem:
Assume we have the following cpuid <-> nodeid in the beginning:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 2 | 30-44, 90-104
node 3 | 45-59, 105-119
and we hot-remove node2 and node3, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
and we hot-add node4 and node5, it becomes:
Node | CPU
------------------------
node 0 | 0-14, 60-74
node 1 | 15-29, 75-89
node 4 | 30-59
node 5 | 90-119
But in wq_numa_possible_cpumask, cpu30 is still mapped to node2, and the like.
When a pool workqueue is initialized, if its cpumask belongs to a node, its
pool->node will be mapped to that node. And memory used by this workqueue will
also be allocated on that node.
static struct worker_pool *get_unbound_pool(const struct workqueue_attrs *attrs){
...
/* if cpumask is contained inside a NUMA node, we belong to that node */
if (wq_numa_enabled) {
for_each_node(node) {
if (cpumask_subset(pool->attrs->cpumask,
wq_numa_possible_cpumask[node])) {
pool->node = node;
break;
}
}
}
Since wq_numa_possible_cpumask is not updated, it could be mapped to an offline node,
which will lead to memory allocation failure:
SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
cache: kmalloc-192, object size: 192, buffer size: 192, default order: 1, min order: 0
node 0: slabs: 6172, objs: 259224, free: 245741
node 1: slabs: 3261, objs: 136962, free: 127656
It happens here:
create_worker(struct worker_pool *pool)
|--> worker = alloc_worker(pool->node);
static struct worker *alloc_worker(int node)
{
struct worker *worker;
worker = kzalloc_node(sizeof(*worker), GFP_KERNEL, node); --> Here, using the wrong node.
......
return worker;
}
[Solution]
There are four mappings in the kernel:
1. nodeid (logical node id) <-> pxm
2. apicid (physical cpu id) <-> nodeid
3. cpuid (logical cpu id) <-> apicid
4. cpuid (logical cpu id) <-> nodeid
1. pxm (proximity domain) is provided by ACPI firmware in SRAT, and nodeid <-> pxm
mapping is setup at boot time. This mapping is persistent, won't change.
2. apicid <-> nodeid mapping is setup using info in 1. The mapping is setup at boot
time and CPU hotadd time, and cleared at CPU hotremove time. This mapping is also
persistent.
3. cpuid <-> apicid mapping is setup at boot time and CPU hotadd time. cpuid is
allocated, lower ids first, and released at CPU hotremove time, reused for other
hotadded CPUs. So this mapping is not persistent.
4. cpuid <-> nodeid mapping is also setup at boot time and CPU hotadd time, and
cleared at CPU hotremove time. As a result of 3, this mapping is not persistent.
To fix this problem, we establish cpuid <-> nodeid mapping for all the possible
cpus at boot time, and make it persistent. And according to init_cpu_to_node(),
cpuid <-> nodeid mapping is based on apicid <-> nodeid mapping and cpuid <-> apicid
mapping. So the key point is obtaining all cpus' apicid.
apicid can be obtained by _MAT (Multiple APIC Table Entry) method or found in
MADT (Multiple APIC Description Table). So we finish the job in the following steps:
1. Enable the apic registration flow to handle both enabled and disabled cpus.
This is done by introducing an extra parameter to generic_processor_info to let the
caller control if disabled cpus are ignored.
2. Introduce a new array storing all possible cpuid <-> apicid mapping. And also modify
the way cpuid is calculated. Establish all possible cpuid <-> apicid mapping when
registering local apic. Store the mapping in this array.
3. Enable the _MAT and MADT related APIs to return non-present or disabled cpus' apicids.
This is also done by introducing an extra parameter to these apis to let the caller
control if disabled cpus are ignored.
4. Establish all possible cpuid <-> nodeid mapping.
This is done via an additional acpi namespace walk for processors.
This patch finishes step 1.
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: mika.j.penttila@gmail.com
Cc: len.brown@intel.com
Cc: rafael@kernel.org
Cc: rjw@rjwysocki.net
Cc: yasu.isimatu@gmail.com
Cc: linux-mm@kvack.org
Cc: linux-acpi@vger.kernel.org
Cc: isimatu.yasuaki@jp.fujitsu.com
Cc: gongzhaogang@inspur.com
Cc: tj@kernel.org
Cc: izumi.taku@jp.fujitsu.com
Cc: cl@linux.com
Cc: chen.tang@easystack.cn
Cc: akpm@linux-foundation.org
Cc: kamezawa.hiroyu@jp.fujitsu.com
Cc: lenb@kernel.org
Link: http://lkml.kernel.org/r/1472114120-3281-3-git-send-email-douly.fnst@cn.fujitsu.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2016-08-25 16:35:15 +08:00

		disabled_cpus++;
		return -ENOSPC;
	}

	/*
	 * Validate version
	 */
	if (version == 0x0) {
		pr_warning("BIOS bug: APIC version is 0 for CPU %d/0x%x, fixing up to 0x10\n",
			   cpu, apicid);
		version = 0x10;
	}

	if (version != boot_cpu_apic_version) {
		pr_warning("BIOS bug: APIC version mismatch, boot CPU: %x, CPU %d: version %x\n",
			   boot_cpu_apic_version, cpu, version);
	}

	if (apicid > max_physical_apicid)
		max_physical_apicid = apicid;

#if defined(CONFIG_SMP) || defined(CONFIG_X86_64)
	early_per_cpu(x86_cpu_to_apicid, cpu) = apicid;
	early_per_cpu(x86_bios_cpu_apicid, cpu) = apicid;
#endif
#ifdef CONFIG_X86_32
	early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
		apic->x86_32_early_logical_apicid(cpu);
#endif
	set_cpu_possible(cpu, true);

	if (enabled) {
		num_processors++;
		physid_set(apicid, phys_cpu_present_map);
		set_cpu_present(cpu, true);
	} else {
		disabled_cpus++;
	}

	return cpu;
}

int generic_processor_info(int apicid, int version)
{
	return __generic_processor_info(apicid, version, true);
}

int hard_smp_processor_id(void)
{
	return read_apic_id();
}

void default_init_apic_ldr(void)
{
	unsigned long val;

	apic_write(APIC_DFR, APIC_DFR_VALUE);
	val = apic_read(APIC_LDR) & ~APIC_LDR_MASK;
	val |= SET_APIC_LOGICAL_ID(1UL << smp_processor_id());
	apic_write(APIC_LDR, val);
}
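In flat logical mode each CPU owns one bit of the LDR's destination byte (bits 24-31), which is what the read-modify-write above computes. A standalone sketch of just that value computation; the two macros mirror the kernel's `APIC_LDR_MASK` and `SET_APIC_LOGICAL_ID`, but the helper name `flat_ldr_value` is made up for the example:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the flat-mode LDR value: clear the destination byte, then
 * set the single bit that identifies this CPU (valid for cpu 0..7). */
#define APIC_LDR_MASK		(0xFFu << 24)
#define SET_APIC_LOGICAL_ID(x)	(((x) << 24) & APIC_LDR_MASK)

static uint32_t flat_ldr_value(uint32_t old_ldr, unsigned int cpu)
{
	uint32_t val = old_ldr & ~APIC_LDR_MASK;	/* keep low bits intact */

	val |= SET_APIC_LOGICAL_ID(1u << cpu);
	return val;
}
```

The masking step matters: only the destination byte is rewritten, so any reserved low bits read back from the register survive the update.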

int default_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
				   const struct cpumask *andmask,
				   unsigned int *apicid)
{
	unsigned int cpu;

	for_each_cpu_and(cpu, cpumask, andmask) {
		if (cpumask_test_cpu(cpu, cpu_online_mask))
			break;
	}

	if (likely(cpu < nr_cpu_ids)) {
		*apicid = per_cpu(x86_cpu_to_apicid, cpu);
		return 0;
	}

	return -EINVAL;
}
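The selection above intersects three sets: the requested mask, the AND mask, and the online mask, then picks the lowest-numbered survivor. A userspace sketch with plain 32-bit words in place of `struct cpumask` (so `nr_cpu_ids <= 32` is assumed, and `first_online_cpu_and` is a hypothetical name):

```c
#include <assert.h>

/* Mirror of for_each_cpu_and() + the online check, using bitmasks:
 * returns the first cpu set in all three masks, or -1 if none. */
static int first_online_cpu_and(unsigned int cpumask, unsigned int andmask,
				unsigned int online_mask, int nr_cpu_ids)
{
	unsigned int combined = cpumask & andmask & online_mask;
	int cpu;

	for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
		if (combined & (1u << cpu))
			return cpu;
	}
	return -1;	/* stand-in for -EINVAL */
}
```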

/*
 * Override the generic EOI implementation with an optimized version.
 * Only called during early boot when only one CPU is active and with
 * interrupts disabled, so we know this does not race with actual APIC driver
 * use.
 */
void __init apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v))
{
	struct apic **drv;

	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
		/* Should happen once for each apic */
		WARN_ON((*drv)->eoi_write == eoi_write);
		(*drv)->eoi_write = eoi_write;
	}
}
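The override walks every registered driver and swaps one function pointer, so whichever driver is later selected uses the optimized EOI path. A toy model of that pattern (the `toy_apic` struct, driver table, and both EOI functions are invented for the sketch; the kernel's real table is built by the linker between `__apicdrivers` and `__apicdrivers_end`):

```c
#include <assert.h>
#include <stdint.h>

typedef void (*eoi_write_fn)(uint32_t reg, uint32_t v);

struct toy_apic {
	eoi_write_fn eoi_write;
};

static uint32_t last_val;

static void generic_eoi(uint32_t reg, uint32_t v) { (void)reg; last_val = v; }
static void fast_eoi(uint32_t reg, uint32_t v)    { (void)reg; last_val = v + 1; }

static struct toy_apic drv0 = { generic_eoi };
static struct toy_apic drv1 = { generic_eoi };
static struct toy_apic *drivers[] = { &drv0, &drv1 };

/* Patch every driver in the table, as apic_set_eoi_write() does. */
static void set_eoi_write(eoi_write_fn fn)
{
	for (unsigned int i = 0; i < sizeof(drivers) / sizeof(drivers[0]); i++)
		drivers[i]->eoi_write = fn;
}
```

Patching all drivers, not just the current one, is the point: the final APIC driver may not be chosen yet when the override runs during early boot.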

static void __init apic_bsp_up_setup(void)
{
#ifdef CONFIG_X86_64
	apic_write(APIC_ID, SET_APIC_ID(boot_cpu_physical_apicid));
#else
	/*
	 * Hack: In case of kdump, after a crash, the kernel might be booting
	 * on a cpu with a non-zero lapic id. But boot_cpu_physical_apicid
	 * might be zero if read from MP tables. Get it from the LAPIC.
	 */
# ifdef CONFIG_CRASH_DUMP
	boot_cpu_physical_apicid = read_apic_id();
# endif
#endif
	physid_set_mask_of_physid(boot_cpu_physical_apicid, &phys_cpu_present_map);
}

/**
 * apic_bsp_setup - Setup function for local apic and io-apic
 * @upmode:	Force UP mode (for APIC_init_uniprocessor)
 *
 * Returns:
 * apic_id of BSP APIC
 */
int __init apic_bsp_setup(bool upmode)
{
	int id;

	connect_bsp_APIC();
	if (upmode)
		apic_bsp_up_setup();
	setup_local_APIC();

	if (x2apic_mode)
		id = apic_read(APIC_LDR);
	else
		id = GET_APIC_LOGICAL_ID(apic_read(APIC_LDR));

	enable_IO_APIC();
	end_local_APIC_setup();
	irq_remap_enable_fault_handling();
	setup_IO_APIC();
	/* Setup local timer */
	x86_init.timers.setup_percpu_clockev();
	return id;
}

/*
 * This initializes the IO-APIC and APIC hardware if this is
 * a UP kernel.
 */
int __init APIC_init_uniprocessor(void)
{
	if (disable_apic) {
		pr_info("Apic disabled\n");
		return -1;
	}
#ifdef CONFIG_X86_64
	if (!boot_cpu_has(X86_FEATURE_APIC)) {
		disable_apic = 1;
		pr_info("Apic disabled by BIOS\n");
		return -1;
	}
#else
	if (!smp_found_config && !boot_cpu_has(X86_FEATURE_APIC))
		return -1;

	/*
	 * Complain if the BIOS pretends there is one.
	 */
	if (!boot_cpu_has(X86_FEATURE_APIC) &&
	    APIC_INTEGRATED(boot_cpu_apic_version)) {
		pr_err("BIOS bug, local APIC 0x%x not detected!...\n",
		       boot_cpu_physical_apicid);
		return -1;
	}
#endif

	if (!smp_found_config)
		disable_ioapic_support();

	default_setup_apic_routing();
	apic_bsp_setup(true);
	return 0;
}

#ifdef CONFIG_UP_LATE_INIT
void __init up_late_init(void)
{
	APIC_init_uniprocessor();
}
#endif

/*
 * Power management
 */
#ifdef CONFIG_PM

static struct {
	/*
	 * 'active' is true if the local APIC was enabled by us and
	 * not the BIOS; this signifies that we are also responsible
	 * for disabling it before entering apm/acpi suspend
	 */
	int active;
	/* r/w apic fields */
	unsigned int apic_id;
	unsigned int apic_taskpri;
	unsigned int apic_ldr;
	unsigned int apic_dfr;
	unsigned int apic_spiv;
	unsigned int apic_lvtt;
	unsigned int apic_lvtpc;
	unsigned int apic_lvt0;
	unsigned int apic_lvt1;
	unsigned int apic_lvterr;
	unsigned int apic_tmict;
	unsigned int apic_tdcr;
	unsigned int apic_thmr;
	unsigned int apic_cmci;
} apic_pm_state;

static int lapic_suspend(void)
{
	unsigned long flags;
	int maxlvt;

	if (!apic_pm_state.active)
		return 0;

	maxlvt = lapic_get_maxlvt();

	apic_pm_state.apic_id = apic_read(APIC_ID);
	apic_pm_state.apic_taskpri = apic_read(APIC_TASKPRI);
	apic_pm_state.apic_ldr = apic_read(APIC_LDR);
	apic_pm_state.apic_dfr = apic_read(APIC_DFR);
	apic_pm_state.apic_spiv = apic_read(APIC_SPIV);
	apic_pm_state.apic_lvtt = apic_read(APIC_LVTT);
	if (maxlvt >= 4)
		apic_pm_state.apic_lvtpc = apic_read(APIC_LVTPC);
	apic_pm_state.apic_lvt0 = apic_read(APIC_LVT0);
	apic_pm_state.apic_lvt1 = apic_read(APIC_LVT1);
	apic_pm_state.apic_lvterr = apic_read(APIC_LVTERR);
	apic_pm_state.apic_tmict = apic_read(APIC_TMICT);
	apic_pm_state.apic_tdcr = apic_read(APIC_TDCR);
x86, mce: use 64bit machine check code on 32bit
The 64bit machine check code is in many ways much better than
the 32bit machine check code: it is more specification compliant,
is cleaner, only has a single code base versus one per CPU,
has better infrastructure for recovery, has a cleaner way to communicate
with user space etc. etc.
Use the 64bit code for 32bit too.
This is the second attempt to do this. There was one a couple of years
ago to unify this code for 32bit and 64bit. Back then this ran into some
trouble with K7s and was reverted.
I believe this time the K7 problems (and some others) are addressed.
I went over the old handlers and was very careful to retain
all quirks.
But of course this needs a lot of testing on old systems. On newer
64bit capable systems I don't expect much problems because they have been
already tested with the 64bit kernel.
I made this a CONFIG option for now that still allows selecting the old
machine check code. This is mostly to make testing easier;
if someone runs into a problem we can ask them to try
with the CONFIG switched.
The new code is default y for more coverage.
Once there is confidence the 64bit code works well on older hardware
too the CONFIG_X86_OLD_MCE and the associated code can be easily
removed.
This causes a behaviour change for 32bit installations. They now
have to install the mcelog package to be able to log
corrected machine checks.
The 64bit machine check code only handles CPUs which support the
standard Intel machine check architecture described in the IA32 SDM.
The 32bit code has special support for some older CPUs which
have non standard machine check architectures, in particular
WinChip C3 and Intel P5. I made those a separate CONFIG option
and kept them for now. The WinChip variant could be probably
removed without too much pain, it doesn't really do anything
interesting. P5 is also disabled by default (like it
was before) because many motherboards have it miswired, but
according to Alan Cox a few embedded setups use that one.
Forward ported/heavily changed version of old patch, original patch
included review/fixes from Thomas Gleixner, Bert Wesarg.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-04-29 01:07:31 +08:00

#ifdef CONFIG_X86_THERMAL_VECTOR
	if (maxlvt >= 5)
		apic_pm_state.apic_thmr = apic_read(APIC_LVTTHMR);
#endif
#ifdef CONFIG_X86_MCE_INTEL
	if (maxlvt >= 6)
		apic_pm_state.apic_cmci = apic_read(APIC_LVTCMCI);
#endif

	local_irq_save(flags);
	disable_local_APIC();

	irq_remapping_disable();

	local_irq_restore(flags);
	return 0;
}
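The suspend/resume pair follows a strict symmetry: every writable LAPIC register captured into `apic_pm_state` on suspend is written back on resume. A toy userspace model of that invariant (register names and the `toy_` helpers are invented; real offsets and ordering constraints are omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: lapic_regs[] stands in for the hardware registers and
 * pm_state[] for apic_pm_state. Suspend snapshots, resume restores. */
enum { REG_TASKPRI, REG_LDR, REG_SPIV, NR_REGS };

static uint32_t lapic_regs[NR_REGS];
static uint32_t pm_state[NR_REGS];

static void toy_lapic_suspend(void)
{
	for (int i = 0; i < NR_REGS; i++)
		pm_state[i] = lapic_regs[i];
}

static void toy_lapic_resume(void)
{
	for (int i = 0; i < NR_REGS; i++)
		lapic_regs[i] = pm_state[i];
}
```

The real code additionally gates both paths on `apic_pm_state.active` and does the restore with interrupts disabled, details the toy model leaves out.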

static void lapic_resume(void)
{
	unsigned int l, h;
	unsigned long flags;
	int maxlvt;

	if (!apic_pm_state.active)
		return;

	local_irq_save(flags);

	/*
	 * IO-APIC and PIC have their own resume routines.
	 * We just mask them here to make sure the interrupt
	 * subsystem is completely quiet while we enable x2apic
	 * and interrupt-remapping.
	 */
	mask_ioapic_entries();
	legacy_pic->mask_all();

	if (x2apic_mode) {
		__x2apic_enable();
	} else {
		/*
		 * Make sure the APICBASE points to the right address
		 *
		 * FIXME! This will be wrong if we ever support suspend on
		 * SMP! We'll need to do this as part of the CPU restore!
		 */
		if (boot_cpu_data.x86 >= 6) {
			rdmsr(MSR_IA32_APICBASE, l, h);
			l &= ~MSR_IA32_APICBASE_BASE;
			l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr;
			wrmsr(MSR_IA32_APICBASE, l, h);
		}
	}

	maxlvt = lapic_get_maxlvt();
	apic_write(APIC_LVTERR, ERROR_APIC_VECTOR | APIC_LVT_MASKED);
	apic_write(APIC_ID, apic_pm_state.apic_id);
	apic_write(APIC_DFR, apic_pm_state.apic_dfr);
	apic_write(APIC_LDR, apic_pm_state.apic_ldr);
	apic_write(APIC_TASKPRI, apic_pm_state.apic_taskpri);
	apic_write(APIC_SPIV, apic_pm_state.apic_spiv);
	apic_write(APIC_LVT0, apic_pm_state.apic_lvt0);
	apic_write(APIC_LVT1, apic_pm_state.apic_lvt1);
#ifdef CONFIG_X86_THERMAL_VECTOR
	if (maxlvt >= 5)
		apic_write(APIC_LVTTHMR, apic_pm_state.apic_thmr);
|
2015-11-23 18:59:24 +08:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_X86_MCE_INTEL
|
|
|
|
if (maxlvt >= 6)
|
|
|
|
apic_write(APIC_LVTCMCI, apic_pm_state.apic_cmci);
|
2008-01-30 20:30:20 +08:00
|
|
|
#endif
|
|
|
|
if (maxlvt >= 4)
|
|
|
|
apic_write(APIC_LVTPC, apic_pm_state.apic_lvtpc);
|
|
|
|
apic_write(APIC_LVTT, apic_pm_state.apic_lvtt);
|
|
|
|
apic_write(APIC_TDCR, apic_pm_state.apic_tdcr);
|
|
|
|
apic_write(APIC_TMICT, apic_pm_state.apic_tmict);
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_read(APIC_ESR);
|
|
|
|
apic_write(APIC_LVTERR, apic_pm_state.apic_lvterr);
|
|
|
|
apic_write(APIC_ESR, 0);
|
|
|
|
apic_read(APIC_ESR);
|
2008-08-17 03:21:51 +08:00
|
|
|
|
2012-09-26 18:44:33 +08:00
|
|
|
irq_remapping_reenable(x2apic_mode);
|
2011-05-19 07:31:33 +08:00
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
local_irq_restore(flags);
|
|
|
|
}
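The restore path above follows a deliberate discipline: LVTERR is written masked before anything else, the other LVT and timer registers are restored, stale error status is cleared (write then read of ESR), and only then is LVTERR restored for real. A minimal userspace mock can make that ordering checkable; the register names, values, and `apic_write_mock()` below are illustrative stand-ins, not kernel API.

```c
#include <assert.h>

/*
 * Hypothetical mock of the restore sequence in lapic_resume(): record
 * the order of APIC register writes so the masking discipline can be
 * checked. Register names and values are illustrative only.
 */
enum { REG_LVTERR, REG_SPIV, REG_LVT0, REG_LVT1, REG_ESR };

static int write_order[16];
static int nwrites;

static void apic_write_mock(int reg, unsigned int val)
{
	(void)val;
	write_order[nwrites++] = reg;
}

/*
 * Condensed restore: mask the error LVT first so no error interrupt
 * fires while the other registers are half-restored, clear stale
 * error status, and only unmask LVTERR at the very end.
 */
static void restore_sketch(void)
{
	apic_write_mock(REG_LVTERR, 0x10000);	/* APIC_LVT_MASKED */
	apic_write_mock(REG_SPIV, 0x1ff);
	apic_write_mock(REG_LVT0, 0x700);
	apic_write_mock(REG_LVT1, 0x400);
	apic_write_mock(REG_ESR, 0);		/* discard stale errors */
	apic_write_mock(REG_LVTERR, 0xfe);	/* restore for real, last */
}
```

The key invariant is that the first and the last write both target LVTERR: masked going in, unmasked coming out.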
|
2007-10-13 05:04:07 +08:00
|
|
|
|
2008-08-17 03:21:53 +08:00
|
|
|
/*
|
|
|
|
* This device has no shutdown method - fully functioning local APICs
|
|
|
|
* are needed on every CPU up until machine_halt/restart/poweroff.
|
|
|
|
*/
|
|
|
|
|
2011-03-24 05:15:54 +08:00
|
|
|
static struct syscore_ops lapic_syscore_ops = {
|
2008-01-30 20:30:20 +08:00
|
|
|
.resume = lapic_resume,
|
|
|
|
.suspend = lapic_suspend,
|
|
|
|
};
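The syscore layer that `lapic_syscore_ops` plugs into runs suspend callbacks in reverse registration order (last registered suspends first) and resume callbacks in registration order, with interrupts disabled, after all ordinary device callbacks. A rough userspace sketch of that mechanism follows; `struct sys_ops`, `register_ops()`, and the mock handlers are hypothetical stand-ins for the kernel's `struct syscore_ops` machinery.

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical miniature of register_syscore_ops(): entries are
 * prepended, suspend walks the list head-first (reverse registration
 * order), and resume walks it in registration order.
 */
struct sys_ops {
	int (*suspend)(void);
	void (*resume)(void);
	struct sys_ops *next;
};

static struct sys_ops *ops_list;
static char trace[32];

static void register_ops(struct sys_ops *ops)
{
	ops->next = ops_list;
	ops_list = ops;
}

static int do_suspend(void)
{
	for (struct sys_ops *o = ops_list; o; o = o->next)
		if (o->suspend && o->suspend())
			return -1;	/* abort on first failure */
	return 0;
}

static void resume_from(struct sys_ops *o)
{
	if (!o)
		return;
	resume_from(o->next);	/* oldest registration resumes first */
	if (o->resume)
		o->resume();
}

static int lapic_suspend_mock(void) { strcat(trace, "Ls "); return 0; }
static void lapic_resume_mock(void) { strcat(trace, "Lr "); }
static int timer_suspend_mock(void) { strcat(trace, "Ts "); return 0; }
static void timer_resume_mock(void) { strcat(trace, "Tr "); }

static struct sys_ops lapic_ops = { lapic_suspend_mock, lapic_resume_mock, 0 };
static struct sys_ops timer_ops = { timer_suspend_mock, timer_resume_mock, 0 };
```

Registering `lapic_ops` before `timer_ops` means the timer suspends first and resumes last, bracketing the APIC state save/restore.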
|
2007-10-13 05:04:07 +08:00
|
|
|
|
x86: delete __cpuinit usage from all x86 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/x86 uses of the __cpuinit macros from
all C files. x86 only had the one __CPUINIT used in assembly files,
and it wasn't paired off with a .previous or a __FINIT, so we can
delete it directly w/o any corresponding additional change there.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-19 06:23:59 +08:00
|
|
|
static void apic_pm_activate(void)
|
2008-01-30 20:30:20 +08:00
|
|
|
{
|
|
|
|
apic_pm_state.active = 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-01-30 20:30:20 +08:00
|
|
|
static int __init init_lapic_sysfs(void)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-01-30 20:30:20 +08:00
|
|
|
/* XXX: remove suspend/resume procs if !apic_pm_state.active? */
|
2016-04-05 04:25:00 +08:00
|
|
|
if (boot_cpu_has(X86_FEATURE_APIC))
|
2011-03-24 05:15:54 +08:00
|
|
|
register_syscore_ops(&lapic_syscore_ops);
|
2008-01-30 20:32:35 +08:00
|
|
|
|
2011-03-24 05:15:54 +08:00
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2009-03-28 05:22:44 +08:00
|
|
|
|
|
|
|
/* local apic needs to resume before other devices access its registers. */
|
|
|
|
core_initcall(init_lapic_sysfs);
|
2008-01-30 20:30:20 +08:00
|
|
|
|
|
|
|
#else /* CONFIG_PM */
|
|
|
|
|
|
|
|
static void apic_pm_activate(void) { }
|
|
|
|
|
|
|
|
#endif /* CONFIG_PM */
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-08-24 17:01:49 +08:00
|
|
|
#ifdef CONFIG_X86_64
|
2009-04-27 14:39:38 +08:00
|
|
|
|
2013-06-19 06:23:59 +08:00
|
|
|
static int multi_checked;
|
|
|
|
static int multi;
|
2009-04-27 14:39:38 +08:00
|
|
|
|
2013-06-19 06:23:59 +08:00
|
|
|
static int set_multi(const struct dmi_system_id *d)
|
2009-04-27 14:39:38 +08:00
|
|
|
{
|
|
|
|
if (multi)
|
|
|
|
return 0;
|
2009-05-02 03:54:25 +08:00
|
|
|
pr_info("APIC: %s detected, Multi Chassis\n", d->ident);
|
2009-04-27 14:39:38 +08:00
|
|
|
multi = 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-06-19 06:23:59 +08:00
|
|
|
static const struct dmi_system_id multi_dmi_table[] = {
|
2009-04-27 14:39:38 +08:00
|
|
|
{
|
|
|
|
.callback = set_multi,
|
|
|
|
.ident = "IBM System Summit2",
|
|
|
|
.matches = {
|
|
|
|
DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
|
|
|
|
DMI_MATCH(DMI_PRODUCT_NAME, "Summit2"),
|
|
|
|
},
|
|
|
|
},
|
|
|
|
{}
|
|
|
|
};
|
|
|
|
|
2013-06-19 06:23:59 +08:00
|
|
|
static void dmi_check_multi(void)
|
2009-04-27 14:39:38 +08:00
|
|
|
{
|
|
|
|
if (multi_checked)
|
|
|
|
return;
|
|
|
|
|
|
|
|
dmi_check_system(multi_dmi_table);
|
|
|
|
multi_checked = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* apic_is_clustered_box() -- Check if we can expect good TSC
|
|
|
|
*
|
|
|
|
* Thus far, the major user of this is IBM's Summit2 series:
|
|
|
|
* clustered boxes may have unsynced TSC problems if they are
|
|
|
|
* multi-chassis.
|
|
|
|
* Use DMI to check for them.
|
|
|
|
*/
|
2013-06-19 06:23:59 +08:00
|
|
|
int apic_is_clustered_box(void)
|
2009-04-27 14:39:38 +08:00
|
|
|
{
|
|
|
|
dmi_check_multi();
|
2014-06-29 18:01:08 +08:00
|
|
|
return multi;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2008-08-24 17:01:49 +08:00
|
|
|
#endif
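The `multi_dmi_table` walk above is the standard `dmi_system_id` pattern: every `DMI_MATCH` string in an entry must match the firmware-reported identity before the entry's callback fires. A userspace miniature of that walk, with hypothetical names (`struct dmi_id`, `dmi_check_sketch()`) and only two match fields instead of the kernel's four:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical miniature of the dmi_system_id table walk used by
 * dmi_check_multi(): all match strings in an entry must match the
 * reported identity before the callback runs.
 */
struct dmi_id {
	int (*callback)(const struct dmi_id *d);
	const char *ident;
	const char *vendor;
	const char *product;
};

static int multi_sketch;

static int set_multi_sketch(const struct dmi_id *d)
{
	(void)d;
	multi_sketch = 1;	/* latch, exactly like set_multi() */
	return 0;
}

static const struct dmi_id table[] = {
	{ set_multi_sketch, "IBM System Summit2", "IBM", "Summit2" },
	{ 0, 0, 0, 0 }		/* empty entry terminates the table */
};

/* Returns the number of entries whose callback fired. */
static int dmi_check_sketch(const char *vendor, const char *product)
{
	int hits = 0;

	for (const struct dmi_id *d = table; d->callback; d++)
		if (!strcmp(d->vendor, vendor) &&
		    !strcmp(d->product, product)) {
			d->callback(d);
			hits++;
		}
	return hits;
}
```

A non-matching box leaves `multi_sketch` at 0, so `apic_is_clustered_box()`-style callers default to trusting the TSC.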
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
2008-01-30 20:30:20 +08:00
|
|
|
* APIC command line parameters
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2008-08-19 00:46:01 +08:00
|
|
|
static int __init setup_disableapic(char *arg)
|
2007-07-21 23:10:17 +08:00
|
|
|
{
|
2005-04-17 06:20:36 +08:00
|
|
|
disable_apic = 1;
|
2008-07-21 16:38:14 +08:00
|
|
|
setup_clear_cpu_cap(X86_FEATURE_APIC);
|
2006-09-26 16:52:32 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("disableapic", setup_disableapic);
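`early_param()` works by dropping a (name, handler) pair into a dedicated linker section that early command-line parsing walks before normal `__setup` handlers run. A sketch of that dispatch, with the section replaced by a plain array; the `_sketch` names are hypothetical, and "nolapic" aliases the same handler just as it does above.

```c
#include <assert.h>
#include <string.h>

static int apic_disabled_sketch;

static int setup_disableapic_sketch(char *arg)
{
	(void)arg;	/* "disableapic" takes no argument */
	apic_disabled_sketch = 1;
	return 0;
}

/*
 * Hypothetical stand-in for the early_param() section: an array of
 * (name, handler) pairs scanned against each command-line token.
 */
struct early_param_sketch {
	const char *name;
	int (*fn)(char *arg);
};

static const struct early_param_sketch params[] = {
	{ "disableapic", setup_disableapic_sketch },
	{ "nolapic", setup_disableapic_sketch },	/* compat alias */
};

static int parse_one_sketch(const char *name, char *arg)
{
	for (unsigned int i = 0; i < sizeof(params) / sizeof(params[0]); i++)
		if (strcmp(params[i].name, name) == 0)
			return params[i].fn(arg);
	return -1;	/* unknown parameter */
}
```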
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-09-26 16:52:32 +08:00
|
|
|
/* same as disableapic, for compatibility */
|
2008-08-19 00:46:01 +08:00
|
|
|
static int __init setup_nolapic(char *arg)
|
2007-07-21 23:10:17 +08:00
|
|
|
{
|
2008-08-19 00:46:01 +08:00
|
|
|
return setup_disableapic(arg);
|
2007-07-21 23:10:17 +08:00
|
|
|
}
|
2006-09-26 16:52:32 +08:00
|
|
|
early_param("nolapic", setup_nolapic);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2007-03-24 02:32:31 +08:00
|
|
|
static int __init parse_lapic_timer_c2_ok(char *arg)
|
|
|
|
{
|
|
|
|
local_apic_timer_c2_ok = 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("lapic_timer_c2_ok", parse_lapic_timer_c2_ok);
|
|
|
|
|
2008-08-15 19:51:20 +08:00
|
|
|
static int __init parse_disable_apic_timer(char *arg)
|
2007-07-21 23:10:17 +08:00
|
|
|
{
|
2005-04-17 06:20:36 +08:00
|
|
|
disable_apic_timer = 1;
|
2008-08-15 19:51:20 +08:00
|
|
|
return 0;
|
2007-07-21 23:10:17 +08:00
|
|
|
}
|
2008-08-15 19:51:20 +08:00
|
|
|
early_param("noapictimer", parse_disable_apic_timer);
|
|
|
|
|
|
|
|
static int __init parse_nolapic_timer(char *arg)
|
|
|
|
{
|
|
|
|
disable_apic_timer = 1;
|
|
|
|
return 0;
|
2007-07-21 23:10:17 +08:00
|
|
|
}
|
2008-08-15 19:51:20 +08:00
|
|
|
early_param("nolapic_timer", parse_nolapic_timer);
|
2006-02-04 04:50:50 +08:00
|
|
|
|
2008-08-19 00:46:00 +08:00
|
|
|
static int __init apic_set_verbosity(char *arg)
|
|
|
|
{
|
|
|
|
if (!arg) {
|
|
|
|
#ifdef CONFIG_X86_64
|
|
|
|
skip_ioapic_setup = 0;
|
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (strcmp("debug", arg) == 0)
|
|
|
|
apic_verbosity = APIC_DEBUG;
|
|
|
|
else if (strcmp("verbose", arg) == 0)
|
|
|
|
apic_verbosity = APIC_VERBOSE;
|
|
|
|
else {
|
2008-11-10 16:16:41 +08:00
|
|
|
pr_warning("APIC Verbosity level %s not recognised"
|
2008-08-19 00:46:00 +08:00
|
|
|
", use apic=verbose or apic=debug\n", arg);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("apic", apic_set_verbosity);
|
|
|
|
|
2008-02-23 05:37:26 +08:00
|
|
|
static int __init lapic_insert_resource(void)
|
|
|
|
{
|
|
|
|
if (!apic_phys)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
/* Put local APIC into the resource map. */
|
|
|
|
lapic_resource.start = apic_phys;
|
|
|
|
lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
|
|
|
|
insert_resource(&iomem_resource, &lapic_resource);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* need to call insert_resource() after e820_reserve_resources(),
|
|
|
|
* which claims its ranges with request_resource()
|
|
|
|
*/
|
|
|
|
late_initcall(lapic_insert_resource);
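One detail worth calling out in `lapic_insert_resource()`: `struct resource` uses an inclusive end, so a one-page window ends at `start + PAGE_SIZE - 1`, not `start + PAGE_SIZE`. A tiny sketch of just that computation; the `_sketch` names are hypothetical and 0xfee00000 is merely the architectural default base used as example input.

```c
#include <assert.h>

#define PAGE_SIZE_SKETCH 4096UL

struct resource_sketch {
	unsigned long start;
	unsigned long end;	/* inclusive, per struct resource */
};

/* Mirror of the range math in lapic_insert_resource(). */
static void fill_lapic_resource(struct resource_sketch *r,
				unsigned long apic_phys)
{
	r->start = apic_phys;
	r->end = r->start + PAGE_SIZE_SKETCH - 1;
}
```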
|
x86, apic, kexec: Add disable_cpu_apicid kernel parameter
Add disable_cpu_apicid kernel parameter. To use this kernel parameter,
specify an initial APIC ID of the corresponding CPU you want to
disable.
This is mostly used for the kdump 2nd kernel to disable BSP to wake up
multiple CPUs without causing system reset or hang due to sending INIT
from AP to BSP.
Kdump users first figure out initial APIC ID of the BSP, CPU0 in the
1st kernel, for example from /proc/cpuinfo and then set up this kernel
parameter for the 2nd kernel using the obtained APIC ID.
However, doing this procedure at each boot time manually is awkward,
which should be automatically done by user-land service scripts, for
example, kexec-tools on fedora/RHEL distributions.
This design is more flexible than disabling BSP in kernel boot time
automatically in that in kernel boot time we have no choice but
referring to ACPI/MP table to obtain initial APIC ID for BSP, meaning
that the method is not applicable to the systems without such BIOS
tables.
One assumption behind this design is that users get initial APIC ID of
the BSP in still healthy state and so BSP is uniquely kept in
CPU0. Thus, through the kernel parameter, only one initial APIC ID can
be specified.
In a comparison with disabled_cpu_apicid, we use read_apic_id(), not
boot_cpu_physical_apicid, because on some platforms, the variable is
modified to the apicid reported as BSP through MP table and this
function is executed with the temporarily modified
boot_cpu_physical_apicid. As a result, disabled_cpu_apicid kernel
parameter doesn't work well for apicids of APs.
Fixing the wrong handling of boot_cpu_physical_apicid requires some
reviews and tests beyond some platforms and it could take some
time. The fix here is a kind of workaround to focus on the main topic
of this patch.
Signed-off-by: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Link: http://lkml.kernel.org/r/20140115064458.1545.38775.stgit@localhost6.localdomain6
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2014-01-15 14:44:58 +08:00
|
|
|
|
|
|
|
static int __init apic_set_disabled_cpu_apicid(char *arg)
|
|
|
|
{
|
|
|
|
if (!arg || !get_option(&arg, &disabled_cpu_apicid))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("disable_cpu_apicid", apic_set_disabled_cpu_apicid);
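The handler above leans on `get_option()`, which parses an integer from the argument and reports whether anything was consumed, so an empty or non-numeric value is rejected as `-EINVAL`. A rough userspace analogue (base 0, so hex input also works); the `_sketch` names are hypothetical, and the kernel helper's comma-separated-list handling is omitted.

```c
#include <assert.h>
#include <stdlib.h>

static int disabled_cpu_apicid_sketch = -1;	/* "none" sentinel */

/*
 * Rough analogue of the kernel's get_option(): parse one integer and
 * report whether any digits were consumed.
 */
static int get_option_sketch(const char *arg, int *val)
{
	char *end;
	long v = strtol(arg, &end, 0);

	if (end == arg)
		return 0;	/* nothing parsed */
	*val = (int)v;
	return 1;
}

static int set_disabled_cpu_apicid_sketch(const char *arg)
{
	if (!arg || !get_option_sketch(arg, &disabled_cpu_apicid_sketch))
		return -1;	/* -EINVAL in the kernel */
	return 0;
}
```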
|
2015-12-14 18:19:12 +08:00
|
|
|
|
|
|
|
static int __init apic_set_extnmi(char *arg)
|
|
|
|
{
|
|
|
|
if (!arg)
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (!strncmp("all", arg, 3))
|
|
|
|
apic_extnmi = APIC_EXTNMI_ALL;
|
|
|
|
else if (!strncmp("none", arg, 4))
|
|
|
|
apic_extnmi = APIC_EXTNMI_NONE;
|
|
|
|
else if (!strncmp("bsp", arg, 3))
|
|
|
|
apic_extnmi = APIC_EXTNMI_BSP;
|
|
|
|
else {
|
|
|
|
pr_warn("Unknown external NMI delivery mode `%s' ignored\n", arg);
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
early_param("apic_extnmi", apic_set_extnmi);
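A subtlety in `apic_set_extnmi()`: `strncmp` with a fixed length is prefix matching, so `apic_extnmi=allfoo` also selects ALL; the code trades that looseness for simplicity. The sketch below reproduces the dispatch shape with hypothetical `_SKETCH` names so the behavior can be exercised directly.

```c
#include <assert.h>
#include <string.h>

enum extnmi_sketch {
	EXTNMI_SKETCH_BAD = -1,
	EXTNMI_SKETCH_BSP,
	EXTNMI_SKETCH_ALL,
	EXTNMI_SKETCH_NONE,
};

/* Same dispatch shape as apic_set_extnmi(); prefix matching intact. */
static enum extnmi_sketch parse_extnmi_sketch(const char *arg)
{
	if (!arg)
		return EXTNMI_SKETCH_BAD;
	if (!strncmp("all", arg, 3))
		return EXTNMI_SKETCH_ALL;
	if (!strncmp("none", arg, 4))
		return EXTNMI_SKETCH_NONE;
	if (!strncmp("bsp", arg, 3))
		return EXTNMI_SKETCH_BSP;
	return EXTNMI_SKETCH_BAD;	/* unknown mode: reject */
}
```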
|