Merge branch 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6

* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (225 commits)
  [PATCH] Don't set calgary iommu as default y
  [PATCH] i386/x86-64: New Intel feature flags
  [PATCH] x86: Add a cumulative thermal throttle event counter.
  [PATCH] i386: Make the jiffies compares use the 64bit safe macros.
  [PATCH] x86: Refactor thermal throttle processing
  [PATCH] Add 64bit jiffies compares (for use with get_jiffies_64)
  [PATCH] Fix unwinder warning in traps.c
  [PATCH] x86: Allow disabling early pci scans with pci=noearly or disallowing conf1
  [PATCH] x86: Move direct PCI scanning functions out of line
  [PATCH] i386/x86-64: Make all early PCI scans dependent on CONFIG_PCI
  [PATCH] Don't leak NT bit into next task
  [PATCH] i386/x86-64: Work around gcc bug with noreturn functions in unwinder
  [PATCH] Fix some broken white space in ia32_signal.c
  [PATCH] Initialize argument registers for 32bit signal handlers.
  [PATCH] Remove all traces of signal number conversion
  [PATCH] Don't synchronize time reading on single core AMD systems
  [PATCH] Remove outdated comment in x86-64 mmconfig code
  [PATCH] Use string instructions for Core2 copy/clear
  [PATCH] x86: - restore i8259A eoi status on resume
  [PATCH] i386: Split multi-line printk in oops output.
  ...
commit b278240839
Author: Linus Torvalds
Date:   2006-09-26 13:07:55 -07:00

250 changed files with 6923 additions and 5320 deletions


@ -358,7 +358,8 @@ Here is a list of some of the different kernel trees available:
quilt trees:
- USB, PCI, Driver Core, and I2C, Greg Kroah-Hartman <gregkh@suse.de>
kernel.org/pub/linux/kernel/people/gregkh/gregkh-2.6/
- x86-64, partly i386, Andi Kleen <ak@suse.de>
ftp.firstfloor.org:/pub/ak/x86_64/quilt/
Bug Reporting
-------------


@ -1124,11 +1124,15 @@ debugging information is displayed on console.
The NMI switch that most IA32 servers have fires an unknown NMI, for example.
If a system hangs up, try pressing the NMI switch.
[NOTE]
This function and oprofile share an NMI callback, so this function
cannot be enabled while oprofile is activated.
The NMI watchdog will also be disabled when the value in this file is
set to non-zero.
nmi_watchdog
------------
Enables/Disables the NMI watchdog on x86 systems. When the value is non-zero
the NMI watchdog is enabled and will continuously test all online cpus to
determine whether or not they are still functioning properly.
Because the NMI watchdog shares registers with oprofile, disabling
the NMI watchdog may leave more registers for oprofile to use.
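These knobs can be flipped from userspace by writing to the files above.
A minimal C sketch (assuming the usual /proc/sys/kernel paths; run it as
root — this is an illustration, not part of the patch):

  #include <stdio.h>

  /* Write a value string to a /proc/sys knob; returns 0 on success. */
  static int write_knob(const char *path, const char *val)
  {
          FILE *f = fopen(path, "w");

          if (!f)
                  return -1;
          fprintf(f, "%s\n", val);
          return fclose(f);
  }

  int main(void)
  {
          /* Setting unknown_nmi_panic itself disables the NMI
             watchdog (see the note above); disabling it explicitly
             first just makes the intent obvious. */
          if (write_knob("/proc/sys/kernel/nmi_watchdog", "0"))
                  perror("nmi_watchdog");
          if (write_knob("/proc/sys/kernel/unknown_nmi_panic", "1"))
                  perror("unknown_nmi_panic");
          return 0;
  }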
2.4 /proc/sys/vm - The virtual memory subsystem


@ -421,6 +421,11 @@ more details, with real examples.
The second argument is optional, and if supplied will be used
if first argument is not supported.
as-instr
as-instr checks if the assembler supports a specific instruction and
then outputs either option1 or option2 accordingly. C escapes are
supported in the test instruction.
cc-option
cc-option is used to check if $(CC) supports a given option and, if
not, to fall back to an optional second option.


@ -1240,7 +1240,11 @@ running once the system is up.
bootloader. This is currently used on
IXP2000 systems where the bus has to be
configured a certain way for adjunct CPUs.
noearly [X86] Don't do any early type 1 scanning.
This might help on some broken boards which
machine check when some devices' config space
is read. But various workarounds are disabled
and some IOMMU drivers will not work.
pcmv= [HW,PCMCIA] BadgePAD 4
pd. [PARIDE]


@ -245,6 +245,13 @@ Debugging
newfallback: use new unwinder but fall back to old if it gets
stuck (default)
call_trace=[old|both|newfallback|new]
old: use old inexact backtracer
new: use new exact dwarf2 unwinder
both: print entries from both
newfallback: use new unwinder but fall back to old if it gets
stuck (default)
Misc
noreplacement Don't replace instructions with more appropriate ones


@ -0,0 +1,99 @@
Most of the text from Keith Owens, hacked by AK
x86_64 page size (PAGE_SIZE) is 4K.
Like all other architectures, x86_64 has a kernel stack for every
active thread. These thread stacks are THREAD_SIZE (2*PAGE_SIZE) big.
These stacks contain useful data as long as a thread is alive or a
zombie. While the thread is in user space the kernel stack is empty
except for the thread_info structure at the bottom.
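Since THREAD_SIZE is a power of two, the kernel can find the thread_info
at the bottom of the current stack just by masking the stack pointer.
A sketch in the spirit of the kernel's stack_thread_info() helper
(simplified, not the verbatim source):

  /* Round the stack pointer down to a THREAD_SIZE boundary; the
     thread_info lives at that address. */
  static inline struct thread_info *stack_thread_info(void)
  {
          unsigned long sp;

          asm("movq %%rsp,%0" : "=r" (sp));
          return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
  }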
In addition to the per thread stacks, there are specialized stacks
associated with each cpu. These stacks are only used while the kernel
is in control on that cpu; when a cpu returns to user space the
specialized stacks contain no useful data. The main cpu stacks are:
* Interrupt stack. IRQSTACKSIZE
Used for external hardware interrupts. If this is the first external
hardware interrupt (i.e. not a nested hardware interrupt) then the
kernel switches from the current task to the interrupt stack. Like
the split thread and interrupt stacks on i386 (with CONFIG_4KSTACKS),
this gives more room for kernel interrupt processing without having
to increase the size of every per thread stack.
The interrupt stack is also used when processing a softirq.
Switching to the kernel interrupt stack is done by software based on a
per CPU interrupt nest counter. This is needed because x86-64 "IST"
hardware stacks cannot nest without races.
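In C terms the nest-counter logic amounts to roughly the following
(illustrative only: the real code is assembly in entry.S using per-CPU
data, and these names are invented for the sketch):

  static int irq_count;                 /* per-CPU in the real kernel */

  static void irq_stack_enter(void)
  {
          if (irq_count++ == 0) {
                  /* first, non-nested interrupt on this cpu: point
                     the stack pointer at the per-CPU interrupt stack */
          }
  }

  static void irq_stack_exit(void)
  {
          if (--irq_count == 0) {
                  /* outermost interrupt done: switch back to the
                     interrupted task's kernel stack */
          }
  }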
x86_64 also has a feature which is not available on i386, the ability
to automatically switch to a new stack for designated events such as
double fault or NMI, which makes it easier to handle these unusual
events on x86_64. This feature is called the Interrupt Stack Table
(IST). There can be up to 7 IST entries per cpu. The IST code is an
index into the Task State Segment (TSS). The IST entries in the TSS
point to dedicated stacks; each stack can be a different size.
An IST is selected by a non-zero value in the IST field of an
interrupt-gate descriptor. When an interrupt occurs and the hardware
loads such a descriptor, the hardware automatically sets the new stack
pointer based on the IST value, then invokes the interrupt handler. If
software wants to allow nested IST interrupts then the handler must
adjust the IST values on entry to and exit from the interrupt handler.
(this is occasionally done, e.g. for debug exceptions)
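Concretely, wiring an exception to an IST stack means storing the stack
top in the chosen TSS slot and putting that slot's number in the gate's
IST field. A sketch along the lines of what x86_64's cpu_init() and
traps.c do (simplified; assumes the kernel's set_intr_gate_ist() helper,
the DEBUG_STACK constant and the debug asm entry point):

  /* Route the debug exception (vector 1) onto its own IST stack.
     The hardware uses the gate's IST value n to select
     tss->ist[n - 1]. */
  static void wire_debug_ist(struct tss_struct *tss,
                             unsigned long stack_top)
  {
          tss->ist[DEBUG_STACK - 1] = stack_top;
          set_intr_gate_ist(1, &debug, DEBUG_STACK);
  }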
Events with different IST codes (i.e. with different stacks) can be
nested. For example, a debug interrupt can safely be interrupted by an
NMI. arch/x86_64/kernel/entry.S::paranoidentry adjusts the stack
pointers on entry to and exit from all IST events, in theory allowing
IST events with the same code to be nested. However in most cases, the
stack size allocated to an IST assumes no nesting for the same code.
If that assumption is ever broken then the stacks will become corrupt.
The currently assigned IST stacks are :-
* STACKFAULT_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
Used for interrupt 12 - Stack Fault Exception (#SS).
This allows the kernel to recover from invalid stack segments. It
rarely happens.
* DOUBLEFAULT_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
Used for interrupt 8 - Double Fault Exception (#DF).
Invoked when handling one exception causes another exception. Happens
when the kernel is very confused (e.g. a corrupt kernel stack pointer).
Using a separate stack allows the kernel to recover well enough in
many cases to still output an oops.
* NMI_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
Used for non-maskable interrupts (NMI).
NMI can be delivered at any time, including when the kernel is in the
middle of switching stacks. Using IST for NMI events avoids making
assumptions about the previous state of the kernel stack.
* DEBUG_STACK. DEBUG_STKSZ
Used for hardware debug interrupts (interrupt 1) and for software
debug interrupts (INT3).
When debugging a kernel, debug interrupts (both hardware and
software) can occur at any time. Using IST for these interrupts
avoids making assumptions about the previous state of the kernel
stack.
* MCE_STACK. EXCEPTION_STKSZ (PAGE_SIZE).
Used for interrupt 18 - Machine Check Exception (#MC).
MCE can be delivered at any time, including when the kernel is in the
middle of switching stacks. Using IST for MCE events avoids making
assumptions about the previous state of the kernel stack.
For more details see the Intel IA32 or AMD AMD64 architecture manuals.


@ -166,7 +166,6 @@ config X86_VISWS
config X86_GENERICARCH
bool "Generic architecture (Summit, bigsmp, ES7000, default)"
depends on SMP
help
This option compiles in the Summit, bigsmp, ES7000, default subarchitectures.
It is intended for a generic binary kernel.
@ -263,7 +262,7 @@ source "kernel/Kconfig.preempt"
config X86_UP_APIC
bool "Local APIC support on uniprocessors"
depends on !SMP && !(X86_VISWS || X86_VOYAGER)
depends on !SMP && !(X86_VISWS || X86_VOYAGER || X86_GENERICARCH)
help
A local APIC (Advanced Programmable Interrupt Controller) is an
integrated interrupt controller in the CPU. If you have a single-CPU
@ -288,12 +287,12 @@ config X86_UP_IOAPIC
config X86_LOCAL_APIC
bool
depends on X86_UP_APIC || ((X86_VISWS || SMP) && !X86_VOYAGER)
depends on X86_UP_APIC || ((X86_VISWS || SMP) && !X86_VOYAGER) || X86_GENERICARCH
default y
config X86_IO_APIC
bool
depends on X86_UP_IOAPIC || (SMP && !(X86_VISWS || X86_VOYAGER))
depends on X86_UP_IOAPIC || (SMP && !(X86_VISWS || X86_VOYAGER)) || X86_GENERICARCH
default y
config X86_VISWS_APIC
@ -741,8 +740,7 @@ config SECCOMP
source kernel/Kconfig.hz
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
depends on EXPERIMENTAL
bool "kexec system call"
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
@ -763,6 +761,13 @@ config CRASH_DUMP
depends on HIGHMEM
help
Generate crash dump after being started by kexec.
This should normally be set only in special crash dump kernels,
which are loaded by the main kernel with kexec-tools into
a specially reserved region and then executed after
a crash by kdump/kexec. The crash dump kernel must be compiled
to run at a memory address not used by the main kernel or BIOS,
set via PHYSICAL_START.
For more details see Documentation/kdump/kdump.txt
config PHYSICAL_START
hex "Physical address where the kernel is loaded" if (EMBEDDED || CRASH_DUMP)


@ -46,6 +46,14 @@ cflags-y += -ffreestanding
# a lot more stack due to the lack of sharing of stacklots:
CFLAGS += $(shell if [ $(call cc-version) -lt 0400 ] ; then echo $(call cc-option,-fno-unit-at-a-time); fi ;)
# does binutils support CFI?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
# is .cfi_signal_frame supported too?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
CFLAGS += $(cflags-y)
# Default subarch .c files
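The define produced by the .cfi_signal_frame probe is consumed at the
preprocessor level in the dwarf2 macro header, roughly like this (a
sketch of the pattern; CFI_SIGNAL_FRAME is the macro used by the
entry.S hunks further down):

  /* Expand CFI_SIGNAL_FRAME to the real directive only when the
     Makefile probe found assembler support, so users need no ifdefs. */
  #ifdef CONFIG_AS_CFI_SIGNAL_FRAME
  #define CFI_SIGNAL_FRAME .cfi_signal_frame
  #else
  #define CFI_SIGNAL_FRAME
  #endif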


@ -15,42 +15,95 @@
#include <asm/setup.h>
#if defined(CONFIG_EDD) || defined(CONFIG_EDD_MODULE)
# It is assumed that %ds == INITSEG here
movb $0, (EDD_MBR_SIG_NR_BUF)
movb $0, (EDDNR)
# Check the command line for two options:
# Check the command line for options:
# edd=of disables EDD completely (edd=off)
# edd=sk skips the MBR test (edd=skipmbr)
# edd=on re-enables EDD (edd=on)
pushl %esi
cmpl $0, %cs:cmd_line_ptr
jz done_cl
movw $edd_mbr_sig_start, %di # Default to edd=on
movl %cs:(cmd_line_ptr), %esi
# ds:esi has the pointer to the command line now
movl $(COMMAND_LINE_SIZE-7), %ecx
# loop through kernel command line one byte at a time
cl_loop:
cmpl $EDD_CL_EQUALS, (%si)
andl %esi, %esi
jz old_cl # Old boot protocol?
# Convert to a real-mode pointer in fs:si
movl %esi, %eax
shrl $4, %eax
movw %ax, %fs
andw $0xf, %si
jmp have_cl_pointer
# Old-style boot protocol?
old_cl:
push %ds # aka INITSEG
pop %fs
cmpw $0xa33f, (0x20)
jne done_cl # No command line at all?
movw (0x22), %si # Pointer relative to INITSEG
# fs:si has the pointer to the command line now
have_cl_pointer:
# Loop through the kernel command line one byte at a time.  Just in
# case the loader is buggy and failed to null-terminate the command
# line, terminate if we get close enough to the end of the segment
# that we cannot fit "edd=XX"...
cl_atspace:
cmpw $-5, %si # Watch for segment wraparound
jae done_cl
movl %fs:(%si), %eax
andb %al, %al # End of line?
jz done_cl
cmpl $EDD_CL_EQUALS, %eax
jz found_edd_equals
incl %esi
loop cl_loop
jmp done_cl
cmpb $0x20, %al # <= space consider whitespace
ja cl_skipword
incw %si
jmp cl_atspace
cl_skipword:
cmpw $-5, %si # Watch for segment wraparound
jae done_cl
movb %fs:(%si), %al # End of string?
andb %al, %al
jz done_cl
cmpb $0x20, %al
jbe cl_atspace
incw %si
jmp cl_skipword
found_edd_equals:
# only looking at first two characters after equals
addl $4, %esi
cmpw $EDD_CL_OFF, (%si) # edd=of
jz do_edd_off
cmpw $EDD_CL_SKIP, (%si) # edd=sk
jz do_edd_skipmbr
jmp done_cl
# late overrides early on the command line, so keep going after finding something
movw %fs:4(%si), %ax
cmpw $EDD_CL_OFF, %ax # edd=of
je do_edd_off
cmpw $EDD_CL_SKIP, %ax # edd=sk
je do_edd_skipmbr
cmpw $EDD_CL_ON, %ax # edd=on
je do_edd_on
jmp cl_skipword
do_edd_skipmbr:
popl %esi
jmp edd_start
movw $edd_start, %di
jmp cl_skipword
do_edd_off:
popl %esi
jmp edd_done
movw $edd_done, %di
jmp cl_skipword
do_edd_on:
movw $edd_mbr_sig_start, %di
jmp cl_skipword
done_cl:
popl %esi
jmpw *%di
# Read the first sector of each BIOS disk device and store the 4-byte signature
edd_mbr_sig_start:


@ -494,12 +494,12 @@ no_voyager:
movw %cs, %ax # aka SETUPSEG
subw $DELTA_INITSEG, %ax # aka INITSEG
movw %ax, %ds
movw $0, (0x1ff) # default is no pointing device
movb $0, (0x1ff) # default is no pointing device
int $0x11 # int 0x11: equipment list
testb $0x04, %al # check if mouse installed
jz no_psmouse
movw $0xAA, (0x1ff) # device present
movb $0xAA, (0x1ff) # device present
no_psmouse:
#if defined(CONFIG_X86_SPEEDSTEP_SMI) || defined(CONFIG_X86_SPEEDSTEP_SMI_MODULE)

File diff suppressed because it is too large


@ -4,7 +4,7 @@
extra-y := head.o init_task.o vmlinux.lds
obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
obj-y := process.o signal.o entry.o traps.o irq.o \
ptrace.o time.o ioport.o ldt.o setup.o i8259.o sys_i386.o \
pci-dma.o i386_ksyms.o i387.o bootflag.o \
quirks.o i8237.o topology.o alternative.o i8253.o tsc.o
@ -81,4 +81,5 @@ $(obj)/vsyscall-syms.o: $(src)/vsyscall.lds \
$(call if_changed,syscall)
k8-y += ../../x86_64/kernel/k8.o
stacktrace-y += ../../x86_64/kernel/stacktrace.o


@ -1,5 +1,7 @@
obj-$(CONFIG_ACPI) += boot.o
ifneq ($(CONFIG_PCI),)
obj-$(CONFIG_X86_IO_APIC) += earlyquirk.o
endif
obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup.o
ifneq ($(CONFIG_ACPI_PROCESSOR),)


@ -26,9 +26,12 @@
#include <linux/init.h>
#include <linux/acpi.h>
#include <linux/efi.h>
#include <linux/cpumask.h>
#include <linux/module.h>
#include <linux/dmi.h>
#include <linux/irq.h>
#include <linux/bootmem.h>
#include <linux/ioport.h>
#include <asm/pgtable.h>
#include <asm/io_apic.h>
@ -36,11 +39,17 @@
#include <asm/io.h>
#include <asm/mpspec.h>
static int __initdata acpi_force = 0;
#ifdef CONFIG_ACPI
int acpi_disabled = 0;
#else
int acpi_disabled = 1;
#endif
EXPORT_SYMBOL(acpi_disabled);
#ifdef CONFIG_X86_64
extern void __init clustered_apic_check(void);
extern int gsi_irq_sharing(int gsi);
#include <asm/proto.h>
static inline int acpi_madt_oem_check(char *oem_id, char *oem_table_id) { return 0; }
@ -506,16 +515,76 @@ EXPORT_SYMBOL(acpi_register_gsi);
#ifdef CONFIG_ACPI_HOTPLUG_CPU
int acpi_map_lsapic(acpi_handle handle, int *pcpu)
{
/* TBD */
return -EINVAL;
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
struct acpi_table_lapic *lapic;
cpumask_t tmp_map, new_map;
u8 physid;
int cpu;
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
return -EINVAL;
if (!buffer.length || !buffer.pointer)
return -EINVAL;
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_BUFFER ||
obj->buffer.length < sizeof(*lapic)) {
kfree(buffer.pointer);
return -EINVAL;
}
lapic = (struct acpi_table_lapic *)obj->buffer.pointer;
if ((lapic->header.type != ACPI_MADT_LAPIC) ||
(!lapic->flags.enabled)) {
kfree(buffer.pointer);
return -EINVAL;
}
physid = lapic->id;
kfree(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
tmp_map = cpu_present_map;
mp_register_lapic(physid, lapic->flags.enabled);
/*
* If mp_register_lapic successfully generates a new logical cpu
* number, then the following will get us exactly what was mapped
*/
cpus_andnot(new_map, cpu_present_map, tmp_map);
if (cpus_empty(new_map)) {
printk ("Unable to map lapic to logical cpu number\n");
return -EINVAL;
}
cpu = first_cpu(new_map);
*pcpu = cpu;
return 0;
}
EXPORT_SYMBOL(acpi_map_lsapic);
int acpi_unmap_lsapic(int cpu)
{
/* TBD */
return -EINVAL;
int i;
for_each_possible_cpu(i) {
if (x86_acpiid_to_apicid[i] == x86_cpu_to_apicid[cpu]) {
x86_acpiid_to_apicid[i] = -1;
break;
}
}
x86_cpu_to_apicid[cpu] = -1;
cpu_clear(cpu, cpu_present_map);
num_processors--;
return (0);
}
EXPORT_SYMBOL(acpi_unmap_lsapic);
@ -579,6 +648,8 @@ static int __init acpi_parse_sbf(unsigned long phys_addr, unsigned long size)
static int __init acpi_parse_hpet(unsigned long phys, unsigned long size)
{
struct acpi_table_hpet *hpet_tbl;
struct resource *hpet_res;
resource_size_t res_start;
if (!phys || !size)
return -EINVAL;
@ -594,12 +665,26 @@ static int __init acpi_parse_hpet(unsigned long phys, unsigned long size)
"memory.\n");
return -1;
}
#define HPET_RESOURCE_NAME_SIZE 9
hpet_res = alloc_bootmem(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE);
if (hpet_res) {
memset(hpet_res, 0, sizeof(*hpet_res));
hpet_res->name = (void *)&hpet_res[1];
hpet_res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
snprintf((char *)hpet_res->name, HPET_RESOURCE_NAME_SIZE,
"HPET %u", hpet_tbl->number);
hpet_res->end = (1 * 1024) - 1;
}
#ifdef CONFIG_X86_64
vxtime.hpet_address = hpet_tbl->addr.addrl |
((long)hpet_tbl->addr.addrh << 32);
printk(KERN_INFO PREFIX "HPET id: %#x base: %#lx\n",
hpet_tbl->id, vxtime.hpet_address);
res_start = vxtime.hpet_address;
#else /* X86 */
{
extern unsigned long hpet_address;
@ -607,9 +692,17 @@ static int __init acpi_parse_hpet(unsigned long phys, unsigned long size)
hpet_address = hpet_tbl->addr.addrl;
printk(KERN_INFO PREFIX "HPET id: %#x base: %#lx\n",
hpet_tbl->id, hpet_address);
res_start = hpet_address;
}
#endif /* X86 */
if (hpet_res) {
hpet_res->start = res_start;
hpet_res->end += res_start;
insert_resource(&iomem_resource, hpet_res);
}
return 0;
}
#else
@ -860,8 +953,6 @@ static void __init acpi_process_madt(void)
return;
}
extern int acpi_force;
#ifdef __i386__
static int __init disable_acpi_irq(struct dmi_system_id *d)
@ -1163,3 +1254,75 @@ int __init acpi_boot_init(void)
return 0;
}
static int __init parse_acpi(char *arg)
{
if (!arg)
return -EINVAL;
/* "acpi=off" disables both ACPI table parsing and interpreter */
if (strcmp(arg, "off") == 0) {
disable_acpi();
}
/* acpi=force to over-ride black-list */
else if (strcmp(arg, "force") == 0) {
acpi_force = 1;
acpi_ht = 1;
acpi_disabled = 0;
}
/* acpi=strict disables out-of-spec workarounds */
else if (strcmp(arg, "strict") == 0) {
acpi_strict = 1;
}
/* Limit ACPI just to boot-time to enable HT */
else if (strcmp(arg, "ht") == 0) {
if (!acpi_force)
disable_acpi();
acpi_ht = 1;
}
/* "acpi=noirq" disables ACPI interrupt routing */
else if (strcmp(arg, "noirq") == 0) {
acpi_noirq_set();
} else {
/* Core will printk when we return error. */
return -EINVAL;
}
return 0;
}
early_param("acpi", parse_acpi);
/* FIXME: Using pci= for an ACPI parameter is a travesty. */
static int __init parse_pci(char *arg)
{
if (arg && strcmp(arg, "noacpi") == 0)
acpi_disable_pci();
return 0;
}
early_param("pci", parse_pci);
#ifdef CONFIG_X86_IO_APIC
static int __init parse_acpi_skip_timer_override(char *arg)
{
acpi_skip_timer_override = 1;
return 0;
}
early_param("acpi_skip_timer_override", parse_acpi_skip_timer_override);
#endif /* CONFIG_X86_IO_APIC */
static int __init setup_acpi_sci(char *s)
{
if (!s)
return -EINVAL;
if (!strcmp(s, "edge"))
acpi_sci_flags.trigger = 1;
else if (!strcmp(s, "level"))
acpi_sci_flags.trigger = 3;
else if (!strcmp(s, "high"))
acpi_sci_flags.polarity = 1;
else if (!strcmp(s, "low"))
acpi_sci_flags.polarity = 3;
else
return -EINVAL;
return 0;
}
early_param("acpi_sci", setup_acpi_sci);


@ -48,7 +48,11 @@ void __init check_acpi_pci(void)
int num, slot, func;
/* Assume the machine supports type 1. If not it will
always read ffffffff and should not have any side effect. */
always read ffffffff and should not have any side effect.
Actually a few buggy systems can machine check. Allow the user
to disable it by command line option at least -AK */
if (!early_pci_allowed())
return;
/* Poor man's PCI discovery */
for (num = 0; num < 32; num++) {


@ -52,7 +52,18 @@ static cpumask_t timer_bcast_ipi;
/*
* Knob to control our willingness to enable the local APIC.
*/
int enable_local_apic __initdata = 0; /* -1=force-disable, +1=force-enable */
static int enable_local_apic __initdata = 0; /* -1=force-disable, +1=force-enable */
static inline void lapic_disable(void)
{
enable_local_apic = -1;
clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
}
static inline void lapic_enable(void)
{
enable_local_apic = 1;
}
/*
* Debug level
@ -586,8 +597,7 @@ void __devinit setup_local_APIC(void)
printk("No ESR for 82489DX.\n");
}
if (nmi_watchdog == NMI_LOCAL_APIC)
setup_apic_nmi_watchdog();
setup_apic_nmi_watchdog(NULL);
apic_pm_activate();
}
@ -1373,3 +1383,18 @@ int __init APIC_init_uniprocessor (void)
return 0;
}
static int __init parse_lapic(char *arg)
{
lapic_enable();
return 0;
}
early_param("lapic", parse_lapic);
static int __init parse_nolapic(char *arg)
{
lapic_disable();
return 0;
}
early_param("nolapic", parse_nolapic);


@ -22,7 +22,7 @@
extern void vide(void);
__asm__(".align 4\nvide: ret");
static void __init init_amd(struct cpuinfo_x86 *c)
static void __cpuinit init_amd(struct cpuinfo_x86 *c)
{
u32 l, h;
int mbytes = num_physpages >> (20-PAGE_SHIFT);
@ -246,7 +246,7 @@ static void __init init_amd(struct cpuinfo_x86 *c)
num_cache_leaves = 3;
}
static unsigned int amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
static unsigned int __cpuinit amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
{
/* AMD errata T13 (order #21922) */
if ((c->x86 == 6)) {
@ -259,7 +259,7 @@ static unsigned int amd_size_cache(struct cpuinfo_x86 * c, unsigned int size)
return size;
}
static struct cpu_dev amd_cpu_dev __initdata = {
static struct cpu_dev amd_cpu_dev __cpuinitdata = {
.c_vendor = "AMD",
.c_ident = { "AuthenticAMD" },
.c_models = {
@ -275,7 +275,6 @@ static struct cpu_dev amd_cpu_dev __initdata = {
},
},
.c_init = init_amd,
.c_identify = generic_identify,
.c_size_cache = amd_size_cache,
};


@ -9,7 +9,7 @@
#ifdef CONFIG_X86_OOSTORE
static u32 __init power2(u32 x)
static u32 __cpuinit power2(u32 x)
{
u32 s=1;
while(s<=x)
@ -22,7 +22,7 @@ static u32 __init power2(u32 x)
* Set up an actual MCR
*/
static void __init centaur_mcr_insert(int reg, u32 base, u32 size, int key)
static void __cpuinit centaur_mcr_insert(int reg, u32 base, u32 size, int key)
{
u32 lo, hi;
@ -40,7 +40,7 @@ static void __init centaur_mcr_insert(int reg, u32 base, u32 size, int key)
* Shortcut: We know you can't put 4Gig of RAM on a winchip
*/
static u32 __init ramtop(void) /* 16388 */
static u32 __cpuinit ramtop(void) /* 16388 */
{
int i;
u32 top = 0;
@ -91,7 +91,7 @@ static u32 __init ramtop(void) /* 16388 */
* Compute a set of MCR's to give maximum coverage
*/
static int __init centaur_mcr_compute(int nr, int key)
static int __cpuinit centaur_mcr_compute(int nr, int key)
{
u32 mem = ramtop();
u32 root = power2(mem);
@ -166,7 +166,7 @@ static int __init centaur_mcr_compute(int nr, int key)
return ct;
}
static void __init centaur_create_optimal_mcr(void)
static void __cpuinit centaur_create_optimal_mcr(void)
{
int i;
/*
@ -189,7 +189,7 @@ static void __init centaur_create_optimal_mcr(void)
wrmsr(MSR_IDT_MCR0+i, 0, 0);
}
static void __init winchip2_create_optimal_mcr(void)
static void __cpuinit winchip2_create_optimal_mcr(void)
{
u32 lo, hi;
int i;
@ -227,7 +227,7 @@ static void __init winchip2_create_optimal_mcr(void)
* Handle the MCR key on the Winchip 2.
*/
static void __init winchip2_unprotect_mcr(void)
static void __cpuinit winchip2_unprotect_mcr(void)
{
u32 lo, hi;
u32 key;
@ -239,7 +239,7 @@ static void __init winchip2_unprotect_mcr(void)
wrmsr(MSR_IDT_MCR_CTRL, lo, hi);
}
static void __init winchip2_protect_mcr(void)
static void __cpuinit winchip2_protect_mcr(void)
{
u32 lo, hi;
@ -257,7 +257,7 @@ static void __init winchip2_protect_mcr(void)
#define RNG_ENABLED (1 << 3)
#define RNG_ENABLE (1 << 6) /* MSR_VIA_RNG */
static void __init init_c3(struct cpuinfo_x86 *c)
static void __cpuinit init_c3(struct cpuinfo_x86 *c)
{
u32 lo, hi;
@ -303,7 +303,7 @@ static void __init init_c3(struct cpuinfo_x86 *c)
display_cacheinfo(c);
}
static void __init init_centaur(struct cpuinfo_x86 *c)
static void __cpuinit init_centaur(struct cpuinfo_x86 *c)
{
enum {
ECX8=1<<1,
@ -442,7 +442,7 @@ static void __init init_centaur(struct cpuinfo_x86 *c)
}
}
static unsigned int centaur_size_cache(struct cpuinfo_x86 * c, unsigned int size)
static unsigned int __cpuinit centaur_size_cache(struct cpuinfo_x86 * c, unsigned int size)
{
/* VIA C3 CPUs (670-68F) need further shifting. */
if ((c->x86 == 6) && ((c->x86_model == 7) || (c->x86_model == 8)))
@ -457,7 +457,7 @@ static unsigned int centaur_size_cache(struct cpuinfo_x86 * c, unsigned int size
return size;
}
static struct cpu_dev centaur_cpu_dev __initdata = {
static struct cpu_dev centaur_cpu_dev __cpuinitdata = {
.c_vendor = "Centaur",
.c_ident = { "CentaurHauls" },
.c_init = init_centaur,


@ -36,7 +36,7 @@ struct cpu_dev * cpu_devs[X86_VENDOR_NUM] = {};
extern int disable_pse;
static void default_init(struct cpuinfo_x86 * c)
static void __cpuinit default_init(struct cpuinfo_x86 * c)
{
/* Not much we can do here... */
/* Check if at least it has cpuid */
@ -49,7 +49,7 @@ static void default_init(struct cpuinfo_x86 * c)
}
}
static struct cpu_dev default_cpu = {
static struct cpu_dev __cpuinitdata default_cpu = {
.c_init = default_init,
.c_vendor = "Unknown",
};
@ -265,7 +265,7 @@ static void __init early_cpu_detect(void)
}
}
void __cpuinit generic_identify(struct cpuinfo_x86 * c)
static void __cpuinit generic_identify(struct cpuinfo_x86 * c)
{
u32 tfms, xlvl;
int ebx;
@ -675,7 +675,7 @@ old_gdt:
#endif
/* Clear %fs and %gs. */
asm volatile ("xorl %eax, %eax; movl %eax, %fs; movl %eax, %gs");
asm volatile ("movl %0, %%fs; movl %0, %%gs" : : "r" (0));
/* Clear all 6 debug registers: */
set_debugreg(0, 0);


@ -24,7 +24,5 @@ extern struct cpu_dev * cpu_devs [X86_VENDOR_NUM];
extern int get_model_name(struct cpuinfo_x86 *c);
extern void display_cacheinfo(struct cpuinfo_x86 *c);
extern void generic_identify(struct cpuinfo_x86 * c);
extern void early_intel_workaround(struct cpuinfo_x86 *c);


@ -12,7 +12,7 @@
/*
* Read NSC/Cyrix DEVID registers (DIR) to get more detailed info. about the CPU
*/
static void __init do_cyrix_devid(unsigned char *dir0, unsigned char *dir1)
static void __cpuinit do_cyrix_devid(unsigned char *dir0, unsigned char *dir1)
{
unsigned char ccr2, ccr3;
unsigned long flags;
@ -52,25 +52,25 @@ static void __init do_cyrix_devid(unsigned char *dir0, unsigned char *dir1)
* Actually since bugs.h doesn't even reference this perhaps someone should
* fix the documentation ???
*/
static unsigned char Cx86_dir0_msb __initdata = 0;
static unsigned char Cx86_dir0_msb __cpuinitdata = 0;
static char Cx86_model[][9] __initdata = {
static char Cx86_model[][9] __cpuinitdata = {
"Cx486", "Cx486", "5x86 ", "6x86", "MediaGX ", "6x86MX ",
"M II ", "Unknown"
};
static char Cx486_name[][5] __initdata = {
static char Cx486_name[][5] __cpuinitdata = {
"SLC", "DLC", "SLC2", "DLC2", "SRx", "DRx",
"SRx2", "DRx2"
};
static char Cx486S_name[][4] __initdata = {
static char Cx486S_name[][4] __cpuinitdata = {
"S", "S2", "Se", "S2e"
};
static char Cx486D_name[][4] __initdata = {
static char Cx486D_name[][4] __cpuinitdata = {
"DX", "DX2", "?", "?", "?", "DX4"
};
static char Cx86_cb[] __initdata = "?.5x Core/Bus Clock";
static char cyrix_model_mult1[] __initdata = "12??43";
static char cyrix_model_mult2[] __initdata = "12233445";
static char Cx86_cb[] __cpuinitdata = "?.5x Core/Bus Clock";
static char cyrix_model_mult1[] __cpuinitdata = "12??43";
static char cyrix_model_mult2[] __cpuinitdata = "12233445";
/*
* Reset the slow-loop (SLOP) bit on the 686(L) which is set by some old
@ -82,7 +82,7 @@ static char cyrix_model_mult2[] __initdata = "12233445";
extern void calibrate_delay(void) __init;
static void __init check_cx686_slop(struct cpuinfo_x86 *c)
static void __cpuinit check_cx686_slop(struct cpuinfo_x86 *c)
{
unsigned long flags;
@ -107,7 +107,7 @@ static void __init check_cx686_slop(struct cpuinfo_x86 *c)
}
static void __init set_cx86_reorder(void)
static void __cpuinit set_cx86_reorder(void)
{
u8 ccr3;
@ -122,7 +122,7 @@ static void __init set_cx86_reorder(void)
setCx86(CX86_CCR3, ccr3);
}
static void __init set_cx86_memwb(void)
static void __cpuinit set_cx86_memwb(void)
{
u32 cr0;
@ -137,7 +137,7 @@ static void __init set_cx86_memwb(void)
setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x14 );
}
static void __init set_cx86_inc(void)
static void __cpuinit set_cx86_inc(void)
{
unsigned char ccr3;
@ -158,7 +158,7 @@ static void __init set_cx86_inc(void)
* Configure later MediaGX and/or Geode processor.
*/
static void __init geode_configure(void)
static void __cpuinit geode_configure(void)
{
unsigned long flags;
u8 ccr3, ccr4;
@ -184,14 +184,14 @@ static void __init geode_configure(void)
#ifdef CONFIG_PCI
static struct pci_device_id __initdata cyrix_55x0[] = {
static struct pci_device_id __cpuinitdata cyrix_55x0[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5510) },
{ PCI_DEVICE(PCI_VENDOR_ID_CYRIX, PCI_DEVICE_ID_CYRIX_5520) },
{ },
};
#endif
static void __init init_cyrix(struct cpuinfo_x86 *c)
static void __cpuinit init_cyrix(struct cpuinfo_x86 *c)
{
unsigned char dir0, dir0_msn, dir0_lsn, dir1 = 0;
char *buf = c->x86_model_id;
@ -346,7 +346,7 @@ static void __init init_cyrix(struct cpuinfo_x86 *c)
/*
* Handle National Semiconductor branded processors
*/
static void __init init_nsc(struct cpuinfo_x86 *c)
static void __cpuinit init_nsc(struct cpuinfo_x86 *c)
{
/* There may be GX1 processors in the wild that are branded
* NSC and not Cyrix.
@ -394,7 +394,7 @@ static inline int test_cyrix_52div(void)
return (unsigned char) (test >> 8) == 0x02;
}
static void cyrix_identify(struct cpuinfo_x86 * c)
static void __cpuinit cyrix_identify(struct cpuinfo_x86 * c)
{
/* Detect Cyrix with disabled CPUID */
if ( c->x86 == 4 && test_cyrix_52div() ) {
@ -427,10 +427,9 @@ static void cyrix_identify(struct cpuinfo_x86 * c)
local_irq_restore(flags);
}
}
generic_identify(c);
}
static struct cpu_dev cyrix_cpu_dev __initdata = {
static struct cpu_dev cyrix_cpu_dev __cpuinitdata = {
.c_vendor = "Cyrix",
.c_ident = { "CyrixInstead" },
.c_init = init_cyrix,
@ -453,11 +452,10 @@ static int __init cyrix_exit_cpu(void)
late_initcall(cyrix_exit_cpu);
static struct cpu_dev nsc_cpu_dev __initdata = {
static struct cpu_dev nsc_cpu_dev __cpuinitdata = {
.c_vendor = "NSC",
.c_ident = { "Geode by NSC" },
.c_init = init_nsc,
.c_identify = generic_identify,
};
int __init nsc_init_cpu(void)


@ -198,7 +198,7 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
}
static unsigned int intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
static unsigned int __cpuinit intel_size_cache(struct cpuinfo_x86 * c, unsigned int size)
{
/* Intel PIII Tualatin. This comes in two flavours.
* One has 256kb of cache, the other 512. We have no way
@ -263,7 +263,6 @@ static struct cpu_dev intel_cpu_dev __cpuinitdata = {
},
},
.c_init = init_intel,
.c_identify = generic_identify,
.c_size_cache = intel_size_cache,
};


@ -1,2 +1,2 @@
obj-y = mce.o k7.o p4.o p5.o p6.o winchip.o
obj-y = mce.o k7.o p4.o p5.o p6.o winchip.o therm_throt.o
obj-$(CONFIG_X86_MCE_NONFATAL) += non-fatal.o


@ -13,6 +13,8 @@
#include <asm/msr.h>
#include <asm/apic.h>
#include <asm/therm_throt.h>
#include "mce.h"
/* as supported by the P4/Xeon family */
@ -44,25 +46,12 @@ static void unexpected_thermal_interrupt(struct pt_regs *regs)
/* P4/Xeon Thermal transition interrupt handler */
static void intel_thermal_interrupt(struct pt_regs *regs)
{
u32 l, h;
unsigned int cpu = smp_processor_id();
static unsigned long next[NR_CPUS];
__u64 msr_val;
ack_APIC_irq();
if (time_after(next[cpu], jiffies))
return;
next[cpu] = jiffies + HZ*5;
rdmsr(MSR_IA32_THERM_STATUS, l, h);
if (l & 0x1) {
printk(KERN_EMERG "CPU%d: Temperature above threshold\n", cpu);
printk(KERN_EMERG "CPU%d: Running in modulated clock mode\n",
cpu);
add_taint(TAINT_MACHINE_CHECK);
} else {
printk(KERN_INFO "CPU%d: Temperature/speed normal\n", cpu);
}
rdmsrl(MSR_IA32_THERM_STATUS, msr_val);
therm_throt_process(msr_val & 0x1);
}
/* Thermal interrupt handler for this CPU setup */
@ -122,10 +111,13 @@ static void intel_init_thermal(struct cpuinfo_x86 *c)
rdmsr (MSR_IA32_MISC_ENABLE, l, h);
wrmsr (MSR_IA32_MISC_ENABLE, l | (1<<3), h);
l = apic_read (APIC_LVTTHMR);
apic_write_around (APIC_LVTTHMR, l & ~APIC_LVT_MASKED);
printk (KERN_INFO "CPU%d: Thermal monitoring enabled\n", cpu);
/* enable thermal throttle processing */
atomic_set(&therm_throt_en, 1);
return;
}
#endif /* CONFIG_X86_MCE_P4THERMAL */


@ -0,0 +1,180 @@
/*
* linux/arch/i386/kernel/cpu/mcheck/therm_throt.c
*
* Thermal throttle event support code (such as syslog messaging and rate
* limiting) that was factored out from x86_64 (mce_intel.c) and i386 (p4.c).
* This allows consistent reporting of CPU thermal throttle events.
*
* Maintains a counter in /sys that keeps track of the number of thermal
* events, such that the user knows how bad the thermal problem might be
* (since the logging to syslog and mcelog is rate limited).
*
* Author: Dmitriy Zavin (dmitriyz@google.com)
*
* Credits: Adapted from Zwane Mwaikambo's original code in mce_intel.c.
* Inspired by Ross Biro's and Al Borchers' counter code.
*/
#include <linux/percpu.h>
#include <linux/sysdev.h>
#include <linux/cpu.h>
#include <asm/cpu.h>
#include <linux/notifier.h>
#include <asm/therm_throt.h>
/* How long to wait between reporting thermal events */
#define CHECK_INTERVAL (300 * HZ)
static DEFINE_PER_CPU(__u64, next_check) = INITIAL_JIFFIES;
static DEFINE_PER_CPU(unsigned long, thermal_throttle_count);
atomic_t therm_throt_en = ATOMIC_INIT(0);
#ifdef CONFIG_SYSFS
#define define_therm_throt_sysdev_one_ro(_name) \
static SYSDEV_ATTR(_name, 0444, therm_throt_sysdev_show_##_name, NULL)
#define define_therm_throt_sysdev_show_func(name) \
static ssize_t therm_throt_sysdev_show_##name(struct sys_device *dev, \
char *buf) \
{ \
unsigned int cpu = dev->id; \
ssize_t ret; \
\
preempt_disable(); /* CPU hotplug */ \
if (cpu_online(cpu)) \
ret = sprintf(buf, "%lu\n", \
per_cpu(thermal_throttle_##name, cpu)); \
else \
ret = 0; \
preempt_enable(); \
\
return ret; \
}
define_therm_throt_sysdev_show_func(count);
define_therm_throt_sysdev_one_ro(count);
static struct attribute *thermal_throttle_attrs[] = {
&attr_count.attr,
NULL
};
static struct attribute_group thermal_throttle_attr_group = {
.attrs = thermal_throttle_attrs,
.name = "thermal_throttle"
};
#endif /* CONFIG_SYSFS */
/***
* therm_throt_process - Process thermal throttling event from interrupt
* @curr: Whether the condition is current or not (boolean), since the
* thermal interrupt normally gets called both when the thermal
* event begins and once the event has ended.
*
* This function is called by the thermal interrupt after the
* IRQ has been acknowledged.
*
* It will take care of rate limiting and printing messages to the syslog.
*
* Returns: 0 : Event should NOT be further logged, i.e. still in
* "timeout" from previous log message.
* 1 : Event should be logged further, and a message has been
* printed to the syslog.
*/
int therm_throt_process(int curr)
{
unsigned int cpu = smp_processor_id();
__u64 tmp_jiffs = get_jiffies_64();
if (curr)
__get_cpu_var(thermal_throttle_count)++;
if (time_before64(tmp_jiffs, __get_cpu_var(next_check)))
return 0;
__get_cpu_var(next_check) = tmp_jiffs + CHECK_INTERVAL;
/* if we just entered the thermal event */
if (curr) {
printk(KERN_CRIT "CPU%d: Temperature above threshold, "
"cpu clock throttled (total events = %lu)\n", cpu,
__get_cpu_var(thermal_throttle_count));
add_taint(TAINT_MACHINE_CHECK);
} else {
printk(KERN_CRIT "CPU%d: Temperature/speed normal\n", cpu);
}
return 1;
}
#ifdef CONFIG_SYSFS
/* Add/Remove thermal_throttle interface for CPU device */
static __cpuinit int thermal_throttle_add_dev(struct sys_device * sys_dev)
{
sysfs_create_group(&sys_dev->kobj, &thermal_throttle_attr_group);
return 0;
}
#ifdef CONFIG_HOTPLUG_CPU
static __cpuinit int thermal_throttle_remove_dev(struct sys_device * sys_dev)
{
sysfs_remove_group(&sys_dev->kobj, &thermal_throttle_attr_group);
return 0;
}
/* Mutex protecting device creation against CPU hotplug */
static DEFINE_MUTEX(therm_cpu_lock);
/* Get notified when a cpu comes on/off. Be hotplug friendly. */
static __cpuinit int thermal_throttle_cpu_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
struct sys_device *sys_dev;
sys_dev = get_cpu_sysdev(cpu);
mutex_lock(&therm_cpu_lock);
switch (action) {
case CPU_ONLINE:
thermal_throttle_add_dev(sys_dev);
break;
case CPU_DEAD:
thermal_throttle_remove_dev(sys_dev);
break;
}
mutex_unlock(&therm_cpu_lock);
return NOTIFY_OK;
}
static struct notifier_block thermal_throttle_cpu_notifier =
{
.notifier_call = thermal_throttle_cpu_callback,
};
#endif /* CONFIG_HOTPLUG_CPU */
static __init int thermal_throttle_init_device(void)
{
unsigned int cpu = 0;
if (!atomic_read(&therm_throt_en))
return 0;
register_hotcpu_notifier(&thermal_throttle_cpu_notifier);
#ifdef CONFIG_HOTPLUG_CPU
mutex_lock(&therm_cpu_lock);
#endif
/* connect live CPUs to sysfs */
for_each_online_cpu(cpu)
thermal_throttle_add_dev(get_cpu_sysdev(cpu));
#ifdef CONFIG_HOTPLUG_CPU
mutex_unlock(&therm_cpu_lock);
#endif
return 0;
}
device_initcall(thermal_throttle_init_device);
#endif /* CONFIG_SYSFS */


@ -10,7 +10,7 @@
* to have CPUID. (Thanks to Herbert Oppmann)
*/
static int __init deep_magic_nexgen_probe(void)
static int __cpuinit deep_magic_nexgen_probe(void)
{
int ret;
@ -27,21 +27,20 @@ static int __init deep_magic_nexgen_probe(void)
return ret;
}
static void __init init_nexgen(struct cpuinfo_x86 * c)
static void __cpuinit init_nexgen(struct cpuinfo_x86 * c)
{
c->x86_cache_size = 256; /* A few had 1 MB... */
}
static void __init nexgen_identify(struct cpuinfo_x86 * c)
static void __cpuinit nexgen_identify(struct cpuinfo_x86 * c)
{
/* Detect NexGen with old hypercode */
if ( deep_magic_nexgen_probe() ) {
strcpy(c->x86_vendor_id, "NexGenDriven");
}
generic_identify(c);
}
static struct cpu_dev nexgen_cpu_dev __initdata = {
static struct cpu_dev nexgen_cpu_dev __cpuinitdata = {
.c_vendor = "Nexgen",
.c_ident = { "NexGenDriven" },
.c_models = {


@ -46,8 +46,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
/* Intel-defined (#2) */
"pni", NULL, NULL, "monitor", "ds_cpl", "vmx", "smx", "est",
"tm2", NULL, "cid", NULL, NULL, "cx16", "xtpr", NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
"tm2", "ssse3", "cid", NULL, NULL, "cx16", "xtpr", NULL,
NULL, NULL, "dca", NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
/* VIA/Cyrix/Centaur-defined */


@ -5,7 +5,7 @@
#include "cpu.h"
static void __init init_rise(struct cpuinfo_x86 *c)
static void __cpuinit init_rise(struct cpuinfo_x86 *c)
{
printk("CPU: Rise iDragon");
if (c->x86_model > 2)
@ -28,7 +28,7 @@ static void __init init_rise(struct cpuinfo_x86 *c)
set_bit(X86_FEATURE_CX8, c->x86_capability);
}
static struct cpu_dev rise_cpu_dev __initdata = {
static struct cpu_dev rise_cpu_dev __cpuinitdata = {
.c_vendor = "Rise",
.c_ident = { "RiseRiseRise" },
.c_models = {


@ -5,7 +5,7 @@
#include <asm/msr.h>
#include "cpu.h"
static void __init init_transmeta(struct cpuinfo_x86 *c)
static void __cpuinit init_transmeta(struct cpuinfo_x86 *c)
{
unsigned int cap_mask, uk, max, dummy;
unsigned int cms_rev1, cms_rev2;
@ -85,10 +85,9 @@ static void __init init_transmeta(struct cpuinfo_x86 *c)
#endif
}
static void __init transmeta_identify(struct cpuinfo_x86 * c)
static void __cpuinit transmeta_identify(struct cpuinfo_x86 * c)
{
u32 xlvl;
generic_identify(c);
/* Transmeta-defined flags: level 0x80860001 */
xlvl = cpuid_eax(0x80860000);
@ -98,7 +97,7 @@ static void __init transmeta_identify(struct cpuinfo_x86 * c)
}
}
static struct cpu_dev transmeta_cpu_dev __initdata = {
static struct cpu_dev transmeta_cpu_dev __cpuinitdata = {
.c_vendor = "Transmeta",
.c_ident = { "GenuineTMx86", "TransmetaCPU" },
.c_init = init_transmeta,


@ -5,12 +5,8 @@
/* UMC chips appear to be only either 386 or 486, so no special init takes place.
*/
static void __init init_umc(struct cpuinfo_x86 * c)
{
}
static struct cpu_dev umc_cpu_dev __initdata = {
static struct cpu_dev umc_cpu_dev __cpuinitdata = {
.c_vendor = "UMC",
.c_ident = { "UMC UMC UMC" },
.c_models = {
@ -21,7 +17,6 @@ static struct cpu_dev umc_cpu_dev __initdata = {
}
},
},
.c_init = init_umc,
};
int __init umc_init_cpu(void)


@ -22,6 +22,8 @@
#include <asm/nmi.h>
#include <asm/hw_irq.h>
#include <asm/apic.h>
#include <asm/kdebug.h>
#include <mach_ipi.h>
@ -93,16 +95,25 @@ static void crash_save_self(struct pt_regs *regs)
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
static atomic_t waiting_for_crash_ipi;
static int crash_nmi_callback(struct pt_regs *regs, int cpu)
static int crash_nmi_callback(struct notifier_block *self,
unsigned long val, void *data)
{
struct pt_regs *regs;
struct pt_regs fixed_regs;
int cpu;
if (val != DIE_NMI_IPI)
return NOTIFY_OK;
regs = ((struct die_args *)data)->regs;
cpu = raw_smp_processor_id();
/* Don't do anything if this handler is invoked on crashing cpu.
* Otherwise, system will completely hang. Crashing cpu can get
* an NMI if system was initially booted with nmi_watchdog parameter.
*/
if (cpu == crashing_cpu)
return 1;
return NOTIFY_STOP;
local_irq_disable();
if (!user_mode_vm(regs)) {
@ -125,13 +136,18 @@ static void smp_send_nmi_allbutself(void)
send_IPI_allbutself(NMI_VECTOR);
}
static struct notifier_block crash_nmi_nb = {
.notifier_call = crash_nmi_callback,
};
static void nmi_shootdown_cpus(void)
{
unsigned long msecs;
atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
/* Would it be better to replace the trap vector here? */
set_nmi_callback(crash_nmi_callback);
if (register_die_notifier(&crash_nmi_nb))
return; /* return what? */
/* Ensure the new callback function is set before sending
* out the NMI
*/


@ -76,8 +76,15 @@ DF_MASK = 0x00000400
NT_MASK = 0x00004000
VM_MASK = 0x00020000
/* These are replacements for paravirtualization */
#define DISABLE_INTERRUPTS cli
#define ENABLE_INTERRUPTS sti
#define ENABLE_INTERRUPTS_SYSEXIT sti; sysexit
#define INTERRUPT_RETURN iret
#define GET_CR0_INTO_EAX movl %cr0, %eax
#ifdef CONFIG_PREEMPT
#define preempt_stop cli; TRACE_IRQS_OFF
#define preempt_stop DISABLE_INTERRUPTS; TRACE_IRQS_OFF
#else
#define preempt_stop
#define resume_kernel restore_nocheck
@ -176,18 +183,21 @@ VM_MASK = 0x00020000
#define RING0_INT_FRAME \
CFI_STARTPROC simple;\
CFI_SIGNAL_FRAME;\
CFI_DEF_CFA esp, 3*4;\
/*CFI_OFFSET cs, -2*4;*/\
CFI_OFFSET eip, -3*4
#define RING0_EC_FRAME \
CFI_STARTPROC simple;\
CFI_SIGNAL_FRAME;\
CFI_DEF_CFA esp, 4*4;\
/*CFI_OFFSET cs, -2*4;*/\
CFI_OFFSET eip, -3*4
#define RING0_PTREGS_FRAME \
CFI_STARTPROC simple;\
CFI_SIGNAL_FRAME;\
CFI_DEF_CFA esp, OLDESP-EBX;\
/*CFI_OFFSET cs, CS-OLDESP;*/\
CFI_OFFSET eip, EIP-OLDESP;\
@ -233,10 +243,11 @@ ret_from_intr:
check_userspace:
movl EFLAGS(%esp), %eax # mix EFLAGS and CS
movb CS(%esp), %al
testl $(VM_MASK | 3), %eax
jz resume_kernel
andl $(VM_MASK | SEGMENT_RPL_MASK), %eax
cmpl $USER_RPL, %eax
jb resume_kernel # not returning to v8086 or userspace
ENTRY(resume_userspace)
cli # make sure we don't miss an interrupt
DISABLE_INTERRUPTS # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
movl TI_flags(%ebp), %ecx
@ -247,7 +258,7 @@ ENTRY(resume_userspace)
#ifdef CONFIG_PREEMPT
ENTRY(resume_kernel)
cli
DISABLE_INTERRUPTS
cmpl $0,TI_preempt_count(%ebp) # non-zero preempt_count ?
jnz restore_nocheck
need_resched:
@ -267,6 +278,7 @@ need_resched:
# sysenter call handler stub
ENTRY(sysenter_entry)
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA esp, 0
CFI_REGISTER esp, ebp
movl TSS_sysenter_esp0(%esp),%esp
@ -275,7 +287,7 @@ sysenter_past_esp:
* No need to follow this irqs on/off section: the syscall
* disabled irqs and here we enable it straight after entry:
*/
sti
ENABLE_INTERRUPTS
pushl $(__USER_DS)
CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET ss, 0*/
@ -320,7 +332,7 @@ sysenter_past_esp:
jae syscall_badsys
call *sys_call_table(,%eax,4)
movl %eax,EAX(%esp)
cli
DISABLE_INTERRUPTS
TRACE_IRQS_OFF
movl TI_flags(%ebp), %ecx
testw $_TIF_ALLWORK_MASK, %cx
@ -330,8 +342,7 @@ sysenter_past_esp:
movl OLDESP(%esp), %ecx
xorl %ebp,%ebp
TRACE_IRQS_ON
sti
sysexit
ENABLE_INTERRUPTS_SYSEXIT
CFI_ENDPROC
@ -356,7 +367,7 @@ syscall_call:
call *sys_call_table(,%eax,4)
movl %eax,EAX(%esp) # store the return value
syscall_exit:
cli # make sure we don't miss an interrupt
DISABLE_INTERRUPTS # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
TRACE_IRQS_OFF
@ -371,8 +382,8 @@ restore_all:
# See comments in process.c:copy_thread() for details.
movb OLDSS(%esp), %ah
movb CS(%esp), %al
andl $(VM_MASK | (4 << 8) | 3), %eax
cmpl $((4 << 8) | 3), %eax
andl $(VM_MASK | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
CFI_REMEMBER_STATE
je ldt_ss # returning to user-space with LDT SS
restore_nocheck:
@ -381,11 +392,11 @@ restore_nocheck_notrace:
RESTORE_REGS
addl $4, %esp
CFI_ADJUST_CFA_OFFSET -4
1: iret
1: INTERRUPT_RETURN
.section .fixup,"ax"
iret_exc:
TRACE_IRQS_ON
sti
ENABLE_INTERRUPTS
pushl $0 # no error code
pushl $do_iret_error
jmp error_code
@ -409,7 +420,7 @@ ldt_ss:
* dosemu and wine happy. */
subl $8, %esp # reserve space for switch16 pointer
CFI_ADJUST_CFA_OFFSET 8
cli
DISABLE_INTERRUPTS
TRACE_IRQS_OFF
movl %esp, %eax
/* Set up the 16bit stack frame with switch32 pointer on top,
@ -419,7 +430,7 @@ ldt_ss:
TRACE_IRQS_IRET
RESTORE_REGS
lss 20+4(%esp), %esp # switch to 16bit stack
1: iret
1: INTERRUPT_RETURN
.section __ex_table,"a"
.align 4
.long 1b,iret_exc
@ -434,7 +445,7 @@ work_pending:
jz work_notifysig
work_resched:
call schedule
cli # make sure we don't miss an interrupt
DISABLE_INTERRUPTS # make sure we don't miss an interrupt
# setting need_resched or sigpending
# between sampling and the iret
TRACE_IRQS_OFF
@ -490,7 +501,7 @@ syscall_exit_work:
testb $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT|_TIF_SINGLESTEP), %cl
jz work_pending
TRACE_IRQS_ON
sti # could let do_syscall_trace() call
ENABLE_INTERRUPTS # could let do_syscall_trace() call
# schedule() instead
movl %esp, %eax
movl $1, %edx
@ -591,11 +602,9 @@ ENTRY(name) \
/* The include is where all of the SMP etc. interrupts come from */
#include "entry_arch.h"
ENTRY(divide_error)
RING0_INT_FRAME
pushl $0 # no error code
CFI_ADJUST_CFA_OFFSET 4
pushl $do_divide_error
KPROBE_ENTRY(page_fault)
RING0_EC_FRAME
pushl $do_page_fault
CFI_ADJUST_CFA_OFFSET 4
ALIGN
error_code:
@ -645,6 +654,7 @@ error_code:
call *%edi
jmp ret_from_exception
CFI_ENDPROC
KPROBE_END(page_fault)
ENTRY(coprocessor_error)
RING0_INT_FRAME
@ -669,7 +679,7 @@ ENTRY(device_not_available)
pushl $-1 # mark this as an int
CFI_ADJUST_CFA_OFFSET 4
SAVE_ALL
movl %cr0, %eax
GET_CR0_INTO_EAX
testl $0x4, %eax # EM (math emulation bit)
jne device_not_available_emulate
preempt_stop
@ -702,9 +712,15 @@ device_not_available_emulate:
jne ok; \
label: \
movl TSS_sysenter_esp0+offset(%esp),%esp; \
CFI_DEF_CFA esp, 0; \
CFI_UNDEFINED eip; \
pushfl; \
CFI_ADJUST_CFA_OFFSET 4; \
pushl $__KERNEL_CS; \
pushl $sysenter_past_esp
CFI_ADJUST_CFA_OFFSET 4; \
pushl $sysenter_past_esp; \
CFI_ADJUST_CFA_OFFSET 4; \
CFI_REL_OFFSET eip, 0
KPROBE_ENTRY(debug)
RING0_INT_FRAME
@ -720,7 +736,8 @@ debug_stack_correct:
call do_debug
jmp ret_from_exception
CFI_ENDPROC
.previous .text
KPROBE_END(debug)
/*
* NMI is doubly nasty. It can happen _while_ we're handling
* a debug fault, and the debug fault hasn't yet been able to
@ -729,7 +746,7 @@ debug_stack_correct:
* check whether we got an NMI on the debug path where the debug
* fault happened on the sysenter path.
*/
ENTRY(nmi)
KPROBE_ENTRY(nmi)
RING0_INT_FRAME
pushl %eax
CFI_ADJUST_CFA_OFFSET 4
@ -754,6 +771,7 @@ ENTRY(nmi)
cmpl $sysenter_entry,12(%esp)
je nmi_debug_stack_check
nmi_stack_correct:
/* We have a RING0_INT_FRAME here */
pushl %eax
CFI_ADJUST_CFA_OFFSET 4
SAVE_ALL
@ -764,9 +782,12 @@ nmi_stack_correct:
CFI_ENDPROC
nmi_stack_fixup:
RING0_INT_FRAME
FIX_STACK(12,nmi_stack_correct, 1)
jmp nmi_stack_correct
nmi_debug_stack_check:
/* We have a RING0_INT_FRAME here */
cmpw $__KERNEL_CS,16(%esp)
jne nmi_stack_correct
cmpl $debug,(%esp)
@ -777,8 +798,10 @@ nmi_debug_stack_check:
jmp nmi_stack_correct
nmi_16bit_stack:
RING0_INT_FRAME
/* create the pointer to lss back */
/* We have a RING0_INT_FRAME here.
*
* create the pointer to lss back
*/
pushl %ss
CFI_ADJUST_CFA_OFFSET 4
pushl %esp
@ -799,12 +822,13 @@ nmi_16bit_stack:
call do_nmi
RESTORE_REGS
lss 12+4(%esp), %esp # back to 16bit stack
1: iret
1: INTERRUPT_RETURN
CFI_ENDPROC
.section __ex_table,"a"
.align 4
.long 1b,iret_exc
.previous
KPROBE_END(nmi)
KPROBE_ENTRY(int3)
RING0_INT_FRAME
@ -816,7 +840,7 @@ KPROBE_ENTRY(int3)
call do_int3
jmp ret_from_exception
CFI_ENDPROC
.previous .text
KPROBE_END(int3)
ENTRY(overflow)
RING0_INT_FRAME
@ -881,7 +905,7 @@ KPROBE_ENTRY(general_protection)
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
.previous .text
KPROBE_END(general_protection)
ENTRY(alignment_check)
RING0_EC_FRAME
@ -890,13 +914,14 @@ ENTRY(alignment_check)
jmp error_code
CFI_ENDPROC
KPROBE_ENTRY(page_fault)
RING0_EC_FRAME
pushl $do_page_fault
ENTRY(divide_error)
RING0_INT_FRAME
pushl $0 # no error code
CFI_ADJUST_CFA_OFFSET 4
pushl $do_divide_error
CFI_ADJUST_CFA_OFFSET 4
jmp error_code
CFI_ENDPROC
.previous .text
#ifdef CONFIG_X86_MCE
ENTRY(machine_check)
@ -949,6 +974,19 @@ ENTRY(arch_unwind_init_running)
ENDPROC(arch_unwind_init_running)
#endif
ENTRY(kernel_thread_helper)
pushl $0 # fake return address for unwinder
CFI_STARTPROC
movl %edx,%eax
push %edx
CFI_ADJUST_CFA_OFFSET 4
call *%ebx
push %eax
CFI_ADJUST_CFA_OFFSET 4
call do_exit
CFI_ENDPROC
ENDPROC(kernel_thread_helper)
.section .rodata,"a"
#include "syscall_table.S"


@ -371,8 +371,65 @@ rp_sidt:
addl $8,%edi
dec %ecx
jne rp_sidt
.macro set_early_handler handler,trapno
lea \handler,%edx
movl $(__KERNEL_CS << 16),%eax
movw %dx,%ax
movw $0x8E00,%dx /* interrupt gate - dpl=0, present */
lea idt_table,%edi
movl %eax,8*\trapno(%edi)
movl %edx,8*\trapno+4(%edi)
.endm
set_early_handler handler=early_divide_err,trapno=0
set_early_handler handler=early_illegal_opcode,trapno=6
set_early_handler handler=early_protection_fault,trapno=13
set_early_handler handler=early_page_fault,trapno=14
ret
early_divide_err:
xor %edx,%edx
pushl $0 /* fake errcode */
jmp early_fault
early_illegal_opcode:
movl $6,%edx
pushl $0 /* fake errcode */
jmp early_fault
early_protection_fault:
movl $13,%edx
jmp early_fault
early_page_fault:
movl $14,%edx
jmp early_fault
early_fault:
cld
#ifdef CONFIG_PRINTK
movl $(__KERNEL_DS),%eax
movl %eax,%ds
movl %eax,%es
cmpl $2,early_recursion_flag
je hlt_loop
incl early_recursion_flag
movl %cr2,%eax
pushl %eax
pushl %edx /* trapno */
pushl $fault_msg
#ifdef CONFIG_EARLY_PRINTK
call early_printk
#else
call printk
#endif
#endif
hlt_loop:
hlt
jmp hlt_loop
/* This is the default interrupt "handler" :-) */
ALIGN
ignore_int:
@ -386,6 +443,9 @@ ignore_int:
movl $(__KERNEL_DS),%eax
movl %eax,%ds
movl %eax,%es
cmpl $2,early_recursion_flag
je hlt_loop
incl early_recursion_flag
pushl 16(%esp)
pushl 24(%esp)
pushl 32(%esp)
@ -431,9 +491,16 @@ ENTRY(stack_start)
ready: .byte 0
early_recursion_flag:
.long 0
int_msg:
.asciz "Unknown interrupt or fault at EIP %p %p %p\n"
fault_msg:
.ascii "Int %d: CR2 %p err %p EIP %p CS %p flags %p\n"
.asciz "Stack: %p %p %p %p %p %p %p %p\n"
/*
* The IDT and GDT 'descriptors' are a strange 48-bit object
* only used by the lidt and lgdt instructions. They are not


@ -45,6 +45,8 @@ static void end_8259A_irq (unsigned int irq)
#define shutdown_8259A_irq disable_8259A_irq
static int i8259A_auto_eoi;
static void mask_and_ack_8259A(unsigned int);
unsigned int startup_8259A_irq(unsigned int irq)
@ -253,7 +255,7 @@ static void save_ELCR(char *trigger)
static int i8259A_resume(struct sys_device *dev)
{
init_8259A(0);
init_8259A(i8259A_auto_eoi);
restore_ELCR(irq_trigger);
return 0;
}
@ -301,6 +303,8 @@ void init_8259A(int auto_eoi)
{
unsigned long flags;
i8259A_auto_eoi = auto_eoi;
spin_lock_irqsave(&i8259A_lock, flags);
outb(0xff, PIC_MASTER_IMR); /* mask all of 8259A-1 */


@ -40,6 +40,7 @@
#include <asm/nmi.h>
#include <mach_apic.h>
#include <mach_apicdef.h>
#include "io_ports.h"
@ -65,7 +66,7 @@ int sis_apic_bug = -1;
*/
int nr_ioapic_registers[MAX_IO_APICS];
int disable_timer_pin_1 __initdata;
static int disable_timer_pin_1 __initdata;
/*
* Rough estimation of how many shared IRQs there are, can
@ -93,6 +94,34 @@ int vector_irq[NR_VECTORS] __read_mostly = { [0 ... NR_VECTORS - 1] = -1};
#define vector_to_irq(vector) (vector)
#endif
union entry_union {
struct { u32 w1, w2; };
struct IO_APIC_route_entry entry;
};
static struct IO_APIC_route_entry ioapic_read_entry(int apic, int pin)
{
union entry_union eu;
unsigned long flags;
spin_lock_irqsave(&ioapic_lock, flags);
eu.w1 = io_apic_read(apic, 0x10 + 2 * pin);
eu.w2 = io_apic_read(apic, 0x11 + 2 * pin);
spin_unlock_irqrestore(&ioapic_lock, flags);
return eu.entry;
}
static void ioapic_write_entry(int apic, int pin, struct IO_APIC_route_entry e)
{
unsigned long flags;
union entry_union eu;
eu.entry = e;
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x10 + 2*pin, eu.w1);
io_apic_write(apic, 0x11 + 2*pin, eu.w2);
spin_unlock_irqrestore(&ioapic_lock, flags);
}
/*
* The common case is 1:1 IRQ<->pin mappings. Sometimes there are
* shared ISA-space IRQs, so we have to support them. We are super
@ -200,13 +229,9 @@ static void unmask_IO_APIC_irq (unsigned int irq)
static void clear_IO_APIC_pin(unsigned int apic, unsigned int pin)
{
struct IO_APIC_route_entry entry;
unsigned long flags;
/* Check delivery_mode to be sure we're not clearing an SMI pin */
spin_lock_irqsave(&ioapic_lock, flags);
*(((int*)&entry) + 0) = io_apic_read(apic, 0x10 + 2 * pin);
*(((int*)&entry) + 1) = io_apic_read(apic, 0x11 + 2 * pin);
spin_unlock_irqrestore(&ioapic_lock, flags);
entry = ioapic_read_entry(apic, pin);
if (entry.delivery_mode == dest_SMI)
return;
@ -215,10 +240,7 @@ static void clear_IO_APIC_pin(unsigned int apic, unsigned int pin)
*/
memset(&entry, 0, sizeof(entry));
entry.mask = 1;
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x10 + 2 * pin, *(((int *)&entry) + 0));
io_apic_write(apic, 0x11 + 2 * pin, *(((int *)&entry) + 1));
spin_unlock_irqrestore(&ioapic_lock, flags);
ioapic_write_entry(apic, pin, entry);
}
static void clear_IO_APIC (void)
@ -1283,9 +1305,8 @@ static void __init setup_IO_APIC_irqs(void)
if (!apic && (irq < 16))
disable_8259A_irq(irq);
}
ioapic_write_entry(apic, pin, entry);
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(apic, 0x10+2*pin, *(((int *)&entry)+0));
set_native_irq_info(irq, TARGET_CPUS);
spin_unlock_irqrestore(&ioapic_lock, flags);
}
@ -1301,7 +1322,6 @@ static void __init setup_IO_APIC_irqs(void)
static void __init setup_ExtINT_IRQ0_pin(unsigned int apic, unsigned int pin, int vector)
{
struct IO_APIC_route_entry entry;
unsigned long flags;
memset(&entry,0,sizeof(entry));
@ -1331,10 +1351,7 @@ static void __init setup_ExtINT_IRQ0_pin(unsigned int apic, unsigned int pin, in
/*
* Add it to the IO-APIC irq-routing table:
*/
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(apic, 0x10+2*pin, *(((int *)&entry)+0));
spin_unlock_irqrestore(&ioapic_lock, flags);
ioapic_write_entry(apic, pin, entry);
enable_8259A_irq(0);
}
@ -1444,10 +1461,7 @@ void __init print_IO_APIC(void)
for (i = 0; i <= reg_01.bits.entries; i++) {
struct IO_APIC_route_entry entry;
spin_lock_irqsave(&ioapic_lock, flags);
*(((int *)&entry)+0) = io_apic_read(apic, 0x10+i*2);
*(((int *)&entry)+1) = io_apic_read(apic, 0x11+i*2);
spin_unlock_irqrestore(&ioapic_lock, flags);
entry = ioapic_read_entry(apic, i);
printk(KERN_DEBUG " %02x %03X %02X ",
i,
@ -1666,10 +1680,7 @@ static void __init enable_IO_APIC(void)
/* See if any of the pins is in ExtINT mode */
for (pin = 0; pin < nr_ioapic_registers[apic]; pin++) {
struct IO_APIC_route_entry entry;
spin_lock_irqsave(&ioapic_lock, flags);
*(((int *)&entry) + 0) = io_apic_read(apic, 0x10 + 2 * pin);
*(((int *)&entry) + 1) = io_apic_read(apic, 0x11 + 2 * pin);
spin_unlock_irqrestore(&ioapic_lock, flags);
entry = ioapic_read_entry(apic, pin);
/* If the interrupt line is enabled and in ExtInt mode
@ -1726,7 +1737,6 @@ void disable_IO_APIC(void)
*/
if (ioapic_i8259.pin != -1) {
struct IO_APIC_route_entry entry;
unsigned long flags;
memset(&entry, 0, sizeof(entry));
entry.mask = 0; /* Enabled */
@ -1743,12 +1753,7 @@ void disable_IO_APIC(void)
/*
* Add it to the IO-APIC irq-routing table:
*/
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic_i8259.apic, 0x11+2*ioapic_i8259.pin,
*(((int *)&entry)+1));
io_apic_write(ioapic_i8259.apic, 0x10+2*ioapic_i8259.pin,
*(((int *)&entry)+0));
spin_unlock_irqrestore(&ioapic_lock, flags);
ioapic_write_entry(ioapic_i8259.apic, ioapic_i8259.pin, entry);
}
disconnect_bsp_APIC(ioapic_i8259.pin != -1);
}
@ -2213,17 +2218,13 @@ static inline void unlock_ExtINT_logic(void)
int apic, pin, i;
struct IO_APIC_route_entry entry0, entry1;
unsigned char save_control, save_freq_select;
unsigned long flags;
pin = find_isa_irq_pin(8, mp_INT);
apic = find_isa_irq_apic(8, mp_INT);
if (pin == -1)
return;
spin_lock_irqsave(&ioapic_lock, flags);
*(((int *)&entry0) + 1) = io_apic_read(apic, 0x11 + 2 * pin);
*(((int *)&entry0) + 0) = io_apic_read(apic, 0x10 + 2 * pin);
spin_unlock_irqrestore(&ioapic_lock, flags);
entry0 = ioapic_read_entry(apic, pin);
clear_IO_APIC_pin(apic, pin);
memset(&entry1, 0, sizeof(entry1));
@ -2236,10 +2237,7 @@ static inline void unlock_ExtINT_logic(void)
entry1.trigger = 0;
entry1.vector = 0;
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x11 + 2 * pin, *(((int *)&entry1) + 1));
io_apic_write(apic, 0x10 + 2 * pin, *(((int *)&entry1) + 0));
spin_unlock_irqrestore(&ioapic_lock, flags);
ioapic_write_entry(apic, pin, entry1);
save_control = CMOS_READ(RTC_CONTROL);
save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
@ -2258,10 +2256,7 @@ static inline void unlock_ExtINT_logic(void)
CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
clear_IO_APIC_pin(apic, pin);
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(apic, 0x11 + 2 * pin, *(((int *)&entry0) + 1));
io_apic_write(apic, 0x10 + 2 * pin, *(((int *)&entry0) + 0));
spin_unlock_irqrestore(&ioapic_lock, flags);
ioapic_write_entry(apic, pin, entry0);
}
int timer_uses_ioapic_pin_0;
@ -2461,17 +2456,12 @@ static int ioapic_suspend(struct sys_device *dev, pm_message_t state)
{
struct IO_APIC_route_entry *entry;
struct sysfs_ioapic_data *data;
unsigned long flags;
int i;
data = container_of(dev, struct sysfs_ioapic_data, dev);
entry = data->entry;
spin_lock_irqsave(&ioapic_lock, flags);
for (i = 0; i < nr_ioapic_registers[dev->id]; i ++, entry ++ ) {
*(((int *)entry) + 1) = io_apic_read(dev->id, 0x11 + 2 * i);
*(((int *)entry) + 0) = io_apic_read(dev->id, 0x10 + 2 * i);
}
spin_unlock_irqrestore(&ioapic_lock, flags);
for (i = 0; i < nr_ioapic_registers[dev->id]; i ++)
entry[i] = ioapic_read_entry(dev->id, i);
return 0;
}
@ -2493,11 +2483,9 @@ static int ioapic_resume(struct sys_device *dev)
reg_00.bits.ID = mp_ioapics[dev->id].mpc_apicid;
io_apic_write(dev->id, 0, reg_00.raw);
}
for (i = 0; i < nr_ioapic_registers[dev->id]; i ++, entry ++ ) {
io_apic_write(dev->id, 0x11+2*i, *(((int *)entry)+1));
io_apic_write(dev->id, 0x10+2*i, *(((int *)entry)+0));
}
spin_unlock_irqrestore(&ioapic_lock, flags);
for (i = 0; i < nr_ioapic_registers[dev->id]; i ++)
ioapic_write_entry(dev->id, i, entry[i]);
return 0;
}
@ -2694,9 +2682,8 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
if (!ioapic && (irq < 16))
disable_8259A_irq(irq);
ioapic_write_entry(ioapic, pin, entry);
spin_lock_irqsave(&ioapic_lock, flags);
io_apic_write(ioapic, 0x11+2*pin, *(((int *)&entry)+1));
io_apic_write(ioapic, 0x10+2*pin, *(((int *)&entry)+0));
set_native_irq_info(use_pci_vector() ? entry.vector : irq, TARGET_CPUS);
spin_unlock_irqrestore(&ioapic_lock, flags);
@ -2704,3 +2691,25 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
}
#endif /* CONFIG_ACPI */
static int __init parse_disable_timer_pin_1(char *arg)
{
disable_timer_pin_1 = 1;
return 0;
}
early_param("disable_timer_pin_1", parse_disable_timer_pin_1);
static int __init parse_enable_timer_pin_1(char *arg)
{
disable_timer_pin_1 = -1;
return 0;
}
early_param("enable_timer_pin_1", parse_enable_timer_pin_1);
static int __init parse_noapic(char *arg)
{
/* disable IO-APIC */
disable_ioapic_setup();
return 0;
}
early_param("noapic", parse_noapic);
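
These early_param() blocks replace the hand-rolled string matching deleted from setup.c further down. The macro only records a name/handler pair; a dedicated walker matches each boot option against that table before the normal __setup machinery runs. A rough userspace sketch of the dispatch side, with the table built by hand rather than by the kernel's linker-section collection:

#include <stdio.h>
#include <string.h>

struct early_opt {
        const char *name;
        int (*handler)(char *arg);
};

static int ioapic_disabled;
static int parse_noapic(char *arg)
{
        (void)arg;              /* flag option: the value is ignored */
        ioapic_disabled = 1;
        return 0;
}

/* The kernel gathers these from a linker section; a plain table here. */
static const struct early_opt early_opts[] = {
        { "noapic", parse_noapic },
        { NULL, NULL },
};

/* Minimal dispatch for one "opt" or "opt=value" token. */
static void do_early_param(char *param)
{
        char *val = strchr(param, '=');
        if (val)
                *val++ = '\0';
        for (const struct early_opt *o = early_opts; o->name; o++)
                if (strcmp(o->name, param) == 0)
                        o->handler(val);
}

int main(void)
{
        char token[] = "noapic";
        do_early_param(token);
        printf("ioapic_disabled=%d\n", ioapic_disabled);
        return 0;
}
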


@ -9,6 +9,7 @@
#include <linux/mm.h>
#include <linux/kexec.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
@ -20,70 +21,13 @@
#include <asm/system.h>
#define PAGE_ALIGNED __attribute__ ((__aligned__(PAGE_SIZE)))
#define L0_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
#define L1_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
#define L2_ATTR (_PAGE_PRESENT)
#define LEVEL0_SIZE (1UL << 12UL)
#ifndef CONFIG_X86_PAE
#define LEVEL1_SIZE (1UL << 22UL)
static u32 pgtable_level1[1024] PAGE_ALIGNED;
static void identity_map_page(unsigned long address)
{
unsigned long level1_index, level2_index;
u32 *pgtable_level2;
/* Find the current page table */
pgtable_level2 = __va(read_cr3());
/* Find the indexes of the physical address to identity map */
level1_index = (address % LEVEL1_SIZE)/LEVEL0_SIZE;
level2_index = address / LEVEL1_SIZE;
/* Identity map the page table entry */
pgtable_level1[level1_index] = address | L0_ATTR;
pgtable_level2[level2_index] = __pa(pgtable_level1) | L1_ATTR;
/* Flush the tlb so the new mapping takes effect.
* Global tlb entries are not flushed but that is not an issue.
*/
load_cr3(pgtable_level2);
}
#else
#define LEVEL1_SIZE (1UL << 21UL)
#define LEVEL2_SIZE (1UL << 30UL)
static u64 pgtable_level1[512] PAGE_ALIGNED;
static u64 pgtable_level2[512] PAGE_ALIGNED;
static void identity_map_page(unsigned long address)
{
unsigned long level1_index, level2_index, level3_index;
u64 *pgtable_level3;
/* Find the current page table */
pgtable_level3 = __va(read_cr3());
/* Find the indexes of the physical address to identity map */
level1_index = (address % LEVEL1_SIZE)/LEVEL0_SIZE;
level2_index = (address % LEVEL2_SIZE)/LEVEL1_SIZE;
level3_index = address / LEVEL2_SIZE;
/* Identity map the page table entry */
pgtable_level1[level1_index] = address | L0_ATTR;
pgtable_level2[level2_index] = __pa(pgtable_level1) | L1_ATTR;
set_64bit(&pgtable_level3[level3_index],
__pa(pgtable_level2) | L2_ATTR);
/* Flush the tlb so the new mapping takes effect.
* Global tlb entries are not flushed but that is not an issue.
*/
load_cr3(pgtable_level3);
}
static u32 kexec_pgd[1024] PAGE_ALIGNED;
#ifdef CONFIG_X86_PAE
static u32 kexec_pmd0[1024] PAGE_ALIGNED;
static u32 kexec_pmd1[1024] PAGE_ALIGNED;
#endif
static u32 kexec_pte0[1024] PAGE_ALIGNED;
static u32 kexec_pte1[1024] PAGE_ALIGNED;
static void set_idt(void *newidt, __u16 limit)
{
@ -127,16 +71,6 @@ static void load_segments(void)
#undef __STR
}
typedef asmlinkage NORET_TYPE void (*relocate_new_kernel_t)(
unsigned long indirection_page,
unsigned long reboot_code_buffer,
unsigned long start_address,
unsigned int has_pae) ATTRIB_NORET;
extern const unsigned char relocate_new_kernel[];
extern void relocate_new_kernel_end(void);
extern const unsigned int relocate_new_kernel_size;
/*
* An architecture hook called to validate the
* proposed image and prepare the control pages
@ -169,25 +103,29 @@ void machine_kexec_cleanup(struct kimage *image)
*/
NORET_TYPE void machine_kexec(struct kimage *image)
{
unsigned long page_list;
unsigned long reboot_code_buffer;
relocate_new_kernel_t rnk;
unsigned long page_list[PAGES_NR];
void *control_page;
/* Interrupts aren't acceptable while we reboot */
local_irq_disable();
/* Compute some offsets */
reboot_code_buffer = page_to_pfn(image->control_code_page)
<< PAGE_SHIFT;
page_list = image->head;
control_page = page_address(image->control_code_page);
memcpy(control_page, relocate_kernel, PAGE_SIZE);
/* Set up an identity mapping for the reboot_code_buffer */
identity_map_page(reboot_code_buffer);
/* copy it out */
memcpy((void *)reboot_code_buffer, relocate_new_kernel,
relocate_new_kernel_size);
page_list[PA_CONTROL_PAGE] = __pa(control_page);
page_list[VA_CONTROL_PAGE] = (unsigned long)relocate_kernel;
page_list[PA_PGD] = __pa(kexec_pgd);
page_list[VA_PGD] = (unsigned long)kexec_pgd;
#ifdef CONFIG_X86_PAE
page_list[PA_PMD_0] = __pa(kexec_pmd0);
page_list[VA_PMD_0] = (unsigned long)kexec_pmd0;
page_list[PA_PMD_1] = __pa(kexec_pmd1);
page_list[VA_PMD_1] = (unsigned long)kexec_pmd1;
#endif
page_list[PA_PTE_0] = __pa(kexec_pte0);
page_list[VA_PTE_0] = (unsigned long)kexec_pte0;
page_list[PA_PTE_1] = __pa(kexec_pte1);
page_list[VA_PTE_1] = (unsigned long)kexec_pte1;
/* The segment registers are funny things, they have both a
* visible and an invisible part. Whenever the visible part is
@ -206,6 +144,28 @@ NORET_TYPE void machine_kexec(struct kimage *image)
set_idt(phys_to_virt(0),0);
/* now call it */
rnk = (relocate_new_kernel_t) reboot_code_buffer;
(*rnk)(page_list, reboot_code_buffer, image->start, cpu_has_pae);
relocate_kernel((unsigned long)image->head, (unsigned long)page_list,
image->start, cpu_has_pae);
}
/* crashkernel=size@addr specifies the location to reserve for
* a crash kernel. By reserving this memory we guarantee
* that linux never sets it up as a DMA target.
* Useful for holding code to do something appropriate
* after a kernel panic.
*/
static int __init parse_crashkernel(char *arg)
{
unsigned long size, base;
size = memparse(arg, &arg);
if (*arg == '@') {
base = memparse(arg+1, &arg);
/* FIXME: Do I want a sanity check
* to validate the memory range?
*/
crashk_res.start = base;
crashk_res.end = base + size - 1;
}
return 0;
}
early_param("crashkernel", parse_crashkernel);
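
memparse() used above accepts a number with an optional K/M/G suffix and advances the caller's cursor, which is what makes the size@addr grammar a two-call parse. A standalone sketch of that grammar with a strtoull-based stand-in for memparse; the helper name and suffix set are illustrative, not the kernel routine:

#include <stdio.h>
#include <stdlib.h>

/* memparse()-like helper: number plus optional K/M/G suffix. */
static unsigned long long parse_size(const char *s, const char **retp)
{
        char *end;
        unsigned long long v = strtoull(s, &end, 0);

        switch (*end) {
        case 'G': case 'g': v <<= 30; end++; break;
        case 'M': case 'm': v <<= 20; end++; break;
        case 'K': case 'k': v <<= 10; end++; break;
        }
        *retp = end;
        return v;
}

int main(void)
{
        const char *arg = "64M@16M";    /* as in crashkernel=64M@16M */
        const char *p;
        unsigned long long size = parse_size(arg, &p);

        if (*p == '@') {
                unsigned long long base = parse_size(p + 1, &p);
                printf("reserve %llu bytes at %#llx\n", size, base);
        }
        return 0;
}
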


@ -42,6 +42,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/mca.h>
#include <linux/kprobes.h>
#include <asm/system.h>
#include <asm/io.h>
#include <linux/proc_fs.h>
@ -414,7 +415,8 @@ subsys_initcall(mca_init);
/*--------------------------------------------------------------------*/
static void mca_handle_nmi_device(struct mca_device *mca_dev, int check_flag)
static __kprobes void
mca_handle_nmi_device(struct mca_device *mca_dev, int check_flag)
{
int slot = mca_dev->slot;
@ -444,7 +446,7 @@ static void mca_handle_nmi_device(struct mca_device *mca_dev, int check_flag)
/*--------------------------------------------------------------------*/
static int mca_handle_nmi_callback(struct device *dev, void *data)
static int __kprobes mca_handle_nmi_callback(struct device *dev, void *data)
{
struct mca_device *mca_dev = to_mca_device(dev);
unsigned char pos5;
@ -462,7 +464,7 @@ static int mca_handle_nmi_callback(struct device *dev, void *data)
return 0;
}
void mca_handle_nmi(void)
void __kprobes mca_handle_nmi(void)
{
/* First try - scan the various adapters and see if a specific
* adapter was responsible for the error.


@ -30,6 +30,7 @@
#include <asm/io_apic.h>
#include <mach_apic.h>
#include <mach_apicdef.h>
#include <mach_mpparse.h>
#include <bios_ebda.h>
@ -68,7 +69,7 @@ unsigned int def_to_bigsmp = 0;
/* Processor that is doing the boot up */
unsigned int boot_cpu_physical_apicid = -1U;
/* Internal processor count */
static unsigned int __devinitdata num_processors;
unsigned int __cpuinitdata num_processors;
/* Bitmask of physically existing CPUs */
physid_mask_t phys_cpu_present_map;
@ -228,12 +229,14 @@ static void __init MP_bus_info (struct mpc_config_bus *m)
mpc_oem_bus_info(m, str, translation_table[mpc_record]);
#if MAX_MP_BUSSES < 256
if (m->mpc_busid >= MAX_MP_BUSSES) {
printk(KERN_WARNING "MP table busid value (%d) for bustype %s "
" is too large, max. supported is %d\n",
m->mpc_busid, str, MAX_MP_BUSSES - 1);
return;
}
#endif
if (strncmp(str, BUSTYPE_ISA, sizeof(BUSTYPE_ISA)-1) == 0) {
mp_bus_id_to_type[m->mpc_busid] = MP_BUS_ISA;
@ -293,19 +296,6 @@ static void __init MP_lintsrc_info (struct mpc_config_lintsrc *m)
m->mpc_irqtype, m->mpc_irqflag & 3,
(m->mpc_irqflag >> 2) &3, m->mpc_srcbusid,
m->mpc_srcbusirq, m->mpc_destapic, m->mpc_destapiclint);
/*
* Well it seems all SMP boards in existence
* use ExtINT/LVT1 == LINT0 and
* NMI/LVT2 == LINT1 - the following check
* will show us if this assumption is false.
* Until then we do not have to add baggage.
*/
if ((m->mpc_irqtype == mp_ExtINT) &&
(m->mpc_destapiclint != 0))
BUG();
if ((m->mpc_irqtype == mp_NMI) &&
(m->mpc_destapiclint != 1))
BUG();
}
#ifdef CONFIG_X86_NUMAQ
@ -822,8 +812,7 @@ int es7000_plat;
#ifdef CONFIG_ACPI
void __init mp_register_lapic_address (
u64 address)
void __init mp_register_lapic_address(u64 address)
{
mp_lapic_addr = (unsigned long) address;
@ -835,13 +824,10 @@ void __init mp_register_lapic_address (
Dprintk("Boot CPU = %d\n", boot_cpu_physical_apicid);
}
void __devinit mp_register_lapic (
u8 id,
u8 enabled)
void __devinit mp_register_lapic (u8 id, u8 enabled)
{
struct mpc_config_processor processor;
int boot_cpu = 0;
int boot_cpu = 0;
if (MAX_APICS - id <= 0) {
printk(KERN_WARNING "Processor #%d invalid (max %d)\n",
@ -878,11 +864,9 @@ static struct mp_ioapic_routing {
u32 pin_programmed[4];
} mp_ioapic_routing[MAX_IO_APICS];
static int mp_find_ioapic (
int gsi)
static int mp_find_ioapic (int gsi)
{
int i = 0;
int i = 0;
/* Find the IOAPIC that manages this GSI. */
for (i = 0; i < nr_ioapics; i++) {
@ -895,15 +879,11 @@ static int mp_find_ioapic (
return -1;
}
void __init mp_register_ioapic (
u8 id,
u32 address,
u32 gsi_base)
void __init mp_register_ioapic(u8 id, u32 address, u32 gsi_base)
{
int idx = 0;
int tmpid;
int idx = 0;
int tmpid;
if (nr_ioapics >= MAX_IO_APICS) {
printk(KERN_ERR "ERROR: Max # of I/O APICs (%d) exceeded "
@ -949,16 +929,10 @@ void __init mp_register_ioapic (
mp_ioapics[idx].mpc_apicver, mp_ioapics[idx].mpc_apicaddr,
mp_ioapic_routing[idx].gsi_base,
mp_ioapic_routing[idx].gsi_end);
return;
}
void __init mp_override_legacy_irq (
u8 bus_irq,
u8 polarity,
u8 trigger,
u32 gsi)
void __init
mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger, u32 gsi)
{
struct mpc_config_intsrc intsrc;
int ioapic = -1;
@ -996,15 +970,13 @@ void __init mp_override_legacy_irq (
mp_irqs[mp_irq_entries] = intsrc;
if (++mp_irq_entries == MAX_IRQ_SOURCES)
panic("Max # of irq sources exceeded!\n");
return;
}
void __init mp_config_acpi_legacy_irqs (void)
{
struct mpc_config_intsrc intsrc;
int i = 0;
int ioapic = -1;
int i = 0;
int ioapic = -1;
/*
* Fabricate the legacy ISA bus (bus #31).
@ -1073,12 +1045,12 @@ void __init mp_config_acpi_legacy_irqs (void)
#define MAX_GSI_NUM 4096
int mp_register_gsi (u32 gsi, int triggering, int polarity)
int mp_register_gsi(u32 gsi, int triggering, int polarity)
{
int ioapic = -1;
int ioapic_pin = 0;
int idx, bit = 0;
static int pci_irq = 16;
int ioapic = -1;
int ioapic_pin = 0;
int idx, bit = 0;
static int pci_irq = 16;
/*
* Mapping between Global System Interrupts, which
* represent all possible interrupts, and IRQs

(File diff suppressed because it is too large.)


@ -37,6 +37,7 @@
#include <linux/kallsyms.h>
#include <linux/ptrace.h>
#include <linux/random.h>
#include <linux/personality.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
@ -320,15 +321,6 @@ void show_regs(struct pt_regs * regs)
* the "args".
*/
extern void kernel_thread_helper(void);
__asm__(".section .text\n"
".align 4\n"
"kernel_thread_helper:\n\t"
"movl %edx,%eax\n\t"
"pushl %edx\n\t"
"call *%ebx\n\t"
"pushl %eax\n\t"
"call do_exit\n"
".previous");
/*
* Create a kernel thread
@ -346,7 +338,7 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
regs.xes = __USER_DS;
regs.orig_eax = -1;
regs.eip = (unsigned long) kernel_thread_helper;
regs.xcs = __KERNEL_CS;
regs.xcs = __KERNEL_CS | get_kernel_rpl();
regs.eflags = X86_EFLAGS_IF | X86_EFLAGS_SF | X86_EFLAGS_PF | 0x2;
/* Ok, create the new process.. */
@ -905,7 +897,7 @@ asmlinkage int sys_get_thread_area(struct user_desc __user *u_info)
unsigned long arch_align_stack(unsigned long sp)
{
if (randomize_va_space)
if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
sp -= get_random_int() % 8192;
return sp & ~0xf;
}


@ -185,17 +185,17 @@ static unsigned long convert_eip_to_linear(struct task_struct *child, struct pt_
return addr;
}
static inline int is_at_popf(struct task_struct *child, struct pt_regs *regs)
static inline int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs)
{
int i, copied;
unsigned char opcode[16];
unsigned char opcode[15];
unsigned long addr = convert_eip_to_linear(child, regs);
copied = access_process_vm(child, addr, opcode, sizeof(opcode), 0);
for (i = 0; i < copied; i++) {
switch (opcode[i]) {
/* popf */
case 0x9d:
/* popf and iret */
case 0x9d: case 0xcf:
return 1;
/* opcode and address size prefixes */
case 0x66: case 0x67:
@ -247,7 +247,7 @@ static void set_singlestep(struct task_struct *child)
* don't mark it as being "us" that set it, so that we
* won't clear it by hand later.
*/
if (is_at_popf(child, regs))
if (is_setting_trap_flag(child, regs))
return;
child->ptrace |= PT_DTRACE;
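
The renamed helper also scans for iret (0xcf) alongside popf (0x9d), since both reload EFLAGS and would otherwise carry the single-step trap flag into the traced program's saved flags. A simplified copy of that scan; the prefix list here is a partial, assumed subset of what the kernel skips:

#include <stdio.h>

/* Return 1 if the instruction at the start of the buffer will pop a
 * new EFLAGS (popf or iret), skipping legacy prefixes on the way. */
static int sets_trap_flag(const unsigned char *opcode, int len)
{
        for (int i = 0; i < len; i++) {
                switch (opcode[i]) {
                case 0x9d: case 0xcf:           /* popf, iret */
                        return 1;
                case 0x66: case 0x67:           /* operand/address size */
                case 0x26: case 0x2e: case 0x36:
                case 0x3e: case 0x64: case 0x65: /* segment overrides */
                        continue;               /* prefix: keep scanning */
                default:
                        return 0;               /* some other instruction */
                }
        }
        return 0;
}

int main(void)
{
        unsigned char insn[] = { 0x66, 0x9d }; /* 16-bit popf */
        printf("%d\n", sets_trap_flag(insn, (int)sizeof(insn)));
        return 0;
}
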


@ -7,16 +7,138 @@
*/
#include <linux/linkage.h>
#include <asm/page.h>
#include <asm/kexec.h>
/*
* Must be relocatable PIC code callable as a C function
*/
#define PTR(x) (x << 2)
#define PAGE_ALIGNED (1 << PAGE_SHIFT)
#define PAGE_ATTR 0x63 /* _PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_DIRTY */
#define PAE_PGD_ATTR 0x01 /* _PAGE_PRESENT */
.text
.align PAGE_ALIGNED
.globl relocate_kernel
relocate_kernel:
movl 8(%esp), %ebp /* list of pages */
#ifdef CONFIG_X86_PAE
/* map the control page at its virtual address */
movl PTR(VA_PGD)(%ebp), %edi
movl PTR(VA_CONTROL_PAGE)(%ebp), %eax
andl $0xc0000000, %eax
shrl $27, %eax
addl %edi, %eax
movl PTR(PA_PMD_0)(%ebp), %edx
orl $PAE_PGD_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PMD_0)(%ebp), %edi
movl PTR(VA_CONTROL_PAGE)(%ebp), %eax
andl $0x3fe00000, %eax
shrl $18, %eax
addl %edi, %eax
movl PTR(PA_PTE_0)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PTE_0)(%ebp), %edi
movl PTR(VA_CONTROL_PAGE)(%ebp), %eax
andl $0x001ff000, %eax
shrl $9, %eax
addl %edi, %eax
movl PTR(PA_CONTROL_PAGE)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
/* identity map the control page at its physical address */
movl PTR(VA_PGD)(%ebp), %edi
movl PTR(PA_CONTROL_PAGE)(%ebp), %eax
andl $0xc0000000, %eax
shrl $27, %eax
addl %edi, %eax
movl PTR(PA_PMD_1)(%ebp), %edx
orl $PAE_PGD_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PMD_1)(%ebp), %edi
movl PTR(PA_CONTROL_PAGE)(%ebp), %eax
andl $0x3fe00000, %eax
shrl $18, %eax
addl %edi, %eax
movl PTR(PA_PTE_1)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PTE_1)(%ebp), %edi
movl PTR(PA_CONTROL_PAGE)(%ebp), %eax
andl $0x001ff000, %eax
shrl $9, %eax
addl %edi, %eax
movl PTR(PA_CONTROL_PAGE)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
#else
/* map the control page at its virtual address */
movl PTR(VA_PGD)(%ebp), %edi
movl PTR(VA_CONTROL_PAGE)(%ebp), %eax
andl $0xffc00000, %eax
shrl $20, %eax
addl %edi, %eax
movl PTR(PA_PTE_0)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PTE_0)(%ebp), %edi
movl PTR(VA_CONTROL_PAGE)(%ebp), %eax
andl $0x003ff000, %eax
shrl $10, %eax
addl %edi, %eax
movl PTR(PA_CONTROL_PAGE)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
/* identity map the control page at its physical address */
movl PTR(VA_PGD)(%ebp), %edi
movl PTR(PA_CONTROL_PAGE)(%ebp), %eax
andl $0xffc00000, %eax
shrl $20, %eax
addl %edi, %eax
movl PTR(PA_PTE_1)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
movl PTR(VA_PTE_1)(%ebp), %edi
movl PTR(PA_CONTROL_PAGE)(%ebp), %eax
andl $0x003ff000, %eax
shrl $10, %eax
addl %edi, %eax
movl PTR(PA_CONTROL_PAGE)(%ebp), %edx
orl $PAGE_ATTR, %edx
movl %edx, (%eax)
#endif
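
Each mask-and-shift pair above is an index computation pre-scaled to a byte offset: a 32-bit virtual address splits 10/10/12 on non-PAE i386, and page-table entries are 4 bytes, so the assembly shifts right by 20 and 10 instead of 22 and 12. The same arithmetic in C, for the non-PAE constants shown above:

#include <stdio.h>

/* "andl $0xffc00000 ; shrl $20" == (va >> 22) * 4: PGD byte offset.
 * "andl $0x003ff000 ; shrl $10" == ((va >> 12) & 0x3ff) * 4: PTE offset. */
#define PGD_BYTE_OFF(va)        (((va) & 0xffc00000u) >> 20)
#define PTE_BYTE_OFF(va)        (((va) & 0x003ff000u) >> 10)

int main(void)
{
        unsigned int va = 0xc0123000u;  /* hypothetical kernel address */

        printf("pgd entry at +%#x, pte entry at +%#x\n",
               PGD_BYTE_OFF(va), PTE_BYTE_OFF(va));
        return 0;
}
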
/*
* Must be relocatable PIC code callable as a C function, that once
* it starts cannot use the previous process's stack.
*/
.globl relocate_new_kernel
relocate_new_kernel:
/* read the arguments and say goodbye to the stack */
movl 4(%esp), %ebx /* page_list */
movl 8(%esp), %ebp /* reboot_code_buffer */
movl 8(%esp), %ebp /* list of pages */
movl 12(%esp), %edx /* start address */
movl 16(%esp), %ecx /* cpu_has_pae */
@ -24,11 +146,26 @@ relocate_new_kernel:
pushl $0
popfl
/* set a new stack at the bottom of our page... */
lea 4096(%ebp), %esp
/* get physical address of control page now */
/* this is impossible after page table switch */
movl PTR(PA_CONTROL_PAGE)(%ebp), %edi
/* store the parameters back on the stack */
pushl %edx /* store the start address */
/* switch to new set of page tables */
movl PTR(PA_PGD)(%ebp), %eax
movl %eax, %cr3
/* setup a new stack at the end of the physical control page */
lea 4096(%edi), %esp
/* jump to identity mapped page */
movl %edi, %eax
addl $(identity_mapped - relocate_kernel), %eax
pushl %eax
ret
identity_mapped:
/* store the start address on the stack */
pushl %edx
/* Set cr0 to a known state:
* 31 0 == Paging disabled
@ -113,8 +250,3 @@ relocate_new_kernel:
xorl %edi, %edi
xorl %ebp, %ebp
ret
relocate_new_kernel_end:
.globl relocate_new_kernel_size
relocate_new_kernel_size:
.long relocate_new_kernel_end - relocate_new_kernel


@ -1,134 +0,0 @@
/*
* i386 semaphore implementation.
*
* (C) Copyright 1999 Linus Torvalds
*
* Portions Copyright 1999 Red Hat, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
* rw semaphores implemented November 1999 by Benjamin LaHaise <bcrl@kvack.org>
*/
#include <asm/semaphore.h>
/*
* The semaphore operations have a special calling sequence that
* allows us to do a simpler in-line version of them. These routines
* need to convert that sequence back into the C sequence when
* there is contention on the semaphore.
*
* %eax contains the semaphore pointer on entry. Save the C-clobbered
* registers (%eax, %edx and %ecx) except %eax which is either a return
* value or just clobbered.
*/
asm(
".section .sched.text\n"
".align 4\n"
".globl __down_failed\n"
"__down_failed:\n\t"
#if defined(CONFIG_FRAME_POINTER)
"pushl %ebp\n\t"
"movl %esp,%ebp\n\t"
#endif
"pushl %edx\n\t"
"pushl %ecx\n\t"
"call __down\n\t"
"popl %ecx\n\t"
"popl %edx\n\t"
#if defined(CONFIG_FRAME_POINTER)
"movl %ebp,%esp\n\t"
"popl %ebp\n\t"
#endif
"ret"
);
asm(
".section .sched.text\n"
".align 4\n"
".globl __down_failed_interruptible\n"
"__down_failed_interruptible:\n\t"
#if defined(CONFIG_FRAME_POINTER)
"pushl %ebp\n\t"
"movl %esp,%ebp\n\t"
#endif
"pushl %edx\n\t"
"pushl %ecx\n\t"
"call __down_interruptible\n\t"
"popl %ecx\n\t"
"popl %edx\n\t"
#if defined(CONFIG_FRAME_POINTER)
"movl %ebp,%esp\n\t"
"popl %ebp\n\t"
#endif
"ret"
);
asm(
".section .sched.text\n"
".align 4\n"
".globl __down_failed_trylock\n"
"__down_failed_trylock:\n\t"
#if defined(CONFIG_FRAME_POINTER)
"pushl %ebp\n\t"
"movl %esp,%ebp\n\t"
#endif
"pushl %edx\n\t"
"pushl %ecx\n\t"
"call __down_trylock\n\t"
"popl %ecx\n\t"
"popl %edx\n\t"
#if defined(CONFIG_FRAME_POINTER)
"movl %ebp,%esp\n\t"
"popl %ebp\n\t"
#endif
"ret"
);
asm(
".section .sched.text\n"
".align 4\n"
".globl __up_wakeup\n"
"__up_wakeup:\n\t"
"pushl %edx\n\t"
"pushl %ecx\n\t"
"call __up\n\t"
"popl %ecx\n\t"
"popl %edx\n\t"
"ret"
);
/*
* rw spinlock fallbacks
*/
#if defined(CONFIG_SMP)
asm(
".section .sched.text\n"
".align 4\n"
".globl __write_lock_failed\n"
"__write_lock_failed:\n\t"
LOCK_PREFIX "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
"1: rep; nop\n\t"
"cmpl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jne 1b\n\t"
LOCK_PREFIX "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
"jnz __write_lock_failed\n\t"
"ret"
);
asm(
".section .sched.text\n"
".align 4\n"
".globl __read_lock_failed\n"
"__read_lock_failed:\n\t"
LOCK_PREFIX "incl (%eax)\n"
"1: rep; nop\n\t"
"cmpl $1,(%eax)\n\t"
"js 1b\n\t"
LOCK_PREFIX "decl (%eax)\n\t"
"js __read_lock_failed\n\t"
"ret"
);
#endif


@ -90,18 +90,6 @@ EXPORT_SYMBOL(boot_cpu_data);
unsigned long mmu_cr4_features;
#ifdef CONFIG_ACPI
int acpi_disabled = 0;
#else
int acpi_disabled = 1;
#endif
EXPORT_SYMBOL(acpi_disabled);
#ifdef CONFIG_ACPI
int __initdata acpi_force = 0;
extern acpi_interrupt_flags acpi_sci_flags;
#endif
/* for MCA, but anyone else can use it if they want */
unsigned int machine_id;
#ifdef CONFIG_MCA
@ -149,7 +137,6 @@ EXPORT_SYMBOL(ist_info);
struct e820map e820;
extern void early_cpu_init(void);
extern void generic_apic_probe(char *);
extern int root_mountflags;
unsigned long saved_videomode;
@ -701,238 +688,132 @@ static inline void copy_edd(void)
}
#endif
static void __init parse_cmdline_early (char ** cmdline_p)
static int __initdata user_defined_memmap = 0;
/*
* "mem=nopentium" disables the 4MB page tables.
* "mem=XXX[kKmM]" defines a memory region from HIGH_MEM
* to <mem>, overriding the bios size.
* "memmap=XXX[KkmM]@XXX[KkmM]" defines a memory region from
* <start> to <start>+<mem>, overriding the bios size.
*
* HPA tells me bootloaders need to parse mem=, so no new
* option should be mem= [also see Documentation/i386/boot.txt]
*/
static int __init parse_mem(char *arg)
{
char c = ' ', *to = command_line, *from = saved_command_line;
int len = 0;
int userdef = 0;
if (!arg)
return -EINVAL;
/* Save unparsed command line copy for /proc/cmdline */
saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
for (;;) {
if (c != ' ')
goto next_char;
/*
* "mem=nopentium" disables the 4MB page tables.
* "mem=XXX[kKmM]" defines a memory region from HIGH_MEM
* to <mem>, overriding the bios size.
* "memmap=XXX[KkmM]@XXX[KkmM]" defines a memory region from
* <start> to <start>+<mem>, overriding the bios size.
*
* HPA tells me bootloaders need to parse mem=, so no new
* option should be mem= [also see Documentation/i386/boot.txt]
if (strcmp(arg, "nopentium") == 0) {
clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
disable_pse = 1;
} else {
/* If the user specifies memory size, we
* limit the BIOS-provided memory map to
* that size. exactmap can be used to specify
* the exact map. mem=number can be used to
* trim the existing memory map.
*/
if (!memcmp(from, "mem=", 4)) {
if (to != command_line)
to--;
if (!memcmp(from+4, "nopentium", 9)) {
from += 9+4;
clear_bit(X86_FEATURE_PSE, boot_cpu_data.x86_capability);
disable_pse = 1;
} else {
/* If the user specifies memory size, we
* limit the BIOS-provided memory map to
* that size. exactmap can be used to specify
* the exact map. mem=number can be used to
* trim the existing memory map.
*/
unsigned long long mem_size;
unsigned long long mem_size;
mem_size = memparse(from+4, &from);
limit_regions(mem_size);
userdef=1;
}
}
else if (!memcmp(from, "memmap=", 7)) {
if (to != command_line)
to--;
if (!memcmp(from+7, "exactmap", 8)) {
#ifdef CONFIG_CRASH_DUMP
/* If we are doing a crash dump, we
* still need to know the real mem
* size before original memory map is
* reset.
*/
find_max_pfn();
saved_max_pfn = max_pfn;
#endif
from += 8+7;
e820.nr_map = 0;
userdef = 1;
} else {
/* If the user specifies memory size, we
* limit the BIOS-provided memory map to
* that size. exactmap can be used to specify
* the exact map. mem=number can be used to
* trim the existing memory map.
*/
unsigned long long start_at, mem_size;
mem_size = memparse(from+7, &from);
if (*from == '@') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_RAM);
} else if (*from == '#') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_ACPI);
} else if (*from == '$') {
start_at = memparse(from+1, &from);
add_memory_region(start_at, mem_size, E820_RESERVED);
} else {
limit_regions(mem_size);
userdef=1;
}
}
}
else if (!memcmp(from, "noexec=", 7))
noexec_setup(from + 7);
#ifdef CONFIG_X86_SMP
/*
* If the BIOS enumerates physical processors before logical,
* maxcpus=N at enumeration-time can be used to disable HT.
*/
else if (!memcmp(from, "maxcpus=", 8)) {
extern unsigned int maxcpus;
maxcpus = simple_strtoul(from + 8, NULL, 0);
}
#endif
#ifdef CONFIG_ACPI
/* "acpi=off" disables both ACPI table parsing and interpreter */
else if (!memcmp(from, "acpi=off", 8)) {
disable_acpi();
}
/* acpi=force to over-ride black-list */
else if (!memcmp(from, "acpi=force", 10)) {
acpi_force = 1;
acpi_ht = 1;
acpi_disabled = 0;
}
/* acpi=strict disables out-of-spec workarounds */
else if (!memcmp(from, "acpi=strict", 11)) {
acpi_strict = 1;
}
/* Limit ACPI just to boot-time to enable HT */
else if (!memcmp(from, "acpi=ht", 7)) {
if (!acpi_force)
disable_acpi();
acpi_ht = 1;
}
/* "pci=noacpi" disable ACPI IRQ routing and PCI scan */
else if (!memcmp(from, "pci=noacpi", 10)) {
acpi_disable_pci();
}
/* "acpi=noirq" disables ACPI interrupt routing */
else if (!memcmp(from, "acpi=noirq", 10)) {
acpi_noirq_set();
}
else if (!memcmp(from, "acpi_sci=edge", 13))
acpi_sci_flags.trigger = 1;
else if (!memcmp(from, "acpi_sci=level", 14))
acpi_sci_flags.trigger = 3;
else if (!memcmp(from, "acpi_sci=high", 13))
acpi_sci_flags.polarity = 1;
else if (!memcmp(from, "acpi_sci=low", 12))
acpi_sci_flags.polarity = 3;
#ifdef CONFIG_X86_IO_APIC
else if (!memcmp(from, "acpi_skip_timer_override", 24))
acpi_skip_timer_override = 1;
if (!memcmp(from, "disable_timer_pin_1", 19))
disable_timer_pin_1 = 1;
if (!memcmp(from, "enable_timer_pin_1", 18))
disable_timer_pin_1 = -1;
/* disable IO-APIC */
else if (!memcmp(from, "noapic", 6))
disable_ioapic_setup();
#endif /* CONFIG_X86_IO_APIC */
#endif /* CONFIG_ACPI */
#ifdef CONFIG_X86_LOCAL_APIC
/* enable local APIC */
else if (!memcmp(from, "lapic", 5))
lapic_enable();
/* disable local APIC */
else if (!memcmp(from, "nolapic", 6))
lapic_disable();
#endif /* CONFIG_X86_LOCAL_APIC */
#ifdef CONFIG_KEXEC
/* crashkernel=size@addr specifies the location to reserve for
* a crash kernel. By reserving this memory we guarantee
* that linux never sets it up as a DMA target.
* Useful for holding code to do something appropriate
* after a kernel panic.
*/
else if (!memcmp(from, "crashkernel=", 12)) {
unsigned long size, base;
size = memparse(from+12, &from);
if (*from == '@') {
base = memparse(from+1, &from);
/* FIXME: Do I want a sanity check
* to validate the memory range?
*/
crashk_res.start = base;
crashk_res.end = base + size - 1;
}
}
#endif
#ifdef CONFIG_PROC_VMCORE
/* elfcorehdr= specifies the location of elf core header
* stored by the crashed kernel.
*/
else if (!memcmp(from, "elfcorehdr=", 11))
elfcorehdr_addr = memparse(from+11, &from);
#endif
/*
* highmem=size forces highmem to be exactly 'size' bytes.
* This works even on boxes that have no highmem otherwise.
* This also works to reduce highmem size on bigger boxes.
*/
else if (!memcmp(from, "highmem=", 8))
highmem_pages = memparse(from+8, &from) >> PAGE_SHIFT;
/*
* vmalloc=size forces the vmalloc area to be exactly 'size'
* bytes. This can be used to increase (or decrease) the
* vmalloc area - the default is 128m.
*/
else if (!memcmp(from, "vmalloc=", 8))
__VMALLOC_RESERVE = memparse(from+8, &from);
next_char:
c = *(from++);
if (!c)
break;
if (COMMAND_LINE_SIZE <= ++len)
break;
*(to++) = c;
}
*to = '\0';
*cmdline_p = command_line;
if (userdef) {
printk(KERN_INFO "user-defined physical RAM map:\n");
print_memory_map("user");
mem_size = memparse(arg, &arg);
limit_regions(mem_size);
user_defined_memmap = 1;
}
return 0;
}
early_param("mem", parse_mem);
static int __init parse_memmap(char *arg)
{
if (!arg)
return -EINVAL;
if (strcmp(arg, "exactmap") == 0) {
#ifdef CONFIG_CRASH_DUMP
/* If we are doing a crash dump, we
* still need to know the real mem
* size before original memory map is
* reset.
*/
find_max_pfn();
saved_max_pfn = max_pfn;
#endif
e820.nr_map = 0;
user_defined_memmap = 1;
} else {
/* If the user specifies memory size, we
* limit the BIOS-provided memory map to
* that size. exactmap can be used to specify
* the exact map. mem=number can be used to
* trim the existing memory map.
*/
unsigned long long start_at, mem_size;
mem_size = memparse(arg, &arg);
if (*arg == '@') {
start_at = memparse(arg+1, &arg);
add_memory_region(start_at, mem_size, E820_RAM);
} else if (*arg == '#') {
start_at = memparse(arg+1, &arg);
add_memory_region(start_at, mem_size, E820_ACPI);
} else if (*arg == '$') {
start_at = memparse(arg+1, &arg);
add_memory_region(start_at, mem_size, E820_RESERVED);
} else {
limit_regions(mem_size);
user_defined_memmap = 1;
}
}
return 0;
}
early_param("memmap", parse_memmap);
#ifdef CONFIG_PROC_VMCORE
/* elfcorehdr= specifies the location of elf core header
* stored by the crashed kernel.
*/
static int __init parse_elfcorehdr(char *arg)
{
if (!arg)
return -EINVAL;
elfcorehdr_addr = memparse(arg, &arg);
return 0;
}
early_param("elfcorehdr", parse_elfcorehdr);
#endif /* CONFIG_PROC_VMCORE */
/*
* highmem=size forces highmem to be exactly 'size' bytes.
* This works even on boxes that have no highmem otherwise.
* This also works to reduce highmem size on bigger boxes.
*/
static int __init parse_highmem(char *arg)
{
if (!arg)
return -EINVAL;
highmem_pages = memparse(arg, &arg) >> PAGE_SHIFT;
return 0;
}
early_param("highmem", parse_highmem);
/*
* vmalloc=size forces the vmalloc area to be exactly 'size'
* bytes. This can be used to increase (or decrease) the
* vmalloc area - the default is 128m.
*/
static int __init parse_vmalloc(char *arg)
{
if (!arg)
return -EINVAL;
__VMALLOC_RESERVE = memparse(arg, &arg);
return 0;
}
early_param("vmalloc", parse_vmalloc);
/*
* reservetop=size reserves a hole at the top of the kernel address space which
@ -1189,6 +1070,14 @@ static unsigned long __init setup_memory(void)
}
printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
pages_to_mb(highend_pfn - highstart_pfn));
num_physpages = highend_pfn;
high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;
#else
num_physpages = max_low_pfn;
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1;
#endif
#ifdef CONFIG_FLATMEM
max_mapnr = num_physpages;
#endif
printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
pages_to_mb(max_low_pfn));
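
num_physpages and high_memory are now derived here for both the highmem and non-highmem cases. The conversions are plain shifts: a pfn times PAGE_SIZE plus the direct-map base gives a virtual address, and pages become megabytes by shifting the other way. A small sketch of that arithmetic, with PAGE_OFFSET as the conventional i386 3G/1G split and the pfn value hypothetical:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_OFFSET     0xc0000000UL    /* direct-map base, __va(0) */

static unsigned long pfn_to_va(unsigned long pfn)
{
        return PAGE_OFFSET + (pfn << PAGE_SHIFT);  /* __va(pfn * PAGE_SIZE) */
}

static unsigned long pages_to_mb(unsigned long pages)
{
        return pages >> (20 - PAGE_SHIFT);
}

int main(void)
{
        unsigned long max_low_pfn = 0x38000;    /* 896 MB of lowmem, say */

        /* high_memory = __va(max_low_pfn * PAGE_SIZE - 1) + 1 */
        printf("%luMB LOWMEM, high_memory at %#lx\n",
               pages_to_mb(max_low_pfn), pfn_to_va(max_low_pfn));
        return 0;
}
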
@ -1518,17 +1407,15 @@ void __init setup_arch(char **cmdline_p)
data_resource.start = virt_to_phys(_etext);
data_resource.end = virt_to_phys(_edata)-1;
parse_cmdline_early(cmdline_p);
parse_early_param();
#ifdef CONFIG_EARLY_PRINTK
{
char *s = strstr(*cmdline_p, "earlyprintk=");
if (s) {
setup_early_printk(strchr(s, '=') + 1);
printk("early console enabled\n");
}
if (user_defined_memmap) {
printk(KERN_INFO "user-defined physical RAM map:\n");
print_memory_map("user");
}
#endif
strlcpy(command_line, saved_command_line, COMMAND_LINE_SIZE);
*cmdline_p = command_line;
max_low_pfn = setup_memory();
@ -1557,7 +1444,7 @@ void __init setup_arch(char **cmdline_p)
dmi_scan_machine();
#ifdef CONFIG_X86_GENERICARCH
generic_apic_probe(*cmdline_p);
generic_apic_probe();
#endif
if (efi_enabled)
efi_map_memmap();
@ -1569,9 +1456,11 @@ void __init setup_arch(char **cmdline_p)
acpi_boot_table_init();
#endif
#ifdef CONFIG_PCI
#ifdef CONFIG_X86_IO_APIC
check_acpi_pci(); /* Checks more than just ACPI actually */
#endif
#endif
#ifdef CONFIG_ACPI
acpi_boot_init();


@ -177,6 +177,9 @@ static void __devinit smp_store_cpu_info(int id)
*/
if ((c->x86_vendor == X86_VENDOR_AMD) && (c->x86 == 6)) {
if (num_possible_cpus() == 1)
goto valid_k7;
/* Athlon 660/661 is valid. */
if ((c->x86_model==6) && ((c->x86_mask==0) || (c->x86_mask==1)))
goto valid_k7;
@ -1376,7 +1379,8 @@ int __cpu_disable(void)
*/
if (cpu == 0)
return -EBUSY;
if (nmi_watchdog == NMI_LOCAL_APIC)
stop_apic_nmi_watchdog(NULL);
clear_local_APIC();
/* Allow any queued timer interrupts to get serviced */
local_irq_enable();
@ -1490,3 +1494,16 @@ void __init smp_intr_init(void)
/* IPI for generic function call */
set_intr_gate(CALL_FUNCTION_VECTOR, call_function_interrupt);
}
/*
* If the BIOS enumerates physical processors before logical,
* maxcpus=N at enumeration-time can be used to disable HT.
*/
static int __init parse_maxcpus(char *arg)
{
extern unsigned int maxcpus;
maxcpus = simple_strtoul(arg, NULL, 0);
return 0;
}
early_param("maxcpus", parse_maxcpus);


@ -1,98 +0,0 @@
/*
* arch/i386/kernel/stacktrace.c
*
* Stack trace management functions
*
* Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
*/
#include <linux/sched.h>
#include <linux/stacktrace.h>
static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
{
return p > (void *)tinfo &&
p < (void *)tinfo + THREAD_SIZE - 3;
}
/*
* Save stack-backtrace addresses into a stack_trace buffer:
*/
static inline unsigned long
save_context_stack(struct stack_trace *trace, unsigned int skip,
struct thread_info *tinfo, unsigned long *stack,
unsigned long ebp)
{
unsigned long addr;
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
if (!skip)
trace->entries[trace->nr_entries++] = addr;
else
skip--;
if (trace->nr_entries >= trace->max_entries)
break;
/*
* break out of recursive entries (such as
* end_of_stack_stop_unwind_function):
*/
if (ebp == *(unsigned long *)ebp)
break;
ebp = *(unsigned long *)ebp;
}
#else
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr)) {
if (!skip)
trace->entries[trace->nr_entries++] = addr;
else
skip--;
if (trace->nr_entries >= trace->max_entries)
break;
}
}
#endif
return ebp;
}
/*
* Save stack-backtrace addresses into a stack_trace buffer.
* If all_contexts is set, all contexts (hardirq, softirq and process)
* are saved. If not set then only the current context is saved.
*/
void save_stack_trace(struct stack_trace *trace,
struct task_struct *task, int all_contexts,
unsigned int skip)
{
unsigned long ebp;
unsigned long *stack = &ebp;
WARN_ON(trace->nr_entries || !trace->max_entries);
if (!task || task == current) {
/* Grab ebp right from our regs: */
asm ("movl %%ebp, %0" : "=r" (ebp));
} else {
/* ebp is the last reg pushed by switch_to(): */
ebp = *(unsigned long *) task->thread.esp;
}
while (1) {
struct thread_info *context = (struct thread_info *)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
ebp = save_context_stack(trace, skip, context, stack, ebp);
stack = (unsigned long *)context->previous_esp;
if (!all_contexts || !stack ||
trace->nr_entries >= trace->max_entries)
break;
trace->entries[trace->nr_entries++] = ULONG_MAX;
if (trace->nr_entries >= trace->max_entries)
break;
}
}


@ -317,3 +317,4 @@ ENTRY(sys_call_table)
.long sys_tee /* 315 */
.long sys_vmsplice
.long sys_move_pages
.long sys_getcpu


@ -130,18 +130,33 @@ static int set_rtc_mmss(unsigned long nowtime)
int timer_ack;
#if defined(CONFIG_SMP) && defined(CONFIG_FRAME_POINTER)
unsigned long profile_pc(struct pt_regs *regs)
{
unsigned long pc = instruction_pointer(regs);
if (!user_mode_vm(regs) && in_lock_functions(pc))
#ifdef CONFIG_SMP
if (!user_mode_vm(regs) && in_lock_functions(pc)) {
#ifdef CONFIG_FRAME_POINTER
return *(unsigned long *)(regs->ebp + 4);
#else
unsigned long *sp;
if ((regs->xcs & 3) == 0)
sp = (unsigned long *)&regs->esp;
else
sp = (unsigned long *)regs->esp;
/* Return address is either directly at stack pointer
or above a saved eflags. Eflags has bits 22-31 zero,
kernel addresses don't. */
if (sp[0] >> 22)
return sp[0];
if (sp[1] >> 22)
return sp[1];
#endif
}
#endif
return pc;
}
EXPORT_SYMBOL(profile_pc);
#endif
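
The frame-pointer-less fallback above leans on an architectural fact: an EFLAGS image always has bits 22-31 clear, while kernel text lives at or above PAGE_OFFSET, so any word with nonzero top bits must be the return address rather than saved flags. The test in isolation, with hypothetical values:

#include <stdio.h>

/* EFLAGS bits 22-31 are architecturally zero; a kernel text address
 * (>= 0xc0000000 with the usual i386 split) never is. */
static int looks_like_address(unsigned long word)
{
        return (word >> 22) != 0;
}

int main(void)
{
        unsigned long eflags  = 0x00000246;     /* a typical saved EFLAGS */
        unsigned long retaddr = 0xc01234abUL;   /* hypothetical text address */

        printf("eflags: %d, return address: %d\n",
               looks_like_address(eflags), looks_like_address(retaddr));
        return 0;
}
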
/*
* This is the same as the above, except we _also_ save the current


@ -28,6 +28,7 @@
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/nodemask.h>
#include <linux/mmzone.h>
#include <asm/cpu.h>
static struct i386_cpu cpu_devices[NR_CPUS];
@ -55,34 +56,18 @@ EXPORT_SYMBOL(arch_register_cpu);
EXPORT_SYMBOL(arch_unregister_cpu);
#endif /*CONFIG_HOTPLUG_CPU*/
static int __init topology_init(void)
{
int i;
#ifdef CONFIG_NUMA
#include <linux/mmzone.h>
static int __init topology_init(void)
{
int i;
for_each_online_node(i)
register_one_node(i);
for_each_present_cpu(i)
arch_register_cpu(i);
return 0;
}
#else /* !CONFIG_NUMA */
static int __init topology_init(void)
{
int i;
for_each_present_cpu(i)
arch_register_cpu(i);
return 0;
}
#endif /* CONFIG_NUMA */
for_each_present_cpu(i)
arch_register_cpu(i);
return 0;
}
subsys_initcall(topology_init);


@ -51,6 +51,7 @@
#include <asm/smp.h>
#include <asm/arch_hooks.h>
#include <asm/kdebug.h>
#include <asm/stacktrace.h>
#include <linux/module.h>
@ -118,26 +119,16 @@ static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
p < (void *)tinfo + THREAD_SIZE - 3;
}
/*
* Print one address/symbol entry per line.
*/
static inline void print_addr_and_symbol(unsigned long addr, char *log_lvl)
{
printk(" [<%08lx>] ", addr);
print_symbol("%s\n", addr);
}
static inline unsigned long print_context_stack(struct thread_info *tinfo,
unsigned long *stack, unsigned long ebp,
char *log_lvl)
struct stacktrace_ops *ops, void *data)
{
unsigned long addr;
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
print_addr_and_symbol(addr, log_lvl);
ops->address(data, addr);
/*
* break out of recursive entries (such as
* end_of_stack_stop_unwind_function):
@ -150,30 +141,37 @@ static inline unsigned long print_context_stack(struct thread_info *tinfo,
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack++;
if (__kernel_text_address(addr))
print_addr_and_symbol(addr, log_lvl);
ops->address(data, addr);
}
#endif
return ebp;
}
struct ops_and_data {
struct stacktrace_ops *ops;
void *data;
};
static asmlinkage int
show_trace_unwind(struct unwind_frame_info *info, void *log_lvl)
dump_trace_unwind(struct unwind_frame_info *info, void *data)
{
struct ops_and_data *oad = (struct ops_and_data *)data;
int n = 0;
while (unwind(info) == 0 && UNW_PC(info)) {
n++;
print_addr_and_symbol(UNW_PC(info), log_lvl);
oad->ops->address(oad->data, UNW_PC(info));
if (arch_unw_user_mode(info))
break;
}
return n;
}
static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
unsigned long *stack, char *log_lvl)
void dump_trace(struct task_struct *task, struct pt_regs *regs,
unsigned long *stack,
struct stacktrace_ops *ops, void *data)
{
unsigned long ebp;
unsigned long ebp = 0;
if (!task)
task = current;
@ -181,54 +179,116 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
if (call_trace >= 0) {
int unw_ret = 0;
struct unwind_frame_info info;
struct ops_and_data oad = { .ops = ops, .data = data };
if (regs) {
if (unwind_init_frame_info(&info, task, regs) == 0)
unw_ret = show_trace_unwind(&info, log_lvl);
unw_ret = dump_trace_unwind(&info, &oad);
} else if (task == current)
unw_ret = unwind_init_running(&info, show_trace_unwind, log_lvl);
unw_ret = unwind_init_running(&info, dump_trace_unwind, &oad);
else {
if (unwind_init_blocked(&info, task) == 0)
unw_ret = show_trace_unwind(&info, log_lvl);
unw_ret = dump_trace_unwind(&info, &oad);
}
if (unw_ret > 0) {
if (call_trace == 1 && !arch_unw_user_mode(&info)) {
print_symbol("DWARF2 unwinder stuck at %s\n",
ops->warning_symbol(data, "DWARF2 unwinder stuck at %s\n",
UNW_PC(&info));
if (UNW_SP(&info) >= PAGE_OFFSET) {
printk("Leftover inexact backtrace:\n");
ops->warning(data, "Leftover inexact backtrace:\n");
stack = (void *)UNW_SP(&info);
if (!stack)
return;
ebp = UNW_FP(&info);
} else
printk("Full inexact backtrace again:\n");
ops->warning(data, "Full inexact backtrace again:\n");
} else if (call_trace >= 1)
return;
else
printk("Full inexact backtrace again:\n");
ops->warning(data, "Full inexact backtrace again:\n");
} else
printk("Inexact backtrace:\n");
ops->warning(data, "Inexact backtrace:\n");
}
if (!stack) {
unsigned long dummy;
stack = &dummy;
if (task && task != current)
stack = (unsigned long *)task->thread.esp;
}
if (task == current) {
/* Grab ebp right from our regs */
asm ("movl %%ebp, %0" : "=r" (ebp) : );
} else {
/* ebp is the last reg pushed by switch_to */
ebp = *(unsigned long *) task->thread.esp;
#ifdef CONFIG_FRAME_POINTER
if (!ebp) {
if (task == current) {
/* Grab ebp right from our regs */
asm ("movl %%ebp, %0" : "=r" (ebp) : );
} else {
/* ebp is the last reg pushed by switch_to */
ebp = *(unsigned long *) task->thread.esp;
}
}
#endif
while (1) {
struct thread_info *context;
context = (struct thread_info *)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
ebp = print_context_stack(context, stack, ebp, log_lvl);
ebp = print_context_stack(context, stack, ebp, ops, data);
/* Should be after the line below, but somewhere
in early boot context comes out corrupted and we
can't reference it -AK */
if (ops->stack(data, "IRQ") < 0)
break;
stack = (unsigned long*)context->previous_esp;
if (!stack)
break;
printk("%s =======================\n", log_lvl);
}
}
EXPORT_SYMBOL(dump_trace);
void show_trace(struct task_struct *task, struct pt_regs *regs, unsigned long * stack)
static void
print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
{
printk(data);
print_symbol(msg, symbol);
printk("\n");
}
static void print_trace_warning(void *data, char *msg)
{
printk("%s%s\n", (char *)data, msg);
}
static int print_trace_stack(void *data, char *name)
{
return 0;
}
/*
* Print one address/symbol entry per line.
*/
static void print_trace_address(void *data, unsigned long addr)
{
printk("%s [<%08lx>] ", (char *)data, addr);
print_symbol("%s\n", addr);
}
static struct stacktrace_ops print_trace_ops = {
.warning = print_trace_warning,
.warning_symbol = print_trace_warning_symbol,
.stack = print_trace_stack,
.address = print_trace_address,
};
static void
show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
unsigned long * stack, char *log_lvl)
{
dump_trace(task, regs, stack, &print_trace_ops, log_lvl);
printk("%s =======================\n", log_lvl);
}
void show_trace(struct task_struct *task, struct pt_regs *regs,
unsigned long * stack)
{
show_trace_log_lvl(task, regs, stack, "");
}
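
The refactor above turns the backtrace printer inside out: dump_trace() walks the stack once and hands every hit to a stacktrace_ops table, so the oops printer and the stacktrace saver become two small sets of callbacks over one shared walker. A toy sketch of that shape, with canned frame values in place of a real stack walk:

#include <stdio.h>

/* Mirrors the shape of struct stacktrace_ops above. */
struct trace_ops {
        void (*address)(void *data, unsigned long addr);
        int  (*stack)(void *data, const char *name);    /* < 0 stops the walk */
};

static void walk(const unsigned long *frames, int n,
                 const struct trace_ops *ops, void *data)
{
        for (int i = 0; i < n; i++)
                ops->address(data, frames[i]);
        ops->stack(data, "IRQ");        /* context boundary marker */
}

/* One consumer: print with a log-level prefix, as the oops path does. */
static void print_addr(void *data, unsigned long addr)
{
        printf("%s [<%08lx>]\n", (const char *)data, addr);
}

static int print_stack(void *data, const char *name)
{
        (void)data; (void)name;
        return 0;
}

int main(void)
{
        const unsigned long frames[] = { 0xc0101234UL, 0xc0105678UL };
        const struct trace_ops ops = {
                .address = print_addr,
                .stack   = print_stack,
        };

        walk(frames, 2, &ops, "<4>");
        return 0;
}
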
@ -291,8 +351,9 @@ void show_registers(struct pt_regs *regs)
ss = regs->xss & 0xffff;
}
print_modules();
printk(KERN_EMERG "CPU: %d\nEIP: %04x:[<%08lx>] %s VLI\n"
"EFLAGS: %08lx (%s %.*s) \n",
printk(KERN_EMERG "CPU: %d\n"
KERN_EMERG "EIP: %04x:[<%08lx>] %s VLI\n"
KERN_EMERG "EFLAGS: %08lx (%s %.*s)\n",
smp_processor_id(), 0xffff & regs->xcs, regs->eip,
print_tainted(), regs->eflags, system_utsname.release,
(int)strcspn(system_utsname.version, " "),
@ -634,18 +695,24 @@ gp_in_kernel:
}
}
static void mem_parity_error(unsigned char reason, struct pt_regs * regs)
static __kprobes void
mem_parity_error(unsigned char reason, struct pt_regs * regs)
{
printk(KERN_EMERG "Uhhuh. NMI received. Dazed and confused, but trying "
"to continue\n");
printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on "
"CPU %d.\n", reason, smp_processor_id());
printk(KERN_EMERG "You probably have a hardware problem with your RAM "
"chips\n");
if (panic_on_unrecovered_nmi)
panic("NMI: Not continuing");
printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
/* Clear and disable the memory parity error line. */
clear_mem_error(reason);
}
static void io_check_error(unsigned char reason, struct pt_regs * regs)
static __kprobes void
io_check_error(unsigned char reason, struct pt_regs * regs)
{
unsigned long i;
@ -661,7 +728,8 @@ static void io_check_error(unsigned char reason, struct pt_regs * regs)
outb(reason, 0x61);
}
static void unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
static __kprobes void
unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
{
#ifdef CONFIG_MCA
/* Might actually be able to figure out what the guilty party
@ -671,15 +739,18 @@ static void unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
return;
}
#endif
printk("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n",
reason, smp_processor_id());
printk("Dazed and confused, but trying to continue\n");
printk("Do you have a strange power saving mode enabled?\n");
printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on "
"CPU %d.\n", reason, smp_processor_id());
printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n");
if (panic_on_unrecovered_nmi)
panic("NMI: Not continuing");
printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
}
static DEFINE_SPINLOCK(nmi_print_lock);
void die_nmi (struct pt_regs *regs, const char *msg)
void __kprobes die_nmi(struct pt_regs *regs, const char *msg)
{
if (notify_die(DIE_NMIWATCHDOG, msg, regs, 0, 2, SIGINT) ==
NOTIFY_STOP)
@ -711,7 +782,7 @@ void die_nmi (struct pt_regs *regs, const char *msg)
do_exit(SIGSEGV);
}
static void default_do_nmi(struct pt_regs * regs)
static __kprobes void default_do_nmi(struct pt_regs * regs)
{
unsigned char reason = 0;
@ -728,12 +799,12 @@ static void default_do_nmi(struct pt_regs * regs)
* Ok, so this is none of the documented NMI sources,
* so it must be the NMI watchdog.
*/
if (nmi_watchdog) {
nmi_watchdog_tick(regs);
if (nmi_watchdog_tick(regs, reason))
return;
}
if (!do_nmi_callback(regs, smp_processor_id()))
#endif
unknown_nmi_error(reason, regs);
unknown_nmi_error(reason, regs);
return;
}
if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP)
@ -749,14 +820,7 @@ static void default_do_nmi(struct pt_regs * regs)
reassert_nmi();
}
static int dummy_nmi_callback(struct pt_regs * regs, int cpu)
{
return 0;
}
static nmi_callback_t nmi_callback = dummy_nmi_callback;
fastcall void do_nmi(struct pt_regs * regs, long error_code)
fastcall __kprobes void do_nmi(struct pt_regs * regs, long error_code)
{
int cpu;
@ -766,25 +830,11 @@ fastcall void do_nmi(struct pt_regs * regs, long error_code)
++nmi_count(cpu);
if (!rcu_dereference(nmi_callback)(regs, cpu))
default_do_nmi(regs);
default_do_nmi(regs);
nmi_exit();
}
void set_nmi_callback(nmi_callback_t callback)
{
vmalloc_sync_all();
rcu_assign_pointer(nmi_callback, callback);
}
EXPORT_SYMBOL_GPL(set_nmi_callback);
void unset_nmi_callback(void)
{
nmi_callback = dummy_nmi_callback;
}
EXPORT_SYMBOL_GPL(unset_nmi_callback);
#ifdef CONFIG_KPROBES
fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code)
{
@ -1124,20 +1174,6 @@ void __init trap_init_f00f_bug(void)
}
#endif
#define _set_gate(gate_addr,type,dpl,addr,seg) \
do { \
int __d0, __d1; \
__asm__ __volatile__ ("movw %%dx,%%ax\n\t" \
"movw %4,%%dx\n\t" \
"movl %%eax,%0\n\t" \
"movl %%edx,%1" \
:"=m" (*((long *) (gate_addr))), \
"=m" (*(1+(long *) (gate_addr))), "=&a" (__d0), "=&d" (__d1) \
:"i" ((short) (0x8000+(dpl<<13)+(type<<8))), \
"3" ((char *) (addr)),"2" ((seg) << 16)); \
} while (0)
/*
* This needs to use 'idt_table' rather than 'idt', and
* thus use the _nonmapped_ version of the IDT, as the
@ -1146,7 +1182,7 @@ do { \
*/
void set_intr_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n,14,0,addr,__KERNEL_CS);
_set_gate(n, DESCTYPE_INT, addr, __KERNEL_CS);
}
/*
@ -1154,22 +1190,22 @@ void set_intr_gate(unsigned int n, void *addr)
*/
static inline void set_system_intr_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n, 14, 3, addr, __KERNEL_CS);
_set_gate(n, DESCTYPE_INT | DESCTYPE_DPL3, addr, __KERNEL_CS);
}
static void __init set_trap_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n,15,0,addr,__KERNEL_CS);
_set_gate(n, DESCTYPE_TRAP, addr, __KERNEL_CS);
}
static void __init set_system_gate(unsigned int n, void *addr)
{
_set_gate(idt_table+n,15,3,addr,__KERNEL_CS);
_set_gate(n, DESCTYPE_TRAP | DESCTYPE_DPL3, addr, __KERNEL_CS);
}
static void __init set_task_gate(unsigned int n, unsigned int gdt_entry)
{
_set_gate(idt_table+n,5,0,0,(gdt_entry<<3));
_set_gate(n, DESCTYPE_TASK, (void *)0, (gdt_entry<<3));
}


@ -192,7 +192,7 @@ int recalibrate_cpu_khz(void)
EXPORT_SYMBOL(recalibrate_cpu_khz);
void tsc_init(void)
void __init tsc_init(void)
{
if (!cpu_has_tsc || tsc_disable)
return;


@ -4,6 +4,6 @@
lib-y = checksum.o delay.o usercopy.o getuser.o putuser.o memcpy.o strstr.o \
bitops.o
bitops.o semaphore.o
lib-$(CONFIG_X86_USE_3DNOW) += mmx.o

arch/i386/lib/semaphore.S (new file, 217 lines)

@ -0,0 +1,217 @@
/*
* i386 semaphore implementation.
*
* (C) Copyright 1999 Linus Torvalds
*
* Portions Copyright 1999 Red Hat, Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
* rw semaphores implemented November 1999 by Benjamin LaHaise <bcrl@kvack.org>
*/
#include <linux/config.h>
#include <linux/linkage.h>
#include <asm/rwlock.h>
#include <asm/alternative-asm.i>
#include <asm/frame.i>
#include <asm/dwarf2.h>
/*
* The semaphore operations have a special calling sequence that
* allows us to do a simpler in-line version of them. These routines
* need to convert that sequence back into the C sequence when
* there is contention on the semaphore.
*
* %eax contains the semaphore pointer on entry. Save the C-clobbered
* registers (%eax, %edx and %ecx) except %eax which is either a return
* value or just clobbered.
*/
.section .sched.text
ENTRY(__down_failed)
CFI_STARTPROC
FRAME
pushl %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
pushl %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
call __down
popl %ecx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE ecx
popl %edx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE edx
ENDFRAME
ret
CFI_ENDPROC
END(__down_failed)
ENTRY(__down_failed_interruptible)
CFI_STARTPROC
FRAME
pushl %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
pushl %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
call __down_interruptible
popl %ecx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE ecx
popl %edx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE edx
ENDFRAME
ret
CFI_ENDPROC
END(__down_failed_interruptible)
ENTRY(__down_failed_trylock)
CFI_STARTPROC
FRAME
pushl %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
pushl %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
call __down_trylock
popl %ecx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE ecx
popl %edx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE edx
ENDFRAME
ret
CFI_ENDPROC
END(__down_failed_trylock)
ENTRY(__up_wakeup)
CFI_STARTPROC
FRAME
pushl %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
pushl %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
call __up
popl %ecx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE ecx
popl %edx
CFI_ADJUST_CFA_OFFSET -4
CFI_RESTORE edx
ENDFRAME
ret
CFI_ENDPROC
END(__up_wakeup)
/*
* rw spinlock fallbacks
*/
#ifdef CONFIG_SMP
ENTRY(__write_lock_failed)
CFI_STARTPROC simple
FRAME
2: LOCK_PREFIX
addl $ RW_LOCK_BIAS,(%eax)
1: rep; nop
cmpl $ RW_LOCK_BIAS,(%eax)
jne 1b
LOCK_PREFIX
subl $ RW_LOCK_BIAS,(%eax)
jnz 2b
ENDFRAME
ret
CFI_ENDPROC
END(__write_lock_failed)
ENTRY(__read_lock_failed)
CFI_STARTPROC
FRAME
2: LOCK_PREFIX
incl (%eax)
1: rep; nop
cmpl $1,(%eax)
js 1b
LOCK_PREFIX
decl (%eax)
js 2b
ENDFRAME
ret
CFI_ENDPROC
END(__read_lock_failed)
#endif
/* Fix up special calling conventions */
ENTRY(call_rwsem_down_read_failed)
CFI_STARTPROC
push %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
push %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
call rwsem_down_read_failed
pop %edx
CFI_ADJUST_CFA_OFFSET -4
pop %ecx
CFI_ADJUST_CFA_OFFSET -4
ret
CFI_ENDPROC
END(call_rwsem_down_read_failed)
ENTRY(call_rwsem_down_write_failed)
CFI_STARTPROC
push %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
calll rwsem_down_write_failed
pop %ecx
CFI_ADJUST_CFA_OFFSET -4
ret
CFI_ENDPROC
END(call_rwsem_down_write_failed)
ENTRY(call_rwsem_wake)
CFI_STARTPROC
decw %dx /* do nothing if still outstanding active readers */
jnz 1f
push %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
call rwsem_wake
pop %ecx
CFI_ADJUST_CFA_OFFSET -4
1: ret
CFI_ENDPROC
END(call_rwsem_wake)
/* Fix up special calling conventions */
ENTRY(call_rwsem_downgrade_wake)
CFI_STARTPROC
push %ecx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ecx,0
push %edx
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET edx,0
call rwsem_downgrade_wake
pop %edx
CFI_ADJUST_CFA_OFFSET -4
pop %ecx
CFI_ADJUST_CFA_OFFSET -4
ret
CFI_ENDPROC
END(call_rwsem_downgrade_wake)


@ -5,6 +5,7 @@
#define APIC_DEFINITION 1
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <asm/smp.h>
#include <asm/mpspec.h>
#include <asm/genapic.h>
#include <asm/fixmap.h>


@ -4,6 +4,7 @@
#define APIC_DEFINITION 1
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <asm/smp.h>
#include <asm/mpspec.h>
#include <asm/genapic.h>
#include <asm/fixmap.h>


@ -9,6 +9,7 @@
#include <linux/kernel.h>
#include <linux/ctype.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <asm/fixmap.h>
#include <asm/mpspec.h>
#include <asm/apicdef.h>
@ -29,7 +30,24 @@ struct genapic *apic_probe[] __initdata = {
NULL,
};
static int cmdline_apic;
static int cmdline_apic __initdata;
static int __init parse_apic(char *arg)
{
int i;
if (!arg)
return -EINVAL;
for (i = 0; apic_probe[i]; i++) {
if (!strcmp(apic_probe[i]->name, arg)) {
genapic = apic_probe[i];
cmdline_apic = 1;
return 0;
}
}
return -ENOENT;
}
early_param("apic", parse_apic);
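The user interface is unchanged by the early_param conversion: booting with e.g. apic=bigsmp (any name matching an apic_probe[] entry) still selects the driver explicitly and sets cmdline_apic so the probe loop below will not override it.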
void __init generic_bigsmp_probe(void)
{
@ -48,40 +66,20 @@ void __init generic_bigsmp_probe(void)
}
}
void __init generic_apic_probe(char *command_line)
void __init generic_apic_probe(void)
{
char *s;
int i;
int changed = 0;
s = strstr(command_line, "apic=");
if (s && (s == command_line || isspace(s[-1]))) {
char *p = strchr(s, ' '), old;
if (!p)
p = strchr(s, '\0');
old = *p;
*p = 0;
for (i = 0; !changed && apic_probe[i]; i++) {
if (!strcmp(apic_probe[i]->name, s+5)) {
changed = 1;
if (!cmdline_apic) {
int i;
for (i = 0; apic_probe[i]; i++) {
if (apic_probe[i]->probe()) {
genapic = apic_probe[i];
break;
}
}
if (!changed)
printk(KERN_ERR "Unknown genapic `%s' specified.\n", s);
*p = old;
cmdline_apic = changed;
}
for (i = 0; !changed && apic_probe[i]; i++) {
if (apic_probe[i]->probe()) {
changed = 1;
genapic = apic_probe[i];
}
/* Not visible without early console */
if (!apic_probe[i])
panic("Didn't find an APIC driver");
}
/* Not visible without early console */
if (!changed)
panic("Didn't find an APIC driver");
printk(KERN_INFO "Using APIC driver %s\n", genapic->name);
}
@ -119,7 +117,9 @@ int __init acpi_madt_oem_check(char *oem_id, char *oem_table_id)
return 0;
}
#ifdef CONFIG_SMP
int hard_smp_processor_id(void)
{
return genapic->get_apic_id(*(unsigned long *)(APIC_BASE+APIC_ID));
}
#endif
@ -4,6 +4,7 @@
#define APIC_DEFINITION 1
#include <linux/threads.h>
#include <linux/cpumask.h>
#include <asm/smp.h>
#include <asm/mpspec.h>
#include <asm/genapic.h>
#include <asm/fixmap.h>
@ -322,6 +322,11 @@ unsigned long __init setup_memory(void)
highstart_pfn = system_max_low_pfn;
printk(KERN_NOTICE "%ldMB HIGHMEM available.\n",
pages_to_mb(highend_pfn - highstart_pfn));
num_physpages = highend_pfn;
high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;
#else
num_physpages = system_max_low_pfn;
high_memory = (void *) __va(system_max_low_pfn * PAGE_SIZE - 1) + 1;
#endif
printk(KERN_NOTICE "%ldMB LOWMEM available.\n",
pages_to_mb(system_max_low_pfn));
@ -11,7 +11,7 @@ int fixup_exception(struct pt_regs *regs)
const struct exception_table_entry *fixup;
#ifdef CONFIG_PNPBIOS
if (unlikely((regs->xcs & ~15) == (GDT_ENTRY_PNPBIOS_BASE << 3)))
if (unlikely(SEGMENT_IS_PNP_CODE(regs->xcs)))
{
extern u32 pnp_bios_fault_eip, pnp_bios_fault_esp;
extern u32 pnp_bios_is_utter_crap;
@ -27,21 +27,24 @@
#include <asm/uaccess.h>
#include <asm/desc.h>
#include <asm/kdebug.h>
#include <asm/segment.h>
extern void die(const char *,struct pt_regs *,long);
#ifdef CONFIG_KPROBES
ATOMIC_NOTIFIER_HEAD(notify_page_fault_chain);
static ATOMIC_NOTIFIER_HEAD(notify_page_fault_chain);
int register_page_fault_notifier(struct notifier_block *nb)
{
vmalloc_sync_all();
return atomic_notifier_chain_register(&notify_page_fault_chain, nb);
}
EXPORT_SYMBOL_GPL(register_page_fault_notifier);
int unregister_page_fault_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_unregister(&notify_page_fault_chain, nb);
}
EXPORT_SYMBOL_GPL(unregister_page_fault_notifier);
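A minimal sketch of a consumer of this notifier interface (handler body and names are illustrative, not part of the patch):
static int sample_pf_notify(struct notifier_block *self,
			    unsigned long val, void *data)
{
	struct die_args *args = data;

	if (val == DIE_PAGE_FAULT)
		printk(KERN_DEBUG "fault at eip %lx\n", args->regs->eip);
	return NOTIFY_DONE;	/* NOTIFY_STOP only if the fault was handled */
}

static struct notifier_block sample_pf_nb = {
	.notifier_call	= sample_pf_notify,
};

/* register_page_fault_notifier(&sample_pf_nb) at init,
 * unregister_page_fault_notifier(&sample_pf_nb) at teardown */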
static inline int notify_page_fault(enum die_val val, const char *str,
struct pt_regs *regs, long err, int trap, int sig)
@ -55,14 +58,6 @@ static inline int notify_page_fault(enum die_val val, const char *str,
};
return atomic_notifier_call_chain(&notify_page_fault_chain, val, &args);
}
#else
static inline int notify_page_fault(enum die_val val, const char *str,
struct pt_regs *regs, long err, int trap, int sig)
{
return NOTIFY_DONE;
}
#endif
/*
* Unlock any spinlocks which will prevent us from getting the
@ -119,10 +114,10 @@ static inline unsigned long get_segment_eip(struct pt_regs *regs,
}
/* The standard kernel/user address space limit. */
*eip_limit = (seg & 3) ? USER_DS.seg : KERNEL_DS.seg;
*eip_limit = user_mode(regs) ? USER_DS.seg : KERNEL_DS.seg;
/* By far the most common cases. */
if (likely(seg == __USER_CS || seg == __KERNEL_CS))
if (likely(SEGMENT_IS_FLAT_CODE(seg)))
return eip;
/* Check the segment exists, is within the current LDT/GDT size,
@ -436,11 +431,7 @@ good_area:
write = 0;
switch (error_code & 3) {
default: /* 3: write, present */
#ifdef TEST_VERIFY_AREA
if (regs->cs == KERNEL_CS)
printk("WP fault at %08lx\n", regs->eip);
#endif
/* fall through */
/* fall through */
case 2: /* write, not present */
if (!(vma->vm_flags & VM_WRITE))
goto bad_area;
@ -54,7 +54,7 @@ void kunmap_atomic(void *kvaddr, enum km_type type)
unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
if (vaddr < FIXADDR_START) { // FIXME
if (vaddr >= PAGE_OFFSET && vaddr < (unsigned long)high_memory) {
dec_preempt_count();
preempt_check_resched();
return;
@ -435,16 +435,22 @@ u64 __supported_pte_mask __read_mostly = ~_PAGE_NX;
* on Enable
* off Disable
*/
void __init noexec_setup(const char *str)
static int __init noexec_setup(char *str)
{
if (!strncmp(str, "on",2) && cpu_has_nx) {
__supported_pte_mask |= _PAGE_NX;
disable_nx = 0;
} else if (!strncmp(str,"off",3)) {
if (!str || !strcmp(str, "on")) {
if (cpu_has_nx) {
__supported_pte_mask |= _PAGE_NX;
disable_nx = 0;
}
} else if (!strcmp(str,"off")) {
disable_nx = 1;
__supported_pte_mask &= ~_PAGE_NX;
}
} else
return -EINVAL;
return 0;
}
early_param("noexec", noexec_setup);
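The accepted values are unchanged: noexec=on sets _PAGE_NX on NX-capable CPUs and noexec=off forces it clear; with early_param a bare noexec (NULL argument) now falls through to the on case.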
int nx_enabled = 0;
#ifdef CONFIG_X86_PAE
@ -552,18 +558,6 @@ static void __init test_wp_bit(void)
}
}
static void __init set_max_mapnr_init(void)
{
#ifdef CONFIG_HIGHMEM
num_physpages = highend_pfn;
#else
num_physpages = max_low_pfn;
#endif
#ifdef CONFIG_FLATMEM
max_mapnr = num_physpages;
#endif
}
static struct kcore_list kcore_mem, kcore_vmalloc;
void __init mem_init(void)
@ -590,14 +584,6 @@ void __init mem_init(void)
}
#endif
set_max_mapnr_init();
#ifdef CONFIG_HIGHMEM
high_memory = (void *) __va(highstart_pfn * PAGE_SIZE - 1) + 1;
#else
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE - 1) + 1;
#endif
/* this will put all low memory onto the freelists */
totalram_pages += free_all_bootmem();
@ -17,14 +17,15 @@
#include <asm/nmi.h>
#include <asm/msr.h>
#include <asm/apic.h>
#include <asm/kdebug.h>
#include "op_counter.h"
#include "op_x86_model.h"
static struct op_x86_model_spec const * model;
static struct op_msrs cpu_msrs[NR_CPUS];
static unsigned long saved_lvtpc[NR_CPUS];
static int nmi_start(void);
static void nmi_stop(void);
@ -82,13 +83,24 @@ static void exit_driverfs(void)
#define exit_driverfs() do { } while (0)
#endif /* CONFIG_PM */
static int nmi_callback(struct pt_regs * regs, int cpu)
static int profile_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data)
{
return model->check_ctrs(regs, &cpu_msrs[cpu]);
struct die_args *args = (struct die_args *)data;
int ret = NOTIFY_DONE;
int cpu = smp_processor_id();
switch(val) {
case DIE_NMI:
if (model->check_ctrs(args->regs, &cpu_msrs[cpu]))
ret = NOTIFY_STOP;
break;
default:
break;
}
return ret;
}
static void nmi_cpu_save_registers(struct op_msrs * msrs)
{
unsigned int const nr_ctrs = model->num_counters;
@ -98,15 +110,19 @@ static void nmi_cpu_save_registers(struct op_msrs * msrs)
unsigned int i;
for (i = 0; i < nr_ctrs; ++i) {
rdmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
if (counters[i].addr){
rdmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
}
}
for (i = 0; i < nr_ctrls; ++i) {
rdmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
if (controls[i].addr){
rdmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
}
}
}
@ -170,27 +186,29 @@ static void nmi_cpu_setup(void * dummy)
apic_write(APIC_LVTPC, APIC_DM_NMI);
}
static struct notifier_block profile_exceptions_nb = {
.notifier_call = profile_exceptions_notify,
.next = NULL,
.priority = 0
};
static int nmi_setup(void)
{
int err=0;
if (!allocate_msrs())
return -ENOMEM;
/* We walk a thin line between law and rape here.
* We need to be careful to install our NMI handler
* without actually triggering any NMIs as this will
* break the core code horrifically.
*/
if (reserve_lapic_nmi() < 0) {
if ((err = register_die_notifier(&profile_exceptions_nb))){
free_msrs();
return -EBUSY;
return err;
}
/* We need to serialize save and setup for HT because the subset
* of msrs are distinct for save and setup operations
*/
on_each_cpu(nmi_save_registers, NULL, 0, 1);
on_each_cpu(nmi_cpu_setup, NULL, 0, 1);
set_nmi_callback(nmi_callback);
nmi_enabled = 1;
return 0;
}
@ -205,15 +223,19 @@ static void nmi_restore_registers(struct op_msrs * msrs)
unsigned int i;
for (i = 0; i < nr_ctrls; ++i) {
wrmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
if (controls[i].addr){
wrmsr(controls[i].addr,
controls[i].saved.low,
controls[i].saved.high);
}
}
for (i = 0; i < nr_ctrs; ++i) {
wrmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
if (counters[i].addr){
wrmsr(counters[i].addr,
counters[i].saved.low,
counters[i].saved.high);
}
}
}
@ -234,6 +256,7 @@ static void nmi_cpu_shutdown(void * dummy)
apic_write(APIC_LVTPC, saved_lvtpc[cpu]);
apic_write(APIC_LVTERR, v);
nmi_restore_registers(msrs);
model->shutdown(msrs);
}
@ -241,8 +264,7 @@ static void nmi_shutdown(void)
{
nmi_enabled = 0;
on_each_cpu(nmi_cpu_shutdown, NULL, 0, 1);
unset_nmi_callback();
release_lapic_nmi();
unregister_die_notifier(&profile_exceptions_nb);
free_msrs();
}
@ -284,6 +306,14 @@ static int nmi_create_files(struct super_block * sb, struct dentry * root)
struct dentry * dir;
char buf[4];
/* quick little hack to _not_ expose a counter if it is not
* available for use. This should protect the userspace app.
* NOTE: assumes 1:1 mapping here (that counters are organized
* sequentially in their struct assignment).
*/
if (unlikely(!avail_to_resrv_perfctr_nmi_bit(i)))
continue;
snprintf(buf, sizeof(buf), "%d", i);
dir = oprofilefs_mkdir(sb, root, buf);
oprofilefs_create_ulong(sb, dir, "enabled", &counter_config[i].enabled);
@ -17,34 +17,49 @@
#include <asm/nmi.h>
#include <asm/apic.h>
#include <asm/ptrace.h>
#include <asm/kdebug.h>
static int nmi_timer_callback(struct pt_regs * regs, int cpu)
static int profile_timer_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data)
{
oprofile_add_sample(regs, 0);
return 1;
struct die_args *args = (struct die_args *)data;
int ret = NOTIFY_DONE;
switch(val) {
case DIE_NMI:
oprofile_add_sample(args->regs, 0);
ret = NOTIFY_STOP;
break;
default:
break;
}
return ret;
}
static struct notifier_block profile_timer_exceptions_nb = {
.notifier_call = profile_timer_exceptions_notify,
.next = NULL,
.priority = 0
};
static int timer_start(void)
{
disable_timer_nmi_watchdog();
set_nmi_callback(nmi_timer_callback);
if (register_die_notifier(&profile_timer_exceptions_nb))
return 1;
return 0;
}
static void timer_stop(void)
{
enable_timer_nmi_watchdog();
unset_nmi_callback();
unregister_die_notifier(&profile_timer_exceptions_nb);
synchronize_sched(); /* Allow already-started NMIs to complete. */
}
int __init op_nmi_timer_init(struct oprofile_operations * ops)
{
extern int nmi_active;
if (nmi_active <= 0)
if ((nmi_watchdog != NMI_IO_APIC) || (atomic_read(&nmi_active) <= 0))
return -ENODEV;
ops->start = timer_start;
@ -21,10 +21,12 @@
#define NUM_COUNTERS 4
#define NUM_CONTROLS 4
#define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0)
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(unsigned int)(l), -1);} while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
#define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0)
#define CTRL_READ(l,h,msrs,c) do {rdmsr(msrs->controls[(c)].addr, (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr(msrs->controls[(c)].addr, (l), (h));} while (0)
#define CTRL_SET_ACTIVE(n) (n |= (1<<22))
@ -40,15 +42,21 @@ static unsigned long reset_value[NUM_COUNTERS];
static void athlon_fill_in_addresses(struct op_msrs * const msrs)
{
msrs->counters[0].addr = MSR_K7_PERFCTR0;
msrs->counters[1].addr = MSR_K7_PERFCTR1;
msrs->counters[2].addr = MSR_K7_PERFCTR2;
msrs->counters[3].addr = MSR_K7_PERFCTR3;
int i;
msrs->controls[0].addr = MSR_K7_EVNTSEL0;
msrs->controls[1].addr = MSR_K7_EVNTSEL1;
msrs->controls[2].addr = MSR_K7_EVNTSEL2;
msrs->controls[3].addr = MSR_K7_EVNTSEL3;
for (i=0; i < NUM_COUNTERS; i++) {
if (reserve_perfctr_nmi(MSR_K7_PERFCTR0 + i))
msrs->counters[i].addr = MSR_K7_PERFCTR0 + i;
else
msrs->counters[i].addr = 0;
}
for (i=0; i < NUM_CONTROLS; i++) {
if (reserve_evntsel_nmi(MSR_K7_EVNTSEL0 + i))
msrs->controls[i].addr = MSR_K7_EVNTSEL0 + i;
else
msrs->controls[i].addr = 0;
}
}
@ -59,19 +67,23 @@ static void athlon_setup_ctrs(struct op_msrs const * const msrs)
/* clear all counters */
for (i = 0 ; i < NUM_CONTROLS; ++i) {
if (unlikely(!CTRL_IS_RESERVED(msrs,i)))
continue;
CTRL_READ(low, high, msrs, i);
CTRL_CLEAR(low);
CTRL_WRITE(low, high, msrs, i);
}
/* avoid a false detection of ctr overflows in NMI handler */
for (i = 0; i < NUM_COUNTERS; ++i) {
if (unlikely(!CTR_IS_RESERVED(msrs,i)))
continue;
CTR_WRITE(1, msrs, i);
}
/* enable active counters */
for (i = 0; i < NUM_COUNTERS; ++i) {
if (counter_config[i].enabled) {
if ((counter_config[i].enabled) && (CTR_IS_RESERVED(msrs,i))) {
reset_value[i] = counter_config[i].count;
CTR_WRITE(counter_config[i].count, msrs, i);
@ -98,6 +110,8 @@ static int athlon_check_ctrs(struct pt_regs * const regs,
int i;
for (i = 0 ; i < NUM_COUNTERS; ++i) {
if (!reset_value[i])
continue;
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
oprofile_add_sample(regs, i);
@ -132,12 +146,27 @@ static void athlon_stop(struct op_msrs const * const msrs)
/* Subtle: stop on all counters to avoid race with
* setting our pm callback */
for (i = 0 ; i < NUM_COUNTERS ; ++i) {
if (!reset_value[i])
continue;
CTRL_READ(low, high, msrs, i);
CTRL_SET_INACTIVE(low);
CTRL_WRITE(low, high, msrs, i);
}
}
static void athlon_shutdown(struct op_msrs const * const msrs)
{
int i;
for (i = 0 ; i < NUM_COUNTERS ; ++i) {
if (CTR_IS_RESERVED(msrs,i))
release_perfctr_nmi(MSR_K7_PERFCTR0 + i);
}
for (i = 0 ; i < NUM_CONTROLS ; ++i) {
if (CTRL_IS_RESERVED(msrs,i))
release_evntsel_nmi(MSR_K7_EVNTSEL0 + i);
}
}
struct op_x86_model_spec const op_athlon_spec = {
.num_counters = NUM_COUNTERS,
@ -146,5 +175,6 @@ struct op_x86_model_spec const op_athlon_spec = {
.setup_ctrs = &athlon_setup_ctrs,
.check_ctrs = &athlon_check_ctrs,
.start = &athlon_start,
.stop = &athlon_stop
.stop = &athlon_stop,
.shutdown = &athlon_shutdown
};
@ -32,7 +32,7 @@
#define NUM_CONTROLS_HT2 (NUM_ESCRS_HT2 + NUM_CCCRS_HT2)
static unsigned int num_counters = NUM_COUNTERS_NON_HT;
static unsigned int num_controls = NUM_CONTROLS_NON_HT;
/* this has to be checked dynamically since the
hyper-threadedness of a chip is discovered at
@ -40,8 +40,10 @@ static unsigned int num_counters = NUM_COUNTERS_NON_HT;
static inline void setup_num_counters(void)
{
#ifdef CONFIG_SMP
if (smp_num_siblings == 2)
if (smp_num_siblings == 2){
num_counters = NUM_COUNTERS_HT2;
num_controls = NUM_CONTROLS_HT2;
}
#endif
}
@ -97,15 +99,6 @@ static struct p4_counter_binding p4_counters [NUM_COUNTERS_NON_HT] = {
#define NUM_UNUSED_CCCRS NUM_CCCRS_NON_HT - NUM_COUNTERS_NON_HT
/* All cccr we don't use. */
static int p4_unused_cccr[NUM_UNUSED_CCCRS] = {
MSR_P4_BPU_CCCR1, MSR_P4_BPU_CCCR3,
MSR_P4_MS_CCCR1, MSR_P4_MS_CCCR3,
MSR_P4_FLAME_CCCR1, MSR_P4_FLAME_CCCR3,
MSR_P4_IQ_CCCR0, MSR_P4_IQ_CCCR1,
MSR_P4_IQ_CCCR2, MSR_P4_IQ_CCCR3
};
/* p4 event codes in libop/op_event.h are indices into this table. */
static struct p4_event_binding p4_events[NUM_EVENTS] = {
@ -372,6 +365,8 @@ static struct p4_event_binding p4_events[NUM_EVENTS] = {
#define CCCR_OVF_P(cccr) ((cccr) & (1U<<31))
#define CCCR_CLEAR_OVF(cccr) ((cccr) &= (~(1U<<31)))
#define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0)
#define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0)
#define CTR_READ(l,h,i) do {rdmsr(p4_counters[(i)].counter_address, (l), (h));} while (0)
#define CTR_WRITE(l,i) do {wrmsr(p4_counters[(i)].counter_address, -(u32)(l), -1);} while (0)
#define CTR_OVERFLOW_P(ctr) (!((ctr) & 0x80000000))
@ -401,29 +396,34 @@ static unsigned long reset_value[NUM_COUNTERS_NON_HT];
static void p4_fill_in_addresses(struct op_msrs * const msrs)
{
unsigned int i;
unsigned int addr, stag;
unsigned int addr, cccraddr, stag;
setup_num_counters();
stag = get_stagger();
/* the counter registers we pay attention to */
/* initialize some registers */
for (i = 0; i < num_counters; ++i) {
msrs->counters[i].addr =
p4_counters[VIRT_CTR(stag, i)].counter_address;
msrs->counters[i].addr = 0;
}
/* FIXME: bad feeling, we don't save the 10 counters we don't use. */
/* 18 CCCR registers */
for (i = 0, addr = MSR_P4_BPU_CCCR0 + stag;
addr <= MSR_P4_IQ_CCCR5; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
for (i = 0; i < num_controls; ++i) {
msrs->controls[i].addr = 0;
}
/* the counter & cccr registers we pay attention to */
for (i = 0; i < num_counters; ++i) {
addr = p4_counters[VIRT_CTR(stag, i)].counter_address;
cccraddr = p4_counters[VIRT_CTR(stag, i)].cccr_address;
if (reserve_perfctr_nmi(addr)){
msrs->counters[i].addr = addr;
msrs->controls[i].addr = cccraddr;
}
}
/* 43 ESCR registers in three or four discontiguous group */
for (addr = MSR_P4_BSU_ESCR0 + stag;
addr < MSR_P4_IQ_ESCR0; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
/* no IQ_ESCR0/1 on some models, we save a second time BSU_ESCR0/1
@ -431,47 +431,57 @@ static void p4_fill_in_addresses(struct op_msrs * const msrs)
if (boot_cpu_data.x86_model >= 0x3) {
for (addr = MSR_P4_BSU_ESCR0 + stag;
addr <= MSR_P4_BSU_ESCR1; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
} else {
for (addr = MSR_P4_IQ_ESCR0 + stag;
addr <= MSR_P4_IQ_ESCR1; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
}
for (addr = MSR_P4_RAT_ESCR0 + stag;
addr <= MSR_P4_SSU_ESCR0; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
for (addr = MSR_P4_MS_ESCR0 + stag;
addr <= MSR_P4_TC_ESCR1; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
for (addr = MSR_P4_IX_ESCR0 + stag;
addr <= MSR_P4_CRU_ESCR3; ++i, addr += addr_increment()) {
msrs->controls[i].addr = addr;
if (reserve_evntsel_nmi(addr))
msrs->controls[i].addr = addr;
}
/* there are 2 remaining non-contiguously located ESCRs */
if (num_counters == NUM_COUNTERS_NON_HT) {
/* standard non-HT CPUs handle both remaining ESCRs*/
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
if (reserve_evntsel_nmi(MSR_P4_CRU_ESCR5))
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
if (reserve_evntsel_nmi(MSR_P4_CRU_ESCR4))
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
} else if (stag == 0) {
/* HT CPUs give the first remainder to the even thread, as
the 32nd control register */
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
if (reserve_evntsel_nmi(MSR_P4_CRU_ESCR4))
msrs->controls[i++].addr = MSR_P4_CRU_ESCR4;
} else {
/* and two copies of the second to the odd thread,
for the 22nd and 23rd control registers */
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
if (reserve_evntsel_nmi(MSR_P4_CRU_ESCR5)) {
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
msrs->controls[i++].addr = MSR_P4_CRU_ESCR5;
}
}
}
@ -544,7 +554,6 @@ static void p4_setup_ctrs(struct op_msrs const * const msrs)
{
unsigned int i;
unsigned int low, high;
unsigned int addr;
unsigned int stag;
stag = get_stagger();
@ -557,59 +566,24 @@ static void p4_setup_ctrs(struct op_msrs const * const msrs)
/* clear the cccrs we will use */
for (i = 0 ; i < num_counters ; i++) {
if (unlikely(!CTRL_IS_RESERVED(msrs,i)))
continue;
rdmsr(p4_counters[VIRT_CTR(stag, i)].cccr_address, low, high);
CCCR_CLEAR(low);
CCCR_SET_REQUIRED_BITS(low);
wrmsr(p4_counters[VIRT_CTR(stag, i)].cccr_address, low, high);
}
/* clear cccrs outside our concern */
for (i = stag ; i < NUM_UNUSED_CCCRS ; i += addr_increment()) {
rdmsr(p4_unused_cccr[i], low, high);
CCCR_CLEAR(low);
CCCR_SET_REQUIRED_BITS(low);
wrmsr(p4_unused_cccr[i], low, high);
}
/* clear all escrs (including those outside our concern) */
for (addr = MSR_P4_BSU_ESCR0 + stag;
addr < MSR_P4_IQ_ESCR0; addr += addr_increment()) {
wrmsr(addr, 0, 0);
for (i = num_counters; i < num_controls; i++) {
if (unlikely(!CTRL_IS_RESERVED(msrs,i)))
continue;
wrmsr(msrs->controls[i].addr, 0, 0);
}
/* On older models clear also MSR_P4_IQ_ESCR0/1 */
if (boot_cpu_data.x86_model < 0x3) {
wrmsr(MSR_P4_IQ_ESCR0, 0, 0);
wrmsr(MSR_P4_IQ_ESCR1, 0, 0);
}
for (addr = MSR_P4_RAT_ESCR0 + stag;
addr <= MSR_P4_SSU_ESCR0; ++i, addr += addr_increment()) {
wrmsr(addr, 0, 0);
}
for (addr = MSR_P4_MS_ESCR0 + stag;
addr <= MSR_P4_TC_ESCR1; addr += addr_increment()){
wrmsr(addr, 0, 0);
}
for (addr = MSR_P4_IX_ESCR0 + stag;
addr <= MSR_P4_CRU_ESCR3; addr += addr_increment()){
wrmsr(addr, 0, 0);
}
if (num_counters == NUM_COUNTERS_NON_HT) {
wrmsr(MSR_P4_CRU_ESCR4, 0, 0);
wrmsr(MSR_P4_CRU_ESCR5, 0, 0);
} else if (stag == 0) {
wrmsr(MSR_P4_CRU_ESCR4, 0, 0);
} else {
wrmsr(MSR_P4_CRU_ESCR5, 0, 0);
}
/* setup all counters */
for (i = 0 ; i < num_counters ; ++i) {
if (counter_config[i].enabled) {
if ((counter_config[i].enabled) && (CTRL_IS_RESERVED(msrs,i))) {
reset_value[i] = counter_config[i].count;
pmc_setup_one_p4_counter(i);
CTR_WRITE(counter_config[i].count, VIRT_CTR(stag, i));
@ -696,12 +670,32 @@ static void p4_stop(struct op_msrs const * const msrs)
stag = get_stagger();
for (i = 0; i < num_counters; ++i) {
if (!reset_value[i])
continue;
CCCR_READ(low, high, VIRT_CTR(stag, i));
CCCR_SET_DISABLE(low);
CCCR_WRITE(low, high, VIRT_CTR(stag, i));
}
}
static void p4_shutdown(struct op_msrs const * const msrs)
{
int i;
for (i = 0 ; i < num_counters ; ++i) {
if (CTR_IS_RESERVED(msrs,i))
release_perfctr_nmi(msrs->counters[i].addr);
}
/* some of the control registers are specially reserved in
* conjunction with the counter registers (hence the starting offset).
* This saves a few bits.
*/
for (i = num_counters ; i < num_controls ; ++i) {
if (CTRL_IS_RESERVED(msrs,i))
release_evntsel_nmi(msrs->controls[i].addr);
}
}
#ifdef CONFIG_SMP
struct op_x86_model_spec const op_p4_ht2_spec = {
@ -711,7 +705,8 @@ struct op_x86_model_spec const op_p4_ht2_spec = {
.setup_ctrs = &p4_setup_ctrs,
.check_ctrs = &p4_check_ctrs,
.start = &p4_start,
.stop = &p4_stop
.stop = &p4_stop,
.shutdown = &p4_shutdown
};
#endif
@ -722,5 +717,6 @@ struct op_x86_model_spec const op_p4_spec = {
.setup_ctrs = &p4_setup_ctrs,
.check_ctrs = &p4_check_ctrs,
.start = &p4_start,
.stop = &p4_stop
.stop = &p4_stop,
.shutdown = &p4_shutdown
};
@ -22,10 +22,12 @@
#define NUM_COUNTERS 2
#define NUM_CONTROLS 2
#define CTR_IS_RESERVED(msrs,c) (msrs->counters[(c)].addr ? 1 : 0)
#define CTR_READ(l,h,msrs,c) do {rdmsr(msrs->counters[(c)].addr, (l), (h));} while (0)
#define CTR_WRITE(l,msrs,c) do {wrmsr(msrs->counters[(c)].addr, -(u32)(l), -1);} while (0)
#define CTR_OVERFLOWED(n) (!((n) & (1U<<31)))
#define CTRL_IS_RESERVED(msrs,c) (msrs->controls[(c)].addr ? 1 : 0)
#define CTRL_READ(l,h,msrs,c) do {rdmsr((msrs->controls[(c)].addr), (l), (h));} while (0)
#define CTRL_WRITE(l,h,msrs,c) do {wrmsr((msrs->controls[(c)].addr), (l), (h));} while (0)
#define CTRL_SET_ACTIVE(n) (n |= (1<<22))
@ -41,11 +43,21 @@ static unsigned long reset_value[NUM_COUNTERS];
static void ppro_fill_in_addresses(struct op_msrs * const msrs)
{
msrs->counters[0].addr = MSR_P6_PERFCTR0;
msrs->counters[1].addr = MSR_P6_PERFCTR1;
int i;
for (i=0; i < NUM_COUNTERS; i++) {
if (reserve_perfctr_nmi(MSR_P6_PERFCTR0 + i))
msrs->counters[i].addr = MSR_P6_PERFCTR0 + i;
else
msrs->counters[i].addr = 0;
}
msrs->controls[0].addr = MSR_P6_EVNTSEL0;
msrs->controls[1].addr = MSR_P6_EVNTSEL1;
for (i=0; i < NUM_CONTROLS; i++) {
if (reserve_evntsel_nmi(MSR_P6_EVNTSEL0 + i))
msrs->controls[i].addr = MSR_P6_EVNTSEL0 + i;
else
msrs->controls[i].addr = 0;
}
}
@ -56,6 +68,8 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
/* clear all counters */
for (i = 0 ; i < NUM_CONTROLS; ++i) {
if (unlikely(!CTRL_IS_RESERVED(msrs,i)))
continue;
CTRL_READ(low, high, msrs, i);
CTRL_CLEAR(low);
CTRL_WRITE(low, high, msrs, i);
@ -63,12 +77,14 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
/* avoid a false detection of ctr overflows in NMI handler */
for (i = 0; i < NUM_COUNTERS; ++i) {
if (unlikely(!CTR_IS_RESERVED(msrs,i)))
continue;
CTR_WRITE(1, msrs, i);
}
/* enable active counters */
for (i = 0; i < NUM_COUNTERS; ++i) {
if (counter_config[i].enabled) {
if ((counter_config[i].enabled) && (CTR_IS_RESERVED(msrs,i))) {
reset_value[i] = counter_config[i].count;
CTR_WRITE(counter_config[i].count, msrs, i);
@ -81,6 +97,8 @@ static void ppro_setup_ctrs(struct op_msrs const * const msrs)
CTRL_SET_UM(low, counter_config[i].unit_mask);
CTRL_SET_EVENT(low, counter_config[i].event);
CTRL_WRITE(low, high, msrs, i);
} else {
reset_value[i] = 0;
}
}
}
@ -93,6 +111,8 @@ static int ppro_check_ctrs(struct pt_regs * const regs,
int i;
for (i = 0 ; i < NUM_COUNTERS; ++i) {
if (!reset_value[i])
continue;
CTR_READ(low, high, msrs, i);
if (CTR_OVERFLOWED(low)) {
oprofile_add_sample(regs, i);
@ -118,18 +138,38 @@ static int ppro_check_ctrs(struct pt_regs * const regs,
static void ppro_start(struct op_msrs const * const msrs)
{
unsigned int low,high;
CTRL_READ(low, high, msrs, 0);
CTRL_SET_ACTIVE(low);
CTRL_WRITE(low, high, msrs, 0);
if (reset_value[0]) {
CTRL_READ(low, high, msrs, 0);
CTRL_SET_ACTIVE(low);
CTRL_WRITE(low, high, msrs, 0);
}
}
static void ppro_stop(struct op_msrs const * const msrs)
{
unsigned int low,high;
CTRL_READ(low, high, msrs, 0);
CTRL_SET_INACTIVE(low);
CTRL_WRITE(low, high, msrs, 0);
if (reset_value[0]) {
CTRL_READ(low, high, msrs, 0);
CTRL_SET_INACTIVE(low);
CTRL_WRITE(low, high, msrs, 0);
}
}
static void ppro_shutdown(struct op_msrs const * const msrs)
{
int i;
for (i = 0 ; i < NUM_COUNTERS ; ++i) {
if (CTR_IS_RESERVED(msrs,i))
release_perfctr_nmi(MSR_P6_PERFCTR0 + i);
}
for (i = 0 ; i < NUM_CONTROLS ; ++i) {
if (CTRL_IS_RESERVED(msrs,i))
release_evntsel_nmi(MSR_P6_EVNTSEL0 + i);
}
}
@ -140,5 +180,6 @@ struct op_x86_model_spec const op_ppro_spec = {
.setup_ctrs = &ppro_setup_ctrs,
.check_ctrs = &ppro_check_ctrs,
.start = &ppro_start,
.stop = &ppro_stop
.stop = &ppro_stop,
.shutdown = &ppro_shutdown
};
@ -40,6 +40,7 @@ struct op_x86_model_spec {
struct op_msrs const * const msrs);
void (*start)(struct op_msrs const * const msrs);
void (*stop)(struct op_msrs const * const msrs);
void (*shutdown)(struct op_msrs const * const msrs);
};
extern struct op_x86_model_spec const op_ppro_spec;
@ -11,4 +11,4 @@ pci-y += legacy.o irq.o
pci-$(CONFIG_X86_VISWS) := visws.o fixup.o
pci-$(CONFIG_X86_NUMAQ) := numa.o irq.o
obj-y += $(pci-y) common.o
obj-y += $(pci-y) common.o early.o
@ -242,6 +242,10 @@ char * __devinit pcibios_setup(char *str)
acpi_noirq_set();
return NULL;
}
else if (!strcmp(str, "noearly")) {
pci_probe |= PCI_PROBE_NOEARLY;
return NULL;
}
#ifndef CONFIG_X86_VISWS
else if (!strcmp(str, "usepirqmask")) {
pci_probe |= PCI_USE_PIRQ_MASK;
@ -254,7 +254,16 @@ static int __init pci_check_type2(void)
return works;
}
void __init pci_direct_init(void)
void __init pci_direct_init(int type)
{
printk(KERN_INFO "PCI: Using configuration type %d\n", type);
if (type == 1)
raw_pci_ops = &pci_direct_conf1;
else
raw_pci_ops = &pci_direct_conf2;
}
int __init pci_direct_probe(void)
{
struct resource *region, *region2;
@ -264,19 +273,16 @@ void __init pci_direct_init(void)
if (!region)
goto type2;
if (pci_check_type1()) {
printk(KERN_INFO "PCI: Using configuration type 1\n");
raw_pci_ops = &pci_direct_conf1;
return;
}
if (pci_check_type1())
return 1;
release_resource(region);
type2:
if ((pci_probe & PCI_PROBE_CONF2) == 0)
return;
return 0;
region = request_region(0xCF8, 4, "PCI conf2");
if (!region)
return;
return 0;
region2 = request_region(0xC000, 0x1000, "PCI conf2");
if (!region2)
goto fail2;
@ -284,10 +290,11 @@ void __init pci_direct_init(void)
if (pci_check_type2()) {
printk(KERN_INFO "PCI: Using configuration type 2\n");
raw_pci_ops = &pci_direct_conf2;
return;
return 2;
}
release_resource(region2);
fail2:
release_resource(region);
return 0;
}
arch/i386/pci/early.c (new file, 52 lines)
@ -0,0 +1,52 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <asm/pci-direct.h>
#include <asm/io.h>
#include "pci.h"
/* Direct PCI access. This is used for PCI accesses in early boot before
the PCI subsystem works. */
#define PDprintk(x...)
u32 read_pci_config(u8 bus, u8 slot, u8 func, u8 offset)
{
u32 v;
outl(0x80000000 | (bus<<16) | (slot<<11) | (func<<8) | offset, 0xcf8);
v = inl(0xcfc);
if (v != 0xffffffff)
PDprintk("%x reading 4 from %x: %x\n", slot, offset, v);
return v;
}
u8 read_pci_config_byte(u8 bus, u8 slot, u8 func, u8 offset)
{
u8 v;
outl(0x80000000 | (bus<<16) | (slot<<11) | (func<<8) | offset, 0xcf8);
v = inb(0xcfc + (offset&3));
PDprintk("%x reading 1 from %x: %x\n", slot, offset, v);
return v;
}
u16 read_pci_config_16(u8 bus, u8 slot, u8 func, u8 offset)
{
u16 v;
outl(0x80000000 | (bus<<16) | (slot<<11) | (func<<8) | offset, 0xcf8);
v = inw(0xcfc + (offset&2));
PDprintk("%x reading 2 from %x: %x\n", slot, offset, v);
return v;
}
void write_pci_config(u8 bus, u8 slot, u8 func, u8 offset,
u32 val)
{
PDprintk("%x writing to %x: %x\n", slot, offset, val);
outl(0x80000000 | (bus<<16) | (slot<<11) | (func<<8) | offset, 0xcf8);
outl(val, 0xcfc);
}
int early_pci_allowed(void)
{
return (pci_probe & (PCI_PROBE_CONF1|PCI_PROBE_NOEARLY)) ==
PCI_PROBE_CONF1;
}
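A sketch of the intended call pattern during early boot (the device coordinates are illustrative):
	if (early_pci_allowed()) {
		/* bus 0, slot 0, func 0, offset 0: vendor/device ID */
		u32 id = read_pci_config(0, 0, 0, 0);
		if (id != 0xffffffff)	/* all-ones: nothing decoded the access */
			printk(KERN_INFO "PCI: host bridge ID %08x\n", id);
	}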
@ -6,8 +6,13 @@
in the right sequence from here. */
static __init int pci_access_init(void)
{
int type = 0;
#ifdef CONFIG_PCI_DIRECT
type = pci_direct_probe();
#endif
#ifdef CONFIG_PCI_MMCONFIG
pci_mmcfg_init();
pci_mmcfg_init(type);
#endif
if (raw_pci_ops)
return 0;
@ -21,7 +26,7 @@ static __init int pci_access_init(void)
* fails.
*/
#ifdef CONFIG_PCI_DIRECT
pci_direct_init();
pci_direct_init(type);
#endif
return 0;
}
@ -151,6 +151,38 @@ static struct pci_raw_ops pci_mmcfg = {
.write = pci_mmcfg_write,
};
static __init void pci_mmcfg_insert_resources(void)
{
#define PCI_MMCFG_RESOURCE_NAME_LEN 19
int i;
struct resource *res;
char *names;
unsigned num_buses;
res = kcalloc(PCI_MMCFG_RESOURCE_NAME_LEN + sizeof(*res),
pci_mmcfg_config_num, GFP_KERNEL);
if (!res) {
printk(KERN_ERR "PCI: Unable to allocate MMCONFIG resources\n");
return;
}
names = (void *)&res[pci_mmcfg_config_num];
for (i = 0; i < pci_mmcfg_config_num; i++, res++) {
num_buses = pci_mmcfg_config[i].end_bus_number -
pci_mmcfg_config[i].start_bus_number + 1;
res->name = names;
snprintf(names, PCI_MMCFG_RESOURCE_NAME_LEN, "PCI MMCONFIG %u",
pci_mmcfg_config[i].pci_segment_group_number);
res->start = pci_mmcfg_config[i].base_address;
res->end = res->start + (num_buses << 20) - 1;
res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
insert_resource(&iomem_resource, res);
names += PCI_MMCFG_RESOURCE_NAME_LEN;
}
}
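The num_buses << 20 above follows from MMCONFIG's fixed geometry: each bus decodes 32 devices x 8 functions x 4 KB of extended config space, i.e. exactly 1 MB per bus.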
/* K8 systems have some devices (typically in the builtin northbridge)
that are only accessible using type1
Normally this can be expressed in the MCFG by not listing them
@ -187,7 +219,9 @@ static __init void unreachable_devices(void)
}
}
void __init pci_mmcfg_init(void)
void __init pci_mmcfg_init(int type)
{
if ((pci_probe & PCI_PROBE_MMCONF) == 0)
return;
@ -198,7 +232,9 @@ void __init pci_mmcfg_init(void)
(pci_mmcfg_config[0].base_address == 0))
return;
if (!e820_all_mapped(pci_mmcfg_config[0].base_address,
/* Only do this check when type 1 works. If it doesn't work
assume we run on a Mac and always use MCFG */
if (type == 1 && !e820_all_mapped(pci_mmcfg_config[0].base_address,
pci_mmcfg_config[0].base_address + MMCONFIG_APER_MIN,
E820_RESERVED)) {
printk(KERN_ERR "PCI: BIOS Bug: MCFG area at %x is not E820-reserved\n",
@ -212,4 +248,5 @@ void __init pci_mmcfg_init(void)
pci_probe = (pci_probe & ~PCI_PROBE_MASK) | PCI_PROBE_MMCONF;
unreachable_devices();
pci_mmcfg_insert_resources();
}
@ -17,6 +17,7 @@
#define PCI_PROBE_CONF2 0x0004
#define PCI_PROBE_MMCONF 0x0008
#define PCI_PROBE_MASK 0x000f
#define PCI_PROBE_NOEARLY 0x0010
#define PCI_NO_SORT 0x0100
#define PCI_BIOS_SORT 0x0200
@ -81,7 +82,9 @@ extern int pci_conf1_write(unsigned int seg, unsigned int bus,
extern int pci_conf1_read(unsigned int seg, unsigned int bus,
unsigned int devfn, int reg, int len, u32 *value);
extern void pci_direct_init(void);
extern int pci_direct_probe(void);
extern void pci_direct_init(int type);
extern void pci_pcbios_init(void);
extern void pci_mmcfg_init(void);
extern void pci_mmcfg_init(int type);
extern void pcibios_sort(void);
@ -59,9 +59,7 @@ static inline unsigned long save_context_stack(struct stack_trace *trace,
}
}
void save_stack_trace(struct stack_trace *trace,
struct task_struct *task, int all_contexts,
unsigned int skip)
void save_stack_trace(struct stack_trace *trace, struct task_struct *task)
{
register unsigned long sp asm ("15");
unsigned long orig_sp;
@ -69,22 +67,23 @@ void save_stack_trace(struct stack_trace *trace,
sp &= PSW_ADDR_INSN;
orig_sp = sp;
sp = save_context_stack(trace, &skip, sp,
sp = save_context_stack(trace, &trace->skip, sp,
S390_lowcore.panic_stack - PAGE_SIZE,
S390_lowcore.panic_stack);
if ((sp != orig_sp) && !all_contexts)
if ((sp != orig_sp) && !trace->all_contexts)
return;
sp = save_context_stack(trace, &skip, sp,
sp = save_context_stack(trace, &trace->skip, sp,
S390_lowcore.async_stack - ASYNC_SIZE,
S390_lowcore.async_stack);
if ((sp != orig_sp) && !all_contexts)
if ((sp != orig_sp) && !trace->all_contexts)
return;
if (task)
save_context_stack(trace, &skip, sp,
save_context_stack(trace, &trace->skip, sp,
(unsigned long) task_stack_page(task),
(unsigned long) task_stack_page(task) + THREAD_SIZE);
else
save_context_stack(trace, &skip, sp, S390_lowcore.thread_info,
save_context_stack(trace, &trace->skip, sp,
S390_lowcore.thread_info,
S390_lowcore.thread_info + THREAD_SIZE);
return;
}
@ -4,7 +4,7 @@ obj-y = bugs.o checksum.o delay.o fault.o ksyms.o ldt.o ptrace.o \
obj-$(CONFIG_MODE_SKAS) += stub.o stub_segv.o
subarch-obj-y = lib/bitops.o kernel/semaphore.o
subarch-obj-y = lib/bitops.o lib/semaphore.o
subarch-obj-$(CONFIG_HIGHMEM) += mm/highmem.o
subarch-obj-$(CONFIG_MODULES) += kernel/module.o
@ -109,6 +109,7 @@ config X86_PC
config X86_VSMP
bool "Support for ScaleMP vSMP"
depends on PCI
help
Support for ScaleMP vSMP systems. Say 'Y' here if this kernel is
supposed to run on these EM64T-based machines. Only choose this option
@ -295,7 +296,7 @@ config NUMA
config K8_NUMA
bool "Old style AMD Opteron NUMA detection"
depends on NUMA
depends on NUMA && PCI
default y
help
Enable K8 NUMA node topology detection. You should say Y here if
@ -425,7 +426,6 @@ config IOMMU
config CALGARY_IOMMU
bool "IBM Calgary IOMMU support"
default y
select SWIOTLB
depends on PCI && EXPERIMENTAL
help
@ -472,8 +472,7 @@ config X86_MCE_AMD
the DRAM Error Threshold.
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
depends on EXPERIMENTAL
bool "kexec system call"
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
@ -492,7 +491,14 @@ config CRASH_DUMP
bool "kernel crash dumps (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
Generate crash dump after being started by kexec.
Generate crash dump after being started by kexec.
This should normally only be set in special crash dump kernels
which are loaded in the main kernel with kexec-tools into
a specially reserved region and then later executed after
a crash by kdump/kexec. The crash dump kernel must be compiled
to a memory address not used by the main kernel or BIOS using
PHYSICAL_START.
For more details see Documentation/kdump/kdump.txt
config PHYSICAL_START
hex "Physical address where the kernel is loaded" if (EMBEDDED || CRASH_DUMP)
@ -530,6 +536,30 @@ config SECCOMP
If unsure, say Y. Only embedded should say N here.
config CC_STACKPROTECTOR
bool "Enable -fstack-protector buffer overflow detection (EXPERIMENTAL)"
depends on EXPERIMENTAL
help
This option turns on the -fstack-protector GCC feature. This
feature puts, at the beginning of critical functions, a canary
value on the stack just before the return address, and validates
the value just before actually returning. Stack based buffer
overflows (that need to overwrite this return address) now also
overwrite the canary, which gets detected and the attack is then
neutralized via a kernel panic.
This feature requires gcc version 4.2 or above, or a distribution
gcc with the feature backported. Older versions are automatically
detected and for those versions, this configuration option is ignored.
config CC_STACKPROTECTOR_ALL
bool "Use stack-protector for all functions"
depends on CC_STACKPROTECTOR
help
Normally, GCC only inserts the canary value protection for
functions that use large-ish on-stack buffers. By enabling
this option, GCC will be asked to do this for ALL functions.
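To make the help text concrete, this is the kind of function the option guards (a sketch; the overflow is deliberate):
void parse_packet(const char *src)
{
	char buf[64];		/* large-ish on-stack buffer: gcc adds a canary */

	strcpy(buf, src);	/* overflowing buf tramples the canary before the
				 * saved return address, so the epilogue check
				 * panics instead of returning to injected code */
}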
source kernel/Kconfig.hz
config REORDER
@ -54,6 +54,16 @@ endif
cflags-y += $(call cc-option,-funit-at-a-time)
# prevent gcc from generating any FP code by mistake
cflags-y += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
# does binutils support CFI?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_endproc,-DCONFIG_AS_CFI=1,)
# is .cfi_signal_frame supported too?
cflags-y += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
AFLAGS += $(call as-instr,.cfi_startproc\n.cfi_signal_frame\n.cfi_endproc,-DCONFIG_AS_CFI_SIGNAL_FRAME=1,)
cflags-$(CONFIG_CC_STACKPROTECTOR) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh $(CC) -fstack-protector )
cflags-$(CONFIG_CC_STACKPROTECTOR_ALL) += $(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-x86_64-has-stack-protector.sh $(CC) -fstack-protector-all )
CFLAGS += $(cflags-y)
CFLAGS_KERNEL += $(cflags-kernel-y)
@ -7,7 +7,8 @@
#
targets := vmlinux vmlinux.bin vmlinux.bin.gz head.o misc.o piggy.o
EXTRA_AFLAGS := -traditional -m32
EXTRA_AFLAGS := -traditional
AFLAGS := $(subst -m64,-m32,$(AFLAGS))
# cannot use EXTRA_CFLAGS because base CFLAGS contains -mkernel which conflicts with
# -m32
@ -526,12 +526,12 @@ is_disk1:
movw %cs, %ax # aka SETUPSEG
subw $DELTA_INITSEG, %ax # aka INITSEG
movw %ax, %ds
movw $0, (0x1ff) # default is no pointing device
movb $0, (0x1ff) # default is no pointing device
int $0x11 # int 0x11: equipment list
testb $0x04, %al # check if mouse installed
jz no_psmouse
movw $0xAA, (0x1ff) # device present
movb $0xAA, (0x1ff) # device present
no_psmouse:
#include "../../i386/boot/edd.S"
@ -1,7 +1,7 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.18-rc4
# Thu Aug 24 21:05:55 2006
# Linux kernel version: 2.6.18-git5
# Tue Sep 26 09:30:47 2006
#
CONFIG_X86_64=y
CONFIG_64BIT=y
@ -19,6 +19,7 @@ CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_DMI=y
CONFIG_AUDIT_ARCH=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
@ -38,16 +39,16 @@ CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_CPUSETS is not set
# CONFIG_RELAY is not set
CONFIG_INITRAMFS_SOURCE=""
CONFIG_UID16=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
# CONFIG_EMBEDDED is not set
CONFIG_UID16=y
CONFIG_SYSCTL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
@ -56,12 +57,12 @@ CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
CONFIG_RT_MUTEXES=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SHMEM=y
CONFIG_SLAB=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_RT_MUTEXES=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
# CONFIG_SLOB is not set
@ -160,6 +161,7 @@ CONFIG_X86_MCE_AMD=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x200000
CONFIG_SECCOMP=y
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_1000 is not set
@ -307,18 +309,23 @@ CONFIG_IP_PNP_DHCP=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
CONFIG_TCP_CONG_CUBIC=y
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_IPV6=y
# CONFIG_IPV6_PRIVACY is not set
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_INET6_AH is not set
# CONFIG_INET6_ESP is not set
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
# CONFIG_INET6_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET6_XFRM_MODE_TUNNEL is not set
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_SUBTREES is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETFILTER is not set
@ -345,7 +352,6 @@ CONFIG_IPV6=y
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_NET_DIVERT is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
@ -487,6 +493,7 @@ CONFIG_IDEDMA_AUTO=y
#
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_NETLINK=y
# CONFIG_SCSI_PROC_FS is not set
#
@ -508,12 +515,13 @@ CONFIG_SCSI_CONSTANTS=y
# CONFIG_SCSI_LOGGING is not set
#
# SCSI Transport Attributes
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
# CONFIG_SCSI_ISCSI_ATTRS is not set
CONFIG_SCSI_SAS_ATTRS=y
# CONFIG_SCSI_SAS_LIBSAS is not set
#
# SCSI low-level drivers
@ -532,29 +540,14 @@ CONFIG_AIC79XX_RESET_DELAY_MS=4000
# CONFIG_AIC79XX_DEBUG_ENABLE is not set
CONFIG_AIC79XX_DEBUG_MASK=0
# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_ARCMSR is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=y
CONFIG_MEGARAID_MAILBOX=y
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=y
CONFIG_SCSI_SATA=y
CONFIG_SCSI_SATA_AHCI=y
CONFIG_SCSI_SATA_SVW=y
CONFIG_SCSI_ATA_PIIX=y
# CONFIG_SCSI_SATA_MV is not set
CONFIG_SCSI_SATA_NV=y
# CONFIG_SCSI_PDC_ADMA is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_SATA_QSTOR is not set
# CONFIG_SCSI_SATA_PROMISE is not set
# CONFIG_SCSI_SATA_SX4 is not set
CONFIG_SCSI_SATA_SIL=y
# CONFIG_SCSI_SATA_SIL24 is not set
# CONFIG_SCSI_SATA_SIS is not set
# CONFIG_SCSI_SATA_ULI is not set
CONFIG_SCSI_SATA_VIA=y
# CONFIG_SCSI_SATA_VITESSE is not set
CONFIG_SCSI_SATA_INTEL_COMBINED=y
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_EATA is not set
@ -563,6 +556,7 @@ CONFIG_SCSI_SATA_INTEL_COMBINED=y
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
@ -572,6 +566,62 @@ CONFIG_SCSI_SATA_INTEL_COMBINED=y
# CONFIG_SCSI_DC390T is not set
# CONFIG_SCSI_DEBUG is not set
#
# Serial ATA (prod) and Parallel ATA (experimental) drivers
#
CONFIG_ATA=y
CONFIG_SATA_AHCI=y
CONFIG_SATA_SVW=y
CONFIG_ATA_PIIX=y
# CONFIG_SATA_MV is not set
CONFIG_SATA_NV=y
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SX4 is not set
CONFIG_SATA_SIL=y
# CONFIG_SATA_SIL24 is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_ULI is not set
CONFIG_SATA_VIA=y
# CONFIG_SATA_VITESSE is not set
CONFIG_SATA_INTEL_COMBINED=y
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_ATA_GENERIC is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_LEGACY is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_QDI is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RZ1000 is not set
# CONFIG_PATA_SC1200 is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set
#
# Multi-device support (RAID and LVM)
#
@ -678,6 +728,7 @@ CONFIG_NET_PCI=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_B44=y
CONFIG_FORCEDETH=y
# CONFIG_FORCEDETH_NAPI is not set
# CONFIG_DGRS is not set
# CONFIG_EEPRO100 is not set
CONFIG_E100=y
@ -714,6 +765,7 @@ CONFIG_E1000=y
# CONFIG_VIA_VELOCITY is not set
CONFIG_TIGON3=y
CONFIG_BNX2=y
# CONFIG_QLA3XXX is not set
#
# Ethernet (10000 Mbit)
@ -1036,6 +1088,7 @@ CONFIG_SOUND=y
# Open Sound System
#
CONFIG_SOUND_PRIME=y
CONFIG_OSS_OBSOLETE_DRIVER=y
# CONFIG_SOUND_BT878 is not set
# CONFIG_SOUND_EMU10K1 is not set
# CONFIG_SOUND_FUSION is not set
@ -1046,7 +1099,6 @@ CONFIG_SOUND_ICH=y
# CONFIG_SOUND_MSNDPIN is not set
# CONFIG_SOUND_VIA82CXXX is not set
# CONFIG_SOUND_OSS is not set
# CONFIG_SOUND_TVMIXER is not set
#
# USB support
@ -1203,7 +1255,6 @@ CONFIG_USB_MON=y
# InfiniBand support
#
# CONFIG_INFINIBAND is not set
# CONFIG_IPATH_CORE is not set
#
# EDAC - error detection and reporting (RAS) (EXPERIMENTAL)
@ -1448,10 +1499,6 @@ CONFIG_DEBUG_STACKOVERFLOW=y
#
# CONFIG_CRYPTO is not set
#
# Hardware crypto devices
#
#
# Library routines
#
@ -333,7 +333,8 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
return error;
}
error = bprm->file->f_op->read(bprm->file, (char *)text_addr,
error = bprm->file->f_op->read(bprm->file,
(char __user *)text_addr,
ex.a_text+ex.a_data, &pos);
if ((signed long)error < 0) {
send_sig(SIGKILL, current, 0);
@ -366,7 +367,8 @@ static int load_aout_binary(struct linux_binprm * bprm, struct pt_regs * regs)
down_write(&current->mm->mmap_sem);
do_brk(N_TXTADDR(ex), ex.a_text+ex.a_data);
up_write(&current->mm->mmap_sem);
bprm->file->f_op->read(bprm->file,(char *)N_TXTADDR(ex),
bprm->file->f_op->read(bprm->file,
(char __user *)N_TXTADDR(ex),
ex.a_text+ex.a_data, &pos);
flush_icache_range((unsigned long) N_TXTADDR(ex),
(unsigned long) N_TXTADDR(ex) +
@ -477,7 +479,7 @@ static int load_aout_library(struct file *file)
do_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss);
up_write(&current->mm->mmap_sem);
file->f_op->read(file, (char *)start_addr,
file->f_op->read(file, (char __user *)start_addr,
ex.a_text + ex.a_data, &pos);
flush_icache_range((unsigned long) start_addr,
(unsigned long) start_addr + ex.a_text + ex.a_data);
@ -113,25 +113,19 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
}
asmlinkage long
sys32_sigsuspend(int history0, int history1, old_sigset_t mask,
struct pt_regs *regs)
sys32_sigsuspend(int history0, int history1, old_sigset_t mask)
{
sigset_t saveset;
mask &= _BLOCKABLE;
spin_lock_irq(&current->sighand->siglock);
saveset = current->blocked;
current->saved_sigmask = current->blocked;
siginitset(&current->blocked, mask);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
regs->rax = -EINTR;
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
if (do_signal(regs, &saveset))
return -EINTR;
}
current->state = TASK_INTERRUPTIBLE;
schedule();
set_thread_flag(TIF_RESTORE_SIGMASK);
return -ERESTARTNOHAND;
}
asmlinkage long
@ -437,15 +431,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv;
{
struct exec_domain *ed = current_thread_info()->exec_domain;
err |= __put_user((ed
&& ed->signal_invmap
&& sig < 32
? ed->signal_invmap[sig]
: sig),
&frame->sig);
}
err |= __put_user(sig, &frame->sig);
if (err)
goto give_sigsegv;
@ -492,6 +478,11 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
regs->rsp = (unsigned long) frame;
regs->rip = (unsigned long) ka->sa.sa_handler;
/* Make -mregparm=3 work */
regs->rax = sig;
regs->rdx = 0;
regs->rcx = 0;
asm volatile("movl %0,%%ds" :: "r" (__USER32_DS));
asm volatile("movl %0,%%es" :: "r" (__USER32_DS));
@ -499,20 +490,20 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
regs->ss = __USER32_DS;
set_fs(USER_DS);
regs->eflags &= ~TF_MASK;
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
regs->eflags &= ~TF_MASK;
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
current->comm, current->pid, frame, regs->rip, frame->pretcode);
#endif
return 1;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return 0;
return -EFAULT;
}
int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
@ -595,18 +586,18 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
regs->ss = __USER32_DS;
set_fs(USER_DS);
regs->eflags &= ~TF_MASK;
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
regs->eflags &= ~TF_MASK;
if (test_thread_flag(TIF_SINGLESTEP))
ptrace_notify(SIGTRAP);
#if DEBUG_SIG
printk("SIG deliver (%s:%d): sp=%p pc=%p ra=%p\n",
current->comm, current->pid, frame, regs->rip, frame->pretcode);
#endif
return 1;
return 0;
give_sigsegv:
force_sigsegv(sig, current);
return 0;
return -EFAULT;
}
@ -71,6 +71,7 @@
*/
ENTRY(ia32_sysenter_target)
CFI_STARTPROC32 simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,0
CFI_REGISTER rsp,rbp
swapgs
@ -186,6 +187,7 @@ ENDPROC(ia32_sysenter_target)
*/
ENTRY(ia32_cstar_target)
CFI_STARTPROC32 simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,PDA_STACKOFFSET
CFI_REGISTER rip,rcx
/*CFI_REGISTER rflags,r11*/
@ -293,6 +295,7 @@ ia32_badarg:
ENTRY(ia32_syscall)
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,SS+8-RIP
/*CFI_REL_OFFSET ss,SS-RIP*/
CFI_REL_OFFSET rsp,RSP-RIP
@ -370,6 +373,7 @@ ENTRY(ia32_ptregs_common)
popq %r11
CFI_ENDPROC
CFI_STARTPROC32 simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,SS+8-ARGOFFSET
CFI_REL_OFFSET rax,RAX-ARGOFFSET
CFI_REL_OFFSET rcx,RCX-ARGOFFSET
@ -703,8 +707,8 @@ ia32_sys_call_table:
.quad sys_readlinkat /* 305 */
.quad sys_fchmodat
.quad sys_faccessat
.quad quiet_ni_syscall /* pselect6 for now */
.quad quiet_ni_syscall /* ppoll for now */
.quad compat_sys_pselect6
.quad compat_sys_ppoll
.quad sys_unshare /* 310 */
.quad compat_sys_set_robust_list
.quad compat_sys_get_robust_list
@ -713,4 +717,5 @@ ia32_sys_call_table:
.quad sys_tee
.quad compat_sys_vmsplice
.quad compat_sys_move_pages
.quad sys_getcpu
ia32_syscall_end:
@ -117,6 +117,10 @@ static int putreg32(struct task_struct *child, unsigned regno, u32 val)
if ((0x5454 >> ((val >> (16 + 4*i)) & 0xf)) & 1)
return -EIO;
child->thread.debugreg7 = val;
if (val)
set_tsk_thread_flag(child, TIF_DEBUG);
else
clear_tsk_thread_flag(child, TIF_DEBUG);
break;
default:
@ -371,8 +375,10 @@ asmlinkage long sys32_ptrace(long request, u32 pid, u32 addr, u32 data)
ret = -EIO;
if (!access_ok(VERIFY_READ, u, sizeof(*u)))
break;
/* no checking to be bug-to-bug compatible with i386 */
__copy_from_user(&child->thread.i387.fxsave, u, sizeof(*u));
/* no checking to be bug-to-bug compatible with i386. */
/* but silence warning */
if (__copy_from_user(&child->thread.i387.fxsave, u, sizeof(*u)))
;
set_stopped_child_used_math(child);
child->thread.i387.fxsave.mxcsr &= mxcsr_feature_mask;
ret = 0;
@ -60,6 +60,7 @@
#include <linux/highuid.h>
#include <linux/vmalloc.h>
#include <linux/fsnotify.h>
#include <linux/sysctl.h>
#include <asm/mman.h>
#include <asm/types.h>
#include <asm/uaccess.h>
@ -389,7 +390,9 @@ sys32_rt_sigprocmask(int how, compat_sigset_t __user *set,
}
}
set_fs (KERNEL_DS);
ret = sys_rt_sigprocmask(how, set ? &s : NULL, oset ? &s : NULL,
ret = sys_rt_sigprocmask(how,
set ? (sigset_t __user *)&s : NULL,
oset ? (sigset_t __user *)&s : NULL,
sigsetsize);
set_fs (old_fs);
if (ret) return ret;
@ -541,7 +544,7 @@ sys32_sysinfo(struct sysinfo32 __user *info)
int bitcount = 0;
set_fs (KERNEL_DS);
ret = sys_sysinfo(&s);
ret = sys_sysinfo((struct sysinfo __user *)&s);
set_fs (old_fs);
/* Check to see if any memory value is too large for 32-bit and scale
@ -589,7 +592,7 @@ sys32_sched_rr_get_interval(compat_pid_t pid, struct compat_timespec __user *int
mm_segment_t old_fs = get_fs ();
set_fs (KERNEL_DS);
ret = sys_sched_rr_get_interval(pid, &t);
ret = sys_sched_rr_get_interval(pid, (struct timespec __user *)&t);
set_fs (old_fs);
if (put_compat_timespec(&t, interval))
return -EFAULT;
@ -605,7 +608,7 @@ sys32_rt_sigpending(compat_sigset_t __user *set, compat_size_t sigsetsize)
mm_segment_t old_fs = get_fs();
set_fs (KERNEL_DS);
ret = sys_rt_sigpending(&s, sigsetsize);
ret = sys_rt_sigpending((sigset_t __user *)&s, sigsetsize);
set_fs (old_fs);
if (!ret) {
switch (_NSIG_WORDS) {
@ -630,7 +633,7 @@ sys32_rt_sigqueueinfo(int pid, int sig, compat_siginfo_t __user *uinfo)
if (copy_siginfo_from_user32(&info, uinfo))
return -EFAULT;
set_fs (KERNEL_DS);
ret = sys_rt_sigqueueinfo(pid, sig, &info);
ret = sys_rt_sigqueueinfo(pid, sig, (siginfo_t __user *)&info);
set_fs (old_fs);
return ret;
}
@ -666,9 +669,6 @@ sys32_sysctl(struct sysctl_ia32 __user *args32)
size_t oldlen;
int __user *namep;
long ret;
extern int do_sysctl(int *name, int nlen, void *oldval, size_t *oldlenp,
void *newval, size_t newlen);
if (copy_from_user(&a32, args32, sizeof (a32)))
return -EFAULT;
@ -692,7 +692,8 @@ sys32_sysctl(struct sysctl_ia32 __user *args32)
set_fs(KERNEL_DS);
lock_kernel();
ret = do_sysctl(namep, a32.nlen, oldvalp, &oldlen, newvalp, (size_t) a32.newlen);
ret = do_sysctl(namep, a32.nlen, oldvalp, (size_t __user *)&oldlen,
newvalp, (size_t) a32.newlen);
unlock_kernel();
set_fs(old_fs);
@ -743,7 +744,8 @@ sys32_sendfile(int out_fd, int in_fd, compat_off_t __user *offset, s32 count)
return -EFAULT;
set_fs(KERNEL_DS);
ret = sys_sendfile(out_fd, in_fd, offset ? &of : NULL, count);
ret = sys_sendfile(out_fd, in_fd, offset ? (off_t __user *)&of : NULL,
count);
set_fs(old_fs);
if (offset && put_user(of, offset))
@ -778,7 +780,7 @@ asmlinkage long sys32_mmap2(unsigned long addr, unsigned long len,
asmlinkage long sys32_olduname(struct oldold_utsname __user * name)
{
int error;
int err;
if (!name)
return -EFAULT;
@ -787,27 +789,31 @@ asmlinkage long sys32_olduname(struct oldold_utsname __user * name)
down_read(&uts_sem);
error = __copy_to_user(&name->sysname,&system_utsname.sysname,__OLD_UTS_LEN);
__put_user(0,name->sysname+__OLD_UTS_LEN);
__copy_to_user(&name->nodename,&system_utsname.nodename,__OLD_UTS_LEN);
__put_user(0,name->nodename+__OLD_UTS_LEN);
__copy_to_user(&name->release,&system_utsname.release,__OLD_UTS_LEN);
__put_user(0,name->release+__OLD_UTS_LEN);
__copy_to_user(&name->version,&system_utsname.version,__OLD_UTS_LEN);
__put_user(0,name->version+__OLD_UTS_LEN);
err = __copy_to_user(&name->sysname,&system_utsname.sysname,
__OLD_UTS_LEN);
err |= __put_user(0,name->sysname+__OLD_UTS_LEN);
err |= __copy_to_user(&name->nodename,&system_utsname.nodename,
__OLD_UTS_LEN);
err |= __put_user(0,name->nodename+__OLD_UTS_LEN);
err |= __copy_to_user(&name->release,&system_utsname.release,
__OLD_UTS_LEN);
err |= __put_user(0,name->release+__OLD_UTS_LEN);
err |= __copy_to_user(&name->version,&system_utsname.version,
__OLD_UTS_LEN);
err |= __put_user(0,name->version+__OLD_UTS_LEN);
{
char *arch = "x86_64";
if (personality(current->personality) == PER_LINUX32)
arch = "i686";
__copy_to_user(&name->machine,arch,strlen(arch)+1);
err |= __copy_to_user(&name->machine,arch,strlen(arch)+1);
}
up_read(&uts_sem);
error = error ? -EFAULT : 0;
err = err ? -EFAULT : 0;
return error;
return err;
}
long sys32_uname(struct old_utsname __user * name)
@ -831,7 +837,7 @@ long sys32_ustat(unsigned dev, struct ustat32 __user *u32p)
seg = get_fs();
set_fs(KERNEL_DS);
ret = sys_ustat(dev,&u);
ret = sys_ustat(dev, (struct ustat __user *)&u);
set_fs(seg);
if (ret >= 0) {
if (!access_ok(VERIFY_WRITE,u32p,sizeof(struct ustat32)) ||


@ -11,7 +11,7 @@ obj-y := process.o signal.o entry.o traps.o irq.o \
pci-dma.o pci-nommu.o alternative.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_X86_MCE) += mce.o
obj-$(CONFIG_X86_MCE) += mce.o therm_throt.o
obj-$(CONFIG_X86_MCE_INTEL) += mce_intel.o
obj-$(CONFIG_X86_MCE_AMD) += mce_amd.o
obj-$(CONFIG_MTRR) += ../../i386/kernel/cpu/mtrr/
@ -20,8 +20,8 @@ obj-$(CONFIG_X86_MSR) += msr.o
obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_X86_CPUID) += cpuid.o
obj-$(CONFIG_SMP) += smp.o smpboot.o trampoline.o
obj-$(CONFIG_X86_LOCAL_APIC) += apic.o nmi.o
obj-$(CONFIG_X86_IO_APIC) += io_apic.o mpparse.o \
obj-y += apic.o nmi.o
obj-y += io_apic.o mpparse.o \
genapic.o genapic_cluster.o genapic_flat.o
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
@ -39,12 +39,14 @@ obj-$(CONFIG_K8_NB) += k8.o
obj-$(CONFIG_AUDIT) += audit.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_PCI) += early-quirks.o
obj-y += topology.o
obj-y += intel_cacheinfo.o
CFLAGS_vsyscall.o := $(PROFILING) -g0
therm_throt-y += ../../i386/kernel/cpu/mcheck/therm_throt.o
bootflag-y += ../../i386/kernel/bootflag.o
cpuid-$(subst m,y,$(CONFIG_X86_CPUID)) += ../../i386/kernel/cpuid.o
topology-y += ../../i386/kernel/topology.o
@ -54,4 +56,3 @@ quirks-y += ../../i386/kernel/quirks.o
i8237-y += ../../i386/kernel/i8237.o
msr-$(subst m,y,$(CONFIG_X86_MSR)) += ../../i386/kernel/msr.o
alternative-y += ../../i386/kernel/alternative.o


@ -17,6 +17,7 @@
#include <linux/pci_ids.h>
#include <linux/pci.h>
#include <linux/bitops.h>
#include <linux/ioport.h>
#include <asm/e820.h>
#include <asm/io.h>
#include <asm/proto.h>
@ -33,6 +34,18 @@ int fallback_aper_force __initdata = 0;
int fix_aperture __initdata = 1;
static struct resource gart_resource = {
.name = "GART",
.flags = IORESOURCE_MEM,
};
static void __init insert_aperture_resource(u32 aper_base, u32 aper_size)
{
gart_resource.start = aper_base;
gart_resource.end = aper_base + aper_size - 1;
insert_resource(&iomem_resource, &gart_resource);
}
/* This code runs before the PCI subsystem is initialized, so just
access the northbridge directly. */
@ -48,7 +61,7 @@ static u32 __init allocate_aperture(void)
/*
* Aperture has to be naturally aligned. This means an 2GB aperture won't
* have much chances to find a place in the lower 4GB of memory.
* have much chance of finding a place in the lower 4GB of memory.
* Unfortunately we cannot move it up because that would make the
* IOMMU useless.
*/
@ -62,6 +75,7 @@ static u32 __init allocate_aperture(void)
}
printk("Mapping aperture over %d KB of RAM @ %lx\n",
aper_size >> 10, __pa(p));
insert_aperture_resource((u32)__pa(p), aper_size);
return (u32)__pa(p);
}
@ -198,7 +212,7 @@ void __init iommu_hole_init(void)
u64 aper_base, last_aper_base = 0;
int valid_agp = 0;
if (iommu_aperture_disabled || !fix_aperture)
if (iommu_aperture_disabled || !fix_aperture || !early_pci_allowed())
return;
printk("Checking aperture...\n");
@ -233,8 +247,13 @@ void __init iommu_hole_init(void)
last_aper_base = aper_base;
}
if (!fix && !fallback_aper_force)
if (!fix && !fallback_aper_force) {
if (last_aper_base) {
unsigned long n = (32 * 1024 * 1024) << last_aper_order;
insert_aperture_resource((u32)last_aper_base, n);
}
return;
}
if (!fallback_aper_force)
aper_alloc = search_agp_bridge(&aper_order, &valid_agp);
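
insert_resource() is used here rather than request_resource() because it can graft a range into the iomem tree even when existing entries overlap it (they become parents or children as appropriate), which suits an aperture that may already lie inside a RAM or PCI window. A module-style sketch of the same reservation idiom, with hypothetical names:

#include <linux/ioport.h>

static struct resource gart_demo_res = {
	.name  = "GART (demo)",
	.flags = IORESOURCE_MEM,
};

static void gart_demo_claim(u32 base, u32 size)
{
	gart_demo_res.start = base;
	gart_demo_res.end   = base + size - 1;
	insert_resource(&iomem_resource, &gart_demo_res);	/* visible in /proc/iomem */
}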


@ -25,6 +25,7 @@
#include <linux/kernel_stat.h>
#include <linux/sysdev.h>
#include <linux/module.h>
#include <linux/ioport.h>
#include <asm/atomic.h>
#include <asm/smp.h>
@ -36,13 +37,20 @@
#include <asm/idle.h>
#include <asm/proto.h>
#include <asm/timex.h>
#include <asm/apic.h>
int apic_mapped;
int apic_verbosity;
int apic_runs_main_timer;
int apic_calibrate_pmtmr __initdata;
int disable_apic_timer __initdata;
static struct resource lapic_resource = {
.name = "Local APIC",
.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
};
/*
* cpu_mask that denotes the CPUs that needs timer interrupt coming in as
* IPIs in place of local APIC timers
@ -136,72 +144,40 @@ void clear_local_APIC(void)
apic_read(APIC_ESR);
}
void __init connect_bsp_APIC(void)
{
if (pic_mode) {
/*
* Do not trust the local APIC being empty at bootup.
*/
clear_local_APIC();
/*
* PIC mode, enable APIC mode in the IMCR, i.e.
* connect BSP's local APIC to INT and NMI lines.
*/
apic_printk(APIC_VERBOSE, "leaving PIC mode, enabling APIC mode.\n");
outb(0x70, 0x22);
outb(0x01, 0x23);
}
}
void disconnect_bsp_APIC(int virt_wire_setup)
{
if (pic_mode) {
/*
* Put the board back into PIC mode (has an effect
* only on certain older boards). Note that APIC
* interrupts, including IPIs, won't work beyond
* this point! The only exception are INIT IPIs.
*/
apic_printk(APIC_QUIET, "disabling APIC mode, entering PIC mode.\n");
outb(0x70, 0x22);
outb(0x00, 0x23);
/* Go back to Virtual Wire compatibility mode */
unsigned long value;
/* For the spurious interrupt use vector F, and enable it */
value = apic_read(APIC_SPIV);
value &= ~APIC_VECTOR_MASK;
value |= APIC_SPIV_APIC_ENABLED;
value |= 0xf;
apic_write(APIC_SPIV, value);
if (!virt_wire_setup) {
/* For LVT0 make it edge triggered, active high, external and enabled */
value = apic_read(APIC_LVT0);
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED );
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
apic_write(APIC_LVT0, value);
} else {
/* Disable LVT0 */
apic_write(APIC_LVT0, APIC_LVT_MASKED);
}
else {
/* Go back to Virtual Wire compatibility mode */
unsigned long value;
/* For the spurious interrupt use vector F, and enable it */
value = apic_read(APIC_SPIV);
value &= ~APIC_VECTOR_MASK;
value |= APIC_SPIV_APIC_ENABLED;
value |= 0xf;
apic_write(APIC_SPIV, value);
if (!virt_wire_setup) {
/* For LVT0 make it edge triggered, active high, external and enabled */
value = apic_read(APIC_LVT0);
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED );
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_EXTINT);
apic_write(APIC_LVT0, value);
}
else {
/* Disable LVT0 */
apic_write(APIC_LVT0, APIC_LVT_MASKED);
}
/* For LVT1 make it edge triggered, active high, nmi and enabled */
value = apic_read(APIC_LVT1);
value &= ~(
APIC_MODE_MASK | APIC_SEND_PENDING |
/* For LVT1 make it edge triggered, active high, nmi and enabled */
value = apic_read(APIC_LVT1);
value &= ~(APIC_MODE_MASK | APIC_SEND_PENDING |
APIC_INPUT_POLARITY | APIC_LVT_REMOTE_IRR |
APIC_LVT_LEVEL_TRIGGER | APIC_LVT_MASKED);
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_NMI);
apic_write(APIC_LVT1, value);
}
value |= APIC_LVT_REMOTE_IRR | APIC_SEND_PENDING;
value = SET_APIC_DELIVERY_MODE(value, APIC_MODE_NMI);
apic_write(APIC_LVT1, value);
}
void disable_local_APIC(void)
@ -297,8 +273,6 @@ void __init sync_Arb_IDs(void)
| APIC_DM_INIT);
}
extern void __error_in_apic_c (void);
/*
* An initial setup of the virtual wire mode.
*/
@ -345,8 +319,7 @@ void __cpuinit setup_local_APIC (void)
value = apic_read(APIC_LVR);
if ((SPURIOUS_APIC_VECTOR & 0x0f) != 0x0f)
__error_in_apic_c();
BUILD_BUG_ON((SPURIOUS_APIC_VECTOR & 0x0f) != 0x0f);
/*
* Double-check whether this APIC is really registered.
@ -399,32 +372,8 @@ void __cpuinit setup_local_APIC (void)
*/
value |= APIC_SPIV_APIC_ENABLED;
/*
* Some unknown Intel IO/APIC (or APIC) errata is biting us with
* certain networking cards. If high frequency interrupts are
* happening on a particular IOAPIC pin, plus the IOAPIC routing
* entry is masked/unmasked at a high rate as well then sooner or
* later IOAPIC line gets 'stuck', no more interrupts are received
* from the device. If focus CPU is disabled then the hang goes
* away, oh well :-(
*
* [ This bug can be reproduced easily with a level-triggered
* PCI Ne2000 networking cards and PII/PIII processors, dual
* BX chipset. ]
*/
/*
* Actually disabling the focus CPU check just makes the hang less
* frequent as it makes the interrupt distributon model be more
* like LRU than MRU (the short-term load is more even across CPUs).
* See also the comment in end_level_ioapic_irq(). --macro
*/
#if 1
/* Enable focus processor (bit==0) */
value &= ~APIC_SPIV_FOCUS_DISABLED;
#else
/* Disable focus processor (bit==1) */
value |= APIC_SPIV_FOCUS_DISABLED;
#endif
/* We always use processor focus */
/*
* Set spurious IRQ vector
*/
@ -442,7 +391,7 @@ void __cpuinit setup_local_APIC (void)
* TODO: set up through-local-APIC from through-I/O-APIC? --macro
*/
value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
if (!smp_processor_id() && (pic_mode || !value)) {
if (!smp_processor_id() && !value) {
value = APIC_DM_EXTINT;
apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n", smp_processor_id());
} else {
@ -479,8 +428,7 @@ void __cpuinit setup_local_APIC (void)
}
nmi_watchdog_default();
if (nmi_watchdog == NMI_LOCAL_APIC)
setup_apic_nmi_watchdog();
setup_apic_nmi_watchdog(NULL);
apic_pm_activate();
}
@ -527,8 +475,7 @@ static int lapic_suspend(struct sys_device *dev, pm_message_t state)
apic_pm_state.apic_tmict = apic_read(APIC_TMICT);
apic_pm_state.apic_tdcr = apic_read(APIC_TDCR);
apic_pm_state.apic_thmr = apic_read(APIC_LVTTHMR);
local_save_flags(flags);
local_irq_disable();
local_irq_save(flags);
disable_local_APIC();
local_irq_restore(flags);
return 0;
@ -606,18 +553,24 @@ static void apic_pm_activate(void) { }
static int __init apic_set_verbosity(char *str)
{
if (str == NULL) {
skip_ioapic_setup = 0;
ioapic_force = 1;
return 0;
}
if (strcmp("debug", str) == 0)
apic_verbosity = APIC_DEBUG;
else if (strcmp("verbose", str) == 0)
apic_verbosity = APIC_VERBOSE;
else
else {
printk(KERN_WARNING "APIC Verbosity level %s not recognised"
" use apic=verbose or apic=debug", str);
" use apic=verbose or apic=debug\n", str);
return -EINVAL;
}
return 1;
return 0;
}
__setup("apic=", apic_set_verbosity);
early_param("apic", apic_set_verbosity);
/*
* Detect and enable local APICs on non-SMP boards.
@ -638,6 +591,40 @@ static int __init detect_init_APIC (void)
return 0;
}
#ifdef CONFIG_X86_IO_APIC
static struct resource * __init ioapic_setup_resources(void)
{
#define IOAPIC_RESOURCE_NAME_SIZE 11
unsigned long n;
struct resource *res;
char *mem;
int i;
if (nr_ioapics <= 0)
return NULL;
n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource);
n *= nr_ioapics;
res = alloc_bootmem(n);
if (!res)
return NULL;
memset(res, 0, n);
mem = (void *)&res[nr_ioapics];
for (i = 0; i < nr_ioapics; i++) {
res[i].name = mem;
res[i].flags = IORESOURCE_MEM | IORESOURCE_BUSY;
snprintf(mem, IOAPIC_RESOURCE_NAME_SIZE, "IOAPIC %u", i);
mem += IOAPIC_RESOURCE_NAME_SIZE;
}
return res;
}
#endif
void __init init_apic_mappings(void)
{
unsigned long apic_phys;
@ -654,19 +641,26 @@ void __init init_apic_mappings(void)
apic_phys = mp_lapic_addr;
set_fixmap_nocache(FIX_APIC_BASE, apic_phys);
apic_mapped = 1;
apic_printk(APIC_VERBOSE,"mapped APIC to %16lx (%16lx)\n", APIC_BASE, apic_phys);
/* Put local APIC into the resource map. */
lapic_resource.start = apic_phys;
lapic_resource.end = lapic_resource.start + PAGE_SIZE - 1;
insert_resource(&iomem_resource, &lapic_resource);
/*
* Fetch the APIC ID of the BSP in case we have a
* default configuration (or the MP table is broken).
*/
boot_cpu_id = GET_APIC_ID(apic_read(APIC_ID));
#ifdef CONFIG_X86_IO_APIC
{
unsigned long ioapic_phys, idx = FIX_IO_APIC_BASE_0;
int i;
struct resource *ioapic_res;
ioapic_res = ioapic_setup_resources();
for (i = 0; i < nr_ioapics; i++) {
if (smp_found_config) {
ioapic_phys = mp_ioapics[i].mpc_apicaddr;
@ -678,9 +672,15 @@ void __init init_apic_mappings(void)
apic_printk(APIC_VERBOSE,"mapped IOAPIC to %016lx (%016lx)\n",
__fix_to_virt(idx), ioapic_phys);
idx++;
if (ioapic_res) {
ioapic_res->start = ioapic_phys;
ioapic_res->end = ioapic_phys + (4 * 1024) - 1;
insert_resource(&iomem_resource, ioapic_res);
ioapic_res++;
}
}
}
#endif
}
/*
@ -951,7 +951,7 @@ void smp_local_timer_interrupt(struct pt_regs *regs)
* We take the 'long' return path, and there every subsystem
* grabs the appropriate locks (kernel lock/ irq lock).
*
* we might want to decouple profiling from the 'long path',
* We might want to decouple profiling from the 'long path',
* and do the profiling totally in assembly.
*
* Currently this isn't too much of an issue (performance wise),
@ -1123,19 +1123,15 @@ int __init APIC_init_uniprocessor (void)
verify_local_APIC();
connect_bsp_APIC();
phys_cpu_present_map = physid_mask_of_physid(boot_cpu_id);
apic_write(APIC_ID, SET_APIC_ID(boot_cpu_id));
setup_local_APIC();
#ifdef CONFIG_X86_IO_APIC
if (smp_found_config && !skip_ioapic_setup && nr_ioapics)
setup_IO_APIC();
setup_IO_APIC();
else
nr_ioapics = 0;
#endif
setup_boot_APIC_clock();
check_nmi_watchdog();
return 0;
@ -1144,14 +1140,17 @@ int __init APIC_init_uniprocessor (void)
static __init int setup_disableapic(char *str)
{
disable_apic = 1;
return 1;
}
clear_bit(X86_FEATURE_APIC, boot_cpu_data.x86_capability);
return 0;
}
early_param("disableapic", setup_disableapic);
/* same as disableapic, for compatibility */
static __init int setup_nolapic(char *str)
{
disable_apic = 1;
return 1;
return setup_disableapic(str);
}
early_param("nolapic", setup_nolapic);
static __init int setup_noapictimer(char *str)
{
@ -1184,11 +1183,5 @@ static __init int setup_apicpmtimer(char *s)
}
__setup("apicpmtimer", setup_apicpmtimer);
/* dummy parsing: see setup.c */
__setup("disableapic", setup_disableapic);
__setup("nolapic", setup_nolapic); /* same as disableapic, for compatibility */
__setup("noapictimer", setup_noapictimer);
/* no "lapic" flag - we only use the lapic when the BIOS tells us so. */
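
Several hunks in this file convert __setup() handlers to early_param(). The two differ in when they run (parse_early_param() walks the command line long before the regular option pass) and in their return convention (0 means handled, where __setup() handlers return 1). A sketch with a hypothetical flag:

#include <linux/init.h>
#include <linux/string.h>

static int example_enabled __initdata;

static int __init parse_example(char *str)
{
	if (!str)
		return -EINVAL;		/* "example" given with no '=' value */
	if (!strcmp(str, "on"))
		example_enabled = 1;
	return 0;			/* 0 = consumed; a __setup() handler returns 1 */
}
early_param("example", parse_example);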


@ -23,6 +23,7 @@
#include <asm/nmi.h>
#include <asm/hw_irq.h>
#include <asm/mach_apic.h>
#include <asm/kdebug.h>
/* This keeps a track of which one is crashing cpu. */
static int crashing_cpu;
@ -68,7 +69,7 @@ static void crash_save_this_cpu(struct pt_regs *regs, int cpu)
* for the data I pass, and I need tags
* on the data to indicate what information I have
* squirrelled away. ELF notes happen to provide
* all of that that no need to invent something new.
* all of that, no need to invent something new.
*/
buf = (u32*)per_cpu_ptr(crash_notes, cpu);
@ -95,15 +96,25 @@ static void crash_save_self(struct pt_regs *regs)
#ifdef CONFIG_SMP
static atomic_t waiting_for_crash_ipi;
static int crash_nmi_callback(struct pt_regs *regs, int cpu)
static int crash_nmi_callback(struct notifier_block *self,
unsigned long val, void *data)
{
struct pt_regs *regs;
int cpu;
if (val != DIE_NMI_IPI)
return NOTIFY_OK;
regs = ((struct die_args *)data)->regs;
cpu = raw_smp_processor_id();
/*
* Don't do anything if this handler is invoked on crashing cpu.
* Otherwise, system will completely hang. Crashing cpu can get
* an NMI if system was initially booted with nmi_watchdog parameter.
*/
if (cpu == crashing_cpu)
return 1;
return NOTIFY_STOP;
local_irq_disable();
crash_save_this_cpu(regs, cpu);
@ -127,12 +138,17 @@ static void smp_send_nmi_allbutself(void)
* cpu hotplug shouldn't matter.
*/
static struct notifier_block crash_nmi_nb = {
.notifier_call = crash_nmi_callback,
};
static void nmi_shootdown_cpus(void)
{
unsigned long msecs;
atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
set_nmi_callback(crash_nmi_callback);
if (register_die_notifier(&crash_nmi_nb))
return; /* return what? */
/*
* Ensure the new callback function is set before sending
@ -178,9 +194,7 @@ void machine_crash_shutdown(struct pt_regs *regs)
if(cpu_has_apic)
disable_local_APIC();
#if defined(CONFIG_X86_IO_APIC)
disable_IO_APIC();
#endif
crash_save_self(regs);
}
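
set_nmi_callback() allowed a single owner, which is why this crash path used to conflict with the NMI watchdog; a die-notifier can share the chain with the watchdog and oprofile. The general shape of such a handler, condensed from the hunk above with hypothetical names:

#include <linux/notifier.h>
#include <asm/kdebug.h>

static int example_nmi_cb(struct notifier_block *self,
			  unsigned long val, void *data)
{
	struct pt_regs *regs;

	if (val != DIE_NMI_IPI)
		return NOTIFY_OK;	/* not ours, keep walking the chain */
	regs = ((struct die_args *)data)->regs;
	/* ... handle the NMI IPI using regs ... */
	return NOTIFY_STOP;		/* consumed, stop the other handlers */
}

static struct notifier_block example_nmi_nb = {
	.notifier_call = example_nmi_cb,
};

/* at setup time: */
	register_die_notifier(&example_nmi_nb);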


@ -25,6 +25,8 @@
#include <asm/bootsetup.h>
#include <asm/sections.h>
struct e820map e820 __initdata;
/*
* PFN of last memory page.
*/
@ -41,7 +43,7 @@ unsigned long end_pfn_map;
/*
* Last pfn which the user wants to use.
*/
unsigned long end_user_pfn = MAXMEM>>PAGE_SHIFT;
static unsigned long __initdata end_user_pfn = MAXMEM>>PAGE_SHIFT;
extern struct resource code_resource, data_resource;
@ -70,12 +72,7 @@ static inline int bad_addr(unsigned long *addrp, unsigned long size)
return 1;
}
#endif
/* kernel code + 640k memory hole (later should not be needed, but
be paranoid for now) */
if (last >= 640*1024 && addr < 1024*1024) {
*addrp = 1024*1024;
return 1;
}
/* kernel code */
if (last >= __pa_symbol(&_text) && last < __pa_symbol(&_end)) {
*addrp = __pa_symbol(&_end);
return 1;
@ -565,13 +562,6 @@ static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
* If we're lucky and live on a modern system, the setup code
* will have given us a memory map that we can use to properly
* set up memory. If we aren't, we'll fake a memory map.
*
* We check to see that the memory map contains at least 2 elements
* before we'll use it, because the detection code in setup.S may
* not be perfect and most every PC known to man has two memory
* regions: one from 0 to 640k, and one from 1mb up. (The IBM
* thinkpad 560x, for example, does not cooperate with the memory
* detection code.)
*/
static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
{
@ -589,34 +579,19 @@ static int __init copy_e820_map(struct e820entry * biosmap, int nr_map)
if (start > end)
return -1;
/*
* Some BIOSes claim RAM in the 640k - 1M region.
* Not right. Fix it up.
*
* This should be removed on Hammer which is supposed to not
* have non e820 covered ISA mappings there, but I had some strange
* problems so it stays for now. -AK
*/
if (type == E820_RAM) {
if (start < 0x100000ULL && end > 0xA0000ULL) {
if (start < 0xA0000ULL)
add_memory_region(start, 0xA0000ULL-start, type);
if (end <= 0x100000ULL)
continue;
start = 0x100000ULL;
size = end - start;
}
}
add_memory_region(start, size, type);
} while (biosmap++,--nr_map);
return 0;
}
void early_panic(char *msg)
{
early_printk(msg);
panic(msg);
}
void __init setup_memory_region(void)
{
char *who = "BIOS-e820";
/*
* Try to copy the BIOS-supplied E820-map.
*
@ -624,51 +599,70 @@ void __init setup_memory_region(void)
* the next section from 1mb->appropriate_mem_k
*/
sanitize_e820_map(E820_MAP, &E820_MAP_NR);
if (copy_e820_map(E820_MAP, E820_MAP_NR) < 0) {
unsigned long mem_size;
/* compare results from other methods and take the greater */
if (ALT_MEM_K < EXT_MEM_K) {
mem_size = EXT_MEM_K;
who = "BIOS-88";
} else {
mem_size = ALT_MEM_K;
who = "BIOS-e801";
}
e820.nr_map = 0;
add_memory_region(0, LOWMEMSIZE(), E820_RAM);
add_memory_region(HIGH_MEMORY, mem_size << 10, E820_RAM);
}
if (copy_e820_map(E820_MAP, E820_MAP_NR) < 0)
early_panic("Cannot find a valid memory map");
printk(KERN_INFO "BIOS-provided physical RAM map:\n");
e820_print_map(who);
e820_print_map("BIOS-e820");
}
void __init parse_memopt(char *p, char **from)
{
end_user_pfn = memparse(p, from);
end_user_pfn >>= PAGE_SHIFT;
}
void __init parse_memmapopt(char *p, char **from)
static int __init parse_memopt(char *p)
{
if (!p)
return -EINVAL;
end_user_pfn = memparse(p, &p);
end_user_pfn >>= PAGE_SHIFT;
return 0;
}
early_param("mem", parse_memopt);
static int userdef __initdata;
static int __init parse_memmap_opt(char *p)
{
char *oldp;
unsigned long long start_at, mem_size;
mem_size = memparse(p, from);
p = *from;
if (!strcmp(p, "exactmap")) {
#ifdef CONFIG_CRASH_DUMP
/* If we are doing a crash dump, we
* still need to know the real mem
* size before original memory map is
* reset.
*/
saved_max_pfn = e820_end_of_ram();
#endif
end_pfn_map = 0;
e820.nr_map = 0;
userdef = 1;
return 0;
}
oldp = p;
mem_size = memparse(p, &p);
if (p == oldp)
return -EINVAL;
if (*p == '@') {
start_at = memparse(p+1, from);
start_at = memparse(p+1, &p);
add_memory_region(start_at, mem_size, E820_RAM);
} else if (*p == '#') {
start_at = memparse(p+1, from);
start_at = memparse(p+1, &p);
add_memory_region(start_at, mem_size, E820_ACPI);
} else if (*p == '$') {
start_at = memparse(p+1, from);
start_at = memparse(p+1, &p);
add_memory_region(start_at, mem_size, E820_RESERVED);
} else {
end_user_pfn = (mem_size >> PAGE_SHIFT);
}
p = *from;
return *p == '\0' ? 0 : -EINVAL;
}
early_param("memmap", parse_memmap_opt);
void finish_e820_parsing(void)
{
if (userdef) {
printk(KERN_INFO "user-defined physical RAM map:\n");
e820_print_map("user");
}
}
unsigned long pci_mem_start = 0xaeedbabe;
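
Both mem= and memmap= lean on memparse(), which consumes a number with an optional K/M/G suffix and advances the caller's cursor, so "64M@16M" splits cleanly at the '@'. A standalone userspace analogue of the suffix handling (an illustration, not the kernel function):

#include <stdio.h>
#include <stdlib.h>

static unsigned long long memparse_demo(const char *p, char **retp)
{
	unsigned long long v = strtoull(p, retp, 0);

	switch (**retp) {
	case 'G': case 'g': v <<= 10;	/* fall through */
	case 'M': case 'm': v <<= 10;	/* fall through */
	case 'K': case 'k': v <<= 10; (*retp)++;
	}
	return v;
}

int main(void)
{
	char *rest;
	unsigned long long size = memparse_demo("64M@16M", &rest);

	printf("size=%llu rest=%s\n", size, rest);	/* size=67108864 rest=@16M */
	return 0;
}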


@ -0,0 +1,122 @@
/* Various workarounds for chipset bugs.
This code runs very early and can't use the regular PCI subsystem
The entries are keyed to PCI bridges which usually identify chipsets
uniquely.
This is only for whole classes of chipsets with specific problems which
need early invasive action (e.g. before the timers are initialized).
Most PCI device specific workarounds can be done later and should be
in standard PCI quirks
Mainboard specific bugs should be handled by DMI entries.
CPU specific bugs in setup.c */
#include <linux/pci.h>
#include <linux/acpi.h>
#include <linux/pci_ids.h>
#include <asm/pci-direct.h>
#include <asm/proto.h>
#include <asm/dma.h>
static void via_bugs(void)
{
#ifdef CONFIG_IOMMU
if ((end_pfn > MAX_DMA32_PFN || force_iommu) &&
!iommu_aperture_allowed) {
printk(KERN_INFO
"Looks like a VIA chipset. Disabling IOMMU. Override with iommu=allowed\n");
iommu_aperture_disabled = 1;
}
#endif
}
#ifdef CONFIG_ACPI
static int nvidia_hpet_detected __initdata;
static int __init nvidia_hpet_check(unsigned long phys, unsigned long size)
{
nvidia_hpet_detected = 1;
return 0;
}
#endif
static void nvidia_bugs(void)
{
#ifdef CONFIG_ACPI
/*
* All timer overrides on Nvidia are
* wrong unless HPET is enabled.
*/
nvidia_hpet_detected = 0;
acpi_table_parse(ACPI_HPET, nvidia_hpet_check);
if (nvidia_hpet_detected == 0) {
acpi_skip_timer_override = 1;
printk(KERN_INFO "Nvidia board "
"detected. Ignoring ACPI "
"timer override.\n");
}
#endif
/* RED-PEN skip them on mptables too? */
}
static void ati_bugs(void)
{
#if 1 /* for testing */
printk("ATI board detected\n");
#endif
/* No bugs right now */
}
struct chipset {
u16 vendor;
void (*f)(void);
};
static struct chipset early_qrk[] = {
{ PCI_VENDOR_ID_NVIDIA, nvidia_bugs },
{ PCI_VENDOR_ID_VIA, via_bugs },
{ PCI_VENDOR_ID_ATI, ati_bugs },
{}
};
void __init early_quirks(void)
{
int num, slot, func;
if (!early_pci_allowed())
return;
/* Poor man's PCI discovery */
for (num = 0; num < 32; num++) {
for (slot = 0; slot < 32; slot++) {
for (func = 0; func < 8; func++) {
u32 class;
u32 vendor;
u8 type;
int i;
class = read_pci_config(num,slot,func,
PCI_CLASS_REVISION);
if (class == 0xffffffff)
break;
if ((class >> 16) != PCI_CLASS_BRIDGE_PCI)
continue;
vendor = read_pci_config(num, slot, func,
PCI_VENDOR_ID);
vendor &= 0xffff;
for (i = 0; early_qrk[i].f; i++)
if (early_qrk[i].vendor == vendor) {
early_qrk[i].f();
return;
}
type = read_pci_config_byte(num, slot, func,
PCI_HEADER_TYPE);
if (!(type & 0x80))
break;
}
}
}
}
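
The "poor man's discovery" loop can run this early because type 1 configuration access needs nothing from the PCI core: an address word goes out port 0xCF8 and the data comes back on 0xCFC. Roughly what read_pci_config() boils down to (a sketch of the mechanism, not the kernel's exact helper):

#include <linux/types.h>
#include <asm/io.h>

static u32 conf1_read(unsigned bus, unsigned slot, unsigned func, unsigned off)
{
	/* bit 31 = enable, then bus[23:16], device[15:11], function[10:8],
	   dword-aligned register[7:2] */
	outl(0x80000000 | (bus << 16) | (slot << 11) | (func << 8) | (off & 0xfc),
	     0xcf8);
	return inl(0xcfc);
}

The early_pci_allowed() check at the top of early_quirks() is the opt-out for systems where even these raw reads are unsafe.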


@ -215,20 +215,16 @@ void early_printk(const char *fmt, ...)
static int __initdata keep_early;
int __init setup_early_printk(char *opt)
static int __init setup_early_printk(char *buf)
{
char *space;
char buf[256];
if (!buf)
return 0;
if (early_console_initialized)
return 1;
return 0;
early_console_initialized = 1;
strlcpy(buf,opt,sizeof(buf));
space = strchr(buf, ' ');
if (space)
*space = 0;
if (strstr(buf,"keep"))
if (!strcmp(buf,"keep"))
keep_early = 1;
if (!strncmp(buf, "serial", 6)) {
@ -248,11 +244,12 @@ int __init setup_early_printk(char *opt)
early_console = &simnow_console;
keep_early = 1;
}
early_console_initialized = 1;
register_console(early_console);
return 0;
}
early_param("earlyprintk", setup_early_printk);
void __init disable_early_printk(void)
{
if (!early_console_initialized || !early_console)
@ -266,4 +263,3 @@ void __init disable_early_printk(void)
}
}
__setup("earlyprintk=", setup_early_printk);
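
Each backend selected above ends up as a plain struct console whose write hook pushes bytes at the hardware; register_console() then routes printk output through it until disable_early_printk() unregisters it. A minimal sketch, with a hypothetical example_putc():

#include <linux/console.h>

static void example_write(struct console *con, const char *s, unsigned n)
{
	while (n--)
		example_putc(*s++);	/* hypothetical byte-out routine */
}

static struct console example_console = {
	.name	= "example",
	.write	= example_write,
	.flags	= CON_PRINTBUFFER,
	.index	= -1,
};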


@ -4,8 +4,6 @@
* Copyright (C) 1991, 1992 Linus Torvalds
* Copyright (C) 2000, 2001, 2002 Andi Kleen SuSE Labs
* Copyright (C) 2000 Pavel Machek <pavel@suse.cz>
*
* $Id$
*/
/*
@ -22,15 +20,25 @@
* at the top of the kernel process stack.
* - partial stack frame: partially saved registers upto R11.
* - full stack frame: Like partial stack frame, but all register saved.
*
* TODO:
* - schedule it carefully for the final hardware.
*
* Some macro usage:
* - CFI macros are used to generate dwarf2 unwind information for better
* backtraces. They don't change any code.
* - SAVE_ALL/RESTORE_ALL - Save/restore all registers
* - SAVE_ARGS/RESTORE_ARGS - Save/restore registers that C functions modify.
* There are unfortunately lots of special cases where some registers
* not touched. The macro is a big mess that should be cleaned up.
* - SAVE_REST/RESTORE_REST - Handle the registers not saved by SAVE_ARGS.
* Gives a full stack frame.
* - ENTRY/END Define functions in the symbol table.
* - FIXUP_TOP_OF_STACK/RESTORE_TOP_OF_STACK - Fix up the hardware stack
* frame that is otherwise undefined after a SYSCALL
* - TRACE_IRQ_* - Trace hard interrupt state for lock debugging.
* - errorentry/paranoidentry/zeroentry - Define exception entry points.
*/
#define ASSEMBLY 1
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/smp.h>
#include <asm/cache.h>
#include <asm/errno.h>
#include <asm/dwarf2.h>
@ -115,6 +123,7 @@
.macro CFI_DEFAULT_STACK start=1
.if \start
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,SS+8
.else
CFI_DEF_CFA_OFFSET SS+8
@ -146,6 +155,10 @@
/* rdi: prev */
ENTRY(ret_from_fork)
CFI_DEFAULT_STACK
push kernel_eflags(%rip)
CFI_ADJUST_CFA_OFFSET 4
popf # reset kernel eflags
CFI_ADJUST_CFA_OFFSET -4
call schedule_tail
GET_THREAD_INFO(%rcx)
testl $(_TIF_SYSCALL_TRACE|_TIF_SYSCALL_AUDIT),threadinfo_flags(%rcx)
@ -199,6 +212,7 @@ END(ret_from_fork)
ENTRY(system_call)
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,PDA_STACKOFFSET
CFI_REGISTER rip,rcx
/*CFI_REGISTER rflags,r11*/
@ -316,6 +330,7 @@ END(system_call)
*/
ENTRY(int_ret_from_sys_call)
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,SS+8-ARGOFFSET
/*CFI_REL_OFFSET ss,SS-ARGOFFSET*/
CFI_REL_OFFSET rsp,RSP-ARGOFFSET
@ -476,6 +491,7 @@ END(stub_rt_sigreturn)
*/
.macro _frame ref
CFI_STARTPROC simple
CFI_SIGNAL_FRAME
CFI_DEF_CFA rsp,SS+8-\ref
/*CFI_REL_OFFSET ss,SS-\ref*/
CFI_REL_OFFSET rsp,RSP-\ref
@ -511,7 +527,12 @@ END(stub_rt_sigreturn)
testl $3,CS(%rdi)
je 1f
swapgs
1: incl %gs:pda_irqcount # RED-PEN should check preempt count
/* irqcount is used to check if a CPU is already on an interrupt
stack or not. While this is essentially redundant with preempt_count
it is a little cheaper to use a separate counter in the PDA
(short of moving irq_enter into assembly, which would be too
much work) */
1: incl %gs:pda_irqcount
cmoveq %gs:pda_irqstackptr,%rsp
push %rbp # backlink for old unwinder
/*
@ -619,8 +640,7 @@ retint_signal:
#ifdef CONFIG_PREEMPT
/* Returning to kernel space. Check if we need preemption */
/* rcx: threadinfo. interrupts off. */
.p2align
retint_kernel:
ENTRY(retint_kernel)
cmpl $0,threadinfo_preempt_count(%rcx)
jnz retint_restore_args
bt $TIF_NEED_RESCHED,threadinfo_flags(%rcx)
@ -679,7 +699,6 @@ ENTRY(call_function_interrupt)
END(call_function_interrupt)
#endif
#ifdef CONFIG_X86_LOCAL_APIC
ENTRY(apic_timer_interrupt)
apicinterrupt LOCAL_TIMER_VECTOR,smp_apic_timer_interrupt
END(apic_timer_interrupt)
@ -691,7 +710,6 @@ END(error_interrupt)
ENTRY(spurious_interrupt)
apicinterrupt SPURIOUS_APIC_VECTOR,smp_spurious_interrupt
END(spurious_interrupt)
#endif
/*
* Exception entry points.
@ -768,7 +786,9 @@ paranoid_exit\trace:
testl $3,CS(%rsp)
jnz paranoid_userspace\trace
paranoid_swapgs\trace:
.if \trace
TRACE_IRQS_IRETQ 0
.endif
swapgs
paranoid_restore\trace:
RESTORE_ALL 8
@ -814,7 +834,7 @@ paranoid_schedule\trace:
* Exception entry point. This expects an error code/orig_rax on the stack
* and the exception handler in %rax.
*/
ENTRY(error_entry)
KPROBE_ENTRY(error_entry)
_frame RDI
/* rdi slot contains rax, oldrax contains error code */
cld
@ -898,7 +918,7 @@ error_kernelspace:
cmpq $gs_change,RIP(%rsp)
je error_swapgs
jmp error_sti
END(error_entry)
KPROBE_END(error_entry)
/* Reload gs selector with exception handling */
/* edi: new selector */
@ -1020,8 +1040,7 @@ ENDPROC(execve)
KPROBE_ENTRY(page_fault)
errorentry do_page_fault
END(page_fault)
.previous .text
KPROBE_END(page_fault)
ENTRY(coprocessor_error)
zeroentry do_coprocessor_error
@ -1042,8 +1061,7 @@ KPROBE_ENTRY(debug)
CFI_ADJUST_CFA_OFFSET 8
paranoidentry do_debug, DEBUG_STACK
paranoidexit
END(debug)
.previous .text
KPROBE_END(debug)
/* runs on exception stack */
KPROBE_ENTRY(nmi)
@ -1057,8 +1075,7 @@ KPROBE_ENTRY(nmi)
jmp paranoid_exit1
CFI_ENDPROC
#endif
END(nmi)
.previous .text
KPROBE_END(nmi)
KPROBE_ENTRY(int3)
INTR_FRAME
@ -1067,8 +1084,7 @@ KPROBE_ENTRY(int3)
paranoidentry do_int3, DEBUG_STACK
jmp paranoid_exit1
CFI_ENDPROC
END(int3)
.previous .text
KPROBE_END(int3)
ENTRY(overflow)
zeroentry do_overflow
@ -1116,8 +1132,7 @@ END(stack_segment)
KPROBE_ENTRY(general_protection)
errorentry do_general_protection
END(general_protection)
.previous .text
KPROBE_END(general_protection)
ENTRY(alignment_check)
errorentry do_alignment_check


@ -118,7 +118,6 @@ struct genapic apic_cluster = {
.name = "clustered",
.int_delivery_mode = dest_Fixed,
.int_dest_mode = (APIC_DEST_PHYSICAL != 0),
.int_delivery_dest = APIC_DEST_PHYSICAL | APIC_DM_FIXED,
.target_cpus = cluster_target_cpus,
.apic_id_registered = cluster_apic_id_registered,
.init_apic_ldr = cluster_init_apic_ldr,


@ -49,8 +49,7 @@ static void flat_send_IPI_mask(cpumask_t cpumask, int vector)
unsigned long cfg;
unsigned long flags;
local_save_flags(flags);
local_irq_disable();
local_irq_save(flags);
/*
* Wait for idle.
@ -121,7 +120,6 @@ struct genapic apic_flat = {
.name = "flat",
.int_delivery_mode = dest_LowestPrio,
.int_dest_mode = (APIC_DEST_LOGICAL != 0),
.int_delivery_dest = APIC_DEST_LOGICAL | APIC_DM_LOWEST,
.target_cpus = flat_target_cpus,
.apic_id_registered = flat_apic_id_registered,
.init_apic_ldr = flat_init_apic_ldr,
@ -180,7 +178,6 @@ struct genapic apic_physflat = {
.name = "physical flat",
.int_delivery_mode = dest_Fixed,
.int_dest_mode = (APIC_DEST_PHYSICAL != 0),
.int_delivery_dest = APIC_DEST_PHYSICAL | APIC_DM_FIXED,
.target_cpus = physflat_target_cpus,
.apic_id_registered = flat_apic_id_registered,
.init_apic_ldr = flat_init_apic_ldr,/*not needed, but shouldn't hurt*/


@ -5,8 +5,6 @@
* Copyright (C) 2000 Pavel Machek <pavel@suse.cz>
* Copyright (C) 2000 Karsten Keil <kkeil@suse.de>
* Copyright (C) 2001,2002 Andi Kleen <ak@suse.de>
*
* $Id: head.S,v 1.49 2002/03/19 17:39:25 ak Exp $
*/
@ -187,12 +185,15 @@ startup_64:
/* Finally jump to run C code and to be on real kernel address
* Since we are running on identity-mapped space we have to jump
* to the full 64bit address , this is only possible as indirect
* jump
* to the full 64bit address, this is only possible as indirect
* jump. In addition we need to ensure %cs is set so we make this
* a far return.
*/
movq initial_code(%rip),%rax
pushq $0 # fake return address
jmp *%rax
pushq $0 # fake return address to stop unwinder
pushq $__KERNEL_CS # set correct cs
pushq %rax # target address in negative space
lretq
/* SMP bootup changes these two */
.align 8
@ -371,7 +372,7 @@ ENTRY(cpu_gdt_table)
.quad 0,0 /* TSS */
.quad 0,0 /* LDT */
.quad 0,0,0 /* three TLS descriptors */
.quad 0 /* unused */
.quad 0x0000f40000000000 /* node/CPU stored in limit */
gdt_end:
/* asm/segment.h:GDT_ENTRIES must match this */
/* This should be a multiple of the cache line size */


@ -45,38 +45,16 @@ static void __init copy_bootdata(char *real_mode_data)
new_data = *(int *) (x86_boot_params + NEW_CL_POINTER);
if (!new_data) {
if (OLD_CL_MAGIC != * (u16 *) OLD_CL_MAGIC_ADDR) {
printk("so old bootloader that it does not support commandline?!\n");
return;
}
new_data = OLD_CL_BASE_ADDR + * (u16 *) OLD_CL_OFFSET;
printk("old bootloader convention, maybe loadlin?\n");
}
command_line = (char *) ((u64)(new_data));
memcpy(saved_command_line, command_line, COMMAND_LINE_SIZE);
printk("Bootdata ok (command line is %s)\n", saved_command_line);
}
static void __init setup_boot_cpu_data(void)
{
unsigned int dummy, eax;
/* get vendor info */
cpuid(0, (unsigned int *)&boot_cpu_data.cpuid_level,
(unsigned int *)&boot_cpu_data.x86_vendor_id[0],
(unsigned int *)&boot_cpu_data.x86_vendor_id[8],
(unsigned int *)&boot_cpu_data.x86_vendor_id[4]);
/* get cpu type */
cpuid(1, &eax, &dummy, &dummy,
(unsigned int *) &boot_cpu_data.x86_capability);
boot_cpu_data.x86 = (eax >> 8) & 0xf;
boot_cpu_data.x86_model = (eax >> 4) & 0xf;
boot_cpu_data.x86_mask = eax & 0xf;
}
void __init x86_64_start_kernel(char * real_mode_data)
{
char *s;
int i;
for (i = 0; i < 256; i++)
@ -84,10 +62,7 @@ void __init x86_64_start_kernel(char * real_mode_data)
asm volatile("lidt %0" :: "m" (idt_descr));
clear_bss();
/*
* This must be called really, really early:
*/
lockdep_init();
early_printk("Kernel alive\n");
/*
* switch to init_level4_pgt from boot_level4_pgt
@ -103,22 +78,5 @@ void __init x86_64_start_kernel(char * real_mode_data)
#ifdef CONFIG_SMP
cpu_set(0, cpu_online_map);
#endif
s = strstr(saved_command_line, "earlyprintk=");
if (s != NULL)
setup_early_printk(strchr(s, '=') + 1);
#ifdef CONFIG_NUMA
s = strstr(saved_command_line, "numa=");
if (s != NULL)
numa_setup(s+5);
#endif
#ifdef CONFIG_X86_IO_APIC
if (strstr(saved_command_line, "disableapic"))
disable_apic = 1;
#endif
/* You need early console to see that */
if (__pa_symbol(&_end) >= KERNEL_TEXT_SIZE)
panic("Kernel too big for kernel mapping\n");
setup_boot_cpu_data();
start_kernel();
}


@ -55,7 +55,6 @@
*/
BUILD_16_IRQS(0x0)
#ifdef CONFIG_X86_LOCAL_APIC
/*
* The IO-APIC gives us many more interrupt sources. Most of these
* are unused but an SMP system is supposed to have enough memory ...
@ -75,8 +74,6 @@ BUILD_16_IRQS(0xc) BUILD_16_IRQS(0xd)
BUILD_15_IRQS(0xe)
#endif
#endif
#undef BUILD_16_IRQS
#undef BUILD_15_IRQS
#undef BI
@ -100,7 +97,6 @@ BUILD_16_IRQS(0xc) BUILD_16_IRQS(0xd)
void (*interrupt[NR_IRQS])(void) = {
IRQLIST_16(0x0),
#ifdef CONFIG_X86_IO_APIC
IRQLIST_16(0x1), IRQLIST_16(0x2), IRQLIST_16(0x3),
IRQLIST_16(0x4), IRQLIST_16(0x5), IRQLIST_16(0x6), IRQLIST_16(0x7),
IRQLIST_16(0x8), IRQLIST_16(0x9), IRQLIST_16(0xa), IRQLIST_16(0xb),
@ -110,7 +106,6 @@ void (*interrupt[NR_IRQS])(void) = {
, IRQLIST_15(0xe)
#endif
#endif
};
#undef IRQ
@ -128,6 +123,8 @@ void (*interrupt[NR_IRQS])(void) = {
DEFINE_SPINLOCK(i8259A_lock);
static int i8259A_auto_eoi;
static void end_8259A_irq (unsigned int irq)
{
if (irq > 256) {
@ -341,6 +338,8 @@ void init_8259A(int auto_eoi)
{
unsigned long flags;
i8259A_auto_eoi = auto_eoi;
spin_lock_irqsave(&i8259A_lock, flags);
outb(0xff, 0x21); /* mask all of 8259A-1 */
@ -399,7 +398,7 @@ static void save_ELCR(char *trigger)
static int i8259A_resume(struct sys_device *dev)
{
init_8259A(0);
init_8259A(i8259A_auto_eoi);
restore_ELCR(irq_trigger);
return 0;
}
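
The resume fix follows the usual replay-on-wakeup rule: whatever mode a driver chose at init time has to be remembered, because the hardware returns from suspend in its reset state and the resume hook must reprogram it rather than assume a default. In sketch form, for a hypothetical device:

#include <linux/sysdev.h>

static int example_mode;		/* remembered at init time */

static void example_hw_init(int mode)
{
	example_mode = mode;
	/* ... program the device for 'mode' ... */
}

static int example_resume(struct sys_device *dev)
{
	example_hw_init(example_mode);	/* replay the saved choice, don't guess */
	return 0;
}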
@ -453,9 +452,7 @@ void __init init_ISA_irqs (void)
{
int i;
#ifdef CONFIG_X86_LOCAL_APIC
init_bsp_APIC();
#endif
init_8259A(0);
for (i = 0; i < NR_IRQS; i++) {
@ -581,14 +578,12 @@ void __init init_IRQ(void)
set_intr_gate(THERMAL_APIC_VECTOR, thermal_interrupt);
set_intr_gate(THRESHOLD_APIC_VECTOR, threshold_interrupt);
#ifdef CONFIG_X86_LOCAL_APIC
/* self generated IPI for local APIC timer */
set_intr_gate(LOCAL_TIMER_VECTOR, apic_timer_interrupt);
/* IPI vectors for APIC spurious and error interrupts */
set_intr_gate(SPURIOUS_APIC_VECTOR, spurious_interrupt);
set_intr_gate(ERROR_APIC_VECTOR, error_interrupt);
#endif
/*
* Set the clock to HZ Hz, we already have a valid
