Merge branch 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (246 commits)
  x86: traps.c replace #if CONFIG_X86_32 with #ifdef CONFIG_X86_32
  x86: PAT: fix address types in track_pfn_vma_new()
  x86: prioritize the FPU traps for the error code
  x86: PAT: pfnmap documentation update changes
  x86: PAT: move track untrack pfnmap stubs to asm-generic
  x86: PAT: remove follow_pfnmap_pte in favor of follow_phys
  x86: PAT: modify follow_phys to return phys_addr prot and return value
  x86: PAT: clarify is_linear_pfn_mapping() interface
  x86: ia32_signal: remove unnecessary declaration
  x86: common.c boot_cpu_stack and boot_exception_stacks should be static
  x86: fix intel x86_64 llc_shared_map/cpu_llc_id anomolies
  x86: fix warning in arch/x86/kernel/microcode_amd.c
  x86: ia32.h: remove unused struct sigfram32 and rt_sigframe32
  x86: asm-offset_64: use rt_sigframe_ia32
  x86: sigframe.h: include headers for dependency
  x86: traps.c declare functions before they get used
  x86: PAT: update documentation to cover pgprot and remap_pfn related changes - v3
  x86: PAT: add pgprot_writecombine() interface for drivers - v3
  x86: PAT: change pgprot_noncached to uc_minus instead of strong uc - v3
  x86: PAT: implement track/untrack of pfnmap regions for x86 - v3
  ...
commit be9c5ae4ee
@@ -244,18 +244,6 @@ Who:	Michael Buesch <mb@bu3sch.de>

---------------------------

-What:	init_mm export
-When:	2.6.26
-Why:	Not used in-tree. The current out-of-tree users used it to
-	work around problems in the CPA code which should be resolved
-	by now. One usecase was described to provide verification code
-	of the CPA operation. That's a good idea in general, but such
-	code / infrastructure should be in the kernel and not in some
-	out-of-tree driver.
-Who:	Thomas Gleixner <tglx@linutronix.de>
-
-----------------------------
-
 What:	usedac i386 kernel parameter
 When:	2.6.27
 Why:	replaced by allowdac and no dac combination
@@ -1339,10 +1339,13 @@ nmi_watchdog

 Enables/Disables the NMI watchdog on x86 systems. When the value is non-zero
 the NMI watchdog is enabled and will continuously test all online cpus to
-determine whether or not they are still functioning properly.
+determine whether or not they are still functioning properly. Currently,
+passing "nmi_watchdog=" parameter at boot time is required for this function
+to work.

-Because the NMI watchdog shares registers with oprofile, by disabling the NMI
-watchdog, oprofile may have more registers to utilize.
+If LAPIC NMI watchdog method is in use (nmi_watchdog=2 kernel parameter), the
+NMI watchdog shares registers with oprofile. By disabling the NMI watchdog,
+oprofile may have more registers to utilize.

 msgmni
 ------
@@ -1396,7 +1396,20 @@ and is between 256 and 4096 characters. It is defined in the file
 			when a NMI is triggered.
 			Format: [state][,regs][,debounce][,die]

-	nmi_watchdog=	[KNL,BUGS=X86-32] Debugging features for SMP kernels
+	nmi_watchdog=	[KNL,BUGS=X86-32,X86-64] Debugging features for SMP kernels
+			Format: [panic,][num]
+			Valid num: 0,1,2
+			0 - turn nmi_watchdog off
+			1 - use the IO-APIC timer for the NMI watchdog
+			2 - use the local APIC for the NMI watchdog using
+			a performance counter. Note: This will use one performance
+			counter and the local APIC's performance vector.
+			When panic is specified panic when an NMI watchdog timeout occurs.
+			This is useful when you use a panic=... timeout and need the box
+			quickly up again.
+			Instead of 1 and 2 it is possible to use the following
+			symbolic names: lapic and ioapic
+			Example: nmi_watchdog=2 or nmi_watchdog=panic,lapic

 	no387		[BUGS=X86-32] Tells the kernel to use the 387 maths
 			emulation library even if a 387 maths coprocessor
@@ -1633,6 +1646,17 @@ and is between 256 and 4096 characters. It is defined in the file
 	nomsi		[MSI] If the PCI_MSI kernel config parameter is
 			enabled, this kernel boot option can be used to
 			disable the use of MSI interrupts system-wide.
+	noioapicquirk	[APIC] Disable all boot interrupt quirks.
+			Safety option to keep boot IRQs enabled. This
+			should never be necessary.
+	ioapicreroute	[APIC] Enable rerouting of boot IRQs to the
+			primary IO-APIC for bridges that cannot disable
+			boot IRQs. This fixes a source of spurious IRQs
+			when the system masks IRQs.
+	noioapicreroute	[APIC] Disable workaround that uses the
+			boot IRQ equivalent of an IRQ that connects to
+			a chipset where boot IRQs cannot be disabled.
+			The opposite of ioapicreroute.
 	biosirq		[X86-32] Use PCI BIOS calls to get the interrupt
 			routing table. These calls are known to be buggy
 			on several machines and they hang the machine
@@ -2262,6 +2286,13 @@ and is between 256 and 4096 characters. It is defined in the file
 			Format:
 			<io>,<irq>,<dma>,<dma2>,<sb_io>,<sb_irq>,<sb_dma>,<mpu_io>,<mpu_irq>

+	tsc=		Disable clocksource-must-verify flag for TSC.
+			Format: <string>
+			[x86] reliable: mark tsc clocksource as reliable, this
+			disables clocksource verification at runtime.
+			Used to enable high-resolution timer mode on older
+			hardware, and in virtualized environment.
+
 	turbografx.map[2|3]=	[HW,JOY]
 			TurboGraFX parallel port interface
 			Format:
@@ -69,6 +69,11 @@ to the overall system performance.
 On x86 nmi_watchdog is disabled by default so you have to enable it with
 a boot time parameter.

+It's possible to disable the NMI watchdog in run-time by writing "0" to
+/proc/sys/kernel/nmi_watchdog. Writing "1" to the same file will re-enable
+the NMI watchdog. Notice that you still need to use "nmi_watchdog=" parameter
+at boot time.
+
 NOTE: In kernels prior to 2.4.2-ac18 the NMI-oopser is enabled unconditionally
 on x86 SMP boxes.

@@ -349,7 +349,7 @@ Protocol:	2.00+
   3  SYSLINUX
   4  EtherBoot
   5  ELILO
-  7  GRuB
+  7  GRUB
   8  U-BOOT
   9  Xen
   A  Gujin
@@ -537,8 +537,8 @@ Type:		read
 Offset/size:	0x248/4
 Protocol:	2.08+

-  If non-zero then this field contains the offset from the end of the
-  real-mode code to the payload.
+  If non-zero then this field contains the offset from the beginning
+  of the protected-mode code to the payload.

   The payload may be compressed. The format of both the compressed and
   uncompressed data should be determined using the standard magic
@@ -80,6 +80,30 @@ pci proc |    --    |    --    |       WC       |
          |          |          |                |
 -------------------------------------------------------------------

+Advanced APIs for drivers
+-------------------------
+A. Exporting pages to users with remap_pfn_range, io_remap_pfn_range,
+vm_insert_pfn
+
+Drivers wanting to export some pages to userspace do it by using mmap
+interface and a combination of
+1) pgprot_noncached()
+2) io_remap_pfn_range() or remap_pfn_range() or vm_insert_pfn()
+
+With PAT support, a new API pgprot_writecombine is being added. So, drivers can
+continue to use the above sequence, with either pgprot_noncached() or
+pgprot_writecombine() in step 1, followed by step 2.
+
+In addition, step 2 internally tracks the region as UC or WC in memtype
+list in order to ensure no conflicting mapping.
+
+Note that this set of APIs only works with IO (non RAM) regions. If driver
+wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
+as step 0 above and also track the usage of those pages and use set_memory_wb()
+before the page is freed to free pool.
+
+
 Notes:

 -- in the above table mean "Not suggested usage for the API". Some of the --'s
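The two steps above map directly onto a driver's mmap handler. A minimal sketch, assuming a hypothetical mydrv_mmap() handler and illustrative mmio_pfn/MMIO_SIZE values for a device register window (none of these names are from the patch):

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > MMIO_SIZE)
		return -EINVAL;

	/* step 1: pick the memory type; pgprot_noncached() for UC-,
	 * or the new pgprot_writecombine() for WC */
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

	/* step 2: install the translation; with PAT this also records
	 * the region's memtype so conflicting mappings are refused */
	return io_remap_pfn_range(vma, vma->vm_start, mmio_pfn,
				  size, vma->vm_page_prot);
}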
@@ -79,17 +79,6 @@ Timing
   Report when timer interrupts are lost because some code turned off
   interrupts for too long.

-  nmi_watchdog=NUMBER[,panic]
-  NUMBER can be:
-  0 don't use an NMI watchdog
-  1 use the IO-APIC timer for the NMI watchdog
-  2 use the local APIC for the NMI watchdog using a performance counter. Note
-  This will use one performance counter and the local APIC's performance
-  vector.
-  When panic is specified panic when an NMI watchdog timeout occurs.
-  This is useful when you use a panic=... timeout and need the box
-  quickly up again.
-
   nohpet
   Don't use the HPET timer.

@@ -6,7 +6,7 @@ Virtual memory map with 4 level page tables:
 0000000000000000 - 00007fffffffffff (=47 bits) user space, different per mm
 hole caused by [48:63] sign extension
 ffff800000000000 - ffff80ffffffffff (=40 bits) guard hole
-ffff810000000000 - ffffc0ffffffffff (=46 bits) direct mapping of all phys. memory
+ffff880000000000 - ffffc0ffffffffff (=57 TB) direct mapping of all phys. memory
 ffffc10000000000 - ffffc1ffffffffff (=40 bits) hole
 ffffc20000000000 - ffffe1ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe20000000000 - ffffe2ffffffffff (=40 bits) virtual memory map (1TB)
@@ -19,6 +19,8 @@ config X86_64
 config X86
 	def_bool y
 	select HAVE_AOUT if X86_32
+	select HAVE_READQ
+	select HAVE_WRITEQ
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_IDE
 	select HAVE_OPROFILE
@@ -87,6 +89,10 @@ config GENERIC_IOMAP
 config GENERIC_BUG
 	def_bool y
 	depends on BUG
+	select GENERIC_BUG_RELATIVE_POINTERS if X86_64
+
+config GENERIC_BUG_RELATIVE_POINTERS
+	bool

 config GENERIC_HWEIGHT
 	def_bool y
@@ -242,21 +248,13 @@ config X86_FIND_SMP_CONFIG
 	def_bool y
 	depends on X86_MPPARSE || X86_VOYAGER

-if ACPI
 config X86_MPPARSE
-	def_bool y
-	bool "Enable MPS table"
+	bool "Enable MPS table" if ACPI
+	default y
 	depends on X86_LOCAL_APIC
 	help
 	  For old smp systems that do not have proper acpi support. Newer systems
 	  (esp with 64bit cpus) with acpi support, MADT and DSDT will override it
-endif
-
-if !ACPI
-config X86_MPPARSE
-	def_bool y
-	depends on X86_LOCAL_APIC
-endif

 choice
 	prompt "Subarchitecture Type"
@@ -465,10 +463,6 @@ config X86_CYCLONE_TIMER
 	def_bool y
 	depends on X86_GENERICARCH

-config ES7000_CLUSTERED_APIC
-	def_bool y
-	depends on SMP && X86_ES7000 && MPENTIUMIII
-
 source "arch/x86/Kconfig.cpu"

 config HPET_TIMER
@@ -569,7 +563,7 @@ config AMD_IOMMU

 # need this always selected by IOMMU for the VIA workaround
 config SWIOTLB
-	bool
+	def_bool y if X86_64
 	help
 	  Support for software bounce buffers used on x86-64 systems
 	  which don't have a hardware IOMMU (e.g. the current generation
@@ -660,6 +654,30 @@ config X86_VISWS_APIC
 	def_bool y
 	depends on X86_32 && X86_VISWS

+config X86_REROUTE_FOR_BROKEN_BOOT_IRQS
+	bool "Reroute for broken boot IRQs"
+	default n
+	depends on X86_IO_APIC
+	help
+	  This option enables a workaround that fixes a source of
+	  spurious interrupts. This is recommended when threaded
+	  interrupt handling is used on systems where the generation of
+	  superfluous "boot interrupts" cannot be disabled.
+
+	  Some chipsets generate a legacy INTx "boot IRQ" when the IRQ
+	  entry in the chipset's IO-APIC is masked (as, e.g. the RT
+	  kernel does during interrupt handling). On chipsets where this
+	  boot IRQ generation cannot be disabled, this workaround keeps
+	  the original IRQ line masked so that only the equivalent "boot
+	  IRQ" is delivered to the CPUs. The workaround also tells the
+	  kernel to set up the IRQ handler on the boot IRQ line. In this
+	  way only one interrupt is delivered to the kernel. Otherwise
+	  the spurious second interrupt may cause the kernel to bring
+	  down (vital) interrupt lines.
+
+	  Only affects "broken" chipsets. Interrupt sharing may be
+	  increased on these systems.
+
 config X86_MCE
 	bool "Machine Check Exception"
 	depends on !X86_VOYAGER
@@ -956,24 +974,37 @@ config X86_PAE
 config ARCH_PHYS_ADDR_T_64BIT
 	def_bool X86_64 || X86_PAE

+config DIRECT_GBPAGES
+	bool "Enable 1GB pages for kernel pagetables" if EMBEDDED
+	default y
+	depends on X86_64
+	help
+	  Allow the kernel linear mapping to use 1GB pages on CPUs that
+	  support it. This can improve the kernel's performance a tiny bit by
+	  reducing TLB pressure. If in doubt, say "Y".
+
 # Common NUMA Features
 config NUMA
-	bool "Numa Memory Allocation and Scheduler Support (EXPERIMENTAL)"
+	bool "Numa Memory Allocation and Scheduler Support"
 	depends on SMP
 	depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || X86_BIGSMP || X86_SUMMIT && ACPI) && EXPERIMENTAL)
 	default n if X86_PC
 	default y if (X86_NUMAQ || X86_SUMMIT || X86_BIGSMP)
 	help
 	  Enable NUMA (Non Uniform Memory Access) support.
+
 	  The kernel will try to allocate memory used by a CPU on the
 	  local memory controller of the CPU and add some more
 	  NUMA awareness to the kernel.

-	  For 32-bit this is currently highly experimental and should be only
-	  used for kernel development. It might also cause boot failures.
-	  For 64-bit this is recommended on all multiprocessor Opteron systems.
-	  If the system is EM64T, you should say N unless your system is
-	  EM64T NUMA.
+	  For 64-bit this is recommended if the system is Intel Core i7
+	  (or later), AMD Opteron, or EM64T NUMA.
+
+	  For 32-bit this is only needed on (rare) 32-bit-only platforms
+	  that support NUMA topologies, such as NUMAQ / Summit, or if you
+	  boot a 32-bit kernel on a 64-bit NUMA platform.
+
+	  Otherwise, you should say N.

 comment "NUMA (Summit) requires SMP, 64GB highmem support, ACPI"
 	depends on X86_32 && X86_SUMMIT && (!HIGHMEM64G || !ACPI)
@@ -1493,6 +1524,10 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 	def_bool y
 	depends on X86_64 || (X86_32 && HIGHMEM)

+config ARCH_ENABLE_MEMORY_HOTREMOVE
+	def_bool y
+	depends on MEMORY_HOTPLUG
+
 config HAVE_ARCH_EARLY_PFN_TO_NID
 	def_bool X86_64
 	depends on NUMA
@@ -1632,13 +1667,6 @@ config APM_ALLOW_INTS
 	  many of the newer IBM Thinkpads. If you experience hangs when you
 	  suspend, try setting this to Y. Otherwise, say N.

-config APM_REAL_MODE_POWER_OFF
-	bool "Use real mode APM BIOS call to power off"
-	help
-	  Use real mode APM BIOS calls to switch off the computer. This is
-	  a work-around for a number of buggy BIOSes. Switch this option on if
-	  your computer crashes instead of powering off properly.
-
 endif # APM

 source "arch/x86/kernel/cpu/cpufreq/Kconfig"
@@ -114,18 +114,6 @@ config DEBUG_RODATA
 	  data. This is recommended so that we can catch kernel bugs sooner.
 	  If in doubt, say "Y".

-config DIRECT_GBPAGES
-	bool "Enable gbpages-mapped kernel pagetables"
-	depends on DEBUG_KERNEL && EXPERIMENTAL && X86_64
-	help
-	  Enable gigabyte pages support (if the CPU supports it). This can
-	  improve the kernel's performance a tiny bit by reducing TLB
-	  pressure.
-
-	  This is experimental code.
-
-	  If in doubt, say "N".
-
 config DEBUG_RODATA_TEST
 	bool "Testcase for the DEBUG_RODATA feature"
 	depends on DEBUG_RODATA
@@ -307,10 +295,10 @@ config OPTIMIZE_INLINING
 	  developers have marked 'inline'. Doing so takes away freedom from gcc to
 	  do what it thinks is best, which is desirable for the gcc 3.x series of
 	  compilers. The gcc 4.x series have a rewritten inlining algorithm and
-	  disabling this option will generate a smaller kernel there. Hopefully
-	  this algorithm is so good that allowing gcc4 to make the decision can
-	  become the default in the future, until then this option is there to
-	  test gcc for this.
+	  enabling this option will generate a smaller kernel there. Hopefully
+	  this algorithm is so good that allowing gcc 4.x and above to make the
+	  decision will become the default in the future. Until then this option
+	  is there to test gcc for this.

 	  If unsure, say N.

@@ -34,7 +34,7 @@ static struct mode_info cga_modes[] = {
 	{ VIDEO_80x25,  80, 25, 0 },
 };

-__videocard video_vga;
+static __videocard video_vga;

 /* Set basic 80x25 mode */
 static u8 vga_set_basic_mode(void)
@@ -259,7 +259,7 @@ static int vga_probe(void)
 	return mode_count[adapter];
 }

-__videocard video_vga = {
+static __videocard video_vga = {
 	.card_name	= "VGA",
 	.probe		= vga_probe,
 	.set_mode	= vga_set_mode,
@@ -226,7 +226,7 @@ static unsigned int mode_menu(void)

 #ifdef CONFIG_VIDEO_RETAIN
 /* Save screen content to the heap */
-struct saved_screen {
+static struct saved_screen {
 	int x, y;
 	int curx, cury;
 	u16 *data;
@@ -77,7 +77,7 @@ CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_AUDIT_TREE=y
 # CONFIG_IKCONFIG is not set
-CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_BUF_SHIFT=18
 CONFIG_CGROUPS=y
 # CONFIG_CGROUP_DEBUG is not set
 CONFIG_CGROUP_NS=y
@@ -298,7 +298,7 @@ CONFIG_KEXEC=y
 CONFIG_CRASH_DUMP=y
 # CONFIG_KEXEC_JUMP is not set
 CONFIG_PHYSICAL_START=0x1000000
-CONFIG_RELOCATABLE=y
+# CONFIG_RELOCATABLE is not set
 CONFIG_PHYSICAL_ALIGN=0x200000
 CONFIG_HOTPLUG_CPU=y
 # CONFIG_COMPAT_VDSO is not set
@@ -77,7 +77,7 @@ CONFIG_AUDIT=y
 CONFIG_AUDITSYSCALL=y
 CONFIG_AUDIT_TREE=y
 # CONFIG_IKCONFIG is not set
-CONFIG_LOG_BUF_SHIFT=17
+CONFIG_LOG_BUF_SHIFT=18
 CONFIG_CGROUPS=y
 # CONFIG_CGROUP_DEBUG is not set
 CONFIG_CGROUP_NS=y
@@ -298,7 +298,7 @@ CONFIG_SCHED_HRTICK=y
 CONFIG_KEXEC=y
 CONFIG_CRASH_DUMP=y
 CONFIG_PHYSICAL_START=0x1000000
-CONFIG_RELOCATABLE=y
+# CONFIG_RELOCATABLE is not set
 CONFIG_PHYSICAL_ALIGN=0x200000
 CONFIG_HOTPLUG_CPU=y
 # CONFIG_COMPAT_VDSO is not set
@@ -32,6 +32,8 @@
 #include <asm/proto.h>
 #include <asm/vdso.h>

+#include <asm/sigframe.h>
+
 #define DEBUG_SIG 0

 #define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
@@ -41,7 +43,6 @@
 		 X86_EFLAGS_ZF | X86_EFLAGS_AF | X86_EFLAGS_PF | \
 		 X86_EFLAGS_CF)

-asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
 void signal_fault(struct pt_regs *regs, void __user *frame, char *where);

 int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
@@ -173,47 +174,28 @@ asmlinkage long sys32_sigaltstack(const stack_ia32_t __user *uss_ptr,
 /*
  * Do a signal return; undo the signal stack.
  */
-
-struct sigframe
-{
-	u32 pretcode;
-	int sig;
-	struct sigcontext_ia32 sc;
-	struct _fpstate_ia32 fpstate_unused; /* look at kernel/sigframe.h */
-	unsigned int extramask[_COMPAT_NSIG_WORDS-1];
-	char retcode[8];
-	/* fp state follows here */
-};
-
-struct rt_sigframe
-{
-	u32 pretcode;
-	int sig;
-	u32 pinfo;
-	u32 puc;
-	compat_siginfo_t info;
-	struct ucontext_ia32 uc;
-	char retcode[8];
-	/* fp state follows here */
-};
-
-#define COPY(x)		{			\
-	unsigned int reg;			\
-	err |= __get_user(reg, &sc->x);		\
-	regs->x = reg;				\
+#define COPY(x)			{		\
+	err |= __get_user(regs->x, &sc->x);	\
 }

-#define RELOAD_SEG(seg,mask)				\
-	{ unsigned int cur;				\
-	  unsigned short pre;				\
-	  err |= __get_user(pre, &sc->seg);		\
-	  savesegment(seg, cur);			\
-	  pre |= mask;					\
-	  if (pre != cur) loadsegment(seg, pre); }
+#define COPY_SEG_CPL3(seg)	{			\
+	unsigned short tmp;				\
+	err |= __get_user(tmp, &sc->seg);		\
+	regs->seg = tmp | 3;				\
+}
+
+#define RELOAD_SEG(seg)		{		\
+	unsigned int cur, pre;			\
+	err |= __get_user(pre, &sc->seg);	\
+	savesegment(seg, cur);			\
+	pre |= 3;				\
+	if (pre != cur)				\
+		loadsegment(seg, pre);		\
+}

 static int ia32_restore_sigcontext(struct pt_regs *regs,
 				   struct sigcontext_ia32 __user *sc,
-				   unsigned int *peax)
+				   unsigned int *pax)
 {
 	unsigned int tmpflags, gs, oldgs, err = 0;
 	void __user *buf;
@@ -240,18 +222,16 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
 	if (gs != oldgs)
 		load_gs_index(gs);

-	RELOAD_SEG(fs, 3);
-	RELOAD_SEG(ds, 3);
-	RELOAD_SEG(es, 3);
+	RELOAD_SEG(fs);
+	RELOAD_SEG(ds);
+	RELOAD_SEG(es);

 	COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
 	COPY(dx); COPY(cx); COPY(ip);
 	/* Don't touch extended registers */

-	err |= __get_user(regs->cs, &sc->cs);
-	regs->cs |= 3;
-	err |= __get_user(regs->ss, &sc->ss);
-	regs->ss |= 3;
+	COPY_SEG_CPL3(cs);
+	COPY_SEG_CPL3(ss);

 	err |= __get_user(tmpflags, &sc->flags);
 	regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
@@ -262,15 +242,13 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
 	buf = compat_ptr(tmp);
 	err |= restore_i387_xstate_ia32(buf);

-	err |= __get_user(tmp, &sc->ax);
-	*peax = tmp;
-
+	err |= __get_user(*pax, &sc->ax);
 	return err;
 }

 asmlinkage long sys32_sigreturn(struct pt_regs *regs)
 {
-	struct sigframe __user *frame = (struct sigframe __user *)(regs->sp-8);
+	struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8);
 	sigset_t set;
 	unsigned int ax;

@@ -300,12 +278,12 @@ badframe:

 asmlinkage long sys32_rt_sigreturn(struct pt_regs *regs)
 {
-	struct rt_sigframe __user *frame;
+	struct rt_sigframe_ia32 __user *frame;
 	sigset_t set;
 	unsigned int ax;
 	struct pt_regs tregs;

-	frame = (struct rt_sigframe __user *)(regs->sp - 4);
+	frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4);

 	if (!access_ok(VERIFY_READ, frame, sizeof(*frame)))
 		goto badframe;
@@ -359,20 +337,15 @@ static int ia32_setup_sigcontext(struct sigcontext_ia32 __user *sc,
 	err |= __put_user(regs->dx, &sc->dx);
 	err |= __put_user(regs->cx, &sc->cx);
 	err |= __put_user(regs->ax, &sc->ax);
-	err |= __put_user(regs->cs, &sc->cs);
-	err |= __put_user(regs->ss, &sc->ss);
 	err |= __put_user(current->thread.trap_no, &sc->trapno);
 	err |= __put_user(current->thread.error_code, &sc->err);
 	err |= __put_user(regs->ip, &sc->ip);
+	err |= __put_user(regs->cs, (unsigned int __user *)&sc->cs);
 	err |= __put_user(regs->flags, &sc->flags);
 	err |= __put_user(regs->sp, &sc->sp_at_signal);
+	err |= __put_user(regs->ss, (unsigned int __user *)&sc->ss);

-	tmp = save_i387_xstate_ia32(fpstate);
-	if (tmp < 0)
-		err = -EFAULT;
-	else
-		err |= __put_user(ptr_to_compat(tmp ? fpstate : NULL),
-					&sc->fpstate);
+	err |= __put_user(ptr_to_compat(fpstate), &sc->fpstate);

 	/* non-iBCS2 extensions.. */
 	err |= __put_user(mask, &sc->oldmask);
@@ -400,7 +373,7 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 	}

 	/* This is the legacy signal stack switching. */
-	else if ((regs->ss & 0xffff) != __USER_DS &&
+	else if ((regs->ss & 0xffff) != __USER32_DS &&
 		!(ka->sa.sa_flags & SA_RESTORER) &&
 		 ka->sa.sa_restorer)
 		sp = (unsigned long) ka->sa.sa_restorer;
@@ -408,6 +381,8 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 	if (used_math()) {
 		sp = sp - sig_xstate_ia32_size;
 		*fpstate = (struct _fpstate_ia32 *) sp;
+		if (save_i387_xstate_ia32(*fpstate) < 0)
+			return (void __user *) -1L;
 	}

 	sp -= frame_size;
@@ -420,7 +395,7 @@ static void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs,
 int ia32_setup_frame(int sig, struct k_sigaction *ka,
 		     compat_sigset_t *set, struct pt_regs *regs)
 {
-	struct sigframe __user *frame;
+	struct sigframe_ia32 __user *frame;
 	void __user *restorer;
 	int err = 0;
 	void __user *fpstate = NULL;
@@ -430,12 +405,10 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 		u16 poplmovl;
 		u32 val;
 		u16 int80;
-		u16 pad;
 	} __attribute__((packed)) code = {
 		0xb858,		 /* popl %eax ; movl $...,%eax */
 		__NR_ia32_sigreturn,
 		0x80cd,		/* int $0x80 */
-		0,
 	};

 	frame = get_sigframe(ka, regs, sizeof(*frame), &fpstate);
@@ -471,7 +444,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 	 * These are actually not used anymore, but left because some
 	 * gdb versions depend on them as a marker.
 	 */
-	err |= __copy_to_user(frame->retcode, &code, 8);
+	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
 	if (err)
 		return -EFAULT;

@@ -501,7 +474,7 @@ int ia32_setup_frame(int sig, struct k_sigaction *ka,
 int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 			compat_sigset_t *set, struct pt_regs *regs)
 {
-	struct rt_sigframe __user *frame;
+	struct rt_sigframe_ia32 __user *frame;
 	void __user *restorer;
 	int err = 0;
 	void __user *fpstate = NULL;
@@ -511,8 +484,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 		u8 movl;
 		u32 val;
 		u16 int80;
-		u16 pad;
-		u8  pad2;
+		u8  pad;
 	} __attribute__((packed)) code = {
 		0xb8,
 		__NR_ia32_rt_sigreturn,
@@ -559,7 +531,7 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	 * Not actually used anymore, but left because some gdb
 	 * versions need it.
 	 */
-	err |= __copy_to_user(frame->retcode, &code, 8);
+	err |= __put_user(*((u64 *)&code), (u64 *)frame->retcode);
 	if (err)
 		return -EFAULT;

@@ -572,11 +544,6 @@ int ia32_setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
 	regs->dx = (unsigned long) &frame->info;
 	regs->cx = (unsigned long) &frame->uc;

-	/* Make -mregparm=3 work */
-	regs->ax = sig;
-	regs->dx = (unsigned long) &frame->info;
-	regs->cx = (unsigned long) &frame->uc;
-
 	loadsegment(ds, __USER32_DS);
 	loadsegment(es, __USER32_DS);

@@ -193,6 +193,7 @@ extern u8 setup_APIC_eilvt_ibs(u8 vector, u8 msg_type, u8 mask);
 static inline void lapic_shutdown(void) { }
 #define local_apic_timer_c2_ok		1
 static inline void init_apic_mappings(void) { }
+static inline void disable_local_APIC(void) { }

 #endif /* !CONFIG_X86_LOCAL_APIC */

@@ -24,8 +24,6 @@ static inline cpumask_t target_cpus(void)
 #define INT_DELIVERY_MODE	(dest_Fixed)
 #define INT_DEST_MODE		(0)    /* phys delivery to target proc */
 #define NO_BALANCE_IRQ		(0)
-#define WAKE_SECONDARY_VIA_INIT
-

 static inline unsigned long check_apicid_used(physid_mask_t bitmap, int apicid)
 {
@@ -168,7 +168,15 @@ static inline void __change_bit(int nr, volatile unsigned long *addr)
  */
 static inline void change_bit(int nr, volatile unsigned long *addr)
 {
-	asm volatile(LOCK_PREFIX "btc %1,%0" : ADDR : "Ir" (nr));
+	if (IS_IMMEDIATE(nr)) {
+		asm volatile(LOCK_PREFIX "xorb %1,%0"
+			: CONST_MASK_ADDR(nr, addr)
+			: "iq" ((u8)CONST_MASK(nr)));
+	} else {
+		asm volatile(LOCK_PREFIX "btc %1,%0"
+			: BITOP_ADDR(addr)
+			: "Ir" (nr));
+	}
 }

 /**
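The new fast path works because, when nr is a compile-time constant, both the byte holding the bit (nr/8) and the 8-bit mask (1 << (nr & 7)) are constants, so a locked xorb on that one byte toggles the bit without the heavier btc. A hedged illustration (toggle_bits() and flags are invented for the example):

static unsigned long flags;

static void toggle_bits(int runtime_nr)
{
	change_bit(5, &flags);		/* constant nr: compiles to one
					 * "lock xorb $0x20" on byte 0  */
	change_bit(runtime_nr, &flags);	/* variable nr: takes the
					 * "lock btc" fallback path    */
}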
@@ -9,7 +9,7 @@
 #ifdef CONFIG_X86_32
 # define __BUG_C0	"2:\t.long 1b, %c0\n"
 #else
-# define __BUG_C0	"2:\t.quad 1b, %c0\n"
+# define __BUG_C0	"2:\t.long 1b - 2b, %c0 - 2b\n"
 #endif

 #define BUG()							\
@@ -4,26 +4,33 @@
 #include <asm/types.h>
 #include <linux/compiler.h>

-#ifdef __GNUC__
+#define __LITTLE_ENDIAN

-#ifdef __i386__
-
-static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
+static inline __attribute_const__ __u32 __arch_swab32(__u32 val)
 {
-#ifdef CONFIG_X86_BSWAP
-	asm("bswap %0" : "=r" (x) : "0" (x));
-#else
+#ifdef __i386__
+# ifdef CONFIG_X86_BSWAP
+	asm("bswap %0" : "=r" (val) : "0" (val));
+# else
 	asm("xchgb %b0,%h0\n\t"	/* swap lower bytes	*/
 	    "rorl $16,%0\n\t"	/* swap words		*/
 	    "xchgb %b0,%h0"	/* swap higher bytes	*/
-	    : "=q" (x)
-	    : "0" (x));
-#endif
-	return x;
-}
+	    : "=q" (val)
+	    : "0" (val));
+# endif

-static inline __attribute_const__ __u64 ___arch__swab64(__u64 val)
+#else /* __i386__ */
+	asm("bswapl %0"
+	    : "=r" (val)
+	    : "0" (val));
+#endif
+	return val;
+}
+#define __arch_swab32 __arch_swab32
+
+static inline __attribute_const__ __u64 __arch_swab64(__u64 val)
 {
+#ifdef __i386__
 	union {
 		struct {
 			__u32 a;
@@ -32,50 +39,27 @@ static inline __attribute_const__ __u64 ___arch__swab64(__u64 val)
 		__u64 u;
 	} v;
 	v.u = val;
-#ifdef CONFIG_X86_BSWAP
+# ifdef CONFIG_X86_BSWAP
 	asm("bswapl %0 ; bswapl %1 ; xchgl %0,%1"
 	    : "=r" (v.s.a), "=r" (v.s.b)
 	    : "0" (v.s.a), "1" (v.s.b));
-#else
-	v.s.a = ___arch__swab32(v.s.a);
-	v.s.b = ___arch__swab32(v.s.b);
+# else
+	v.s.a = __arch_swab32(v.s.a);
+	v.s.b = __arch_swab32(v.s.b);
 	asm("xchgl %0,%1"
 	    : "=r" (v.s.a), "=r" (v.s.b)
 	    : "0" (v.s.a), "1" (v.s.b));
-#endif
+# endif
 	return v.u;
 }

 #else /* __i386__ */
-
-static inline __attribute_const__ __u64 ___arch__swab64(__u64 x)
-{
 	asm("bswapq %0"
-	    : "=r" (x)
-	    : "0" (x));
-	return x;
-}
-
-static inline __attribute_const__ __u32 ___arch__swab32(__u32 x)
-{
-	asm("bswapl %0"
-	    : "=r" (x)
-	    : "0" (x));
-	return x;
-}
-
+	    : "=r" (val)
+	    : "0" (val));
+	return val;
 #endif
+}
+#define __arch_swab64 __arch_swab64

 /* Do not define swab16.  Gcc is smart enough to recognize "C" version and
    convert it into rotation or exhange.  */

-#define __arch__swab64(x) ___arch__swab64(x)
-#define __arch__swab32(x) ___arch__swab32(x)
-
-#define __BYTEORDER_HAS_U64__
-
-#endif /* __GNUC__ */
-
-#include <linux/byteorder/little_endian.h>
+#include <linux/byteorder.h>

 #endif /* _ASM_X86_BYTEORDER_H */
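Switching to <linux/byteorder.h> means the generic layer derives the cpu_to_be*/le* helpers from __arch_swab32/__arch_swab64. An illustrative caller (host_to_wire() is invented for the example):

#include <asm/byteorder.h>

static u32 host_to_wire(u32 v)
{
	/* x86 is little-endian, so this reduces to one swab --
	 * a single bswap instruction on CPUs that have it */
	return cpu_to_be32(v);
}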
@@ -80,7 +80,6 @@
 #define X86_FEATURE_UP		(3*32+ 9) /* smp kernel running on up */
 #define X86_FEATURE_FXSAVE_LEAK (3*32+10) /* "" FXSAVE leaks FOP/FIP/FOP */
 #define X86_FEATURE_ARCH_PERFMON (3*32+11) /* Intel Architectural PerfMon */
-#define X86_FEATURE_NOPL	(3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_PEBS	(3*32+12) /* Precise-Event Based Sampling */
 #define X86_FEATURE_BTS		(3*32+13) /* Branch Trace Store */
 #define X86_FEATURE_SYSCALL32	(3*32+14) /* "" syscall in ia32 userspace */
@@ -92,6 +91,8 @@
 #define X86_FEATURE_NOPL	(3*32+20) /* The NOPL (0F 1F) instructions */
 #define X86_FEATURE_AMDC1E	(3*32+21) /* AMD C1E detected */
 #define X86_FEATURE_XTOPOLOGY	(3*32+22) /* cpu topology enum extensions */
+#define X86_FEATURE_TSC_RELIABLE (3*32+23) /* TSC is known to be reliable */
+#define X86_FEATURE_NONSTOP_TSC	(3*32+24) /* TSC does not stop in C states */

 /* Intel-defined CPU features, CPUID level 0x00000001 (ecx), word 4 */
 #define X86_FEATURE_XMM3	(4*32+ 0) /* "pni" SSE-3 */
@@ -117,6 +118,7 @@
 #define X86_FEATURE_XSAVE	(4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV */
 #define X86_FEATURE_OSXSAVE	(4*32+27) /* "" XSAVE enabled in the OS */
 #define X86_FEATURE_AVX		(4*32+28) /* Advanced Vector Extensions */
+#define X86_FEATURE_HYPERVISOR	(4*32+31) /* Running on a hypervisor */

 /* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
 #define X86_FEATURE_XSTORE	(5*32+ 2) /* "rng" RNG present (xstore) */
@@ -237,6 +239,7 @@ extern const char * const x86_power_flags[32];
 #define cpu_has_xmm4_2		boot_cpu_has(X86_FEATURE_XMM4_2)
 #define cpu_has_x2apic		boot_cpu_has(X86_FEATURE_X2APIC)
 #define cpu_has_xsave		boot_cpu_has(X86_FEATURE_XSAVE)
+#define cpu_has_hypervisor	boot_cpu_has(X86_FEATURE_HYPERVISOR)

 #if defined(CONFIG_X86_INVLPG) || defined(CONFIG_X86_64)
 # define cpu_has_invlpg		1
@@ -71,12 +71,10 @@ static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
 /* Make sure we keep the same behaviour */
 static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
-#ifdef CONFIG_X86_64
 	struct dma_mapping_ops *ops = get_dma_ops(dev);
 	if (ops->mapping_error)
 		return ops->mapping_error(dev, dma_addr);

-#endif
 	return (dma_addr == bad_dma_address);
 }

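Dropping the #ifdef makes the error check behave identically on 32-bit and 64-bit: consult the per-bus mapping_error hook if present, otherwise compare against bad_dma_address. A sketch of the caller-side pattern this serves (mydrv_send() and its arguments are assumptions, not from the patch):

static int mydrv_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, handle))	/* same check on both widths now */
		return -ENOMEM;
	/* ... program the device with "handle", unmap on completion ... */
	return 0;
}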
@@ -6,56 +6,91 @@
 #endif

 /*
- Macros for dwarf2 CFI unwind table entries.
- See "as.info" for details on these pseudo ops. Unfortunately
- they are only supported in very new binutils, so define them
- away for older version.
+ * Macros for dwarf2 CFI unwind table entries.
+ * See "as.info" for details on these pseudo ops. Unfortunately
+ * they are only supported in very new binutils, so define them
+ * away for older version.
  */

 #ifdef CONFIG_AS_CFI

-#define CFI_STARTPROC .cfi_startproc
-#define CFI_ENDPROC .cfi_endproc
-#define CFI_DEF_CFA .cfi_def_cfa
-#define CFI_DEF_CFA_REGISTER .cfi_def_cfa_register
-#define CFI_DEF_CFA_OFFSET .cfi_def_cfa_offset
-#define CFI_ADJUST_CFA_OFFSET .cfi_adjust_cfa_offset
-#define CFI_OFFSET .cfi_offset
-#define CFI_REL_OFFSET .cfi_rel_offset
-#define CFI_REGISTER .cfi_register
-#define CFI_RESTORE .cfi_restore
-#define CFI_REMEMBER_STATE .cfi_remember_state
-#define CFI_RESTORE_STATE .cfi_restore_state
-#define CFI_UNDEFINED .cfi_undefined
+#define CFI_STARTPROC		.cfi_startproc
+#define CFI_ENDPROC		.cfi_endproc
+#define CFI_DEF_CFA		.cfi_def_cfa
+#define CFI_DEF_CFA_REGISTER	.cfi_def_cfa_register
+#define CFI_DEF_CFA_OFFSET	.cfi_def_cfa_offset
+#define CFI_ADJUST_CFA_OFFSET	.cfi_adjust_cfa_offset
+#define CFI_OFFSET		.cfi_offset
+#define CFI_REL_OFFSET		.cfi_rel_offset
+#define CFI_REGISTER		.cfi_register
+#define CFI_RESTORE		.cfi_restore
+#define CFI_REMEMBER_STATE	.cfi_remember_state
+#define CFI_RESTORE_STATE	.cfi_restore_state
+#define CFI_UNDEFINED		.cfi_undefined

 #ifdef CONFIG_AS_CFI_SIGNAL_FRAME
-#define CFI_SIGNAL_FRAME .cfi_signal_frame
+#define CFI_SIGNAL_FRAME	.cfi_signal_frame
 #else
 #define CFI_SIGNAL_FRAME
 #endif

 #else

-/* Due to the structure of pre-exisiting code, don't use assembler line
-   comment character # to ignore the arguments. Instead, use a dummy macro. */
+/*
+ * Due to the structure of pre-exisiting code, don't use assembler line
+ * comment character # to ignore the arguments. Instead, use a dummy macro.
+ */
 .macro cfi_ignore a=0, b=0, c=0, d=0
 .endm

-#define CFI_STARTPROC cfi_ignore
-#define CFI_ENDPROC cfi_ignore
-#define CFI_DEF_CFA cfi_ignore
+#define CFI_STARTPROC		cfi_ignore
+#define CFI_ENDPROC		cfi_ignore
+#define CFI_DEF_CFA		cfi_ignore
 #define CFI_DEF_CFA_REGISTER	cfi_ignore
 #define CFI_DEF_CFA_OFFSET	cfi_ignore
 #define CFI_ADJUST_CFA_OFFSET	cfi_ignore
-#define CFI_OFFSET cfi_ignore
-#define CFI_REL_OFFSET cfi_ignore
-#define CFI_REGISTER cfi_ignore
-#define CFI_RESTORE cfi_ignore
-#define CFI_REMEMBER_STATE cfi_ignore
-#define CFI_RESTORE_STATE cfi_ignore
-#define CFI_UNDEFINED cfi_ignore
-#define CFI_SIGNAL_FRAME cfi_ignore
+#define CFI_OFFSET		cfi_ignore
+#define CFI_REL_OFFSET		cfi_ignore
+#define CFI_REGISTER		cfi_ignore
+#define CFI_RESTORE		cfi_ignore
+#define CFI_REMEMBER_STATE	cfi_ignore
+#define CFI_RESTORE_STATE	cfi_ignore
+#define CFI_UNDEFINED		cfi_ignore
+#define CFI_SIGNAL_FRAME	cfi_ignore

 #endif

+/*
+ * An attempt to make CFI annotations more or less
+ * correct and shorter. It is implied that you know
+ * what you're doing if you use them.
+ */
+#ifdef __ASSEMBLY__
+#ifdef CONFIG_X86_64
+	.macro pushq_cfi reg
+	pushq \reg
+	CFI_ADJUST_CFA_OFFSET 8
+	.endm
+
+	.macro popq_cfi reg
+	popq \reg
+	CFI_ADJUST_CFA_OFFSET -8
+	.endm
+
+	.macro movq_cfi reg offset=0
+	movq %\reg, \offset(%rsp)
+	CFI_REL_OFFSET \reg, \offset
+	.endm
+
+	.macro movq_cfi_restore offset reg
+	movq \offset(%rsp), %\reg
+	CFI_RESTORE \reg
+	.endm
+#else /*!CONFIG_X86_64*/
+
+	/* 32bit defenitions are missed yet */
+
+#endif /*!CONFIG_X86_64*/
+#endif /*__ASSEMBLY__*/
+
 #endif /* _ASM_X86_DWARF2_H */
@@ -8,7 +8,9 @@ enum reboot_type {
 	BOOT_BIOS = 'b',
 #endif
 	BOOT_ACPI = 'a',
-	BOOT_EFI = 'e'
+	BOOT_EFI = 'e',
+	BOOT_CF9 = 'p',
+	BOOT_CF9_COND = 'q',
 };

 extern enum reboot_type reboot_type;
@@ -9,31 +9,27 @@ static inline int apic_id_registered(void)
 	return (1);
 }

-static inline cpumask_t target_cpus(void)
+static inline cpumask_t target_cpus_cluster(void)
 {
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 	return CPU_MASK_ALL;
-#else
-	return cpumask_of_cpu(smp_processor_id());
-#endif
 }

-#if defined CONFIG_ES7000_CLUSTERED_APIC
-#define APIC_DFR_VALUE		(APIC_DFR_CLUSTER)
-#define INT_DELIVERY_MODE	(dest_LowestPrio)
-#define INT_DEST_MODE		(1)    /* logical delivery broadcast to all procs */
-#define NO_BALANCE_IRQ		(1)
-#undef  WAKE_SECONDARY_VIA_INIT
-#define WAKE_SECONDARY_VIA_MIP
-#else
+static inline cpumask_t target_cpus(void)
+{
+	return cpumask_of_cpu(smp_processor_id());
+}
+
+#define APIC_DFR_VALUE_CLUSTER		(APIC_DFR_CLUSTER)
+#define INT_DELIVERY_MODE_CLUSTER	(dest_LowestPrio)
+#define INT_DEST_MODE_CLUSTER		(1) /* logical delivery broadcast to all procs */
+#define NO_BALANCE_IRQ_CLUSTER		(1)
+
 #define APIC_DFR_VALUE		(APIC_DFR_FLAT)
 #define INT_DELIVERY_MODE	(dest_Fixed)
 #define INT_DEST_MODE		(0)    /* phys delivery to target procs */
 #define NO_BALANCE_IRQ		(0)
 #undef  APIC_DEST_LOGICAL
 #define APIC_DEST_LOGICAL	0x0
-#define WAKE_SECONDARY_VIA_INIT
-#endif

 static inline unsigned long check_apicid_used(physid_mask_t bitmap, int apicid)
 {
@@ -60,6 +56,16 @@ static inline unsigned long calculate_ldr(int cpu)
  * an APIC.  See e.g. "AP-388 82489DX User's Manual" (Intel
  * document number 292116).  So here it goes...
  */
+static inline void init_apic_ldr_cluster(void)
+{
+	unsigned long val;
+	int cpu = smp_processor_id();
+
+	apic_write(APIC_DFR, APIC_DFR_VALUE_CLUSTER);
+	val = calculate_ldr(cpu);
+	apic_write(APIC_LDR, val);
+}
+
 static inline void init_apic_ldr(void)
 {
 	unsigned long val;
@@ -70,10 +76,6 @@ static inline void init_apic_ldr(void)
 	apic_write(APIC_LDR, val);
 }

-#ifndef CONFIG_X86_GENERICARCH
-extern void enable_apic_mode(void);
-#endif
-
 extern int apic_version [MAX_APICS];
 static inline void setup_apic_routing(void)
 {
@@ -144,7 +146,7 @@ static inline int check_phys_apicid_present(int cpu_physical_apicid)
 	return (1);
 }

-static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+static inline unsigned int cpu_mask_to_apicid_cluster(cpumask_t cpumask)
 {
 	int num_bits_set;
 	int cpus_found = 0;
@@ -154,11 +156,7 @@ static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
 	num_bits_set = cpus_weight(cpumask);
 	/* Return id to all */
 	if (num_bits_set == NR_CPUS)
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 		return 0xFF;
-#else
-		return cpu_to_logical_apicid(0);
-#endif
 	/*
 	 * The cpus in the mask must all be on the apic cluster.  If are not
 	 * on the same apicid cluster return default value of TARGET_CPUS.
@@ -171,11 +169,40 @@ static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
 		if (apicid_cluster(apicid) !=
 				apicid_cluster(new_apicid)){
 			printk ("%s: Not a valid mask!\n", __func__);
-#if defined CONFIG_ES7000_CLUSTERED_APIC
 			return 0xFF;
-#else
+		}
+		apicid = new_apicid;
+		cpus_found++;
+		}
+		cpu++;
+	}
+	return apicid;
+}
+
+static inline unsigned int cpu_mask_to_apicid(cpumask_t cpumask)
+{
+	int num_bits_set;
+	int cpus_found = 0;
+	int cpu;
+	int apicid;
+
+	num_bits_set = cpus_weight(cpumask);
+	/* Return id to all */
+	if (num_bits_set == NR_CPUS)
+		return cpu_to_logical_apicid(0);
+	/*
+	 * The cpus in the mask must all be on the apic cluster.  If are not
+	 * on the same apicid cluster return default value of TARGET_CPUS.
+	 */
+	cpu = first_cpu(cpumask);
+	apicid = cpu_to_logical_apicid(cpu);
+	while (cpus_found < num_bits_set) {
+		if (cpu_isset(cpu, cpumask)) {
+			int new_apicid = cpu_to_logical_apicid(cpu);
+			if (apicid_cluster(apicid) !=
+					apicid_cluster(new_apicid)){
+				printk ("%s: Not a valid mask!\n", __func__);
 				return cpu_to_logical_apicid(0);
-#endif
 			}
 			apicid = new_apicid;
 			cpus_found++;
@@ -1,36 +1,12 @@
 #ifndef __ASM_ES7000_WAKECPU_H
 #define __ASM_ES7000_WAKECPU_H

-/*
- * This file copes with machines that wakeup secondary CPUs by the
- * INIT, INIT, STARTUP sequence.
- */
-
-#ifdef CONFIG_ES7000_CLUSTERED_APIC
-#define WAKE_SECONDARY_VIA_MIP
-#else
-#define WAKE_SECONDARY_VIA_INIT
-#endif
-
-#ifdef WAKE_SECONDARY_VIA_MIP
-extern int es7000_start_cpu(int cpu, unsigned long eip);
-static inline int
-wakeup_secondary_cpu(int phys_apicid, unsigned long start_eip)
-{
-	int boot_error = 0;
-	boot_error = es7000_start_cpu(phys_apicid, start_eip);
-	return boot_error;
-}
-#endif
-
-#define TRAMPOLINE_LOW phys_to_virt(0x467)
-#define TRAMPOLINE_HIGH phys_to_virt(0x469)
-
-#define boot_cpu_apicid boot_cpu_physical_apicid
+#define TRAMPOLINE_PHYS_LOW	0x467
+#define TRAMPOLINE_PHYS_HIGH	0x469

 static inline void wait_for_init_deassert(atomic_t *deassert)
 {
-#ifdef WAKE_SECONDARY_VIA_INIT
+#ifndef CONFIG_ES7000_CLUSTERED_APIC
 	while (!atomic_read(deassert))
 		cpu_relax();
 #endif
@@ -50,9 +26,12 @@ static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
 {
 }

-#define inquire_remote_apic(apicid) do {		\
-		if (apic_verbosity >= APIC_DEBUG)	\
-			__inquire_remote_apic(apicid);	\
-	} while (0)
+extern void __inquire_remote_apic(int apicid);
+
+static inline void inquire_remote_apic(int apicid)
+{
+	if (apic_verbosity >= APIC_DEBUG)
+		__inquire_remote_apic(apicid);
+}

 #endif /* __ASM_MACH_WAKECPU_H */
@@ -29,6 +29,39 @@ extern int fix_aperture;
 #define AMD64_GARTCACHECTL	0x9c
 #define AMD64_GARTEN		(1<<0)

+#ifdef CONFIG_GART_IOMMU
+extern int gart_iommu_aperture;
+extern int gart_iommu_aperture_allowed;
+extern int gart_iommu_aperture_disabled;
+
+extern void early_gart_iommu_check(void);
+extern void gart_iommu_init(void);
+extern void gart_iommu_shutdown(void);
+extern void __init gart_parse_options(char *);
+extern void gart_iommu_hole_init(void);
+
+#else
+#define gart_iommu_aperture            0
+#define gart_iommu_aperture_allowed    0
+#define gart_iommu_aperture_disabled   1
+
+static inline void early_gart_iommu_check(void)
+{
+}
+static inline void gart_iommu_init(void)
+{
+}
+static inline void gart_iommu_shutdown(void)
+{
+}
+static inline void gart_parse_options(char *options)
+{
+}
+static inline void gart_iommu_hole_init(void)
+{
+}
+#endif
+
 extern int agp_amd64_init(void);

 static inline void enable_gart_translation(struct pci_dev *dev, u64 addr)
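Because the !CONFIG_GART_IOMMU branch supplies empty inline stubs, callers can be written without any #ifdef of their own. A hypothetical caller (iommu_early_setup() is illustrative only, not from the patch):

static void __init iommu_early_setup(void)
{
	early_gart_iommu_check();	/* no-op stub when GART is compiled out */
	gart_iommu_hole_init();		/* likewise compiles away */
}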
@@ -2,6 +2,7 @@
 #define _ASM_X86_GENAPIC_32_H

 #include <asm/mpspec.h>
+#include <asm/atomic.h>

 /*
  * Generic APIC driver interface.
@@ -65,6 +66,14 @@ struct genapic {
 	void (*send_IPI_allbutself)(int vector);
 	void (*send_IPI_all)(int vector);
 #endif
+	int (*wakeup_cpu)(int apicid, unsigned long start_eip);
+	int trampoline_phys_low;
+	int trampoline_phys_high;
+	void (*wait_for_init_deassert)(atomic_t *deassert);
+	void (*smp_callin_clear_local_apic)(void);
+	void (*store_NMI_vector)(unsigned short *high, unsigned short *low);
+	void (*restore_NMI_vector)(unsigned short *high, unsigned short *low);
+	void (*inquire_remote_apic)(int apicid);
 };

 #define APICFUNC(x) .x = x,
@@ -105,16 +114,24 @@ struct genapic {
 	APICFUNC(get_apic_id)				\
 	.apic_id_mask = APIC_ID_MASK,			\
 	APICFUNC(cpu_mask_to_apicid)			\
-	APICFUNC(vector_allocation_domain)			\
+	APICFUNC(vector_allocation_domain)		\
 	APICFUNC(acpi_madt_oem_check)			\
 	IPIFUNC(send_IPI_mask)				\
 	IPIFUNC(send_IPI_allbutself)			\
 	IPIFUNC(send_IPI_all)				\
 	APICFUNC(enable_apic_mode)			\
 	APICFUNC(phys_pkg_id)				\
+	.trampoline_phys_low = TRAMPOLINE_PHYS_LOW,	\
+	.trampoline_phys_high = TRAMPOLINE_PHYS_HIGH,	\
+	APICFUNC(wait_for_init_deassert)		\
+	APICFUNC(smp_callin_clear_local_apic)		\
+	APICFUNC(store_NMI_vector)			\
+	APICFUNC(restore_NMI_vector)			\
+	APICFUNC(inquire_remote_apic)			\
 }

 extern struct genapic *genapic;
+extern void es7000_update_genapic_to_cluster(void);

 enum uv_system_type {UV_NONE, UV_LEGACY_APIC, UV_X2APIC, UV_NON_UNIQUE_APIC};
 #define get_uv_system_type()		UV_NONE
@@ -32,6 +32,8 @@ struct genapic {
 	unsigned int (*get_apic_id)(unsigned long x);
 	unsigned long (*set_apic_id)(unsigned int id);
 	unsigned long apic_id_mask;
+	/* wakeup_secondary_cpu */
+	int (*wakeup_cpu)(int apicid, unsigned long start_eip);
 };

 extern struct genapic *genapic;
@@ -22,6 +22,8 @@ DECLARE_PER_CPU(irq_cpustat_t, irq_stat);
 #define __ARCH_IRQ_STAT
 #define __IRQ_STAT(cpu, member) (per_cpu(irq_stat, cpu).member)

+#define inc_irq_stat(member)	(__get_cpu_var(irq_stat).member++)
+
 void ack_bad_irq(unsigned int irq);
 #include <linux/irq_cpustat.h>

@@ -11,6 +11,8 @@

 #define __ARCH_IRQ_STAT 1

+#define inc_irq_stat(member)	add_pda(member, 1)
+
 #define local_softirq_pending() read_pda(__softirq_pending)

 #define __ARCH_SET_SOFTIRQ_PENDING 1
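With both flavors defining inc_irq_stat(), interrupt code gets one spelling for bumping a per-CPU statistic: the per_cpu irq_stat struct on 32-bit, the PDA on 64-bit. An illustrative use (account_timer_irq() is invented; apic_timer_irqs is assumed to be one of the existing stat fields):

static void account_timer_irq(void)
{
	inc_irq_stat(apic_timer_irqs);	/* per-CPU counter, no atomics needed */
}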
@@ -109,9 +109,7 @@ extern asmlinkage void smp_invalidate_interrupt(struct pt_regs *);
 #endif
 #endif

-#ifdef CONFIG_X86_32
-extern void (*const interrupt[NR_VECTORS])(void);
-#endif
+extern void (*__initconst interrupt[NR_VECTORS-FIRST_EXTERNAL_VECTOR])(void);

 typedef int vector_irq_t[NR_VECTORS];
 DECLARE_PER_CPU(vector_irq_t, vector_irq);
@@ -0,0 +1,26 @@
+/*
+ * Copyright (C) 2008, VMware, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+#ifndef ASM_X86__HYPERVISOR_H
+#define ASM_X86__HYPERVISOR_H
+
+extern unsigned long get_hypervisor_tsc_freq(void);
+extern void init_hypervisor(struct cpuinfo_x86 *c);
+
+#endif
@@ -129,24 +129,6 @@ typedef struct compat_siginfo {
 	} _sifields;
 } compat_siginfo_t;

-struct sigframe32 {
-	u32 pretcode;
-	int sig;
-	struct sigcontext_ia32 sc;
-	struct _fpstate_ia32 fpstate;
-	unsigned int extramask[_COMPAT_NSIG_WORDS-1];
-};
-
-struct rt_sigframe32 {
-	u32 pretcode;
-	int sig;
-	u32 pinfo;
-	u32 puc;
-	compat_siginfo_t info;
-	struct ucontext_ia32 uc;
-	struct _fpstate_ia32 fpstate;
-};
-
 struct ustat32 {
 	__u32 f_tfree;
 	compat_ino_t f_tinode;
@@ -8,8 +8,13 @@ struct notifier_block;
 void idle_notifier_register(struct notifier_block *n);
 void idle_notifier_unregister(struct notifier_block *n);

+#ifdef CONFIG_X86_64
 void enter_idle(void);
 void exit_idle(void);
+#else /* !CONFIG_X86_64 */
+static inline void enter_idle(void) { }
+static inline void exit_idle(void) { }
+#endif /* CONFIG_X86_64 */

 void c1e_remove_cpu(int cpu);

@@ -4,6 +4,7 @@
 #define ARCH_HAS_IOREMAP_WC

 #include <linux/compiler.h>
+#include <asm-generic/int-ll64.h>

 #define build_mmio_read(name, size, type, reg, barrier) \
 static inline type name(const volatile void __iomem *addr) \
@@ -45,21 +46,39 @@ build_mmio_write(__writel, "l", unsigned int, "r", )
 #define mmiowb() barrier()

 #ifdef CONFIG_X86_64
+
 build_mmio_read(readq, "q", unsigned long, "=r", :"memory")
-build_mmio_read(__readq, "q", unsigned long, "=r", )
 build_mmio_write(writeq, "q", unsigned long, "r", :"memory")
-build_mmio_write(__writeq, "q", unsigned long, "r", )

-#define readq_relaxed(a) __readq(a)
-#define __raw_readq __readq
-#define __raw_writeq writeq
+#else
+
+static inline __u64 readq(const volatile void __iomem *addr)
+{
+	const volatile u32 __iomem *p = addr;
+	u32 low, high;
+
+	low = readl(p);
+	high = readl(p + 1);
+
+	return low + ((u64)high << 32);
+}
+
+static inline void writeq(__u64 val, volatile void __iomem *addr)
+{
+	writel(val, addr);
+	writel(val >> 32, addr+4);
+}

-/* Let people know we have them */
-#define readq readq
-#define writeq writeq
 #endif

-extern int iommu_bio_merge;
+#define readq_relaxed(a)	readq(a)
+
+#define __raw_readq(a)		readq(a)
+#define __raw_writeq(val, addr)	writeq(val, addr)
+
+/* Let people know that we have them */
+#define readq			readq
+#define writeq			writeq

 #ifdef CONFIG_X86_32
 # include "io_32.h"
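Note the 32-bit fallback issues two 32-bit reads (low dword first), so unlike the x86-64 version it is not a single atomic load. A hedged sketch of a caller (snapshot_counter(), the 0x10 offset and the "regs" region are invented for the example):

static u64 snapshot_counter(void __iomem *regs)
{
	/*
	 * One 8-byte load on x86-64; two readl()s on 32-bit, so the
	 * device must tolerate a torn 64-bit read.
	 */
	return readq(regs + 0x10);
}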
@@ -232,8 +232,6 @@ void memset_io(volatile void __iomem *a, int b, size_t c);

 #define flush_write_buffers()

-#define BIO_VMERGE_BOUNDARY iommu_bio_merge
-
 /*
  * Convert a virtual cached pointer to an uncached pointer
  */
@@ -156,11 +156,21 @@ extern int sis_apic_bug;
 /* 1 if "noapic" boot option passed */
 extern int skip_ioapic_setup;

+/* 1 if "noapic" boot option passed */
+extern int noioapicquirk;
+
+/* -1 if "noapic" boot option passed */
+extern int noioapicreroute;
+
 /* 1 if the timer IRQ uses the '8259A Virtual Wire' mode */
 extern int timer_through_8259;

 static inline void disable_ioapic_setup(void)
 {
+#ifdef CONFIG_PCI
+	noioapicquirk = 1;
+	noioapicreroute = -1;
+#endif
 	skip_ioapic_setup = 1;
 }

@@ -12,37 +12,4 @@ extern unsigned long iommu_nr_pages(unsigned long addr, unsigned long len);
 /* 10 seconds */
 #define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)

-#ifdef CONFIG_GART_IOMMU
-extern int gart_iommu_aperture;
-extern int gart_iommu_aperture_allowed;
-extern int gart_iommu_aperture_disabled;
-
-extern void early_gart_iommu_check(void);
-extern void gart_iommu_init(void);
-extern void gart_iommu_shutdown(void);
-extern void __init gart_parse_options(char *);
-extern void gart_iommu_hole_init(void);
-
-#else
-#define gart_iommu_aperture            0
-#define gart_iommu_aperture_allowed    0
-#define gart_iommu_aperture_disabled   1
-
-static inline void early_gart_iommu_check(void)
-{
-}
-static inline void gart_iommu_init(void)
-{
-}
-static inline void gart_iommu_shutdown(void)
-{
-}
-static inline void gart_parse_options(char *options)
-{
-}
-static inline void gart_iommu_hole_init(void)
-{
-}
-#endif
-
 #endif /* _ASM_X86_IOMMU_H */
@@ -31,10 +31,6 @@ static inline int irq_canonicalize(int irq)
 # endif
 #endif

-#ifdef CONFIG_IRQBALANCE
-extern int irqbalance_disable(char *str);
-#endif
-
 #ifdef CONFIG_HOTPLUG_CPU
 #include <linux/cpumask.h>
 extern void fixup_irqs(cpumask_t map);
@@ -9,6 +9,8 @@

 #include <asm/percpu.h>

+#define ARCH_HAS_OWN_IRQ_REGS
+
 DECLARE_PER_CPU(struct pt_regs *, irq_regs);

 static inline struct pt_regs *get_irq_regs(void)
@ -5,21 +5,8 @@
|
|||
# define PA_CONTROL_PAGE 0
|
||||
# define VA_CONTROL_PAGE 1
|
||||
# define PA_PGD 2
|
||||
# define VA_PGD 3
|
||||
# define PA_PTE_0 4
|
||||
# define VA_PTE_0 5
|
||||
# define PA_PTE_1 6
|
||||
# define VA_PTE_1 7
|
||||
# define PA_SWAP_PAGE 8
|
||||
# ifdef CONFIG_X86_PAE
|
||||
# define PA_PMD_0 9
|
||||
# define VA_PMD_0 10
|
||||
# define PA_PMD_1 11
|
||||
# define VA_PMD_1 12
|
||||
# define PAGES_NR 13
|
||||
# else
|
||||
# define PAGES_NR 9
|
||||
# endif
|
||||
# define PA_SWAP_PAGE 3
|
||||
# define PAGES_NR 4
|
||||
#else
|
||||
# define PA_CONTROL_PAGE 0
|
||||
# define VA_CONTROL_PAGE 1
|
||||
|
@ -170,6 +157,20 @@ relocate_kernel(unsigned long indirection_page,
|
|||
unsigned long start_address) ATTRIB_NORET;
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_X86_32
|
||||
#define ARCH_HAS_KIMAGE_ARCH
|
||||
|
||||
struct kimage_arch {
|
||||
pgd_t *pgd;
|
||||
#ifdef CONFIG_X86_PAE
|
||||
pmd_t *pmd0;
|
||||
pmd_t *pmd1;
|
||||
#endif
|
||||
pte_t *pte0;
|
||||
pte_t *pte1;
|
||||
};
|
||||
#endif
|
||||
|
||||
#endif /* __ASSEMBLY__ */
|
||||
|
||||
#endif /* _ASM_X86_KEXEC_H */
|
||||
|
|
|
@ -57,5 +57,65 @@
|
|||
#define __ALIGN_STR ".align 16,0x90"
|
||||
#endif
|
||||
|
||||
/*
|
||||
* to check ENTRY_X86/END_X86 and
|
||||
* KPROBE_ENTRY_X86/KPROBE_END_X86
|
||||
* unbalanced-missed-mixed appearance
|
||||
*/
|
||||
#define __set_entry_x86 .set ENTRY_X86_IN, 0
|
||||
#define __unset_entry_x86 .set ENTRY_X86_IN, 1
|
||||
#define __set_kprobe_x86 .set KPROBE_X86_IN, 0
|
||||
#define __unset_kprobe_x86 .set KPROBE_X86_IN, 1
|
||||
|
||||
#define __macro_err_x86 .error "ENTRY_X86/KPROBE_X86 unbalanced,missed,mixed"
|
||||
|
||||
#define __check_entry_x86 \
|
||||
.ifdef ENTRY_X86_IN; \
|
||||
.ifeq ENTRY_X86_IN; \
|
||||
__macro_err_x86; \
|
||||
.abort; \
|
||||
.endif; \
|
||||
.endif
|
||||
|
||||
#define __check_kprobe_x86 \
|
||||
.ifdef KPROBE_X86_IN; \
|
||||
.ifeq KPROBE_X86_IN; \
|
||||
__macro_err_x86; \
|
||||
.abort; \
|
||||
.endif; \
|
||||
.endif
|
||||
|
||||
#define __check_entry_kprobe_x86 \
|
||||
__check_entry_x86; \
|
||||
__check_kprobe_x86
|
||||
|
||||
#define ENTRY_KPROBE_FINAL_X86 __check_entry_kprobe_x86
|
||||
|
||||
#define ENTRY_X86(name) \
|
||||
__check_entry_kprobe_x86; \
|
||||
__set_entry_x86; \
|
||||
.globl name; \
|
||||
__ALIGN; \
|
||||
name:
|
||||
|
||||
#define END_X86(name) \
|
||||
__unset_entry_x86; \
|
||||
__check_entry_kprobe_x86; \
|
||||
.size name, .-name
|
||||
|
||||
#define KPROBE_ENTRY_X86(name) \
|
||||
__check_entry_kprobe_x86; \
|
||||
__set_kprobe_x86; \
|
||||
.pushsection .kprobes.text, "ax"; \
|
||||
.globl name; \
|
||||
__ALIGN; \
|
||||
name:
|
||||
|
||||
#define KPROBE_END_X86(name) \
|
||||
__unset_kprobe_x86; \
|
||||
__check_entry_kprobe_x86; \
|
||||
.size name, .-name; \
|
||||
.popsection
|
||||
|
||||
#endif /* _ASM_X86_LINKAGE_H */
|
||||
|
||||
|
|
|
@ -32,11 +32,13 @@ static inline cpumask_t target_cpus(void)
|
|||
#define vector_allocation_domain (genapic->vector_allocation_domain)
|
||||
#define read_apic_id() (GET_APIC_ID(apic_read(APIC_ID)))
|
||||
#define send_IPI_self (genapic->send_IPI_self)
|
||||
#define wakeup_secondary_cpu (genapic->wakeup_cpu)
|
||||
extern void setup_apic_routing(void);
|
||||
#else
|
||||
#define INT_DELIVERY_MODE dest_LowestPrio
|
||||
#define INT_DEST_MODE 1 /* logical delivery broadcast to all procs */
|
||||
#define TARGET_CPUS (target_cpus())
|
||||
#define wakeup_secondary_cpu wakeup_secondary_cpu_via_init
|
||||
/*
|
||||
* Set up the logical destination ID.
|
||||
*
|
||||
|
|
|
@ -1,17 +1,8 @@
|
|||
#ifndef _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H
|
||||
#define _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H
|
||||
|
||||
/*
|
||||
* This file copes with machines that wakeup secondary CPUs by the
|
||||
* INIT, INIT, STARTUP sequence.
|
||||
*/
|
||||
|
||||
#define WAKE_SECONDARY_VIA_INIT
|
||||
|
||||
#define TRAMPOLINE_LOW phys_to_virt(0x467)
|
||||
#define TRAMPOLINE_HIGH phys_to_virt(0x469)
|
||||
|
||||
#define boot_cpu_apicid boot_cpu_physical_apicid
|
||||
#define TRAMPOLINE_PHYS_LOW (0x467)
|
||||
#define TRAMPOLINE_PHYS_HIGH (0x469)
|
||||
|
||||
static inline void wait_for_init_deassert(atomic_t *deassert)
|
||||
{
|
||||
|
@ -33,9 +24,12 @@ static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
|
|||
{
|
||||
}
|
||||
|
||||
#define inquire_remote_apic(apicid) do { \
|
||||
if (apic_verbosity >= APIC_DEBUG) \
|
||||
__inquire_remote_apic(apicid); \
|
||||
} while (0)
|
||||
extern void __inquire_remote_apic(int apicid);
|
||||
|
||||
static inline void inquire_remote_apic(int apicid)
|
||||
{
|
||||
if (apic_verbosity >= APIC_DEBUG)
|
||||
__inquire_remote_apic(apicid);
|
||||
}
|
||||
|
||||
#endif /* _ASM_X86_MACH_DEFAULT_MACH_WAKECPU_H */
|
||||
|
|
|
@ -13,9 +13,11 @@ static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
|
|||
CMOS_WRITE(0xa, 0xf);
|
||||
local_flush_tlb();
|
||||
pr_debug("1.\n");
|
||||
*((volatile unsigned short *) TRAMPOLINE_HIGH) = start_eip >> 4;
|
||||
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
|
||||
start_eip >> 4;
|
||||
pr_debug("2.\n");
|
||||
*((volatile unsigned short *) TRAMPOLINE_LOW) = start_eip & 0xf;
|
||||
*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
|
||||
start_eip & 0xf;
|
||||
pr_debug("3.\n");
|
||||
}
|
||||
|
||||
|
@ -32,7 +34,7 @@ static inline void smpboot_restore_warm_reset_vector(void)
|
|||
*/
|
||||
CMOS_WRITE(0, 0xf);
|
||||
|
||||
*((volatile long *) phys_to_virt(0x467)) = 0;
|
||||
*((volatile long *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) = 0;
|
||||
}
|
||||
|
||||
static inline void __init smpboot_setup_io_apic(void)
|
||||
|
|
|
@ -27,6 +27,7 @@
|
|||
#define vector_allocation_domain (genapic->vector_allocation_domain)
|
||||
#define enable_apic_mode (genapic->enable_apic_mode)
|
||||
#define phys_pkg_id (genapic->phys_pkg_id)
|
||||
#define wakeup_secondary_cpu (genapic->wakeup_cpu)
|
||||
|
||||
extern void generic_bigsmp_probe(void);
|
||||
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
#ifndef _ASM_X86_MACH_GENERIC_MACH_WAKECPU_H
|
||||
#define _ASM_X86_MACH_GENERIC_MACH_WAKECPU_H
|
||||
|
||||
#define TRAMPOLINE_PHYS_LOW (genapic->trampoline_phys_low)
|
||||
#define TRAMPOLINE_PHYS_HIGH (genapic->trampoline_phys_high)
|
||||
#define wait_for_init_deassert (genapic->wait_for_init_deassert)
|
||||
#define smp_callin_clear_local_apic (genapic->smp_callin_clear_local_apic)
|
||||
#define store_NMI_vector (genapic->store_NMI_vector)
|
||||
#define restore_NMI_vector (genapic->restore_NMI_vector)
|
||||
#define inquire_remote_apic (genapic->inquire_remote_apic)
|
||||
|
||||
#endif /* _ASM_X86_MACH_GENERIC_MACH_APIC_H */
|
|
@ -4,9 +4,8 @@
|
|||
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
|
||||
{
|
||||
#ifdef CONFIG_SMP
|
||||
unsigned cpu = smp_processor_id();
|
||||
if (per_cpu(cpu_tlbstate, cpu).state == TLBSTATE_OK)
|
||||
per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_LAZY;
|
||||
if (x86_read_percpu(cpu_tlbstate.state) == TLBSTATE_OK)
|
||||
x86_write_percpu(cpu_tlbstate.state, TLBSTATE_LAZY);
|
||||
#endif
|
||||
}
|
||||
|
||||
|
@ -20,8 +19,8 @@ static inline void switch_mm(struct mm_struct *prev,
|
|||
/* stop flush ipis for the previous mm */
|
||||
cpu_clear(cpu, prev->cpu_vm_mask);
|
||||
#ifdef CONFIG_SMP
|
||||
per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_OK;
|
||||
per_cpu(cpu_tlbstate, cpu).active_mm = next;
|
||||
x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
|
||||
x86_write_percpu(cpu_tlbstate.active_mm, next);
|
||||
#endif
|
||||
cpu_set(cpu, next->cpu_vm_mask);
|
||||
|
||||
|
@ -36,8 +35,8 @@ static inline void switch_mm(struct mm_struct *prev,
|
|||
}
|
||||
#ifdef CONFIG_SMP
|
||||
else {
|
||||
per_cpu(cpu_tlbstate, cpu).state = TLBSTATE_OK;
|
||||
BUG_ON(per_cpu(cpu_tlbstate, cpu).active_mm != next);
|
||||
x86_write_percpu(cpu_tlbstate.state, TLBSTATE_OK);
|
||||
BUG_ON(x86_read_percpu(cpu_tlbstate.active_mm) != next);
|
||||
|
||||
if (!cpu_test_and_set(cpu, next->cpu_vm_mask)) {
|
||||
/* We were in lazy tlb mode and leave_mm disabled
|
||||
|
|
|
@ -85,7 +85,9 @@
|
|||
/* AMD64 MSRs. Not complete. See the architecture manual for a more
|
||||
complete list. */
|
||||
|
||||
#define MSR_AMD64_PATCH_LEVEL 0x0000008b
|
||||
#define MSR_AMD64_NB_CFG 0xc001001f
|
||||
#define MSR_AMD64_PATCH_LOADER 0xc0010020
|
||||
#define MSR_AMD64_IBSFETCHCTL 0xc0011030
|
||||
#define MSR_AMD64_IBSFETCHLINAD 0xc0011031
|
||||
#define MSR_AMD64_IBSFETCHPHYSAD 0xc0011032
|
||||
|
|
|
@ -22,10 +22,10 @@ static inline unsigned long long native_read_tscp(unsigned int *aux)
|
|||
}
|
||||
|
||||
/*
|
||||
* i386 calling convention returns 64-bit value in edx:eax, while
|
||||
* x86_64 returns at rax. Also, the "A" constraint does not really
|
||||
* mean rdx:rax in x86_64, so we need specialized behaviour for each
|
||||
* architecture
|
||||
* both i386 and x86_64 returns 64-bit value in edx:eax, but gcc's "A"
|
||||
* constraint has different meanings. For i386, "A" means exactly
|
||||
* edx:eax, while for x86_64 it doesn't mean rdx:rax or edx:eax. Instead,
|
||||
* it means rax *or* rdx.
|
||||
*/
|
||||
#ifdef CONFIG_X86_64
|
||||
#define DECLARE_ARGS(val, low, high) unsigned low, high
|
||||
|
@ -181,10 +181,10 @@ static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
|
|||
}
|
||||
|
||||
#define rdtscl(low) \
|
||||
((low) = (u32)native_read_tsc())
|
||||
((low) = (u32)__native_read_tsc())
|
||||
|
||||
#define rdtscll(val) \
|
||||
((val) = native_read_tsc())
|
||||
((val) = __native_read_tsc())
|
||||
|
||||
#define rdpmc(counter, low, high) \
|
||||
do { \
|
||||
|
|
|
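The reworded comment above is the key to this hunk: rdtsc always delivers its result in edx:eax, and since gcc's "A" constraint cannot name that pair on x86_64, the kernel reads the two halves into separate 32-bit variables and shifts them together. A user-space sketch of the same pattern (gcc-style inline asm assumed; the kernel's DECLARE_ARGS/EAX_EDX_VAL macros hide exactly this):

#include <stdint.h>
#include <stdio.h>

/* Read the TSC via edx:eax and compose a 64-bit value by hand,
 * which works identically on i386 and x86_64. */
static inline uint64_t read_tsc(void)
{
	uint32_t low, high;

	asm volatile("rdtsc" : "=a" (low), "=d" (high));
	return ((uint64_t)high << 32) | low;
}

int main(void)
{
	printf("tsc = %llu\n", (unsigned long long)read_tsc());
	return 0;
}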
@@ -3,12 +3,8 @@

/* This file copes with machines that wakeup secondary CPUs by NMIs */

#define WAKE_SECONDARY_VIA_NMI

#define TRAMPOLINE_LOW phys_to_virt(0x8)
#define TRAMPOLINE_HIGH phys_to_virt(0xa)

#define boot_cpu_apicid boot_cpu_logical_apicid
#define TRAMPOLINE_PHYS_LOW (0x8)
#define TRAMPOLINE_PHYS_HIGH (0xa)

/* We don't do anything here because we use NMI's to boot instead */
static inline void wait_for_init_deassert(atomic_t *deassert)

@@ -27,17 +23,23 @@ static inline void smp_callin_clear_local_apic(void)

static inline void store_NMI_vector(unsigned short *high, unsigned short *low)
{
	printk("Storing NMI vector\n");
	*high = *((volatile unsigned short *) TRAMPOLINE_HIGH);
	*low = *((volatile unsigned short *) TRAMPOLINE_LOW);
	*high =
	  *((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH));
	*low =
	  *((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW));
}

static inline void restore_NMI_vector(unsigned short *high, unsigned short *low)
{
	printk("Restoring NMI vector\n");
	*((volatile unsigned short *) TRAMPOLINE_HIGH) = *high;
	*((volatile unsigned short *) TRAMPOLINE_LOW) = *low;
	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
								 *high;
	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
								 *low;
}

#define inquire_remote_apic(apicid) {}
static inline void inquire_remote_apic(int apicid)
{
}

#endif /* __ASM_NUMAQ_WAKECPU_H */

@@ -19,6 +19,8 @@ struct pci_sysdata {

};

extern int pci_routeirq;
extern int noioapicquirk;
extern int noioapicreroute;

/* scan a bus after allocating a pci_sysdata for it */
extern struct pci_bus *pci_scan_bus_on_node(int busno, struct pci_ops *ops,

@@ -56,23 +56,55 @@ static inline pte_t native_ptep_get_and_clear(pte_t *xp)

#define pte_none(x) (!(x).pte_low)

/*
 * Bits 0, 6 and 7 are taken, split up the 29 bits of offset
 * into this range:
 * Bits _PAGE_BIT_PRESENT, _PAGE_BIT_FILE and _PAGE_BIT_PROTNONE are taken,
 * split up the 29 bits of offset into this range:
 */
#define PTE_FILE_MAX_BITS 29
#define PTE_FILE_SHIFT1 (_PAGE_BIT_PRESENT + 1)
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define PTE_FILE_SHIFT2 (_PAGE_BIT_FILE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_PROTNONE + 1)
#else
#define PTE_FILE_SHIFT2 (_PAGE_BIT_PROTNONE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_FILE + 1)
#endif
#define PTE_FILE_BITS1 (PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2 (PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)

#define pte_to_pgoff(pte) \
	((((pte).pte_low >> 1) & 0x1f) + (((pte).pte_low >> 8) << 5))
	((((pte).pte_low >> PTE_FILE_SHIFT1) \
	  & ((1U << PTE_FILE_BITS1) - 1)) \
	 + ((((pte).pte_low >> PTE_FILE_SHIFT2) \
	     & ((1U << PTE_FILE_BITS2) - 1)) << PTE_FILE_BITS1) \
	 + (((pte).pte_low >> PTE_FILE_SHIFT3) \
	    << (PTE_FILE_BITS1 + PTE_FILE_BITS2)))

#define pgoff_to_pte(off) \
	((pte_t) { .pte_low = (((off) & 0x1f) << 1) + \
			(((off) >> 5) << 8) + _PAGE_FILE })
	((pte_t) { .pte_low = \
	 (((off) & ((1U << PTE_FILE_BITS1) - 1)) << PTE_FILE_SHIFT1) \
	 + ((((off) >> PTE_FILE_BITS1) & ((1U << PTE_FILE_BITS2) - 1)) \
	    << PTE_FILE_SHIFT2) \
	 + (((off) >> (PTE_FILE_BITS1 + PTE_FILE_BITS2)) \
	    << PTE_FILE_SHIFT3) \
	 + _PAGE_FILE })

/* Encode and de-code a swap entry */
#define __swp_type(x) (((x).val >> 1) & 0x1f)
#define __swp_offset(x) ((x).val >> 8)
#define __swp_entry(type, offset) \
	((swp_entry_t) { ((type) << 1) | ((offset) << 8) })
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#endif

#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)

#define __swp_type(x) (((x).val >> (_PAGE_BIT_PRESENT + 1)) \
			& ((1U << SWP_TYPE_BITS) - 1))
#define __swp_offset(x) ((x).val >> SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) ((swp_entry_t) { \
			((type) << (_PAGE_BIT_PRESENT + 1)) \
			| ((offset) << SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low })
#define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })

@@ -166,6 +166,7 @@ static inline int pte_none(pte_t pte)

#define PTE_FILE_MAX_BITS 32

/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
#define __swp_type(x) (((x).val) & 0x1f)
#define __swp_offset(x) ((x).val >> 5)
#define __swp_entry(type, offset) ((swp_entry_t){(type) | (offset) << 5})
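The new pte_to_pgoff()/pgoff_to_pte() above scatter the 29-bit file offset around the three reserved PTE bits symbolically instead of hard-coding the shifts. A stand-alone round-trip sketch of the same packing, with the non-PAE bit positions assumed (_PAGE_BIT_PRESENT=0, _PAGE_BIT_FILE=6, _PAGE_BIT_PROTNONE=8, giving 5-, 1- and 23-bit fields):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SHIFT1 1	/* _PAGE_BIT_PRESENT + 1 */
#define SHIFT2 7	/* _PAGE_BIT_FILE + 1 */
#define SHIFT3 9	/* _PAGE_BIT_PROTNONE + 1 */
#define BITS1 (SHIFT2 - SHIFT1 - 1)	/* 5 bits */
#define BITS2 (SHIFT3 - SHIFT2 - 1)	/* 1 bit */

/* Scatter a 29-bit offset into the free bits of a pte word,
 * skipping bits 0, 6 and 8. */
static uint32_t encode(uint32_t off)
{
	return ((off & ((1U << BITS1) - 1)) << SHIFT1)
	     | (((off >> BITS1) & ((1U << BITS2) - 1)) << SHIFT2)
	     | ((off >> (BITS1 + BITS2)) << SHIFT3);
}

static uint32_t decode(uint32_t pte)
{
	return ((pte >> SHIFT1) & ((1U << BITS1) - 1))
	     | (((pte >> SHIFT2) & ((1U << BITS2) - 1)) << BITS1)
	     | ((pte >> SHIFT3) << (BITS1 + BITS2));
}

int main(void)
{
	uint32_t off = 0x1234567;	/* fits in 29 bits */
	uint32_t pte = encode(off);

	/* the reserved bits must stay clear after encoding */
	assert((pte & ((1U << 0) | (1U << 6) | (1U << 8))) == 0);
	assert(decode(pte) == off);
	printf("off %#x -> pte %#x -> off %#x\n", off, pte, decode(pte));
	return 0;
}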
@@ -10,7 +10,6 @@

#define _PAGE_BIT_PCD 4 /* page cache disabled */
#define _PAGE_BIT_ACCESSED 5 /* was accessed (raised by CPU) */
#define _PAGE_BIT_DIRTY 6 /* was written to (raised by CPU) */
#define _PAGE_BIT_FILE 6
#define _PAGE_BIT_PSE 7 /* 4 MB (or 2MB) page */
#define _PAGE_BIT_PAT 7 /* on 4KB pages */
#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */

@@ -22,6 +21,12 @@

#define _PAGE_BIT_CPA_TEST _PAGE_BIT_UNUSED1
#define _PAGE_BIT_NX 63 /* No execute: only valid after cpuid check */

/* If _PAGE_BIT_PRESENT is clear, we use these: */
/* - if the user mapped it with PROT_NONE; pte_present gives true */
#define _PAGE_BIT_PROTNONE _PAGE_BIT_GLOBAL
/* - set: nonlinear file mapping, saved PTE; unset:swap */
#define _PAGE_BIT_FILE _PAGE_BIT_DIRTY

#define _PAGE_PRESENT (_AT(pteval_t, 1) << _PAGE_BIT_PRESENT)
#define _PAGE_RW (_AT(pteval_t, 1) << _PAGE_BIT_RW)
#define _PAGE_USER (_AT(pteval_t, 1) << _PAGE_BIT_USER)

@@ -46,11 +51,8 @@

#define _PAGE_NX (_AT(pteval_t, 0))
#endif

/* If _PAGE_PRESENT is clear, we use these: */
#define _PAGE_FILE _PAGE_DIRTY /* nonlinear file mapping,
				 * saved PTE; unset:swap */
#define _PAGE_PROTNONE _PAGE_PSE /* if the user mapped it with PROT_NONE;
				    pte_present gives true */
#define _PAGE_FILE (_AT(pteval_t, 1) << _PAGE_BIT_FILE)
#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)

#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
		     _PAGE_ACCESSED | _PAGE_DIRTY)

@@ -158,8 +160,19 @@

#define PGD_IDENT_ATTR 0x001 /* PRESENT (no other attributes) */
#endif

/*
 * Macro to mark a page protection value as UC-
 */
#define pgprot_noncached(prot) \
	((boot_cpu_data.x86 > 3) \
	 ? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS)) \
	 : (prot))

#ifndef __ASSEMBLY__

#define pgprot_writecombine pgprot_writecombine
extern pgprot_t pgprot_writecombine(pgprot_t prot);

/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..

@@ -329,6 +342,9 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)

#define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)

#ifndef __ASSEMBLY__
/* Indicate that x86 has its own track and untrack pfn vma functions */
#define __HAVE_PFNMAP_TRACKING

#define __HAVE_PHYS_MEM_ACCESS_PROT
struct file;
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,

@@ -100,15 +100,6 @@ extern unsigned long pg0[];

# include <asm/pgtable-2level.h>
#endif

/*
 * Macro to mark a page protection value as "uncacheable".
 * On processors which do not support it, this is a no-op.
 */
#define pgprot_noncached(prot) \
	((boot_cpu_data.x86 > 3) \
	 ? (__pgprot(pgprot_val(prot) | _PAGE_PCD | _PAGE_PWT)) \
	 : (prot))

/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.

@@ -146,7 +146,7 @@ static inline void native_pgd_clear(pgd_t *pgd)

#define PGDIR_MASK (~(PGDIR_SIZE - 1))


#define MAXMEM _AC(0x00003fffffffffff, UL)
#define MAXMEM _AC(__AC(1, UL) << MAX_PHYSMEM_BITS, UL)
#define VMALLOC_START _AC(0xffffc20000000000, UL)
#define VMALLOC_END _AC(0xffffe1ffffffffff, UL)
#define VMEMMAP_START _AC(0xffffe20000000000, UL)

@@ -176,12 +176,6 @@ static inline int pmd_bad(pmd_t pmd)

#define pages_to_mb(x) ((x) >> (20 - PAGE_SHIFT)) /* FIXME: is this right? */

/*
 * Macro to mark a page protection value as "uncacheable".
 */
#define pgprot_noncached(prot) \
	(__pgprot(pgprot_val((prot)) | _PAGE_PCD | _PAGE_PWT))

/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.

@@ -250,10 +244,22 @@ static inline int pud_large(pud_t pte)

extern int direct_gbpages;

/* Encode and de-code a swap entry */
#define __swp_type(x) (((x).val >> 1) & 0x3f)
#define __swp_offset(x) ((x).val >> 8)
#define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 1) | \
					((offset) << 8) })
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#endif

#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)

#define __swp_type(x) (((x).val >> (_PAGE_BIT_PRESENT + 1)) \
			& ((1U << SWP_TYPE_BITS) - 1))
#define __swp_offset(x) ((x).val >> SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) ((swp_entry_t) { \
			((type) << (_PAGE_BIT_PRESENT + 1)) \
			| ((offset) << SWP_OFFSET_SHIFT) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) })
#define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val })
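The MAXMEM change above derives the limit from MAX_PHYSMEM_BITS (44, per the sparsemem hunk further down) instead of a hard-coded constant. A quick user-space check of the arithmetic (names are local stand-ins; assumes a 64-bit long):

#include <stdio.h>

#define MAX_PHYSMEM_BITS 44
#define MAXMEM (1UL << MAX_PHYSMEM_BITS)

int main(void)
{
	/* 1UL << 44 == 16 TiB of addressable physical memory */
	printf("MAXMEM = %#lx (%lu TiB)\n", MAXMEM, MAXMEM >> 40);
	return 0;
}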
@@ -6,5 +6,8 @@

#define ARCH_GET_FS 0x1003
#define ARCH_GET_GS 0x1004

#ifdef CONFIG_X86_64
extern long sys_arch_prctl(int, unsigned long);
#endif /* CONFIG_X86_64 */

#endif /* _ASM_X86_PRCTL_H */

@@ -110,6 +110,7 @@ struct cpuinfo_x86 {

	/* Index into per_cpu list: */
	u16 cpu_index;
#endif
	unsigned int x86_hyper_vendor;
} __attribute__((__aligned__(SMP_CACHE_BYTES)));

#define X86_VENDOR_INTEL 0

@@ -123,6 +124,9 @@ struct cpuinfo_x86 {

#define X86_VENDOR_UNKNOWN 0xff

#define X86_HYPER_VENDOR_NONE 0
#define X86_HYPER_VENDOR_VMWARE 1

/*
 * capabilities of CPUs
 */

@@ -1,6 +1,8 @@

#ifndef _ASM_X86_REBOOT_H
#define _ASM_X86_REBOOT_H

#include <linux/kdebug.h>

struct pt_regs;

struct machine_ops {

@@ -18,4 +20,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs);

void native_machine_shutdown(void);
void machine_real_restart(const unsigned char *code, int length);

typedef void (*nmi_shootdown_cb)(int, struct die_args*);
void nmi_shootdown_cpus(nmi_shootdown_cb callback);

#endif /* _ASM_X86_REBOOT_H */

@@ -8,6 +8,10 @@

/* Interrupt control for vSMPowered x86_64 systems */
void vsmp_init(void);


void setup_bios_corruption_check(void);


#ifdef CONFIG_X86_VISWS
extern void visws_early_detect(void);
extern int is_visws_box(void);

@@ -16,6 +20,8 @@ static inline void visws_early_detect(void) { }

static inline int is_visws_box(void) { return 0; }
#endif

extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
extern int wakeup_secondary_cpu_via_init(int apicid, unsigned long start_eip);
/*
 * Any setup quirks to be performed?
 */

@@ -39,6 +45,7 @@ struct x86_quirks {

	void (*smp_read_mpc_oem)(struct mp_config_oemtable *oemtable,
				 unsigned short oemsize);
	int (*setup_ioapic_ids)(void);
	int (*update_genapic)(void);
};

extern struct x86_quirks *x86_quirks;

@@ -0,0 +1,70 @@

#ifndef _ASM_X86_SIGFRAME_H
#define _ASM_X86_SIGFRAME_H

#include <asm/sigcontext.h>
#include <asm/siginfo.h>
#include <asm/ucontext.h>

#ifdef CONFIG_X86_32
#define sigframe_ia32 sigframe
#define rt_sigframe_ia32 rt_sigframe
#define sigcontext_ia32 sigcontext
#define _fpstate_ia32 _fpstate
#define ucontext_ia32 ucontext
#else /* !CONFIG_X86_32 */

#ifdef CONFIG_IA32_EMULATION
#include <asm/ia32.h>
#endif /* CONFIG_IA32_EMULATION */

#endif /* CONFIG_X86_32 */

#if defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION)
struct sigframe_ia32 {
	u32 pretcode;
	int sig;
	struct sigcontext_ia32 sc;
	/*
	 * fpstate is unused. fpstate is moved/allocated after
	 * retcode[] below. This movement allows to have the FP state and the
	 * future state extensions (xsave) stay together.
	 * And at the same time retaining the unused fpstate, prevents changing
	 * the offset of extramask[] in the sigframe and thus prevent any
	 * legacy application accessing/modifying it.
	 */
	struct _fpstate_ia32 fpstate_unused;
#ifdef CONFIG_IA32_EMULATION
	unsigned int extramask[_COMPAT_NSIG_WORDS-1];
#else /* !CONFIG_IA32_EMULATION */
	unsigned long extramask[_NSIG_WORDS-1];
#endif /* CONFIG_IA32_EMULATION */
	char retcode[8];
	/* fp state follows here */
};

struct rt_sigframe_ia32 {
	u32 pretcode;
	int sig;
	u32 pinfo;
	u32 puc;
#ifdef CONFIG_IA32_EMULATION
	compat_siginfo_t info;
#else /* !CONFIG_IA32_EMULATION */
	struct siginfo info;
#endif /* CONFIG_IA32_EMULATION */
	struct ucontext_ia32 uc;
	char retcode[8];
	/* fp state follows here */
};
#endif /* defined(CONFIG_X86_32) || defined(CONFIG_IA32_EMULATION) */

#ifdef CONFIG_X86_64
struct rt_sigframe {
	char __user *pretcode;
	struct ucontext uc;
	struct siginfo info;
	/* fp state follows here */
};
#endif /* CONFIG_X86_64 */

#endif /* _ASM_X86_SIGFRAME_H */
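The comment in the new sigframe.h explains why the unused fpstate member is kept: dropping it would shift extramask[], which legacy userspace may touch directly. A toy model of that layout argument (the struct sizes here are simplified stand-ins, not the real register layouts):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct sigcontext_ia32 { uint32_t regs[22]; };
struct _fpstate_ia32 { uint32_t st[112]; };

struct sigframe_ia32 {
	uint32_t pretcode;
	int sig;
	struct sigcontext_ia32 sc;
	/* placeholder: removing this would change the offset below */
	struct _fpstate_ia32 fpstate_unused;
	uint32_t extramask[1];
	char retcode[8];
};

int main(void)
{
	/* userspace-visible ABI detail: the offset of extramask[] */
	printf("extramask at offset %zu\n",
	       offsetof(struct sigframe_ia32, extramask));
	return 0;
}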
@@ -121,6 +121,10 @@ typedef unsigned long sigset_t;

#ifndef __ASSEMBLY__

# ifdef __KERNEL__
extern void do_notify_resume(struct pt_regs *, void *, __u32);
# endif /* __KERNEL__ */

#ifdef __i386__
# ifdef __KERNEL__
struct old_sigaction {

@@ -141,8 +145,6 @@ struct k_sigaction {

	struct sigaction sa;
};

extern void do_notify_resume(struct pt_regs *, void *, __u32);

# else /* __KERNEL__ */
/* Here we must cater to libcs that poke about in kernel headers. */

@@ -27,7 +27,7 @@

#else /* CONFIG_X86_32 */
# define SECTION_SIZE_BITS 27 /* matt - 128 is convenient right now */
# define MAX_PHYSADDR_BITS 44
# define MAX_PHYSMEM_BITS 44
# define MAX_PHYSMEM_BITS 44 /* Can be max 45 bits */
#endif

#endif /* CONFIG_SPARSEMEM */

@@ -19,6 +19,13 @@

/* kernel/ioport.c */
asmlinkage long sys_ioperm(unsigned long, unsigned long, int);

/* kernel/ldt.c */
asmlinkage int sys_modify_ldt(int, void __user *, unsigned long);

/* kernel/tls.c */
asmlinkage int sys_set_thread_area(struct user_desc __user *);
asmlinkage int sys_get_thread_area(struct user_desc __user *);

/* X86_32 only */
#ifdef CONFIG_X86_32
/* kernel/process_32.c */

@@ -33,14 +40,11 @@ asmlinkage int sys_sigaction(int, const struct old_sigaction __user *,

			     struct old_sigaction __user *);
asmlinkage int sys_sigaltstack(unsigned long);
asmlinkage unsigned long sys_sigreturn(unsigned long);
asmlinkage int sys_rt_sigreturn(unsigned long);
asmlinkage int sys_rt_sigreturn(struct pt_regs);

/* kernel/ioport.c */
asmlinkage long sys_iopl(unsigned long);

/* kernel/ldt.c */
asmlinkage int sys_modify_ldt(int, void __user *, unsigned long);

/* kernel/sys_i386_32.c */
asmlinkage long sys_mmap2(unsigned long, unsigned long, unsigned long,
			  unsigned long, unsigned long, unsigned long);

@@ -54,10 +58,6 @@ asmlinkage int sys_uname(struct old_utsname __user *);

struct oldold_utsname;
asmlinkage int sys_olduname(struct oldold_utsname __user *);

/* kernel/tls.c */
asmlinkage int sys_set_thread_area(struct user_desc __user *);
asmlinkage int sys_get_thread_area(struct user_desc __user *);

/* kernel/vm86_32.c */
asmlinkage int sys_vm86old(struct pt_regs);
asmlinkage int sys_vm86(struct pt_regs);

@@ -17,12 +17,12 @@

# define AT_VECTOR_SIZE_ARCH 1
#endif

#ifdef CONFIG_X86_32

struct task_struct; /* one of the stranger aspects of C forward declarations */
struct task_struct *__switch_to(struct task_struct *prev,
				struct task_struct *next);

#ifdef CONFIG_X86_32

/*
 * Saving eflags is important. It switches not only IOPL between tasks,
 * it also protects other tasks from NT leaking through sysenter etc.

@@ -314,6 +314,8 @@ extern void free_init_pages(char *what, unsigned long begin, unsigned long end);

void default_idle(void);

void stop_this_cpu(void *dummy);

/*
 * Force strict CPU ordering.
 * And yes, this is required on UP too when we're talking

@@ -24,7 +24,7 @@ struct exec_domain;

struct thread_info {
	struct task_struct *task; /* main task structure */
	struct exec_domain *exec_domain; /* execution domain */
	unsigned long flags; /* low level flags */
	__u32 flags; /* low level flags */
	__u32 status; /* thread synchronous flags */
	__u32 cpu; /* current CPU */
	int preempt_count; /* 0 => preemptable,

@@ -3,6 +3,7 @@

#ifndef __ASSEMBLY__

#ifdef CONFIG_X86_TRAMPOLINE
/*
 * Trampoline 80x86 program as an array.
 */

@@ -13,8 +14,14 @@ extern unsigned char *trampoline_base;

extern unsigned long init_rsp;
extern unsigned long initial_code;

#define TRAMPOLINE_SIZE roundup(trampoline_end - trampoline_data, PAGE_SIZE)
#define TRAMPOLINE_BASE 0x6000

extern unsigned long setup_trampoline(void);
extern void __init reserve_trampoline_memory(void);
#else
static inline void reserve_trampoline_memory(void) {};
#endif /* CONFIG_X86_TRAMPOLINE */

#endif /* __ASSEMBLY__ */
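TRAMPOLINE_SIZE above rounds the trampoline code size up to a whole page so reserve_trampoline_memory() can reserve page-granular memory. The roundup() arithmetic, checked in isolation (values here are illustrative, not the real trampoline size):

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))

int main(void)
{
	unsigned long trampoline_bytes = 1234;	/* assumed code size */

	/* 1234 rounds up to one full 4096-byte page */
	printf("reserve %lu bytes\n", roundup(trampoline_bytes, PAGE_SIZE));
	return 0;
}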
@@ -46,6 +46,10 @@ dotraplinkage void do_coprocessor_segment_overrun(struct pt_regs *, long);

dotraplinkage void do_invalid_TSS(struct pt_regs *, long);
dotraplinkage void do_segment_not_present(struct pt_regs *, long);
dotraplinkage void do_stack_segment(struct pt_regs *, long);
#ifdef CONFIG_X86_64
dotraplinkage void do_double_fault(struct pt_regs *, long);
asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *);
#endif
dotraplinkage void do_general_protection(struct pt_regs *, long);
dotraplinkage void do_page_fault(struct pt_regs *, unsigned long);
dotraplinkage void do_spurious_interrupt_bug(struct pt_regs *, long);

@@ -72,10 +76,13 @@ static inline int get_si_code(unsigned long condition)

extern int panic_on_unrecovered_nmi;
extern int kstack_depth_to_print;

#ifdef CONFIG_X86_32
void math_error(void __user *);
unsigned long patch_espfix_desc(unsigned long, unsigned long);
asmlinkage void math_emulate(long);
#ifdef CONFIG_X86_32
unsigned long patch_espfix_desc(unsigned long, unsigned long);
#else
asmlinkage void smp_thermal_interrupt(void);
asmlinkage void mce_threshold_interrupt(void);
#endif

#endif /* _ASM_X86_TRAPS_H */

@@ -34,8 +34,6 @@ static inline cycles_t get_cycles(void)

static __always_inline cycles_t vget_cycles(void)
{
	cycles_t cycles;

	/*
	 * We only do VDSOs on TSC capable CPUs, so this shouldnt
	 * access boot_cpu_data (which is not VDSO-safe):

@@ -44,11 +42,7 @@ static __always_inline cycles_t vget_cycles(void)

	if (!cpu_has_tsc)
		return 0;
#endif
	rdtsc_barrier();
	cycles = (cycles_t)__native_read_tsc();
	rdtsc_barrier();

	return cycles;
	return (cycles_t)__native_read_tsc();
}

extern void tsc_init(void);

@@ -350,14 +350,14 @@ do { \

#define __put_user_nocheck(x, ptr, size) \
({ \
	long __pu_err; \
	int __pu_err; \
	__put_user_size((x), (ptr), (size), __pu_err, -EFAULT); \
	__pu_err; \
})

#define __get_user_nocheck(x, ptr, size) \
({ \
	long __gu_err; \
	int __gu_err; \
	unsigned long __gu_val; \
	__get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \
	(x) = (__force __typeof__(*(ptr)))__gu_val; \

@@ -32,13 +32,18 @@

enum uv_bios_cmd {
	UV_BIOS_COMMON,
	UV_BIOS_GET_SN_INFO,
	UV_BIOS_FREQ_BASE
	UV_BIOS_FREQ_BASE,
	UV_BIOS_WATCHLIST_ALLOC,
	UV_BIOS_WATCHLIST_FREE,
	UV_BIOS_MEMPROTECT,
	UV_BIOS_GET_PARTITION_ADDR
};

/*
 * Status values returned from a BIOS call.
 */
enum {
	BIOS_STATUS_MORE_PASSES = 1,
	BIOS_STATUS_SUCCESS = 0,
	BIOS_STATUS_UNIMPLEMENTED = -ENOSYS,
	BIOS_STATUS_EINVAL = -EINVAL,

@@ -71,6 +76,21 @@ union partition_info_u {

	};
};

union uv_watchlist_u {
	u64 val;
	struct {
		u64 blade : 16,
		    size : 32,
		    filler : 16;
	};
};

enum uv_memprotect {
	UV_MEMPROT_RESTRICT_ACCESS,
	UV_MEMPROT_ALLOW_AMO,
	UV_MEMPROT_ALLOW_RW
};

/*
 * bios calls have 6 parameters
 */

@@ -80,14 +100,20 @@ extern s64 uv_bios_call_reentrant(enum uv_bios_cmd, u64, u64, u64, u64, u64);

extern s64 uv_bios_get_sn_info(int, int *, long *, long *, long *);
extern s64 uv_bios_freq_base(u64, u64 *);
extern int uv_bios_mq_watchlist_alloc(int, unsigned long, unsigned int,
				      unsigned long *);
extern int uv_bios_mq_watchlist_free(int, int);
extern s64 uv_bios_change_memprotect(u64, u64, enum uv_memprotect);
extern s64 uv_bios_reserved_page_pa(u64, u64 *, u64 *, u64 *);

extern void uv_bios_init(void);

extern unsigned long sn_rtc_cycles_per_second;
extern int uv_type;
extern long sn_partition_id;
extern long uv_coherency_id;
extern long uv_region_size;
#define partition_coherence_id() (uv_coherency_id)
extern long sn_coherency_id;
extern long sn_region_size;
#define partition_coherence_id() (sn_coherency_id)

extern struct kobject *sgi_uv_kobj; /* /sys/firmware/sgi_uv */

@@ -113,25 +113,37 @@

 */
#define UV_MAX_NASID_VALUE (UV_MAX_NUMALINK_NODES * 2)

struct uv_scir_s {
	struct timer_list timer;
	unsigned long offset;
	unsigned long last;
	unsigned long idle_on;
	unsigned long idle_off;
	unsigned char state;
	unsigned char enabled;
};

/*
 * The following defines attributes of the HUB chip. These attributes are
 * frequently referenced and are kept in the per-cpu data areas of each cpu.
 * They are kept together in a struct to minimize cache misses.
 */
struct uv_hub_info_s {
	unsigned long global_mmr_base;
	unsigned long gpa_mask;
	unsigned long gnode_upper;
	unsigned long lowmem_remap_top;
	unsigned long lowmem_remap_base;
	unsigned short pnode;
	unsigned short pnode_mask;
	unsigned short coherency_domain_number;
	unsigned short numa_blade_id;
	unsigned char blade_processor_id;
	unsigned char m_val;
	unsigned char n_val;
	unsigned long global_mmr_base;
	unsigned long gpa_mask;
	unsigned long gnode_upper;
	unsigned long lowmem_remap_top;
	unsigned long lowmem_remap_base;
	unsigned short pnode;
	unsigned short pnode_mask;
	unsigned short coherency_domain_number;
	unsigned short numa_blade_id;
	unsigned char blade_processor_id;
	unsigned char m_val;
	unsigned char n_val;
	struct uv_scir_s scir;
};

DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
#define uv_hub_info (&__get_cpu_var(__uv_hub_info))
#define uv_cpu_hub_info(cpu) (&per_cpu(__uv_hub_info, cpu))

@@ -163,6 +175,30 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);

#define UV_APIC_PNODE_SHIFT 6

/* Local Bus from cpu's perspective */
#define LOCAL_BUS_BASE 0x1c00000
#define LOCAL_BUS_SIZE (4 * 1024 * 1024)

/*
 * System Controller Interface Reg
 *
 * Note there are NO leds on a UV system. This register is only
 * used by the system controller to monitor system-wide operation.
 * There are 64 regs per node. With Nahelem cpus (2 cores per node,
 * 8 cpus per core, 2 threads per cpu) there are 32 cpu threads on
 * a node.
 *
 * The window is located at top of ACPI MMR space
 */
#define SCIR_WINDOW_COUNT 64
#define SCIR_LOCAL_MMR_BASE (LOCAL_BUS_BASE + \
			     LOCAL_BUS_SIZE - \
			     SCIR_WINDOW_COUNT)

#define SCIR_CPU_HEARTBEAT 0x01 /* timer interrupt */
#define SCIR_CPU_ACTIVITY 0x02 /* not idle */
#define SCIR_CPU_HB_INTERVAL (HZ) /* once per second */

/*
 * Macros for converting between kernel virtual addresses, socket local physical
 * addresses, and UV global physical addresses.

@@ -174,7 +210,7 @@ DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);

static inline unsigned long uv_soc_phys_ram_to_gpa(unsigned long paddr)
{
	if (paddr < uv_hub_info->lowmem_remap_top)
		paddr += uv_hub_info->lowmem_remap_base;
		paddr |= uv_hub_info->lowmem_remap_base;
	return paddr | uv_hub_info->gnode_upper;
}

@@ -182,19 +218,7 @@ static inline unsigned long uv_soc_phys_ram_to_gpa(unsigned long paddr)

/* socket virtual --> UV global physical address */
static inline unsigned long uv_gpa(void *v)
{
	return __pa(v) | uv_hub_info->gnode_upper;
}

/* socket virtual --> UV global physical address */
static inline void *uv_vgpa(void *v)
{
	return (void *)uv_gpa(v);
}

/* UV global physical address --> socket virtual */
static inline void *uv_va(unsigned long gpa)
{
	return __va(gpa & uv_hub_info->gpa_mask);
	return uv_soc_phys_ram_to_gpa(__pa(v));
}

/* pnode, offset --> socket virtual */

@@ -277,6 +301,16 @@ static inline void uv_write_local_mmr(unsigned long offset, unsigned long val)

	*uv_local_mmr_address(offset) = val;
}

static inline unsigned char uv_read_local_mmr8(unsigned long offset)
{
	return *((unsigned char *)uv_local_mmr_address(offset));
}

static inline void uv_write_local_mmr8(unsigned long offset, unsigned char val)
{
	*((unsigned char *)uv_local_mmr_address(offset)) = val;
}

/*
 * Structures and definitions for converting between cpu, node, pnode, and blade
 * numbers.

@@ -351,5 +385,20 @@ static inline int uv_num_possible_blades(void)

	return uv_possible_blades;
}

#endif /* _ASM_X86_UV_UV_HUB_H */
/* Update SCIR state */
static inline void uv_set_scir_bits(unsigned char value)
{
	if (uv_hub_info->scir.state != value) {
		uv_hub_info->scir.state = value;
		uv_write_local_mmr8(uv_hub_info->scir.offset, value);
	}
}
static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
{
	if (uv_cpu_hub_info(cpu)->scir.state != value) {
		uv_cpu_hub_info(cpu)->scir.state = value;
		uv_write_local_mmr8(uv_cpu_hub_info(cpu)->scir.offset, value);
	}
}

#endif /* _ASM_X86_UV_UV_HUB_H */
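uv_set_scir_bits() above caches the last value written to the SCIR byte register and only issues the MMIO write when the state actually changes. The same write-on-change pattern in a self-contained sketch (the MMR is modeled as a plain variable; a real register would go through uv_write_local_mmr8()):

#include <stdint.h>
#include <stdio.h>

static uint8_t mmr;		/* stand-in for the SCIR byte register */
static uint8_t cached_state;	/* last value we wrote */

static void set_scir_bits(uint8_t value)
{
	if (cached_state != value) {
		cached_state = value;
		mmr = value;	/* real code: uv_write_local_mmr8() */
		printf("wrote %#x, mmr now %#x\n", value, mmr);
	}
}

int main(void)
{
	set_scir_bits(0x01);	/* writes */
	set_scir_bits(0x01);	/* skipped: unchanged */
	set_scir_bits(0x03);	/* writes */
	return 0;
}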
@@ -0,0 +1,27 @@

/*
 * Copyright (C) 2008, VMware, Inc.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 * NON INFRINGEMENT. See the GNU General Public License for more
 * details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 */
#ifndef ASM_X86__VMWARE_H
#define ASM_X86__VMWARE_H

extern unsigned long vmware_get_tsc_khz(void);
extern int vmware_platform(void);
extern void vmware_set_feature_bits(struct cpuinfo_x86 *c);

#endif

@@ -33,8 +33,14 @@

#ifndef _ASM_X86_XEN_HYPERCALL_H
#define _ASM_X86_XEN_HYPERCALL_H

#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

#include <asm/page.h>
#include <asm/pgtable.h>

#include <xen/interface/xen.h>
#include <xen/interface/sched.h>

@@ -33,39 +33,10 @@

#ifndef _ASM_X86_XEN_HYPERVISOR_H
#define _ASM_X86_XEN_HYPERVISOR_H

#include <linux/types.h>
#include <linux/kernel.h>

#include <xen/interface/xen.h>
#include <xen/interface/version.h>

#include <asm/ptrace.h>
#include <asm/page.h>
#include <asm/desc.h>
#if defined(__i386__)
# ifdef CONFIG_X86_PAE
#  include <asm-generic/pgtable-nopud.h>
# else
#  include <asm-generic/pgtable-nopmd.h>
# endif
#endif
#include <asm/xen/hypercall.h>

/* arch/i386/kernel/setup.c */
extern struct shared_info *HYPERVISOR_shared_info;
extern struct start_info *xen_start_info;

/* arch/i386/mach-xen/evtchn.c */
/* Force a proper event-channel callback from Xen. */
extern void force_evtchn_callback(void);

/* Turn jiffies into Xen system time. */
u64 jiffies_to_st(unsigned long jiffies);


#define MULTI_UVMFLAGS_INDEX 3
#define MULTI_UVMDOMID_INDEX 4

enum xen_domain_type {
	XEN_NATIVE,
	XEN_PV_DOMAIN,

@@ -74,9 +45,15 @@ enum xen_domain_type {

extern enum xen_domain_type xen_domain_type;

#ifdef CONFIG_XEN
#define xen_domain() (xen_domain_type != XEN_NATIVE)
#define xen_pv_domain() (xen_domain_type == XEN_PV_DOMAIN)
#else
#define xen_domain() (0)
#endif

#define xen_pv_domain() (xen_domain() && xen_domain_type == XEN_PV_DOMAIN)
#define xen_hvm_domain() (xen_domain() && xen_domain_type == XEN_HVM_DOMAIN)

#define xen_initial_domain() (xen_pv_domain() && xen_start_info->flags & SIF_INITDOMAIN)
#define xen_hvm_domain() (xen_domain_type == XEN_HVM_DOMAIN)

#endif /* _ASM_X86_XEN_HYPERVISOR_H */

@@ -1,11 +1,16 @@

#ifndef _ASM_X86_XEN_PAGE_H
#define _ASM_X86_XEN_PAGE_H

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/pfn.h>

#include <asm/uaccess.h>
#include <asm/page.h>
#include <asm/pgtable.h>

#include <xen/interface/xen.h>
#include <xen/features.h>

/* Xen machine address */

@@ -12,6 +12,7 @@ CFLAGS_REMOVE_tsc.o = -pg

CFLAGS_REMOVE_rtc.o = -pg
CFLAGS_REMOVE_paravirt-spinlocks.o = -pg
CFLAGS_REMOVE_ftrace.o = -pg
CFLAGS_REMOVE_early_printk.o = -pg
endif

#

@@ -23,9 +24,9 @@ CFLAGS_vsyscall_64.o := $(PROFILING) -g0 $(nostackp)

CFLAGS_hpet.o := $(nostackp)
CFLAGS_tsc.o := $(nostackp)

obj-y := process_$(BITS).o signal_$(BITS).o entry_$(BITS).o
obj-y := process_$(BITS).o signal.o entry_$(BITS).o
obj-y += traps.o irq.o irq_$(BITS).o dumpstack_$(BITS).o
obj-y += time_$(BITS).o ioport.o ldt.o
obj-y += time_$(BITS).o ioport.o ldt.o dumpstack.o
obj-y += setup.o i8259.o irqinit_$(BITS).o setup_percpu.o
obj-$(CONFIG_X86_VISWS) += visws_quirks.o
obj-$(CONFIG_X86_32) += probe_roms_32.o

@@ -105,6 +106,8 @@ microcode-$(CONFIG_MICROCODE_INTEL) += microcode_intel.o

microcode-$(CONFIG_MICROCODE_AMD) += microcode_amd.o
obj-$(CONFIG_MICROCODE) += microcode.o

obj-$(CONFIG_X86_CHECK_BIOS_CORRUPTION) += check.o

###
# 64 bit specific files
ifeq ($(CONFIG_X86_64),y)

@@ -1360,6 +1360,17 @@ static void __init acpi_process_madt(void)

			disable_acpi();
		}
	}

	/*
	 * ACPI supports both logical (e.g. Hyper-Threading) and physical
	 * processors, where MPS only supports physical.
	 */
	if (acpi_lapic && acpi_ioapic)
		printk(KERN_INFO "Using ACPI (MADT) for SMP configuration "
		       "information\n");
	else if (acpi_lapic)
		printk(KERN_INFO "Using ACPI for processor (LAPIC) "
		       "configuration information\n");
#endif
	return;
}

@@ -24,6 +24,7 @@

#include <linux/iommu-helper.h>
#include <asm/proto.h>
#include <asm/iommu.h>
#include <asm/gart.h>
#include <asm/amd_iommu_types.h>
#include <asm/amd_iommu.h>

@@ -28,6 +28,7 @@

#include <asm/amd_iommu_types.h>
#include <asm/amd_iommu.h>
#include <asm/iommu.h>
#include <asm/gart.h>

/*
 * definitions for the ACPI scanning code

@@ -1,8 +1,9 @@

/*
 * Firmware replacement code.
 *
 * Work around broken BIOSes that don't set an aperture or only set the
 * aperture in the AGP bridge.
 * Work around broken BIOSes that don't set an aperture, only set the
 * aperture in the AGP bridge, or set too small aperture.
 *
 * If all fails map the aperture over some low memory. This is cheaper than
 * doing bounce buffering. The memory is lost. This is done at early boot
 * because only the bootmem allocator can allocate 32+MB.

@@ -441,6 +441,7 @@ static void lapic_timer_setup(enum clock_event_mode mode,

		v = apic_read(APIC_LVTT);
		v |= (APIC_LVT_MASKED | LOCAL_TIMER_VECTOR);
		apic_write(APIC_LVTT, v);
		apic_write(APIC_TMICT, 0xffffffff);
		break;
	case CLOCK_EVT_MODE_RESUME:
		/* Nothing to do here */

@@ -559,13 +560,13 @@ static int __init calibrate_by_pmtimer(long deltapm, long *delta)

	} else {
		res = (((u64)deltapm) * mult) >> 22;
		do_div(res, 1000000);
		printk(KERN_WARNING "APIC calibration not consistent "
		pr_warning("APIC calibration not consistent "
			"with PM Timer: %ldms instead of 100ms\n",
			(long)res);
		/* Correct the lapic counter value */
		res = (((u64)(*delta)) * pm_100ms);
		do_div(res, deltapm);
		printk(KERN_INFO "APIC delta adjusted to PM-Timer: "
		pr_info("APIC delta adjusted to PM-Timer: "
			"%lu (%ld)\n", (unsigned long)res, *delta);
		*delta = (long)res;
	}
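The conversions through the rest of this file replace printk(KERN_WARNING ...) and friends with the shorter pr_warning()/pr_info()/pr_err()/pr_debug() helpers. Those are thin wrappers of roughly this shape (a user-space sketch only; the real definitions live in linux/kernel.h):

#include <stdio.h>

/* Stand-ins: each pr_*() just prepends the matching KERN_* level
 * string to a printk() call. */
#define KERN_WARNING "<4>"
#define KERN_INFO "<6>"
#define printk(...) printf(__VA_ARGS__)

#define pr_warning(fmt, ...) printk(KERN_WARNING fmt, ##__VA_ARGS__)
#define pr_info(fmt, ...) printk(KERN_INFO fmt, ##__VA_ARGS__)

int main(void)
{
	pr_warning("APIC calibration not consistent with PM Timer\n");
	pr_info("APIC delta adjusted to PM-Timer\n");
	return 0;
}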
@@ -645,8 +646,7 @@ static int __init calibrate_APIC_clock(void)

	 */
	if (calibration_result < (1000000 / HZ)) {
		local_irq_enable();
		printk(KERN_WARNING
		       "APIC frequency too slow, disabling apic timer\n");
		pr_warning("APIC frequency too slow, disabling apic timer\n");
		return -1;
	}

@@ -672,13 +672,9 @@ static int __init calibrate_APIC_clock(void)

	while (lapic_cal_loops <= LAPIC_CAL_LOOPS)
		cpu_relax();

	local_irq_disable();

	/* Stop the lapic timer */
	lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, levt);

	local_irq_enable();

	/* Jiffies delta */
	deltaj = lapic_cal_j2 - lapic_cal_j1;
	apic_printk(APIC_VERBOSE, "... jiffies delta = %lu\n", deltaj);

@@ -692,8 +688,7 @@ static int __init calibrate_APIC_clock(void)

	local_irq_enable();

	if (levt->features & CLOCK_EVT_FEAT_DUMMY) {
		printk(KERN_WARNING
		       "APIC timer disabled due to verification failure.\n");
		pr_warning("APIC timer disabled due to verification failure.\n");
		return -1;
	}

@@ -714,7 +709,7 @@ void __init setup_boot_APIC_clock(void)

	 * broadcast mechanism is used. On UP systems simply ignore it.
	 */
	if (disable_apic_timer) {
		printk(KERN_INFO "Disabling APIC timer\n");
		pr_info("Disabling APIC timer\n");
		/* No broadcast on UP ! */
		if (num_possible_cpus() > 1) {
			lapic_clockevent.mult = 1;

@@ -741,7 +736,7 @@ void __init setup_boot_APIC_clock(void)

	if (nmi_watchdog != NMI_IO_APIC)
		lapic_clockevent.features &= ~CLOCK_EVT_FEAT_DUMMY;
	else
		printk(KERN_WARNING "APIC timer registered as dummy,"
		pr_warning("APIC timer registered as dummy,"
			" due to nmi_watchdog=%d!\n", nmi_watchdog);

	/* Setup the lapic or request the broadcast */

@@ -773,8 +768,7 @@ static void local_apic_timer_interrupt(void)

	 * spurious.
	 */
	if (!evt->event_handler) {
		printk(KERN_WARNING
		       "Spurious LAPIC timer interrupt on cpu %d\n", cpu);
		pr_warning("Spurious LAPIC timer interrupt on cpu %d\n", cpu);
		/* Switch it off */
		lapic_timer_setup(CLOCK_EVT_MODE_SHUTDOWN, evt);
		return;

@@ -783,11 +777,7 @@ static void local_apic_timer_interrupt(void)

	/*
	 * the NMI deadlock-detector uses this.
	 */
#ifdef CONFIG_X86_64
	add_pda(apic_timer_irqs, 1);
#else
	per_cpu(irq_stat, cpu).apic_timer_irqs++;
#endif
	inc_irq_stat(apic_timer_irqs);

	evt->event_handler(evt);
}
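The hunk above folds an #ifdef pair into a single inc_irq_stat() call: the 64-bit PDA counter and the 32-bit per-cpu irq_stat field get one common spelling. A user-space sketch of the idea, with the per-cpu machinery reduced to a plain struct (names mirror the kernel's, definitions simplified):

#include <stdio.h>

struct irq_stat {
	unsigned int apic_timer_irqs;
	unsigned int irq_spurious_count;
};
static struct irq_stat this_cpu_stat;	/* stand-in for per-cpu data */

/* one macro name, whatever the underlying counter storage is */
#define inc_irq_stat(member) (this_cpu_stat.member++)

int main(void)
{
	inc_irq_stat(apic_timer_irqs);
	inc_irq_stat(apic_timer_irqs);
	printf("apic_timer_irqs = %u\n", this_cpu_stat.apic_timer_irqs);
	return 0;
}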
@ -814,9 +804,7 @@ void smp_apic_timer_interrupt(struct pt_regs *regs)
|
|||
* Besides, if we don't timer interrupts ignore the global
|
||||
* interrupt lock, which is the WrongThing (tm) to do.
|
||||
*/
|
||||
#ifdef CONFIG_X86_64
|
||||
exit_idle();
|
||||
#endif
|
||||
irq_enter();
|
||||
local_apic_timer_interrupt();
|
||||
irq_exit();
|
||||
|
@ -1093,7 +1081,7 @@ static void __cpuinit lapic_setup_esr(void)
|
|||
unsigned int oldvalue, value, maxlvt;
|
||||
|
||||
if (!lapic_is_integrated()) {
|
||||
printk(KERN_INFO "No ESR for 82489DX.\n");
|
||||
pr_info("No ESR for 82489DX.\n");
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1104,7 +1092,7 @@ static void __cpuinit lapic_setup_esr(void)
|
|||
* ESR disabled - we can't do anything useful with the
|
||||
* errors anyway - mbligh
|
||||
*/
|
||||
printk(KERN_INFO "Leaving ESR disabled.\n");
|
||||
pr_info("Leaving ESR disabled.\n");
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1298,7 +1286,7 @@ void check_x2apic(void)
|
|||
rdmsr(MSR_IA32_APICBASE, msr, msr2);
|
||||
|
||||
if (msr & X2APIC_ENABLE) {
|
||||
printk("x2apic enabled by BIOS, switching to x2apic ops\n");
|
||||
pr_info("x2apic enabled by BIOS, switching to x2apic ops\n");
|
||||
x2apic_preenabled = x2apic = 1;
|
||||
apic_ops = &x2apic_ops;
|
||||
}
|
||||
|
@ -1310,7 +1298,7 @@ void enable_x2apic(void)
|
|||
|
||||
rdmsr(MSR_IA32_APICBASE, msr, msr2);
|
||||
if (!(msr & X2APIC_ENABLE)) {
|
||||
printk("Enabling x2apic\n");
|
||||
pr_info("Enabling x2apic\n");
|
||||
wrmsr(MSR_IA32_APICBASE, msr | X2APIC_ENABLE, 0);
|
||||
}
|
||||
}
|
||||
|
@ -1325,9 +1313,8 @@ void __init enable_IR_x2apic(void)
|
|||
return;
|
||||
|
||||
if (!x2apic_preenabled && disable_x2apic) {
|
||||
printk(KERN_INFO
|
||||
"Skipped enabling x2apic and Interrupt-remapping "
|
||||
"because of nox2apic\n");
|
||||
pr_info("Skipped enabling x2apic and Interrupt-remapping "
|
||||
"because of nox2apic\n");
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1335,22 +1322,19 @@ void __init enable_IR_x2apic(void)
|
|||
panic("Bios already enabled x2apic, can't enforce nox2apic");
|
||||
|
||||
if (!x2apic_preenabled && skip_ioapic_setup) {
|
||||
printk(KERN_INFO
|
||||
"Skipped enabling x2apic and Interrupt-remapping "
|
||||
"because of skipping io-apic setup\n");
|
||||
pr_info("Skipped enabling x2apic and Interrupt-remapping "
|
||||
"because of skipping io-apic setup\n");
|
||||
return;
|
||||
}
|
||||
|
||||
ret = dmar_table_init();
|
||||
if (ret) {
|
||||
printk(KERN_INFO
|
||||
"dmar_table_init() failed with %d:\n", ret);
|
||||
pr_info("dmar_table_init() failed with %d:\n", ret);
|
||||
|
||||
if (x2apic_preenabled)
|
||||
panic("x2apic enabled by bios. But IR enabling failed");
|
||||
else
|
||||
printk(KERN_INFO
|
||||
"Not enabling x2apic,Intr-remapping\n");
|
||||
pr_info("Not enabling x2apic,Intr-remapping\n");
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1359,7 +1343,7 @@ void __init enable_IR_x2apic(void)
|
|||
|
||||
ret = save_mask_IO_APIC_setup();
|
||||
if (ret) {
|
||||
printk(KERN_INFO "Saving IO-APIC state failed: %d\n", ret);
|
||||
pr_info("Saving IO-APIC state failed: %d\n", ret);
|
||||
goto end;
|
||||
}
|
||||
|
||||
|
@ -1394,14 +1378,11 @@ end:
|
|||
|
||||
if (!ret) {
|
||||
if (!x2apic_preenabled)
|
||||
printk(KERN_INFO
|
||||
"Enabled x2apic and interrupt-remapping\n");
|
||||
pr_info("Enabled x2apic and interrupt-remapping\n");
|
||||
else
|
||||
printk(KERN_INFO
|
||||
"Enabled Interrupt-remapping\n");
|
||||
pr_info("Enabled Interrupt-remapping\n");
|
||||
} else
|
||||
printk(KERN_ERR
|
||||
"Failed to enable Interrupt-remapping and x2apic\n");
|
||||
pr_err("Failed to enable Interrupt-remapping and x2apic\n");
|
||||
#else
|
||||
if (!cpu_has_x2apic)
|
||||
return;
|
||||
|
@ -1410,8 +1391,8 @@ end:
|
|||
panic("x2apic enabled prior OS handover,"
|
||||
" enable CONFIG_INTR_REMAP");
|
||||
|
||||
printk(KERN_INFO "Enable CONFIG_INTR_REMAP for enabling intr-remapping "
|
||||
" and x2apic\n");
|
||||
pr_info("Enable CONFIG_INTR_REMAP for enabling intr-remapping "
|
||||
" and x2apic\n");
|
||||
#endif
|
||||
|
||||
return;
|
||||
|
@ -1428,7 +1409,7 @@ end:
|
|||
static int __init detect_init_APIC(void)
|
||||
{
|
||||
if (!cpu_has_apic) {
|
||||
printk(KERN_INFO "No local APIC present\n");
|
||||
pr_info("No local APIC present\n");
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
@ -1469,8 +1450,8 @@ static int __init detect_init_APIC(void)
|
|||
* "lapic" specified.
|
||||
*/
|
||||
if (!force_enable_local_apic) {
|
||||
printk(KERN_INFO "Local APIC disabled by BIOS -- "
|
||||
"you can enable it with \"lapic\"\n");
|
||||
pr_info("Local APIC disabled by BIOS -- "
|
||||
"you can enable it with \"lapic\"\n");
|
||||
return -1;
|
||||
}
|
||||
/*
|
||||
|
@ -1480,8 +1461,7 @@ static int __init detect_init_APIC(void)
|
|||
*/
|
||||
rdmsr(MSR_IA32_APICBASE, l, h);
|
||||
if (!(l & MSR_IA32_APICBASE_ENABLE)) {
|
||||
printk(KERN_INFO
|
||||
"Local APIC disabled by BIOS -- reenabling.\n");
|
||||
pr_info("Local APIC disabled by BIOS -- reenabling.\n");
|
||||
l &= ~MSR_IA32_APICBASE_BASE;
|
||||
l |= MSR_IA32_APICBASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
|
||||
wrmsr(MSR_IA32_APICBASE, l, h);
|
||||
|
@ -1494,7 +1474,7 @@ static int __init detect_init_APIC(void)
|
|||
*/
|
||||
features = cpuid_edx(1);
|
||||
if (!(features & (1 << X86_FEATURE_APIC))) {
|
||||
printk(KERN_WARNING "Could not enable APIC!\n");
|
||||
pr_warning("Could not enable APIC!\n");
|
||||
return -1;
|
||||
}
|
||||
set_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
|
||||
|
@ -1505,14 +1485,14 @@ static int __init detect_init_APIC(void)
|
|||
if (l & MSR_IA32_APICBASE_ENABLE)
|
||||
mp_lapic_addr = l & MSR_IA32_APICBASE_BASE;
|
||||
|
||||
printk(KERN_INFO "Found and enabled local APIC!\n");
|
||||
pr_info("Found and enabled local APIC!\n");
|
||||
|
||||
apic_pm_activate();
|
||||
|
||||
return 0;
|
||||
|
||||
no_apic:
|
||||
printk(KERN_INFO "No local APIC present or hardware disabled\n");
|
||||
pr_info("No local APIC present or hardware disabled\n");
|
||||
return -1;
|
||||
}
|
||||
#endif
|
||||
|
@@ -1588,12 +1568,12 @@ int __init APIC_init_uniprocessor(void)
{
#ifdef CONFIG_X86_64
        if (disable_apic) {
-               printk(KERN_INFO "Apic disabled\n");
+               pr_info("Apic disabled\n");
                return -1;
        }
        if (!cpu_has_apic) {
                disable_apic = 1;
-               printk(KERN_INFO "Apic disabled by BIOS\n");
+               pr_info("Apic disabled by BIOS\n");
                return -1;
        }
#else
@@ -1605,8 +1585,8 @@ int __init APIC_init_uniprocessor(void)
         */
        if (!cpu_has_apic &&
            APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
-               printk(KERN_ERR "BIOS bug, local APIC 0x%x not detected!...\n",
-                      boot_cpu_physical_apicid);
+               pr_err("BIOS bug, local APIC 0x%x not detected!...\n",
+                      boot_cpu_physical_apicid);
                clear_cpu_cap(&boot_cpu_data, X86_FEATURE_APIC);
                return -1;
        }
@@ -1682,9 +1662,7 @@ void smp_spurious_interrupt(struct pt_regs *regs)
{
        u32 v;

-#ifdef CONFIG_X86_64
        exit_idle();
-#endif
        irq_enter();
        /*
         * Check if this really is a spurious interrupt and ACK it
@@ -1695,14 +1673,11 @@ void smp_spurious_interrupt(struct pt_regs *regs)
        if (v & (1 << (SPURIOUS_APIC_VECTOR & 0x1f)))
                ack_APIC_irq();

-#ifdef CONFIG_X86_64
-       add_pda(irq_spurious_count, 1);
-#else
+       inc_irq_stat(irq_spurious_count);
+
        /* see sw-dev-man vol 3, chapter 7.4.13.5 */
-       printk(KERN_INFO "spurious APIC interrupt on CPU#%d, "
-              "should never happen.\n", smp_processor_id());
-       __get_cpu_var(irq_stat).irq_spurious_count++;
-#endif
+       pr_info("spurious APIC interrupt on CPU#%d, "
+               "should never happen.\n", smp_processor_id());
        irq_exit();
}
@@ -1713,9 +1688,7 @@ void smp_error_interrupt(struct pt_regs *regs)
{
        u32 v, v1;

-#ifdef CONFIG_X86_64
        exit_idle();
-#endif
        irq_enter();
        /* First tickle the hardware, only then report what went on. -- REW */
        v = apic_read(APIC_ESR);
@@ -1724,17 +1697,18 @@ void smp_error_interrupt(struct pt_regs *regs)
        ack_APIC_irq();
        atomic_inc(&irq_err_count);

-       /* Here is what the APIC error bits mean:
-          0: Send CS error
-          1: Receive CS error
-          2: Send accept error
-          3: Receive accept error
-          4: Reserved
-          5: Send illegal vector
-          6: Received illegal vector
-          7: Illegal register address
-       */
-       printk(KERN_DEBUG "APIC error on CPU%d: %02x(%02x)\n",
+       /*
+        * Here is what the APIC error bits mean:
+        * 0: Send CS error
+        * 1: Receive CS error
+        * 2: Send accept error
+        * 3: Receive accept error
+        * 4: Reserved
+        * 5: Send illegal vector
+        * 6: Received illegal vector
+        * 7: Illegal register address
+        */
+       pr_debug("APIC error on CPU%d: %02x(%02x)\n",
                smp_processor_id(), v , v1);
        irq_exit();
}
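The table in the comment above maps one-for-one onto the low byte of the ESR value that gets printed. As a sketch (not part of this diff), a decoder keyed to those bits could look like:

        /* Illustrative only; the bit names follow the comment in the hunk above. */
        static const char * const apic_esr_bits[8] = {
                "Send CS error", "Receive CS error",
                "Send accept error", "Receive accept error",
                "Reserved", "Send illegal vector",
                "Received illegal vector", "Illegal register address",
        };
        /* for (i = 0; i < 8; i++)
                if (v & (1 << i))
                        pr_debug("APIC: %s\n", apic_esr_bits[i]); */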
@@ -1838,15 +1812,15 @@ void __cpuinit generic_processor_info(int apicid, int version)
         * Validate version
         */
        if (version == 0x0) {
-               printk(KERN_WARNING "BIOS bug, APIC version is 0 for CPU#%d! "
-                      "fixing up to 0x10. (tell your hw vendor)\n",
-                      version);
+               pr_warning("BIOS bug, APIC version is 0 for CPU#%d! "
+                          "fixing up to 0x10. (tell your hw vendor)\n",
+                          version);
                version = 0x10;
        }
        apic_version[apicid] = version;

        if (num_processors >= NR_CPUS) {
-               printk(KERN_WARNING "WARNING: NR_CPUS limit of %i reached."
+               pr_warning("WARNING: NR_CPUS limit of %i reached."
                        " Processor ignored.\n", NR_CPUS);
                return;
        }
@@ -2209,7 +2183,7 @@ static int __init apic_set_verbosity(char *arg)
        else if (strcmp("verbose", arg) == 0)
                apic_verbosity = APIC_VERBOSE;
        else {
-               printk(KERN_WARNING "APIC Verbosity level %s not recognised"
+               pr_warning("APIC Verbosity level %s not recognised"
                        " use apic=verbose or apic=debug\n", arg);
                return -EINVAL;
        }
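All of the apic.c hunks above are the same mechanical conversion: printk(KERN_<LEVEL> ...) becomes pr_<level>(...). As a sketch (not taken from this diff), the helpers in <linux/kernel.h> are thin macros that simply prepend the log level:

        #define pr_info(fmt, arg...)    printk(KERN_INFO fmt, ##arg)
        #define pr_err(fmt, arg...)     printk(KERN_ERR fmt, ##arg)
        #define pr_warning(fmt, arg...) printk(KERN_WARNING fmt, ##arg)

so the logged output is unchanged; only the call sites get shorter.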
@@ -391,11 +391,7 @@ static int power_off;
#else
static int power_off = 1;
#endif
#ifdef CONFIG_APM_REAL_MODE_POWER_OFF
static int realmode_power_off = 1;
#else
static int realmode_power_off;
#endif
#ifdef CONFIG_APM_ALLOW_INTS
static int allow_ints = 1;
#else
@@ -11,7 +11,7 @@
#include <linux/suspend.h>
#include <linux/kbuild.h>
#include <asm/ucontext.h>
-#include "sigframe.h"
+#include <asm/sigframe.h>
#include <asm/pgtable.h>
#include <asm/fixmap.h>
#include <asm/processor.h>
@@ -20,6 +20,8 @@

#include <xen/interface/xen.h>

+#include <asm/sigframe.h>
+
#define __NO_STUBS 1
#undef __SYSCALL
#undef _ASM_X86_UNISTD_64_H
@@ -87,7 +89,7 @@ int main(void)
        BLANK();
#undef ENTRY
        DEFINE(IA32_RT_SIGFRAME_sigcontext,
-              offsetof (struct rt_sigframe32, uc.uc_mcontext));
+              offsetof (struct rt_sigframe_ia32, uc.uc_mcontext));
        BLANK();
#endif
        DEFINE(pbe_address, offsetof(struct pbe, address));
@@ -69,10 +69,10 @@ s64 uv_bios_call_reentrant(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3,

long sn_partition_id;
EXPORT_SYMBOL_GPL(sn_partition_id);
-long uv_coherency_id;
-EXPORT_SYMBOL_GPL(uv_coherency_id);
-long uv_region_size;
-EXPORT_SYMBOL_GPL(uv_region_size);
+long sn_coherency_id;
+EXPORT_SYMBOL_GPL(sn_coherency_id);
+long sn_region_size;
+EXPORT_SYMBOL_GPL(sn_region_size);
int uv_type;
@@ -100,6 +100,56 @@ s64 uv_bios_get_sn_info(int fc, int *uvtype, long *partid, long *coher,
        return ret;
}

+int
+uv_bios_mq_watchlist_alloc(int blade, unsigned long addr, unsigned int mq_size,
+                          unsigned long *intr_mmr_offset)
+{
+       union uv_watchlist_u size_blade;
+       u64 watchlist;
+       s64 ret;
+
+       size_blade.size = mq_size;
+       size_blade.blade = blade;
+
+       /*
+        * bios returns watchlist number or negative error number.
+        */
+       ret = (int)uv_bios_call_irqsave(UV_BIOS_WATCHLIST_ALLOC, addr,
+                       size_blade.val, (u64)intr_mmr_offset,
+                       (u64)&watchlist, 0);
+       if (ret < BIOS_STATUS_SUCCESS)
+               return ret;
+
+       return watchlist;
+}
+EXPORT_SYMBOL_GPL(uv_bios_mq_watchlist_alloc);
+
+int
+uv_bios_mq_watchlist_free(int blade, int watchlist_num)
+{
+       return (int)uv_bios_call_irqsave(UV_BIOS_WATCHLIST_FREE,
+                               blade, watchlist_num, 0, 0, 0);
+}
+EXPORT_SYMBOL_GPL(uv_bios_mq_watchlist_free);
+
+s64
+uv_bios_change_memprotect(u64 paddr, u64 len, enum uv_memprotect perms)
+{
+       return uv_bios_call_irqsave(UV_BIOS_MEMPROTECT, paddr, len,
+                                       perms, 0, 0);
+}
+EXPORT_SYMBOL_GPL(uv_bios_change_memprotect);
+
+s64
+uv_bios_reserved_page_pa(u64 buf, u64 *cookie, u64 *addr, u64 *len)
+{
+       s64 ret;
+
+       ret = uv_bios_call_irqsave(UV_BIOS_GET_PARTITION_ADDR, (u64)cookie,
+                                       (u64)addr, buf, (u64)len, 0);
+       return ret;
+}
+EXPORT_SYMBOL_GPL(uv_bios_reserved_page_pa);
+
s64 uv_bios_freq_base(u64 clock_type, u64 *ticks_per_second)
{
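The four entry points added above wrap UV BIOS calls used for GRU/XPC message queues and partition memory management. A hedged usage sketch (blade, mq_pa and mq_size are hypothetical values, not from this diff):

        unsigned long mmr_offset;
        int wl;

        wl = uv_bios_mq_watchlist_alloc(blade, mq_pa, mq_size, &mmr_offset);
        if (wl < 0)
                return wl;              /* negative BIOS error code */
        /* ... use watchlist entry 'wl' for the message queue ... */
        uv_bios_mq_watchlist_free(blade, wl);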
@@ -0,0 +1,161 @@
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <asm/e820.h>
#include <asm/proto.h>

/*
 * Some BIOSes seem to corrupt the low 64k of memory during events
 * like suspend/resume and unplugging an HDMI cable.  Reserve all
 * remaining free memory in that area and fill it with a distinct
 * pattern.
 */
#define MAX_SCAN_AREAS  8

static int __read_mostly memory_corruption_check = -1;

static unsigned __read_mostly corruption_check_size = 64*1024;
static unsigned __read_mostly corruption_check_period = 60; /* seconds */

static struct e820entry scan_areas[MAX_SCAN_AREAS];
static int num_scan_areas;


static __init int set_corruption_check(char *arg)
{
        char *end;

        memory_corruption_check = simple_strtol(arg, &end, 10);

        return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check", set_corruption_check);

static __init int set_corruption_check_period(char *arg)
{
        char *end;

        corruption_check_period = simple_strtoul(arg, &end, 10);

        return (*end == 0) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_period", set_corruption_check_period);

static __init int set_corruption_check_size(char *arg)
{
        char *end;
        unsigned size;

        size = memparse(arg, &end);

        if (*end == '\0')
                corruption_check_size = size;

        return (size == corruption_check_size) ? 0 : -EINVAL;
}
early_param("memory_corruption_check_size", set_corruption_check_size);


void __init setup_bios_corruption_check(void)
{
        u64 addr = PAGE_SIZE;   /* assume first page is reserved anyway */

        if (memory_corruption_check == -1) {
                memory_corruption_check =
#ifdef CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK
                        1
#else
                        0
#endif
                        ;
        }

        if (corruption_check_size == 0)
                memory_corruption_check = 0;

        if (!memory_corruption_check)
                return;

        corruption_check_size = round_up(corruption_check_size, PAGE_SIZE);

        while (addr < corruption_check_size && num_scan_areas < MAX_SCAN_AREAS) {
                u64 size;
                addr = find_e820_area_size(addr, &size, PAGE_SIZE);

                if (addr == 0)
                        break;

                if ((addr + size) > corruption_check_size)
                        size = corruption_check_size - addr;

                if (size == 0)
                        break;

                e820_update_range(addr, size, E820_RAM, E820_RESERVED);
                scan_areas[num_scan_areas].addr = addr;
                scan_areas[num_scan_areas].size = size;
                num_scan_areas++;

                /* Assume we've already mapped this early memory */
                memset(__va(addr), 0, size);

                addr += size;
        }

        printk(KERN_INFO "Scanning %d areas for low memory corruption\n",
               num_scan_areas);
        update_e820();
}


void check_for_bios_corruption(void)
{
        int i;
        int corruption = 0;

        if (!memory_corruption_check)
                return;

        for (i = 0; i < num_scan_areas; i++) {
                unsigned long *addr = __va(scan_areas[i].addr);
                unsigned long size = scan_areas[i].size;

                for (; size; addr++, size -= sizeof(unsigned long)) {
                        if (!*addr)
                                continue;
                        printk(KERN_ERR "Corrupted low memory at %p (%lx phys) = %08lx\n",
                               addr, __pa(addr), *addr);
                        corruption = 1;
                        *addr = 0;
                }
        }

        WARN_ONCE(corruption, KERN_ERR "Memory corruption detected in low memory\n");
}

static void check_corruption(struct work_struct *dummy);
static DECLARE_DELAYED_WORK(bios_check_work, check_corruption);

static void check_corruption(struct work_struct *dummy)
{
        check_for_bios_corruption();
        schedule_delayed_work(&bios_check_work,
                round_jiffies_relative(corruption_check_period*HZ));
}

static int start_periodic_check_for_corruption(void)
{
        if (!memory_corruption_check || corruption_check_period == 0)
                return 0;

        printk(KERN_INFO "Scanning for low memory corruption every %d seconds\n",
               corruption_check_period);

        /* First time we run the checks right away */
        schedule_delayed_work(&bios_check_work, 0);
        return 0;
}

module_init(start_periodic_check_for_corruption);
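The three early_param() hooks in this new file mean the whole check can be tuned from the kernel command line; for example (values illustrative, not from this diff):

        memory_corruption_check=1 memory_corruption_check_size=64K memory_corruption_check_period=300

reserves and scans the low 64K every 300 seconds. memparse() accepts the usual K/M/G suffixes for the size, and a period of 0 disables the periodic re-scan while keeping the boot-time reservation and initial scan.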
@@ -4,6 +4,7 @@

obj-y                   := intel_cacheinfo.o addon_cpuid_features.o
obj-y                   += proc.o capflags.o powerflags.o common.o
+obj-y                   += vmware.o hypervisor.o

obj-$(CONFIG_X86_32)    += bugs.o cmpxchg.o
obj-$(CONFIG_X86_64)    += bugs_64.o
@@ -120,9 +120,17 @@ void __cpuinit detect_extended_topology(struct cpuinfo_x86 *c)
        c->cpu_core_id = phys_pkg_id(c->initial_apicid, ht_mask_width)
                                                 & core_select_mask;
        c->phys_proc_id = phys_pkg_id(c->initial_apicid, core_plus_mask_width);
+       /*
+        * Reinit the apicid, now that we have extended initial_apicid.
+        */
+       c->apicid = phys_pkg_id(c->initial_apicid, 0);
#else
        c->cpu_core_id = phys_pkg_id(ht_mask_width) & core_select_mask;
        c->phys_proc_id = phys_pkg_id(core_plus_mask_width);
+       /*
+        * Reinit the apicid, now that we have extended initial_apicid.
+        */
+       c->apicid = phys_pkg_id(0);
#endif
        c->x86_max_cores = (core_level_siblings / smp_num_siblings);
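detect_extended_topology() derives these mask widths from CPUID leaf 0xB (the x2APIC topology enumeration leaf). A minimal sketch of the underlying probe (the variable name is illustrative, not from this file):

        unsigned int eax, ebx, ecx, edx;
        unsigned int smt_mask_width;

        cpuid_count(0xb, 0, &eax, &ebx, &ecx, &edx);    /* sub-leaf 0: SMT level */
        smt_mask_width = eax & 0x1f;    /* bits to shift off the x2APIC id */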
@@ -283,9 +283,14 @@ static void __cpuinit early_init_amd(struct cpuinfo_x86 *c)
{
        early_init_amd_mc(c);

-       /* c->x86_power is 8000_0007 edx. Bit 8 is constant TSC */
-       if (c->x86_power & (1<<8))
+       /*
+        * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
+        * with P/T states and does not stop in deep C-states
+        */
+       if (c->x86_power & (1 << 8)) {
                set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+               set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+       }

#ifdef CONFIG_X86_64
        set_cpu_cap(c, X86_FEATURE_SYSCALL32);
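X86_FEATURE_NONSTOP_TSC is the new bit here: CONSTANT_TSC only says the rate does not vary with P/T states, while NONSTOP_TSC additionally says the counter keeps running in deep C-states. A sketch (not from this diff) of how a consumer can test the pair:

        /* Illustrative helper; not part of this patch set. */
        static int tsc_usable_for_timekeeping(void)
        {
                return cpu_has(&boot_cpu_data, X86_FEATURE_CONSTANT_TSC) &&
                       cpu_has(&boot_cpu_data, X86_FEATURE_NONSTOP_TSC);
        }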
@@ -36,6 +36,7 @@
#include <asm/proto.h>
#include <asm/sections.h>
#include <asm/setup.h>
+#include <asm/hypervisor.h>

#include "cpu.h"

@@ -703,6 +704,7 @@ static void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
        detect_ht(c);
#endif

+       init_hypervisor(c);
        /*
         * On SMP, boot_cpu_data holds the common feature set between
         * all CPUs; so make sure that we indicate which features are
@@ -862,7 +864,7 @@ EXPORT_SYMBOL(_cpu_pda);

struct desc_ptr idt_descr = { 256 * 16 - 1, (unsigned long) idt_table };

-char boot_cpu_stack[IRQSTACKSIZE] __page_aligned_bss;
+static char boot_cpu_stack[IRQSTACKSIZE] __page_aligned_bss;

void __cpuinit pda_init(int cpu)
{

@@ -903,8 +905,8 @@ void __cpuinit pda_init(int cpu)
        }
}

-char boot_exception_stacks[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ +
-                          DEBUG_STKSZ] __page_aligned_bss;
+static char boot_exception_stacks[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ +
+                                 DEBUG_STKSZ] __page_aligned_bss;

extern asmlinkage void ignore_sysret(void);
@@ -0,0 +1,58 @@
/*
 * Common hypervisor code
 *
 * Copyright (C) 2008, VMware, Inc.
 * Author : Alok N Kataria <akataria@vmware.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 * NON INFRINGEMENT.  See the GNU General Public License for more
 * details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 */

#include <asm/processor.h>
#include <asm/vmware.h>
#include <asm/hypervisor.h>

static inline void __cpuinit
detect_hypervisor_vendor(struct cpuinfo_x86 *c)
{
        if (vmware_platform()) {
                c->x86_hyper_vendor = X86_HYPER_VENDOR_VMWARE;
        } else {
                c->x86_hyper_vendor = X86_HYPER_VENDOR_NONE;
        }
}

unsigned long get_hypervisor_tsc_freq(void)
{
        if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE)
                return vmware_get_tsc_khz();
        return 0;
}

static inline void __cpuinit
hypervisor_set_feature_bits(struct cpuinfo_x86 *c)
{
        if (boot_cpu_data.x86_hyper_vendor == X86_HYPER_VENDOR_VMWARE) {
                vmware_set_feature_bits(c);
                return;
        }
}

void __cpuinit init_hypervisor(struct cpuinfo_x86 *c)
{
        detect_hypervisor_vendor(c);
        hypervisor_set_feature_bits(c);
}
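Taken together with the common.c hunk above, the detection flow this file introduces is (sketch of the call order, names from this diff):

        /*
         * identify_cpu(c)                         arch/x86/kernel/cpu/common.c
         *   -> init_hypervisor(c)
         *        -> detect_hypervisor_vendor(c)        sets c->x86_hyper_vendor
         *        -> hypervisor_set_feature_bits(c)     e.g. vmware_set_feature_bits()
         */

so every CPU picks up its hypervisor-specific capability bits during identification.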
@@ -41,6 +41,16 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
        if (c->x86 == 15 && c->x86_cache_alignment == 64)
                c->x86_cache_alignment = 128;
#endif

+       /*
+        * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
+        * with P/T states and does not stop in deep C-states
+        */
+       if (c->x86_power & (1 << 8)) {
+               set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+               set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+       }
+
}

#ifdef CONFIG_X86_32
@@ -242,6 +252,13 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)

        intel_workarounds(c);

+       /*
+        * Detect the extended topology information if available. This
+        * will reinitialise the initial_apicid which will be used
+        * in init_intel_cacheinfo()
+        */
+       detect_extended_topology(c);
+
        l2 = init_intel_cacheinfo(c);
        if (c->cpuid_level > 9) {
                unsigned eax = cpuid_eax(10);
@@ -307,13 +324,11 @@ static void __cpuinit init_intel(struct cpuinfo_x86 *c)
                set_cpu_cap(c, X86_FEATURE_P4);
        if (c->x86 == 6)
                set_cpu_cap(c, X86_FEATURE_P3);
+#endif

        if (cpu_has_bts)
                ptrace_bts_init_intel(c);

-#endif
-
-       detect_extended_topology(c);
        if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
                /*
                 * let's use the legacy cpuid vector 0x1 and 0x4 for topology
@@ -644,20 +644,17 @@ static inline ssize_t show_shared_cpu_list(struct _cpuid4_info *leaf, char *buf)
        return show_shared_cpu_map_func(leaf, 1, buf);
}

-static ssize_t show_type(struct _cpuid4_info *this_leaf, char *buf) {
-       switch(this_leaf->eax.split.type) {
-       case CACHE_TYPE_DATA:
+static ssize_t show_type(struct _cpuid4_info *this_leaf, char *buf)
+{
+       switch (this_leaf->eax.split.type) {
+       case CACHE_TYPE_DATA:
                return sprintf(buf, "Data\n");
-               break;
-       case CACHE_TYPE_INST:
+       case CACHE_TYPE_INST:
                return sprintf(buf, "Instruction\n");
-               break;
-       case CACHE_TYPE_UNIFIED:
+       case CACHE_TYPE_UNIFIED:
                return sprintf(buf, "Unified\n");
-               break;
-       default:
+       default:
                return sprintf(buf, "Unknown\n");
-               break;
        }
}
@@ -237,7 +237,7 @@ asmlinkage void mce_threshold_interrupt(void)
                }
        }
out:
-       add_pda(irq_threshold_count, 1);
+       inc_irq_stat(irq_threshold_count);
        irq_exit();
}
@@ -26,7 +26,7 @@ asmlinkage void smp_thermal_interrupt(void)
        if (therm_throt_process(msr_val & 1))
                mce_log_therm_throt_event(smp_processor_id(), msr_val);

-       add_pda(irq_thermal_count, 1);
+       inc_irq_stat(irq_thermal_count);
        irq_exit();
}
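Both mce hunks (and the apic.c spurious-interrupt hunk earlier) converge on inc_irq_stat(), which hides the 32/64-bit difference in per-CPU interrupt accounting behind one name. Roughly, as a sketch of the helper introduced in this merge window:

        #ifdef CONFIG_X86_64
        # define inc_irq_stat(member)   add_pda(member, 1)
        #else
        # define inc_irq_stat(member)   (__get_cpu_var(irq_stat).member++)
        #endif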
@@ -803,6 +803,7 @@ x86_get_mtrr_mem_range(struct res_range *range, int nr_range,
}

static struct res_range __initdata range[RANGE_NUM];
+static int __initdata nr_range;

#ifdef CONFIG_MTRR_SANITIZER
@@ -1206,40 +1207,44 @@ struct mtrr_cleanup_result {
#define PSHIFT          (PAGE_SHIFT - 10)

static struct mtrr_cleanup_result __initdata result[NUM_RESULT];
-static struct res_range __initdata range_new[RANGE_NUM];
static unsigned long __initdata min_loss_pfn[RANGE_NUM];

-static int __init mtrr_cleanup(unsigned address_bits)
+static void __init print_out_mtrr_range_state(void)
{
-       unsigned long extra_remove_base, extra_remove_size;
-       unsigned long base, size, def, dummy;
-       mtrr_type type;
-       int nr_range, nr_range_new;
-       u64 chunk_size, gran_size;
-       unsigned long range_sums, range_sums_new;
-       int index_good;
-       int num_reg_good;
        int i;
+       char start_factor = 'K', size_factor = 'K';
+       unsigned long start_base, size_base;
+       mtrr_type type;
+
+       for (i = 0; i < num_var_ranges; i++) {
+
+               size_base = range_state[i].size_pfn << (PAGE_SHIFT - 10);
+               if (!size_base)
+                       continue;
+
+               size_base = to_size_factor(size_base, &size_factor),
+               start_base = range_state[i].base_pfn << (PAGE_SHIFT - 10);
+               start_base = to_size_factor(start_base, &start_factor),
+               type = range_state[i].type;
+
+               printk(KERN_DEBUG "reg %d, base: %ld%cB, range: %ld%cB, type %s\n",
+                       i, start_base, start_factor,
+                       size_base, size_factor,
+                       (type == MTRR_TYPE_UNCACHABLE) ? "UC" :
+                           ((type == MTRR_TYPE_WRPROT) ? "WP" :
+                            ((type == MTRR_TYPE_WRBACK) ? "WB" : "Other"))
+                       );
+       }
+}
+
+static int __init mtrr_need_cleanup(void)
+{
+       int i;
+       mtrr_type type;
+       unsigned long size;
        /* extra one for all 0 */
        int num[MTRR_NUM_TYPES + 1];

-       if (!is_cpu(INTEL) || enable_mtrr_cleanup < 1)
-               return 0;
-       rdmsr(MTRRdefType_MSR, def, dummy);
-       def &= 0xff;
-       if (def != MTRR_TYPE_UNCACHABLE)
-               return 0;
-
-       /* get it and store it aside */
-       memset(range_state, 0, sizeof(range_state));
-       for (i = 0; i < num_var_ranges; i++) {
-               mtrr_if->get(i, &base, &size, &type);
-               range_state[i].base_pfn = base;
-               range_state[i].size_pfn = size;
-               range_state[i].type = type;
-       }
-
        /* check entries number */
        memset(num, 0, sizeof(num));
        for (i = 0; i < num_var_ranges; i++) {
@@ -1263,29 +1268,133 @@ static int __init mtrr_cleanup(unsigned address_bits)
                        num_var_ranges - num[MTRR_NUM_TYPES])
                return 0;

+       return 1;
+}
+
+static unsigned long __initdata range_sums;
+static void __init mtrr_calc_range_state(u64 chunk_size, u64 gran_size,
+                                        unsigned long extra_remove_base,
+                                        unsigned long extra_remove_size,
+                                        int i)
+{
+       int num_reg;
+       static struct res_range range_new[RANGE_NUM];
+       static int nr_range_new;
+       unsigned long range_sums_new;
+
+       /* convert ranges to var ranges state */
+       num_reg = x86_setup_var_mtrrs(range, nr_range,
+                                               chunk_size, gran_size);
+
+       /* we got new setting in range_state, check it */
+       memset(range_new, 0, sizeof(range_new));
+       nr_range_new = x86_get_mtrr_mem_range(range_new, 0,
+                       extra_remove_base, extra_remove_size);
+       range_sums_new = sum_ranges(range_new, nr_range_new);
+
+       result[i].chunk_sizek = chunk_size >> 10;
+       result[i].gran_sizek = gran_size >> 10;
+       result[i].num_reg = num_reg;
+       if (range_sums < range_sums_new) {
+               result[i].lose_cover_sizek =
+                       (range_sums_new - range_sums) << PSHIFT;
+               result[i].bad = 1;
+       } else
+               result[i].lose_cover_sizek =
+                       (range_sums - range_sums_new) << PSHIFT;
+
+       /* double check it */
+       if (!result[i].bad && !result[i].lose_cover_sizek) {
+               if (nr_range_new != nr_range ||
+                       memcmp(range, range_new, sizeof(range)))
+                               result[i].bad = 1;
+       }
+
+       if (!result[i].bad && (range_sums - range_sums_new <
+                               min_loss_pfn[num_reg])) {
+               min_loss_pfn[num_reg] =
+                       range_sums - range_sums_new;
+       }
+}
+
+static void __init mtrr_print_out_one_result(int i)
+{
+       char gran_factor, chunk_factor, lose_factor;
+       unsigned long gran_base, chunk_base, lose_base;
+
+       gran_base = to_size_factor(result[i].gran_sizek, &gran_factor),
+       chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor),
+       lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor),
+       printk(KERN_INFO "%sgran_size: %ld%c \tchunk_size: %ld%c \t",
+               result[i].bad ? "*BAD*" : " ",
+               gran_base, gran_factor, chunk_base, chunk_factor);
+       printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ld%c\n",
+               result[i].num_reg, result[i].bad ? "-" : "",
+               lose_base, lose_factor);
+}
+
+static int __init mtrr_search_optimal_index(void)
+{
+       int i;
+       int num_reg_good;
+       int index_good;
+
+       if (nr_mtrr_spare_reg >= num_var_ranges)
+               nr_mtrr_spare_reg = num_var_ranges - 1;
+       num_reg_good = -1;
+       for (i = num_var_ranges - nr_mtrr_spare_reg; i > 0; i--) {
+               if (!min_loss_pfn[i])
+                       num_reg_good = i;
+       }
+
+       index_good = -1;
+       if (num_reg_good != -1) {
+               for (i = 0; i < NUM_RESULT; i++) {
+                       if (!result[i].bad &&
+                           result[i].num_reg == num_reg_good &&
+                           !result[i].lose_cover_sizek) {
+                               index_good = i;
+                               break;
+                       }
+               }
+       }
+
+       return index_good;
+}
+
+static int __init mtrr_cleanup(unsigned address_bits)
+{
+       unsigned long extra_remove_base, extra_remove_size;
+       unsigned long base, size, def, dummy;
+       mtrr_type type;
+       u64 chunk_size, gran_size;
+       int index_good;
+       int i;
+
+       if (!is_cpu(INTEL) || enable_mtrr_cleanup < 1)
+               return 0;
+       rdmsr(MTRRdefType_MSR, def, dummy);
+       def &= 0xff;
+       if (def != MTRR_TYPE_UNCACHABLE)
+               return 0;
+
+       /* get it and store it aside */
+       memset(range_state, 0, sizeof(range_state));
+       for (i = 0; i < num_var_ranges; i++) {
+               mtrr_if->get(i, &base, &size, &type);
+               range_state[i].base_pfn = base;
+               range_state[i].size_pfn = size;
+               range_state[i].type = type;
+       }
+
+       /* check if we need handle it and can handle it */
+       if (!mtrr_need_cleanup())
+               return 0;
+
        /* print original var MTRRs at first, for debugging: */
        printk(KERN_DEBUG "original variable MTRRs\n");
-       for (i = 0; i < num_var_ranges; i++) {
-               char start_factor = 'K', size_factor = 'K';
-               unsigned long start_base, size_base;
-
-               size_base = range_state[i].size_pfn << (PAGE_SHIFT - 10);
-               if (!size_base)
-                       continue;
-
-               size_base = to_size_factor(size_base, &size_factor),
-               start_base = range_state[i].base_pfn << (PAGE_SHIFT - 10);
-               start_base = to_size_factor(start_base, &start_factor),
-               type = range_state[i].type;
-
-               printk(KERN_DEBUG "reg %d, base: %ld%cB, range: %ld%cB, type %s\n",
-                       i, start_base, start_factor,
-                       size_base, size_factor,
-                       (type == MTRR_TYPE_UNCACHABLE) ? "UC" :
-                           ((type == MTRR_TYPE_WRPROT) ? "WP" :
-                            ((type == MTRR_TYPE_WRBACK) ? "WB" : "Other"))
-                       );
-       }
+       print_out_mtrr_range_state();

        memset(range, 0, sizeof(range));
        extra_remove_size = 0;
@@ -1309,176 +1418,64 @@ static int __init mtrr_cleanup(unsigned address_bits)
                range_sums >> (20 - PAGE_SHIFT));

        if (mtrr_chunk_size && mtrr_gran_size) {
-               int num_reg;
-               char gran_factor, chunk_factor, lose_factor;
-               unsigned long gran_base, chunk_base, lose_base;
-
-               debug_print++;
-               /* convert ranges to var ranges state */
-               num_reg = x86_setup_var_mtrrs(range, nr_range, mtrr_chunk_size,
-                                             mtrr_gran_size);
-
-               /* we got new setting in range_state, check it */
-               memset(range_new, 0, sizeof(range_new));
-               nr_range_new = x86_get_mtrr_mem_range(range_new, 0,
-                                                     extra_remove_base,
-                                                     extra_remove_size);
-               range_sums_new = sum_ranges(range_new, nr_range_new);
-
                i = 0;
-               result[i].chunk_sizek = mtrr_chunk_size >> 10;
-               result[i].gran_sizek = mtrr_gran_size >> 10;
-               result[i].num_reg = num_reg;
-               if (range_sums < range_sums_new) {
-                       result[i].lose_cover_sizek =
-                               (range_sums_new - range_sums) << PSHIFT;
-                       result[i].bad = 1;
-               } else
-                       result[i].lose_cover_sizek =
-                               (range_sums - range_sums_new) << PSHIFT;
+               mtrr_calc_range_state(mtrr_chunk_size, mtrr_gran_size,
+                                     extra_remove_base, extra_remove_size, i);
+
+               mtrr_print_out_one_result(i);

-               gran_base = to_size_factor(result[i].gran_sizek, &gran_factor),
-               chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor),
-               lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor),
-               printk(KERN_INFO "%sgran_size: %ld%c \tchunk_size: %ld%c \t",
-                       result[i].bad?"*BAD*":" ",
-                       gran_base, gran_factor, chunk_base, chunk_factor);
-               printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ld%c\n",
-                       result[i].num_reg, result[i].bad?"-":"",
-                       lose_base, lose_factor);
                if (!result[i].bad) {
                        set_var_mtrr_all(address_bits);
                        return 1;
                }
                printk(KERN_INFO "invalid mtrr_gran_size or mtrr_chunk_size, "
                       "will find optimal one\n");
-               debug_print--;
-               memset(result, 0, sizeof(result[0]));
        }

        i = 0;
        memset(min_loss_pfn, 0xff, sizeof(min_loss_pfn));
        memset(result, 0, sizeof(result));
        for (gran_size = (1ULL<<16); gran_size < (1ULL<<32); gran_size <<= 1) {
-               char gran_factor;
-               unsigned long gran_base;
-
-               if (debug_print)
-                       gran_base = to_size_factor(gran_size >> 10, &gran_factor);
-
                for (chunk_size = gran_size; chunk_size < (1ULL<<32);
                     chunk_size <<= 1) {
-                       int num_reg;

-                       if (debug_print) {
-                               char chunk_factor;
-                               unsigned long chunk_base;
-
-                               chunk_base = to_size_factor(chunk_size>>10, &chunk_factor),
-                               printk(KERN_INFO "\n");
-                               printk(KERN_INFO "gran_size: %ld%c chunk_size: %ld%c \n",
-                                      gran_base, gran_factor, chunk_base, chunk_factor);
-                       }
                        if (i >= NUM_RESULT)
                                continue;

-                       /* convert ranges to var ranges state */
-                       num_reg = x86_setup_var_mtrrs(range, nr_range,
-                                                       chunk_size, gran_size);
-
-                       /* we got new setting in range_state, check it */
-                       memset(range_new, 0, sizeof(range_new));
-                       nr_range_new = x86_get_mtrr_mem_range(range_new, 0,
-                                       extra_remove_base, extra_remove_size);
-                       range_sums_new = sum_ranges(range_new, nr_range_new);
-
-                       result[i].chunk_sizek = chunk_size >> 10;
-                       result[i].gran_sizek = gran_size >> 10;
-                       result[i].num_reg = num_reg;
-                       if (range_sums < range_sums_new) {
-                               result[i].lose_cover_sizek =
-                                       (range_sums_new - range_sums) << PSHIFT;
-                               result[i].bad = 1;
-                       } else
-                               result[i].lose_cover_sizek =
-                                       (range_sums - range_sums_new) << PSHIFT;
-
-                       /* double check it */
-                       if (!result[i].bad && !result[i].lose_cover_sizek) {
-                               if (nr_range_new != nr_range ||
-                                       memcmp(range, range_new, sizeof(range)))
-                                               result[i].bad = 1;
+                       mtrr_calc_range_state(chunk_size, gran_size,
+                                     extra_remove_base, extra_remove_size, i);
+                       if (debug_print) {
+                               mtrr_print_out_one_result(i);
+                               printk(KERN_INFO "\n");
                        }

-                       if (!result[i].bad && (range_sums - range_sums_new <
-                                              min_loss_pfn[num_reg])) {
-                               min_loss_pfn[num_reg] =
-                                       range_sums - range_sums_new;
-                       }
                        i++;
                }
        }

-       /* print out all */
-       for (i = 0; i < NUM_RESULT; i++) {
-               char gran_factor, chunk_factor, lose_factor;
-               unsigned long gran_base, chunk_base, lose_base;
-
-               gran_base = to_size_factor(result[i].gran_sizek, &gran_factor),
-               chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor),
-               lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor),
-               printk(KERN_INFO "%sgran_size: %ld%c \tchunk_size: %ld%c \t",
-                       result[i].bad?"*BAD*":" ",
-                       gran_base, gran_factor, chunk_base, chunk_factor);
-               printk(KERN_CONT "num_reg: %d \tlose cover RAM: %s%ld%c\n",
-                       result[i].num_reg, result[i].bad?"-":"",
-                       lose_base, lose_factor);
-       }
-
        /* try to find the optimal index */
-       if (nr_mtrr_spare_reg >= num_var_ranges)
-               nr_mtrr_spare_reg = num_var_ranges - 1;
-       num_reg_good = -1;
-       for (i = num_var_ranges - nr_mtrr_spare_reg; i > 0; i--) {
-               if (!min_loss_pfn[i])
-                       num_reg_good = i;
-       }
-
-       index_good = -1;
-       if (num_reg_good != -1) {
-               for (i = 0; i < NUM_RESULT; i++) {
-                       if (!result[i].bad &&
-                           result[i].num_reg == num_reg_good &&
-                           !result[i].lose_cover_sizek) {
-                               index_good = i;
-                               break;
-                       }
-               }
-       }
+       index_good = mtrr_search_optimal_index();

        if (index_good != -1) {
-               char gran_factor, chunk_factor, lose_factor;
-               unsigned long gran_base, chunk_base, lose_base;
-
                printk(KERN_INFO "Found optimal setting for mtrr clean up\n");
                i = index_good;
-               gran_base = to_size_factor(result[i].gran_sizek, &gran_factor),
-               chunk_base = to_size_factor(result[i].chunk_sizek, &chunk_factor),
-               lose_base = to_size_factor(result[i].lose_cover_sizek, &lose_factor),
-               printk(KERN_INFO "gran_size: %ld%c \tchunk_size: %ld%c \t",
-                       gran_base, gran_factor, chunk_base, chunk_factor);
-               printk(KERN_CONT "num_reg: %d \tlose RAM: %ld%c\n",
-                       result[i].num_reg, lose_base, lose_factor);
+               mtrr_print_out_one_result(i);

                /* convert ranges to var ranges state */
                chunk_size = result[i].chunk_sizek;
                chunk_size <<= 10;
                gran_size = result[i].gran_sizek;
                gran_size <<= 10;
                debug_print++;
                x86_setup_var_mtrrs(range, nr_range, chunk_size, gran_size);
                debug_print--;
                set_var_mtrr_all(address_bits);
+               printk(KERN_DEBUG "New variable MTRRs\n");
+               print_out_mtrr_range_state();
                return 1;
+       } else {
+               /* print out all */
+               for (i = 0; i < NUM_RESULT; i++)
+                       mtrr_print_out_one_result(i);
        }

        printk(KERN_INFO "mtrr_cleanup: can not find optimal value\n");
@@ -1562,7 +1559,6 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
{
        unsigned long i, base, size, highest_pfn = 0, def, dummy;
        mtrr_type type;
-       int nr_range;
        u64 total_trim_size;

        /* extra one for all 0 */
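The refactored mtrr_cleanup() above is still driven by the pre-existing boot options: with enable_mtrr_cleanup set, giving both mtrr_chunk_size= and mtrr_gran_size= evaluates exactly one candidate layout via mtrr_calc_range_state(), while omitting them runs the nested power-of-two search and lets mtrr_search_optimal_index() pick the best result. For example (illustrative values):

        enable_mtrr_cleanup mtrr_chunk_size=256M mtrr_gran_size=64M

requests that single combination instead of the full search.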
@@ -0,0 +1,112 @@
/*
 * VMware Detection code.
 *
 * Copyright (C) 2008, VMware, Inc.
 * Author : Alok N Kataria <akataria@vmware.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
 * NON INFRINGEMENT.  See the GNU General Public License for more
 * details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 */

#include <linux/dmi.h>
#include <asm/div64.h>
#include <asm/vmware.h>

#define CPUID_VMWARE_INFO_LEAF  0x40000000
#define VMWARE_HYPERVISOR_MAGIC 0x564D5868
#define VMWARE_HYPERVISOR_PORT  0x5658

#define VMWARE_PORT_CMD_GETVERSION      10
#define VMWARE_PORT_CMD_GETHZ           45

#define VMWARE_PORT(cmd, eax, ebx, ecx, edx)                            \
        __asm__("inl (%%dx)" :                                          \
                        "=a"(eax), "=c"(ecx), "=d"(edx), "=b"(ebx) :    \
                        "0"(VMWARE_HYPERVISOR_MAGIC),                   \
                        "1"(VMWARE_PORT_CMD_##cmd),                     \
                        "2"(VMWARE_HYPERVISOR_PORT), "3"(UINT_MAX) :    \
                        "memory");

static inline int __vmware_platform(void)
{
        uint32_t eax, ebx, ecx, edx;
        VMWARE_PORT(GETVERSION, eax, ebx, ecx, edx);
        return eax != (uint32_t)-1 && ebx == VMWARE_HYPERVISOR_MAGIC;
}

static unsigned long __vmware_get_tsc_khz(void)
{
        uint64_t tsc_hz;
        uint32_t eax, ebx, ecx, edx;

        VMWARE_PORT(GETHZ, eax, ebx, ecx, edx);

        if (ebx == UINT_MAX)
                return 0;
        tsc_hz = eax | (((uint64_t)ebx) << 32);
        do_div(tsc_hz, 1000);
        BUG_ON(tsc_hz >> 32);
        return tsc_hz;
}

/*
 * While checking the dmi string information, just checking the product
 * serial key should be enough, as this will always have a VMware
 * specific string when running under VMware hypervisor.
 */
int vmware_platform(void)
{
        if (cpu_has_hypervisor) {
                unsigned int eax, ebx, ecx, edx;
                char hyper_vendor_id[13];

                cpuid(CPUID_VMWARE_INFO_LEAF, &eax, &ebx, &ecx, &edx);
                memcpy(hyper_vendor_id + 0, &ebx, 4);
                memcpy(hyper_vendor_id + 4, &ecx, 4);
                memcpy(hyper_vendor_id + 8, &edx, 4);
                hyper_vendor_id[12] = '\0';
                if (!strcmp(hyper_vendor_id, "VMwareVMware"))
                        return 1;
        } else if (dmi_available && dmi_name_in_serial("VMware") &&
                   __vmware_platform())
                return 1;

        return 0;
}

unsigned long vmware_get_tsc_khz(void)
{
        BUG_ON(!vmware_platform());
        return __vmware_get_tsc_khz();
}

/*
 * VMware hypervisor takes care of exporting a reliable TSC to the guest.
 * Still, due to timing difference when running on virtual cpus, the TSC can
 * be marked as unstable in some cases. For example, the TSC sync check at
 * bootup can fail due to a marginal offset between vcpus' TSCs (though the
 * TSCs do not drift from each other).  Also, the ACPI PM timer clocksource
 * is not suitable as a watchdog when running on a hypervisor because the
 * kernel may miss a wrap of the counter if the vcpu is descheduled for a
 * long time. To skip these checks at runtime we set these capability bits,
 * so that the kernel could just trust the hypervisor with providing a
 * reliable virtual TSC that is suitable for timekeeping.
 */
void __cpuinit vmware_set_feature_bits(struct cpuinfo_x86 *c)
{
        set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
        set_cpu_cap(c, X86_FEATURE_TSC_RELIABLE);
}
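The VMWARE_PORT() macro above is the classic VMware backdoor protocol: an inl from port 0x5658 with the magic value in EAX. A hedged sketch of how the GETVERSION command is consumed (a hypothetical helper, not part of this file; it assumes the macro and constants defined above):

        static int vmware_backdoor_version(uint32_t *version)
        {
                uint32_t eax, ebx, ecx, edx;

                VMWARE_PORT(GETVERSION, eax, ebx, ecx, edx);
                if (eax == (uint32_t)-1 || ebx != VMWARE_HYPERVISOR_MAGIC)
                        return -1;      /* not running under VMware */
                *version = eax;         /* backdoor protocol version */
                return 0;
        }

which is exactly the check __vmware_platform() performs inline.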