License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation by lawyers
working with the Linux Foundation in some cases.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
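For illustration, the comment forms the script has to emit look roughly like
this (following the kernel's SPDX placement convention; the lines below are a
sketch, not lines taken from the generated patches):

    /* first line of a header (.h) file: */
    /* SPDX-License-Identifier: GPL-2.0 */

    // first line of a C source (.c) file:
    // SPDX-License-Identifier: GPL-2.0

    /* first line of a uapi header: */
    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */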
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 22:07:57 +08:00
|
|
|
/* SPDX-License-Identifier: GPL-2.0 */
|
2016-01-27 05:12:04 +08:00
|
|
|
#ifndef _ASM_X86_CPUFEATURES_H
|
|
|
|
#define _ASM_X86_CPUFEATURES_H
|
|
|
|
|
|
|
|
#ifndef _ASM_X86_REQUIRED_FEATURES_H
|
|
|
|
#include <asm/required-features.h>
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#ifndef _ASM_X86_DISABLED_FEATURES_H
|
|
|
|
#include <asm/disabled-features.h>
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Defines x86 CPU feature bits
|
|
|
|
*/
|
2021-01-23 04:40:46 +08:00
|
|
|
#define NCAPINTS 20 /* N 32-bit words worth of info */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define NBUGINTS 1 /* N 32-bit bug flags */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Note: If the comment begins with a quoted string, that string is used
|
|
|
|
* in /proc/cpuinfo instead of the macro name. If the string is "",
|
|
|
|
* this feature bit is not displayed in /proc/cpuinfo at all.
|
2017-10-31 20:17:23 +08:00
|
|
|
*
|
2017-10-14 05:56:42 +08:00
|
|
|
* When adding new features here that depend on other features,
|
2017-10-31 20:17:23 +08:00
|
|
|
* please update the table in kernel/cpu/cpuid-deps.c as well.
|
2017-10-14 05:56:42 +08:00
|
|
|
*/
|
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Intel-defined CPU features, CPUID level 0x00000001 (EDX), word 0 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_FPU ( 0*32+ 0) /* Onboard FPU */
|
|
|
|
#define X86_FEATURE_VME ( 0*32+ 1) /* Virtual Mode Extensions */
|
|
|
|
#define X86_FEATURE_DE ( 0*32+ 2) /* Debugging Extensions */
|
|
|
|
#define X86_FEATURE_PSE ( 0*32+ 3) /* Page Size Extensions */
|
|
|
|
#define X86_FEATURE_TSC ( 0*32+ 4) /* Time Stamp Counter */
|
|
|
|
#define X86_FEATURE_MSR ( 0*32+ 5) /* Model-Specific Registers */
|
|
|
|
#define X86_FEATURE_PAE ( 0*32+ 6) /* Physical Address Extensions */
|
|
|
|
#define X86_FEATURE_MCE ( 0*32+ 7) /* Machine Check Exception */
|
|
|
|
#define X86_FEATURE_CX8 ( 0*32+ 8) /* CMPXCHG8 instruction */
|
|
|
|
#define X86_FEATURE_APIC ( 0*32+ 9) /* Onboard APIC */
|
|
|
|
#define X86_FEATURE_SEP ( 0*32+11) /* SYSENTER/SYSEXIT */
|
|
|
|
#define X86_FEATURE_MTRR ( 0*32+12) /* Memory Type Range Registers */
|
|
|
|
#define X86_FEATURE_PGE ( 0*32+13) /* Page Global Enable */
|
|
|
|
#define X86_FEATURE_MCA ( 0*32+14) /* Machine Check Architecture */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_CMOV ( 0*32+15) /* CMOV instructions (plus FCMOVcc, FCOMI with FPU) */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_PAT ( 0*32+16) /* Page Attribute Table */
|
|
|
|
#define X86_FEATURE_PSE36 ( 0*32+17) /* 36-bit PSEs */
|
|
|
|
#define X86_FEATURE_PN ( 0*32+18) /* Processor serial number */
|
|
|
|
#define X86_FEATURE_CLFLUSH ( 0*32+19) /* CLFLUSH instruction */
|
|
|
|
#define X86_FEATURE_DS ( 0*32+21) /* "dts" Debug Store */
|
|
|
|
#define X86_FEATURE_ACPI ( 0*32+22) /* ACPI via MSR */
|
|
|
|
#define X86_FEATURE_MMX ( 0*32+23) /* Multimedia Extensions */
|
|
|
|
#define X86_FEATURE_FXSR ( 0*32+24) /* FXSAVE/FXRSTOR, CR4.OSFXSR */
|
|
|
|
#define X86_FEATURE_XMM ( 0*32+25) /* "sse" */
|
|
|
|
#define X86_FEATURE_XMM2 ( 0*32+26) /* "sse2" */
|
|
|
|
#define X86_FEATURE_SELFSNOOP ( 0*32+27) /* "ss" CPU self snoop */
|
|
|
|
#define X86_FEATURE_HT ( 0*32+28) /* Hyper-Threading */
|
|
|
|
#define X86_FEATURE_ACC ( 0*32+29) /* "tm" Automatic clock control */
|
|
|
|
#define X86_FEATURE_IA64 ( 0*32+30) /* IA-64 processor */
|
|
|
|
#define X86_FEATURE_PBE ( 0*32+31) /* Pending Break Enable */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/* AMD-defined CPU features, CPUID level 0x80000001, word 1 */
|
|
|
|
/* Don't duplicate feature flags which are redundant with Intel! */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_SYSCALL ( 1*32+11) /* SYSCALL/SYSRET */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_MP ( 1*32+19) /* MP Capable */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_NX ( 1*32+20) /* Execute Disable */
|
|
|
|
#define X86_FEATURE_MMXEXT ( 1*32+22) /* AMD MMX extensions */
|
|
|
|
#define X86_FEATURE_FXSR_OPT ( 1*32+25) /* FXSAVE/FXRSTOR optimizations */
|
|
|
|
#define X86_FEATURE_GBPAGES ( 1*32+26) /* "pdpe1gb" GB pages */
|
|
|
|
#define X86_FEATURE_RDTSCP ( 1*32+27) /* RDTSCP */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_LM ( 1*32+29) /* Long Mode (x86-64, 64-bit support) */
|
|
|
|
#define X86_FEATURE_3DNOWEXT ( 1*32+30) /* AMD 3DNow extensions */
|
|
|
|
#define X86_FEATURE_3DNOW ( 1*32+31) /* 3DNow */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/* Transmeta-defined CPU features, CPUID level 0x80860001, word 2 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_RECOVERY ( 2*32+ 0) /* CPU in recovery mode */
|
|
|
|
#define X86_FEATURE_LONGRUN ( 2*32+ 1) /* Longrun power control */
|
|
|
|
#define X86_FEATURE_LRTI ( 2*32+ 3) /* LongRun table interface */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/* Other features, Linux-defined mapping, word 3 */
|
|
|
|
/* This range is used for feature bits which conflict or are synthesized */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_CXMMX ( 3*32+ 0) /* Cyrix MMX extensions */
|
|
|
|
#define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */
|
|
|
|
#define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */
|
|
|
|
#define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */
|
2017-10-31 20:17:23 +08:00
|
|
|
|
|
|
|
/* CPU types for specific tunings: */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */
|
x86: Remove dynamic NOP selection
This ensures that a NOP is a NOP and not a random other instruction that
is also a NOP. It allows simplification of dynamic code patching that
wants to verify existing code before writing new instructions (ftrace,
jump_label, static_call, etc.).
Differentiating on NOPs is not a feature.
This pessimises 32bit (DONTCARE) and 32bit on 64bit CPUs (CARELESS).
32bit is not a performance target.
Everything x86_64 since AMD K10 (2007) and Intel IvyBridge (2012) is
fine with using NOPL (as opposed to prefix NOP). And per FEATURE_NOPL
being required for x86_64, all x86_64 CPUs can use NOPL. So stop
caring about NOPs, simplify things and get on with life.
[ The problem seems to be that some uarchs can only decode NOPL on a
single front-end port while others have severe decode penalties for
excessive prefixes. All modern uarchs can handle both, except Atom,
which has prefix penalties. ]
[ Also, much doubt you can actually measure any of this on normal
workloads. ]
After this, FEATURE_NOPL is unused except for required-features for
x86_64. FEATURE_K8 is only used for PTI.
[ bp: Kernel build measurements showed ~0.3s slowdown on Sandybridge
which is hardly a slowdown. Get rid of X86_FEATURE_K7, while at it. ]
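As an illustrative sketch of what is being unified (byte values as in the old
nops.h tables; the array names here are made up):

    static const unsigned char nop4_prefixed[] = { 0x66, 0x66, 0x66, 0x90 }; /* prefixed one-byte NOPs (old K8 flavour) */
    static const unsigned char nop4_nopl[]     = { 0x0f, 0x1f, 0x40, 0x00 }; /* NOPL 0x0(%rax) (P6 flavour, now used unconditionally) */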
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> # bpf
Acked-by: Linus Torvalds <torvalds@linuxfoundation.org>
Link: https://lkml.kernel.org/r/20210312115749.065275711@infradead.org
2021-03-12 19:32:54 +08:00
|
|
|
/* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */
|
|
|
|
#define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */
|
|
|
|
#define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_UP ( 3*32+ 9) /* SMP kernel running on UP */
|
|
|
|
#define X86_FEATURE_ART ( 3*32+10) /* Always running timer (ART) */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_ARCH_PERFMON ( 3*32+11) /* Intel Architectural PerfMon */
|
|
|
|
#define X86_FEATURE_PEBS ( 3*32+12) /* Precise-Event Based Sampling */
|
|
|
|
#define X86_FEATURE_BTS ( 3*32+13) /* Branch Trace Store */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_SYSCALL32 ( 3*32+14) /* "" syscall in IA32 userspace */
|
|
|
|
#define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */
|
|
|
|
#define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */
|
2022-08-11 20:29:52 +08:00
|
|
|
#define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) /* "" LFENCE synchronizes RDTSC */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */
|
|
|
|
#define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */
|
|
|
|
#define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_XTOPOLOGY ( 3*32+22) /* CPU topology enum extensions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_TSC_RELIABLE ( 3*32+23) /* TSC is known to be reliable */
|
|
|
|
#define X86_FEATURE_NONSTOP_TSC ( 3*32+24) /* TSC does not stop in C states */
|
|
|
|
#define X86_FEATURE_CPUID ( 3*32+25) /* CPU has CPUID instruction itself */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_EXTD_APICID ( 3*32+26) /* Extended APICID (8 bits) */
|
|
|
|
#define X86_FEATURE_AMD_DCM ( 3*32+27) /* AMD multi-node processor */
|
|
|
|
#define X86_FEATURE_APERFMPERF ( 3*32+28) /* P-State hardware coordination feedback capability (APERF/MPERF MSRs) */
|
2021-05-14 21:59:20 +08:00
|
|
|
#define X86_FEATURE_RAPL ( 3*32+29) /* AMD/Hygon RAPL interface */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_NONSTOP_TSC_S3 ( 3*32+30) /* TSC doesn't stop in S3 state */
|
|
|
|
#define X86_FEATURE_TSC_KNOWN_FREQ ( 3*32+31) /* TSC has known frequency */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Intel-defined CPU features, CPUID level 0x00000001 (ECX), word 4 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_XMM3 ( 4*32+ 0) /* "pni" SSE-3 */
|
|
|
|
#define X86_FEATURE_PCLMULQDQ ( 4*32+ 1) /* PCLMULQDQ instruction */
|
|
|
|
#define X86_FEATURE_DTES64 ( 4*32+ 2) /* 64-bit Debug Store */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_MWAIT ( 4*32+ 3) /* "monitor" MONITOR/MWAIT support */
|
|
|
|
#define X86_FEATURE_DSCPL ( 4*32+ 4) /* "ds_cpl" CPL-qualified (filtered) Debug Store */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_VMX ( 4*32+ 5) /* Hardware virtualization */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_SMX ( 4*32+ 6) /* Safer Mode eXtensions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_EST ( 4*32+ 7) /* Enhanced SpeedStep */
|
|
|
|
#define X86_FEATURE_TM2 ( 4*32+ 8) /* Thermal Monitor 2 */
|
|
|
|
#define X86_FEATURE_SSSE3 ( 4*32+ 9) /* Supplemental SSE-3 */
|
|
|
|
#define X86_FEATURE_CID ( 4*32+10) /* Context ID */
|
|
|
|
#define X86_FEATURE_SDBG ( 4*32+11) /* Silicon Debug */
|
|
|
|
#define X86_FEATURE_FMA ( 4*32+12) /* Fused multiply-add */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_CX16 ( 4*32+13) /* CMPXCHG16B instruction */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_XTPR ( 4*32+14) /* Send Task Priority Messages */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_PDCM ( 4*32+15) /* Perf/Debug Capabilities MSR */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_PCID ( 4*32+17) /* Process Context Identifiers */
|
|
|
|
#define X86_FEATURE_DCA ( 4*32+18) /* Direct Cache Access */
|
|
|
|
#define X86_FEATURE_XMM4_1 ( 4*32+19) /* "sse4_1" SSE-4.1 */
|
|
|
|
#define X86_FEATURE_XMM4_2 ( 4*32+20) /* "sse4_2" SSE-4.2 */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_X2APIC ( 4*32+21) /* X2APIC */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_MOVBE ( 4*32+22) /* MOVBE instruction */
|
|
|
|
#define X86_FEATURE_POPCNT ( 4*32+23) /* POPCNT instruction */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_TSC_DEADLINE_TIMER ( 4*32+24) /* TSC deadline timer */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AES ( 4*32+25) /* AES instructions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_XSAVE ( 4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV instructions */
|
|
|
|
#define X86_FEATURE_OSXSAVE ( 4*32+27) /* "" XSAVE instruction enabled in the OS */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AVX ( 4*32+28) /* Advanced Vector Extensions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_F16C ( 4*32+29) /* 16-bit FP conversions */
|
|
|
|
#define X86_FEATURE_RDRAND ( 4*32+30) /* RDRAND instruction */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_HYPERVISOR ( 4*32+31) /* Running on a hypervisor */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/* VIA/Cyrix/Centaur-defined CPU features, CPUID level 0xC0000001, word 5 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_XSTORE ( 5*32+ 2) /* "rng" RNG present (xstore) */
|
|
|
|
#define X86_FEATURE_XSTORE_EN ( 5*32+ 3) /* "rng_en" RNG enabled */
|
|
|
|
#define X86_FEATURE_XCRYPT ( 5*32+ 6) /* "ace" on-CPU crypto (xcrypt) */
|
|
|
|
#define X86_FEATURE_XCRYPT_EN ( 5*32+ 7) /* "ace_en" on-CPU crypto enabled */
|
|
|
|
#define X86_FEATURE_ACE2 ( 5*32+ 8) /* Advanced Cryptography Engine v2 */
|
|
|
|
#define X86_FEATURE_ACE2_EN ( 5*32+ 9) /* ACE v2 enabled */
|
|
|
|
#define X86_FEATURE_PHE ( 5*32+10) /* PadLock Hash Engine */
|
|
|
|
#define X86_FEATURE_PHE_EN ( 5*32+11) /* PHE enabled */
|
|
|
|
#define X86_FEATURE_PMM ( 5*32+12) /* PadLock Montgomery Multiplier */
|
|
|
|
#define X86_FEATURE_PMM_EN ( 5*32+13) /* PMM enabled */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* More extended AMD flags: CPUID level 0x80000001, ECX, word 6 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_LAHF_LM ( 6*32+ 0) /* LAHF/SAHF in long mode */
|
|
|
|
#define X86_FEATURE_CMP_LEGACY ( 6*32+ 1) /* If yes HyperThreading not valid */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_SVM ( 6*32+ 2) /* Secure Virtual Machine */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_EXTAPIC ( 6*32+ 3) /* Extended APIC space */
|
|
|
|
#define X86_FEATURE_CR8_LEGACY ( 6*32+ 4) /* CR8 in 32-bit mode */
|
|
|
|
#define X86_FEATURE_ABM ( 6*32+ 5) /* Advanced bit manipulation */
|
|
|
|
#define X86_FEATURE_SSE4A ( 6*32+ 6) /* SSE-4A */
|
|
|
|
#define X86_FEATURE_MISALIGNSSE ( 6*32+ 7) /* Misaligned SSE mode */
|
|
|
|
#define X86_FEATURE_3DNOWPREFETCH ( 6*32+ 8) /* 3DNow prefetch instructions */
|
|
|
|
#define X86_FEATURE_OSVW ( 6*32+ 9) /* OS Visible Workaround */
|
|
|
|
#define X86_FEATURE_IBS ( 6*32+10) /* Instruction Based Sampling */
|
|
|
|
#define X86_FEATURE_XOP ( 6*32+11) /* extended AVX instructions */
|
|
|
|
#define X86_FEATURE_SKINIT ( 6*32+12) /* SKINIT/STGI instructions */
|
|
|
|
#define X86_FEATURE_WDT ( 6*32+13) /* Watchdog timer */
|
|
|
|
#define X86_FEATURE_LWP ( 6*32+15) /* Light Weight Profiling */
|
|
|
|
#define X86_FEATURE_FMA4 ( 6*32+16) /* 4 operands MAC instructions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_TCE ( 6*32+17) /* Translation Cache Extension */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_NODEID_MSR ( 6*32+19) /* NodeId MSR */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_TBM ( 6*32+21) /* Trailing Bit Manipulations */
|
|
|
|
#define X86_FEATURE_TOPOEXT ( 6*32+22) /* Topology extensions CPUID leafs */
|
|
|
|
#define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* Core performance counter extensions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* NB performance counter extensions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_BPEXT ( 6*32+26) /* Data breakpoint extension */
|
|
|
|
#define X86_FEATURE_PTSC ( 6*32+27) /* Performance time-stamp counter */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_PERFCTR_LLC ( 6*32+28) /* Last Level Cache performance counter extensions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_MWAITX ( 6*32+29) /* MWAIT extension (MONITORX/MWAITX instructions) */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Auxiliary flags: Linux defined - For features scattered in various
|
|
|
|
* CPUID levels like 0x6, 0xA etc, word 7.
|
|
|
|
*
|
|
|
|
* Reuse free bits when adding new feature flags!
|
|
|
|
*/
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_RING3MWAIT ( 7*32+ 0) /* Ring 3 MONITOR/MWAIT instructions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_CPUID_FAULT ( 7*32+ 1) /* Intel CPUID faulting */
|
|
|
|
#define X86_FEATURE_CPB ( 7*32+ 2) /* AMD Core Performance Boost */
|
|
|
|
#define X86_FEATURE_EPB ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
|
|
|
|
#define X86_FEATURE_CAT_L3 ( 7*32+ 4) /* Cache Allocation Technology L3 */
|
|
|
|
#define X86_FEATURE_CAT_L2 ( 7*32+ 5) /* Cache Allocation Technology L2 */
|
|
|
|
#define X86_FEATURE_CDP_L3 ( 7*32+ 6) /* Code and Data Prioritization L3 */
|
2017-12-04 22:08:01 +08:00
|
|
|
#define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
|
|
|
|
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
|
2022-04-04 20:11:25 +08:00
|
|
|
#define X86_FEATURE_XCOMPACTED ( 7*32+10) /* "" Use compacted XSTATE (XSAVES or XSAVEC) */
|
2017-12-04 22:07:33 +08:00
|
|
|
#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
|
2022-06-15 05:15:53 +08:00
|
|
|
#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
|
2022-06-15 05:16:15 +08:00
|
|
|
#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
|
2017-12-21 06:57:21 +08:00
|
|
|
#define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
|
2018-05-11 01:13:18 +08:00
|
|
|
#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
|
2018-05-11 02:21:36 +08:00
|
|
|
#define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
|
2018-01-28 00:24:32 +08:00
|
|
|
#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */
|
2022-04-21 13:46:53 +08:00
|
|
|
#define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* AMD Performance Monitoring Version 2 */
|
2018-01-28 00:24:32 +08:00
|
|
|
#define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
|
2018-02-19 18:50:54 +08:00
|
|
|
#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
|
2018-04-26 10:04:21 +08:00
|
|
|
#define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. */
|
2018-05-11 02:21:36 +08:00
|
|
|
#define X86_FEATURE_LS_CFG_SSBD ( 7*32+24) /* "" AMD SSBD implementation via LS_CFG MSR */
|
2018-05-03 00:15:14 +08:00
|
|
|
#define X86_FEATURE_IBRS ( 7*32+25) /* Indirect Branch Restricted Speculation */
|
|
|
|
#define X86_FEATURE_IBPB ( 7*32+26) /* Indirect Branch Prediction Barrier */
|
|
|
|
#define X86_FEATURE_STIBP ( 7*32+27) /* Single Thread Indirect Branch Predictors */
|
2022-06-07 02:03:36 +08:00
|
|
|
#define X86_FEATURE_ZEN (7*32+28) /* "" CPU based on Zen microarchitecture */
|
2018-06-14 06:48:26 +08:00
|
|
|
#define X86_FEATURE_L1TF_PTEINV ( 7*32+29) /* "" L1TF workaround PTE inversion */
|
Merge branch 'l1tf-final' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Merge L1 Terminal Fault fixes from Thomas Gleixner:
"L1TF, aka L1 Terminal Fault, is yet another speculative hardware
engineering trainwreck. It's a hardware vulnerability which allows
unprivileged speculative access to data which is available in the
Level 1 Data Cache when the page table entry controlling the virtual
address, which is used for the access, has the Present bit cleared or
other reserved bits set.
If an instruction accesses a virtual address for which the relevant
page table entry (PTE) has the Present bit cleared or other reserved
bits set, then speculative execution ignores the invalid PTE and loads
the referenced data if it is present in the Level 1 Data Cache, as if
the page referenced by the address bits in the PTE was still present
and accessible.
While this is a purely speculative mechanism and the instruction will
raise a page fault when it is retired eventually, the pure act of
loading the data and making it available to other speculative
instructions opens up the opportunity for side channel attacks to
unprivileged malicious code, similar to the Meltdown attack.
While Meltdown breaks the user space to kernel space protection, L1TF
allows attacking any physical memory address in the system, and the
attack works across all protection domains. It allows an attack on SGX
and also works from inside virtual machines because the speculation
bypasses the extended page table (EPT) protection mechanism.
The associated CVEs are: CVE-2018-3615, CVE-2018-3620, CVE-2018-3646
The mitigations provided by this pull request include:
- Host side protection by inverting the upper address bits of a non
present page table entry so the entry points to uncacheable memory.
- Hypervisor protection by flushing L1 Data Cache on VMENTER.
- SMT (HyperThreading) control knobs, which allow 'turning off' SMT
by offlining the sibling CPU threads. The knobs are available on
the kernel command line and at runtime via sysfs
- Control knobs for the hypervisor mitigation, related to L1D flush
and SMT control. The knobs are available on the kernel command line
and at runtime via sysfs
- Extensive documentation about L1TF including various degrees of
mitigations.
Thanks to all people who have contributed to this in various ways -
patches, review, testing, backporting - and the fruitful, sometimes
heated, but at the end constructive discussions.
There is work in progress to provide other forms of mitigations, which
might be less horrible performance-wise for particular kinds of
workloads, but this is not yet ready for consumption due to their
complexity and limitations"
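The host-side PTE inversion mentioned above can be sketched roughly as follows
(simplified; the constants and the helper name are illustrative, not the actual
kernel symbols):

    #define SKETCH_PTE_PRESENT   0x001ULL
    #define SKETCH_PTE_PFN_MASK  0x000ffffffffff000ULL  /* assumed physical-address field */

    static unsigned long long pte_mknotpresent_inverted(unsigned long long pte)
    {
            pte &= ~SKETCH_PTE_PRESENT;   /* clear the Present bit */
            pte ^= SKETCH_PTE_PFN_MASK;   /* invert the address bits so the stale
                                             entry points at unusable memory */
            return pte;
    }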
* 'l1tf-final' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (75 commits)
x86/microcode: Allow late microcode loading with SMT disabled
tools headers: Synchronise x86 cpufeatures.h for L1TF additions
x86/mm/kmmio: Make the tracer robust against L1TF
x86/mm/pat: Make set_memory_np() L1TF safe
x86/speculation/l1tf: Make pmd/pud_mknotpresent() invert
x86/speculation/l1tf: Invert all not present mappings
cpu/hotplug: Fix SMT supported evaluation
KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry
x86/speculation: Use ARCH_CAPABILITIES to skip L1D flush on vmentry
x86/speculation: Simplify sysfs report of VMX L1TF vulnerability
Documentation/l1tf: Remove Yonah processors from not vulnerable list
x86/KVM/VMX: Don't set l1tf_flush_l1d from vmx_handle_external_intr()
x86/irq: Let interrupt handlers set kvm_cpu_l1tf_flush_l1d
x86: Don't include linux/irq.h from asm/hardirq.h
x86/KVM/VMX: Introduce per-host-cpu analogue of l1tf_flush_l1d
x86/irq: Demote irq_cpustat_t::__softirq_pending to u16
x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()
x86/KVM/VMX: Replace 'vmx_l1d_flush_always' with 'vmx_l1d_flush_cond'
x86/KVM/VMX: Don't set l1tf_flush_l1d to true from vmx_l1d_flush()
cpu/hotplug: detect SMT disabled by BIOS
...
2018-08-15 00:46:06 +08:00
|
|
|
#define X86_FEATURE_IBRS_ENHANCED ( 7*32+30) /* Enhanced IBRS */
|
x86/cpufeatures: Add flag to track whether MSR IA32_FEAT_CTL is configured
Add a new feature flag, X86_FEATURE_MSR_IA32_FEAT_CTL, to track whether
IA32_FEAT_CTL has been initialized. This will allow KVM, and any future
subsystems that depend on IA32_FEAT_CTL, to rely purely on cpufeatures
to query platform support, e.g. allows a future patch to remove KVM's
manual IA32_FEAT_CTL MSR checks.
Various features (on platforms that support IA32_FEAT_CTL) are dependent
on IA32_FEAT_CTL being configured and locked, e.g. VMX and LMCE. The
MSR is always configured during boot, but only if the CPU vendor is
recognized by the kernel. Because CPUID doesn't incorporate the current
IA32_FEAT_CTL value in its reporting of relevant features, it's possible
for a feature to be reported as supported in cpufeatures but not truly
enabled, e.g. if the CPU supports VMX but the kernel doesn't recognize
the CPU.
As a result, without the flag, KVM would see VMX as supported even if
IA32_FEAT_CTL hasn't been initialized, and so would need to manually
read the MSR and check the various enabling bits to avoid taking an
unexpected #GP on VMXON.
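A rough sketch of the manual check this flag makes unnecessary (MSR and bit
layout per the SDM; the helper and constant names below are illustrative):

    #define SKETCH_FEAT_CTL_LOCKED                   (1ULL << 0)
    #define SKETCH_FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX  (1ULL << 2)

    static int vmx_usable(unsigned long long feat_ctl /* value of IA32_FEAT_CTL */)
    {
            /* VMXON takes a #GP unless the MSR is locked with VMX enabled */
            return (feat_ctl & SKETCH_FEAT_CTL_LOCKED) &&
                   (feat_ctl & SKETCH_FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
    }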
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20191221044513.21680-14-sean.j.christopherson@intel.com
2019-12-21 12:45:07 +08:00
|
|
|
#define X86_FEATURE_MSR_IA32_FEAT_CTL ( 7*32+31) /* "" MSR IA32_FEAT_CTL configured */
|
2017-04-08 08:33:52 +08:00
|
|
|
|
2016-01-27 05:12:04 +08:00
|
|
|
/* Virtualization flags: Linux defined, word 8 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
|
|
|
|
#define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
|
|
|
|
#define X86_FEATURE_FLEXPRIORITY ( 8*32+ 2) /* Intel FlexPriority */
|
|
|
|
#define X86_FEATURE_EPT ( 8*32+ 3) /* Intel Extended Page Table */
|
|
|
|
#define X86_FEATURE_VPID ( 8*32+ 4) /* Intel Virtual Processor ID */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_VMMCALL ( 8*32+15) /* Prefer VMMCALL to VMCALL */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_XENPV ( 8*32+16) /* "" Xen paravirtual guest */
|
2018-08-02 02:06:57 +08:00
|
|
|
#define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */
|
2019-08-28 16:03:51 +08:00
|
|
|
#define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */
|
|
|
|
#define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */
|
2021-03-11 22:23:13 +08:00
|
|
|
#define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */
|
|
|
|
#define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */
|
2022-04-06 07:29:10 +08:00
|
|
|
#define X86_FEATURE_TDX_GUEST ( 8*32+22) /* Intel Trust Domain Extensions Guest */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */
|
|
|
|
#define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/
|
|
|
|
#define X86_FEATURE_TSC_ADJUST ( 9*32+ 1) /* TSC adjustment MSR 0x3B */
|
2020-11-13 06:01:14 +08:00
|
|
|
#define X86_FEATURE_SGX ( 9*32+ 2) /* Software Guard Extensions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_BMI1 ( 9*32+ 3) /* 1st group bit manipulation extensions */
|
|
|
|
#define X86_FEATURE_HLE ( 9*32+ 4) /* Hardware Lock Elision */
|
|
|
|
#define X86_FEATURE_AVX2 ( 9*32+ 5) /* AVX2 instructions */
|
2019-06-06 06:02:52 +08:00
|
|
|
#define X86_FEATURE_FDP_EXCPTN_ONLY ( 9*32+ 6) /* "" FPU data pointer updated only on x87 exceptions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_SMEP ( 9*32+ 7) /* Supervisor Mode Execution Protection */
|
|
|
|
#define X86_FEATURE_BMI2 ( 9*32+ 8) /* 2nd group bit manipulation extensions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_ERMS ( 9*32+ 9) /* Enhanced REP MOVSB/STOSB instructions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_INVPCID ( 9*32+10) /* Invalidate Processor Context ID */
|
|
|
|
#define X86_FEATURE_RTM ( 9*32+11) /* Restricted Transactional Memory */
|
|
|
|
#define X86_FEATURE_CQM ( 9*32+12) /* Cache QoS Monitoring */
|
2019-06-06 06:02:52 +08:00
|
|
|
#define X86_FEATURE_ZERO_FCS_FDS ( 9*32+13) /* "" Zero out FPU CS and FPU DS */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_MPX ( 9*32+14) /* Memory Protection Extension */
|
|
|
|
#define X86_FEATURE_RDT_A ( 9*32+15) /* Resource Director Technology Allocation */
|
|
|
|
#define X86_FEATURE_AVX512F ( 9*32+16) /* AVX-512 Foundation */
|
|
|
|
#define X86_FEATURE_AVX512DQ ( 9*32+17) /* AVX-512 DQ (Double/Quad granular) Instructions */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_RDSEED ( 9*32+18) /* RDSEED instruction */
|
|
|
|
#define X86_FEATURE_ADX ( 9*32+19) /* ADCX and ADOX instructions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_SMAP ( 9*32+20) /* Supervisor Mode Access Prevention */
|
|
|
|
#define X86_FEATURE_AVX512IFMA ( 9*32+21) /* AVX-512 Integer Fused Multiply-Add instructions */
|
|
|
|
#define X86_FEATURE_CLFLUSHOPT ( 9*32+23) /* CLFLUSHOPT instruction */
|
|
|
|
#define X86_FEATURE_CLWB ( 9*32+24) /* CLWB instruction */
|
2018-01-16 23:42:25 +08:00
|
|
|
#define X86_FEATURE_INTEL_PT ( 9*32+25) /* Intel Processor Trace */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AVX512PF ( 9*32+26) /* AVX-512 Prefetch */
|
|
|
|
#define X86_FEATURE_AVX512ER ( 9*32+27) /* AVX-512 Exponential and Reciprocal */
|
|
|
|
#define X86_FEATURE_AVX512CD ( 9*32+28) /* AVX-512 Conflict Detection */
|
|
|
|
#define X86_FEATURE_SHA_NI ( 9*32+29) /* SHA1/SHA256 Instruction Extensions */
|
|
|
|
#define X86_FEATURE_AVX512BW ( 9*32+30) /* AVX-512 BW (Byte/Word granular) Instructions */
|
|
|
|
#define X86_FEATURE_AVX512VL ( 9*32+31) /* AVX-512 VL (128/256 Vector Length) Extensions */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Extended state features, CPUID level 0x0000000d:1 (EAX), word 10 */
|
|
|
|
#define X86_FEATURE_XSAVEOPT (10*32+ 0) /* XSAVEOPT instruction */
|
|
|
|
#define X86_FEATURE_XSAVEC (10*32+ 1) /* XSAVEC instruction */
|
|
|
|
#define X86_FEATURE_XGETBV1 (10*32+ 2) /* XGETBV with ECX = 1 instruction */
|
|
|
|
#define X86_FEATURE_XSAVES (10*32+ 3) /* XSAVES/XRSTORS instructions */
|
2021-10-22 06:55:16 +08:00
|
|
|
#define X86_FEATURE_XFD (10*32+ 4) /* "" eXtended Feature Disabling */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2019-06-20 00:51:09 +08:00
|
|
|
/*
|
|
|
|
* Extended auxiliary flags: Linux defined - for features scattered in various
|
|
|
|
* CPUID levels like 0xf, etc.
|
|
|
|
*
|
|
|
|
* Reuse free bits when adding new feature flags!
|
|
|
|
*/
|
|
|
|
#define X86_FEATURE_CQM_LLC (11*32+ 0) /* LLC QoS if 1 */
|
|
|
|
#define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* LLC occupancy monitoring */
|
|
|
|
#define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* LLC Total MBM monitoring */
|
|
|
|
#define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */
|
x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations
Spectre v1 isn't only about array bounds checks. It can affect any
conditional checks. The kernel entry code interrupt, exception, and NMI
handlers all have conditional swapgs checks. Those may be problematic in
the context of Spectre v1, as kernel code can speculatively run with a user
GS.
For example:
if (coming from user space)
swapgs
mov %gs:<percpu_offset>, %reg
mov (%reg), %reg1
When coming from user space, the CPU can speculatively skip the swapgs, and
then do a speculative percpu load using the user GS value. So the user can
speculatively force a read of any kernel value. If a gadget exists which
uses the percpu value as an address in another load/store, then the
contents of the kernel value may become visible via an L1 side channel
attack.
A similar attack exists when coming from kernel space. The CPU can
speculatively do the swapgs, causing the user GS to get used for the rest
of the speculative window.
The mitigation is similar to a traditional Spectre v1 mitigation, except:
a) index masking isn't possible, because the index (percpu offset)
isn't user-controlled; and
b) an lfence is needed in both the "from user" swapgs path and the
"from kernel" non-swapgs path (because of the two attacks described
above).
The user entry swapgs paths already have SWITCH_TO_KERNEL_CR3, which has a
CR3 write when PTI is enabled. Since CR3 writes are serializing, the
lfences can be skipped in those cases.
On the other hand, the kernel entry swapgs paths don't depend on PTI.
To avoid unnecessary lfences for the user entry case, create two separate
features for alternative patching:
X86_FEATURE_FENCE_SWAPGS_USER
X86_FEATURE_FENCE_SWAPGS_KERNEL
Use these features in entry code to patch in lfences where needed.
The features aren't enabled yet, so there's no functional change.
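Conceptually (the real code is asm in the entry macros, patched via
alternatives; the C below only illustrates where the fences land):

    static inline void fence_swapgs_user_entry(void)
    {
            /* user->kernel path, after SWAPGS, unless a serializing CR3
             * write (PTI) follows anyway: X86_FEATURE_FENCE_SWAPGS_USER */
            asm volatile("lfence" ::: "memory");
    }

    static inline void fence_swapgs_kernel_entry(void)
    {
            /* kernel->kernel path, where no SWAPGS is done but the CPU may
             * have speculated one: X86_FEATURE_FENCE_SWAPGS_KERNEL */
            asm volatile("lfence" ::: "memory");
    }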
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
2019-07-09 00:52:25 +08:00
|
|
|
#define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
|
|
|
|
#define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
|
2020-01-27 04:05:35 +08:00
|
|
|
#define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */
|
2020-08-25 03:11:20 +08:00
|
|
|
#define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */
|
2021-03-19 15:22:18 +08:00
|
|
|
#define X86_FEATURE_SGX1 (11*32+ 8) /* "" Basic SGX */
|
|
|
|
#define X86_FEATURE_SGX2 (11*32+ 9) /* "" SGX Enclave Dynamic Memory Management (EDMM) */
|
2022-06-15 05:16:02 +08:00
|
|
|
#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */
|
2022-07-09 04:36:09 +08:00
|
|
|
#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */
|
2022-06-15 05:15:33 +08:00
|
|
|
#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
|
|
|
|
#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */
|
2022-06-15 05:15:37 +08:00
|
|
|
#define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */
|
2022-06-15 05:15:48 +08:00
|
|
|
#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */
|
2022-07-18 19:41:37 +08:00
|
|
|
#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */
|
x86/speculation: Add RSB VM Exit protections
tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.
== Background ==
Indirect Branch Restricted Speculation (IBRS) was designed to help
mitigate Branch Target Injection and Speculative Store Bypass, i.e.
Spectre, attacks. IBRS prevents software run in less privileged modes
from affecting branch prediction in more privileged modes. IBRS requires
the MSR to be written on every privilege level change.
To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn
it on once instead of writing the MSR on every privilege level change.
When eIBRS is enabled, more privileged modes should be protected from
less privileged modes, including protecting VMMs from guests.
== Problem ==
Here's a simplification of how guests are run on Linux' KVM:
void run_kvm_guest(void)
{
// Prepare to run guest
VMRESUME();
// Clean up after guest runs
}
The execution flow for that would look something like this to the
processor:
1. Host-side: call run_kvm_guest()
2. Host-side: VMRESUME
3. Guest runs, does "CALL guest_function"
4. VM exit, host runs again
5. Host might make some "cleanup" function calls
6. Host-side: RET from run_kvm_guest()
Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:
* on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
touched and Linux has to do a 32-entry stuffing.
* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
IBRS=1 shortly after VM exit, has a documented side effect of flushing
the RSB except in this PBRSB situation where the software needs to stuff
the last RSB entry "by hand".
IOW, with eIBRS supported, host RET instructions should no longer be
influenced by guest behavior after the host retires a single CALL
instruction.
However, if the RET instructions are "unbalanced" with CALLs after a VM
exit as is the RET in #6, it might speculatively use the address for the
instruction after the CALL in #3 as an RSB prediction. This is a problem
since the (untrusted) guest controls this address.
Balanced CALL/RET instruction pairs such as in step #5 are not affected.
== Solution ==
The PBRSB issue affects a wide variety of Intel processors which
support eIBRS. But not all of them need mitigation. Today,
X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates
PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e.,
eIBRS systems which enable legacy IBRS explicitly.
However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT
and most of them need a new mitigation.
Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE
which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT.
The lighter-weight mitigation performs a CALL instruction which is
immediately followed by a speculative execution barrier (INT3). This
steers speculative execution to the barrier -- just like a retpoline
-- which ensures that speculation can never reach an unbalanced RET.
Then, ensure this CALL is retired before continuing execution with an
LFENCE.
In other words, the window of exposure is opened at VM exit where RET
behavior is troublesome. While the window is open, force RSB predictions
sampling for RET targets to a dead end at the INT3. Close the window
with the LFENCE.
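The lighter-weight sequence is roughly the following (illustrative inline-asm
sketch; the real implementation lives in the nospec asm macros):

    static inline void pbrsb_single_fill(void)
    {
            asm volatile("call 1f\n\t"          /* plant one benign RSB entry */
                         "int3\n\t"             /* speculation trap: a RET consuming
                                                   that entry dead-ends here */
                         "1: add $8, %%rsp\n\t" /* drop the return address pushed
                                                   by the call */
                         "lfence"               /* ensure the CALL retires before
                                                   continuing */
                         ::: "memory");
    }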
There is a subset of eIBRS systems which are not vulnerable to PBRSB.
Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB.
Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO.
[ bp: Massage, incorporate review comments from Andy Cooper. ]
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Co-developed-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
2022-08-03 06:47:01 +08:00
|
|
|
#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */
|
KVM/VMX: Allow exposing EDECCSSA user leaf function to KVM guest
The new Asynchronous Exit (AEX) notification mechanism (AEX-notify)
allows one enclave to receive a notification in the ERESUME after the
enclave exit due to an AEX. EDECCSSA is a new SGX user leaf function
(ENCLU[EDECCSSA]) to facilitate the AEX notification handling. The new
EDECCSSA is enumerated via CPUID(EAX=0x12,ECX=0x0):EAX[11].
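For illustration, detecting that bit looks roughly like this (using the
compiler's cpuid intrinsic; kernel code of course uses its own CPUID helpers):

    #include <cpuid.h>
    #include <stdbool.h>

    static bool has_sgx_edeccssa(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid_count(0x12, 0, &eax, &ebx, &ecx, &edx))
                    return false;
            return eax & (1u << 11);    /* EDECCSSA enumerated */
    }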
Besides allowing reporting the new AEX-notify attribute to KVM guests,
also allow reporting the new EDECCSSA user leaf function to KVM guests
so the guest can fully utilize the AEX-notify mechanism.
Similar to existing X86_FEATURE_SGX1 and X86_FEATURE_SGX2, introduce a
new scattered X86_FEATURE_SGX_EDECCSSA bit for the new EDECCSSA, and
report it in KVM's supported CPUIDs.
Note, no additional KVM enabling is required to allow the guest to use
EDECCSSA. It's impossible to trap ENCLU (without completely preventing
the guest from using SGX). Advertise EDECCSSA as supported purely so
that userspace doesn't need to special case EDECCSSA, i.e. doesn't need
to manually check host CPUID.
The inability to trap ENCLU also means that KVM can't prevent the guest
from using EDECCSSA, but that virtualization hole is benign as far as
KVM is concerned. EDECCSSA is simply a fancy way to modify internal
enclave state.
More background about how do AEX-notify and EDECCSSA work:
SGX maintains a Current State Save Area Frame (CSSA) for each enclave
thread. When AEX happens, the enclave thread context is saved to the
CSSA and the CSSA is increased by 1. A normal ERESUME which doesn't
deliver an AEX notification restores the saved thread context from the
previously saved SSA and decreases the CSSA. If AEX-notify is enabled
for one enclave, the ERESUME acts differently. Instead of restoring the
saved thread context and decreasing the CSSA, it acts like EENTER which
doesn't decrease the CSSA but establishes a clean slate thread context
using the CSSA for the enclave to handle the notification. After some
handling, the enclave must discard the "newly established" SSA and switch
back to the previously saved SSA (upon AEX). Otherwise, the enclave
will run out of SSA space upon further AEXs and eventually fail to run.
To solve this problem, the new EDECCSSA essentially decreases the CSSA.
It can be used by the enclave notification handler to switch back to the
previous saved SSA when needed, i.e. after it handles the notification.
Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lore.kernel.org/all/20221101022422.858944-1-kai.huang%40intel.com
2022-11-01 10:24:22 +08:00
|
|
|
#define X86_FEATURE_SGX_EDECCSSA (11*32+18) /* "" SGX EDECCSSA user leaf function */
|
2022-11-06 16:55:56 +08:00
|
|
|
#define X86_FEATURE_CALL_DEPTH (11*32+19) /* "" Call depth tracking for RSB stuffing */
|
2022-11-16 03:17:05 +08:00
|
|
|
#define X86_FEATURE_MSR_TSX_CTRL (11*32+20) /* "" MSR IA32_TSX_CTRL (Intel) implemented */
|
|
|
|
|
2019-06-18 02:00:16 +08:00
|
|
|
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
|
2021-01-05 08:49:08 +08:00
|
|
|
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
|
2019-06-18 02:00:16 +08:00
|
|
|
#define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
|
2022-11-25 20:58:40 +08:00
|
|
|
#define X86_FEATURE_CMPCCXADD (12*32+ 7) /* "" CMPccXADD instructions */
|
2022-09-02 05:18:06 +08:00
|
|
|
#define X86_FEATURE_FZRM (12*32+10) /* "" Fast zero-length REP MOVSB */
|
|
|
|
#define X86_FEATURE_FSRS (12*32+11) /* "" Fast short REP STOSB */
|
|
|
|
#define X86_FEATURE_FSRC (12*32+12) /* "" Fast short REP {CMPSB,SCASB} */
|
2022-11-25 20:58:41 +08:00
|
|
|
#define X86_FEATURE_AMX_FP16 (12*32+21) /* "" AMX fp16 Support */
|
2022-11-25 20:58:42 +08:00
|
|
|
#define X86_FEATURE_AVX_IFMA (12*32+23) /* "" Support for VPMADD52[H,L]UQ */
|
2019-06-18 02:00:16 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
|
|
|
|
#define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
|
|
|
|
#define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
|
2017-11-29 05:01:06 +08:00
|
|
|
#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
|
2019-10-08 04:48:39 +08:00
|
|
|
#define X86_FEATURE_RDPRU (13*32+ 4) /* Read processor register at user level */
|
2018-12-19 21:51:43 +08:00
|
|
|
#define X86_FEATURE_WBNOINVD (13*32+ 9) /* WBNOINVD instruction */
|
2018-05-03 00:15:14 +08:00
|
|
|
#define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */
|
|
|
|
#define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */
|
|
|
|
#define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */
|
2018-12-14 07:03:54 +08:00
|
|
|
#define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
|
2020-03-22 03:38:00 +08:00
|
|
|
#define X86_FEATURE_AMD_PPIN (13*32+23) /* Protected Processor Inventory Number */
|
2018-06-01 22:59:20 +08:00
|
|
|
#define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
|
2018-05-17 23:09:18 +08:00
|
|
|
#define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
|
2018-06-01 22:59:19 +08:00
|
|
|
#define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
|
2021-12-24 09:04:55 +08:00
|
|
|
#define X86_FEATURE_CPPC (13*32+27) /* Collaborative Processor Performance Control */
|
2022-06-24 21:41:21 +08:00
|
|
|
#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */
|
2022-03-23 06:15:06 +08:00
|
|
|
#define X86_FEATURE_BRS (13*32+31) /* Branch Sampling available */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
|
|
|
|
#define X86_FEATURE_IDA (14*32+ 1) /* Intel Dynamic Acceleration */
|
|
|
|
#define X86_FEATURE_ARAT (14*32+ 2) /* Always Running APIC Timer */
|
|
|
|
#define X86_FEATURE_PLN (14*32+ 4) /* Intel Power Limit Notification */
|
|
|
|
#define X86_FEATURE_PTS (14*32+ 6) /* Intel Package Thermal Status */
|
|
|
|
#define X86_FEATURE_HWP (14*32+ 7) /* Intel Hardware P-states */
|
|
|
|
#define X86_FEATURE_HWP_NOTIFY (14*32+ 8) /* HWP Notification */
|
|
|
|
#define X86_FEATURE_HWP_ACT_WINDOW (14*32+ 9) /* HWP Activity Window */
|
|
|
|
#define X86_FEATURE_HWP_EPP (14*32+10) /* HWP Energy Perf. Preference */
|
|
|
|
#define X86_FEATURE_HWP_PKG_REQ (14*32+11) /* HWP Package Level Request */
|
2022-01-28 03:34:49 +08:00
|
|
|
#define X86_FEATURE_HFI (14*32+19) /* Hardware Feedback Interface */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* AMD SVM Feature Identification, CPUID level 0x8000000a (EDX), word 15 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_NPT (15*32+ 0) /* Nested Page Table support */
|
|
|
|
#define X86_FEATURE_LBRV (15*32+ 1) /* LBR Virtualization support */
|
|
|
|
#define X86_FEATURE_SVML (15*32+ 2) /* "svm_lock" SVM locking MSR */
|
|
|
|
#define X86_FEATURE_NRIPS (15*32+ 3) /* "nrip_save" SVM next_rip save */
|
|
|
|
#define X86_FEATURE_TSCRATEMSR (15*32+ 4) /* "tsc_scale" TSC scaling support */
|
|
|
|
#define X86_FEATURE_VMCBCLEAN (15*32+ 5) /* "vmcb_clean" VMCB clean bits support */
|
|
|
|
#define X86_FEATURE_FLUSHBYASID (15*32+ 6) /* flush-by-ASID support */
|
|
|
|
#define X86_FEATURE_DECODEASSISTS (15*32+ 7) /* Decode Assists support */
|
|
|
|
#define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */
|
|
|
|
#define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */
|
|
|
|
#define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */
|
|
|
|
#define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */
|
|
|
|
#define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */
|
2022-05-19 18:26:53 +08:00
|
|
|
#define X86_FEATURE_X2AVIC (15*32+18) /* Virtual x2apic */
|
2021-01-29 08:43:22 +08:00
|
|
|
#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */
|
2021-01-26 16:18:30 +08:00
|
|
|
#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */
|
2016-01-27 05:12:04 +08:00
|
|
|
|
2017-10-31 20:17:23 +08:00
|
|
|
/* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
|
2017-11-06 10:27:51 +08:00
|
|
|
#define X86_FEATURE_UMIP (16*32+ 2) /* User Mode Instruction Protection */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_PKU (16*32+ 3) /* Protection Keys for Userspace */
|
|
|
|
#define X86_FEATURE_OSPKE (16*32+ 4) /* OS Protection Keys Enable */
|
2019-06-20 09:33:54 +08:00
|
|
|
#define X86_FEATURE_WAITPKG (16*32+ 5) /* UMONITOR/UMWAIT/TPAUSE Instructions */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AVX512_VBMI2 (16*32+ 6) /* Additional AVX512 Vector Bit Manipulation Instructions */
|
|
|
|
#define X86_FEATURE_GFNI (16*32+ 8) /* Galois Field New Instructions */
|
|
|
|
#define X86_FEATURE_VAES (16*32+ 9) /* Vector AES */
|
2017-10-31 20:17:23 +08:00
|
|
|
#define X86_FEATURE_VPCLMULQDQ (16*32+10) /* Carry-Less Multiplication Double Quadword */
|
|
|
|
#define X86_FEATURE_AVX512_VNNI (16*32+11) /* Vector Neural Network Instructions */
|
|
|
|
#define X86_FEATURE_AVX512_BITALG (16*32+12) /* Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */
|
2018-03-06 00:25:49 +08:00
|
|
|
#define X86_FEATURE_TME (16*32+13) /* Intel Total Memory Encryption */
|
2017-10-31 20:17:22 +08:00
|
|
|
#define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */
|
|
|
|
#define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */
|
|
|
|
#define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */
|
2021-03-22 21:53:23 +08:00
|
|
|
#define X86_FEATURE_BUS_LOCK_DETECT (16*32+24) /* Bus Lock detect */
|
2018-04-24 02:29:22 +08:00
|
|
|
#define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */
|
2018-10-25 05:57:16 +08:00
|
|
|
#define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */
|
2018-10-25 05:57:17 +08:00
|
|
|
#define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */
|
2020-09-16 00:30:08 +08:00
|
|
|
#define X86_FEATURE_ENQCMD (16*32+29) /* ENQCMD and ENQCMDS instructions */
|
x86/{cpufeatures,msr}: Add Intel SGX Launch Control hardware bits
The SGX Launch Control hardware helps restrict which enclaves the
hardware will run. Launch control is intended to restrict what software
can run with enclave protections, which helps protect the overall system
from bad enclaves.
For the kernel's purposes, there are effectively two modes in which the
launch control hardware can operate: rigid and flexible. In its rigid
mode, an entity other than the kernel has ultimate authority over which
enclaves can be run (firmware, Intel, etc...). In its flexible mode, the
kernel has ultimate authority over which enclaves can run.
Enable X86_FEATURE_SGX_LC to enumerate when the CPU supports SGX Launch
Control in general.
Add MSR_IA32_SGXLEPUBKEYHASH{0, 1, 2, 3}, which when combined contain a
SHA256 hash of a 3072-bit RSA public key. The hardware allows SGX enclaves
signed with this public key to initialize and run [*]. Enclaves not signed
with this key cannot initialize and run.
Add FEAT_CTL_SGX_LC_ENABLED, which informs whether the SGXLEPUBKEYHASH MSRs
can be written by the kernel.
If the MSRs do not exist or are read-only, the launch control hardware is
operating in rigid mode. Linux does not and will not support creating
enclaves when hardware is configured in rigid mode because it takes away
the authority for launch decisions from the kernel. Note, this does not
preclude KVM from virtualizing/exposing SGX to a KVM guest when launch
control hardware is operating in rigid mode.
[*] Intel SDM: 38.1.4 Intel SGX Launch Control Configuration
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Co-developed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Jethro Beekman <jethro@fortanix.com>
Link: https://lkml.kernel.org/r/20201112220135.165028-5-jarkko@kernel.org
#define X86_FEATURE_SGX_LC (16*32+30) /* Software Guard Extensions Launch Control */
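The commit above distinguishes a flexible mode (the kernel controls the launch key hash) from a rigid mode (the hash is locked by firmware). A minimal sketch of how that distinction could be probed, assuming the FEAT_CTL/SGXLEPUBKEYHASH names from the commit, the MSR_IA32_FEAT_CTL index, and the usual rdmsrl() helper; this is an illustration, not the kernel's actual SGX initialization path:

/*
 * Illustration only: report whether launch control is in flexible mode,
 * i.e. whether the SGXLEPUBKEYHASH MSRs may be written by the kernel.
 */
static bool sgx_lc_is_flexible(void)
{
	u64 feat_ctl;

	if (!boot_cpu_has(X86_FEATURE_SGX_LC))
		return false;			/* no launch control at all */

	rdmsrl(MSR_IA32_FEAT_CTL, feat_ctl);

	/*
	 * Per the commit message: without FEAT_CTL_SGX_LC_ENABLED the hash
	 * MSRs are read-only (rigid mode) and Linux will not create
	 * enclaves, although KVM may still expose SGX to guests.
	 */
	return feat_ctl & FEAT_CTL_SGX_LC_ENABLED;
}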
/* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */
#define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
#define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */
#define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
#define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */
#define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */
#define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
x86/msr: Define new bits in TSX_FORCE_ABORT MSR
Intel client processors that support the IA32_TSX_FORCE_ABORT MSR
related to perf counter interaction [1] received a microcode update that
deprecates the Transactional Synchronization Extension (TSX) feature.
The bit FORCE_ABORT_RTM now defaults to 1, writes to this bit are
ignored. A new bit TSX_CPUID_CLEAR clears the TSX related CPUID bits.
A summary of the changes to the IA32_TSX_FORCE_ABORT MSR:
Bit 0: FORCE_ABORT_RTM (legacy bit, new default=1) Status bit that
indicates if RTM transactions are always aborted. This bit is
essentially !SDV_ENABLE_RTM(Bit 2). Writes to this bit are ignored.
Bit 1: TSX_CPUID_CLEAR (new bit, default=0) When set, CPUID.HLE = 0
and CPUID.RTM = 0.
Bit 2: SDV_ENABLE_RTM (new bit, default=0) When clear, XBEGIN will
always abort with EAX code 0. When set, XBEGIN will not be forced to
abort (but will always abort in SGX enclaves). This bit is intended to
be used on developer systems. If this bit is set, transactional
atomicity correctness is not certain. SDV = Software Development
Vehicle (SDV), i.e. developer systems.
Performance monitoring counter 3 is usable in all cases, regardless of
the value of above bits.
Add support for a new CPUID bit - CPUID.RTM_ALWAYS_ABORT (CPUID 7.EDX[11])
- to indicate the status of always abort behavior.
[1] [ bp: Look for document ID 604224, "Performance Monitoring Impact
of Intel Transactional Synchronization Extension Memory". Since
there's no way for us to have stable links to documents... ]
[ bp: Massage and extend commit message. ]
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Tested-by: Neelima Krishnan <neelima.krishnan@intel.com>
Link: https://lkml.kernel.org/r/9add61915b4a4eedad74fbd869107863a28b428e.1623704845.git-series.pawan.kumar.gupta@linux.intel.com
#define X86_FEATURE_RTM_ALWAYS_ABORT (18*32+11) /* "" RTM transaction always aborts */
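A compact restatement of the IA32_TSX_FORCE_ABORT bit layout described in the commit above. The DEMO_* mask names are written out here only as an illustration (the real definitions live in the MSR header), and the wrmsrl()-based helper is hypothetical:

/* Bit layout of IA32_TSX_FORCE_ABORT as described above (illustrative). */
#define DEMO_FORCE_ABORT_RTM	BIT(0)	/* status: RTM always aborts    */
#define DEMO_TSX_CPUID_CLEAR	BIT(1)	/* hide CPUID.HLE and CPUID.RTM */
#define DEMO_SDV_ENABLE_RTM	BIT(2)	/* developer-only RTM enable    */

/* Hypothetical helper: hide the TSX CPUID bits on an updated part. */
static void demo_clear_tsx_cpuid(void)
{
	u64 msr;

	rdmsrl(MSR_TSX_FORCE_ABORT, msr);
	msr |= DEMO_TSX_CPUID_CLEAR;	/* CPUID.HLE = 0, CPUID.RTM = 0 */
	wrmsrl(MSR_TSX_FORCE_ABORT, msr);
}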
#define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
#define X86_FEATURE_SERIALIZE (18*32+14) /* SERIALIZE instruction */
#define X86_FEATURE_HYBRID_CPU (18*32+15) /* "" This part has CPUs of more than one type */
#define X86_FEATURE_TSXLDTRK (18*32+16) /* TSX Suspend Load Address Tracking */
#define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
#define X86_FEATURE_ARCH_LBR (18*32+19) /* Intel ARCH LBR */
#define X86_FEATURE_IBT (18*32+20) /* Indirect Branch Tracking */
#define X86_FEATURE_AMX_BF16 (18*32+22) /* AMX bf16 Support */
#define X86_FEATURE_AVX512_FP16 (18*32+23) /* AVX512 FP16 */
#define X86_FEATURE_AMX_TILE (18*32+24) /* AMX tile Support */
#define X86_FEATURE_AMX_INT8 (18*32+25) /* AMX int8 Support */
#define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
#define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
#define X86_FEATURE_FLUSH_L1D (18*32+28) /* Flush L1D cache */
#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
#define X86_FEATURE_CORE_CAPABILITIES (18*32+30) /* "" IA32_CORE_CAPABILITIES MSR */
#define X86_FEATURE_SPEC_CTRL_SSBD (18*32+31) /* "" Speculative Store Bypass Disable */
/* AMD-defined memory encryption features, CPUID level 0x8000001f (EAX), word 19 */
#define X86_FEATURE_SME (19*32+ 0) /* AMD Secure Memory Encryption */
#define X86_FEATURE_SEV (19*32+ 1) /* AMD Secure Encrypted Virtualization */
#define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* "" VM Page Flush MSR is supported */
#define X86_FEATURE_SEV_ES (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */
/*
 * BUG word(s)
 */
#define X86_BUG(x) (NCAPINTS*32 + (x))
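Bug bits are ordinary feature bits placed after the last capability word, so X86_BUG(n) shares the x86_capability[] storage and the same word/bit arithmetic as the feature defines above. A short illustrative sketch (the real helpers are set_cpu_bug()/cpu_has_bug(), thin wrappers over the feature-bit accessors; the demo_* names are assumptions):

/* Sketch: X86_BUG(3) == NCAPINTS*32 + 3, i.e. word NCAPINTS, bit 3. */
static inline void demo_set_bug(unsigned int *caps, unsigned int bug)
{
	caps[bug / 32] |= 1u << (bug % 32);	/* same layout as feature bits */
}

static inline int demo_has_bug(const unsigned int *caps, unsigned int bug)
{
	return (caps[bug / 32] >> (bug % 32)) & 1;
}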
#define X86_BUG_F00F X86_BUG(0) /* Intel F00F */
#define X86_BUG_FDIV X86_BUG(1) /* FPU FDIV */
#define X86_BUG_COMA X86_BUG(2) /* Cyrix 6x86 coma */
#define X86_BUG_AMD_TLB_MMATCH X86_BUG(3) /* "tlb_mmatch" AMD Erratum 383 */
#define X86_BUG_AMD_APIC_C1E X86_BUG(4) /* "apic_c1e" AMD Erratum 400 */
#define X86_BUG_11AP X86_BUG(5) /* Bad local APIC aka 11AP */
#define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */
#define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */
#define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* SYSRET doesn't fix up SS attrs */
x86/entry/32: Introduce and use X86_BUG_ESPFIX instead of paravirt_enabled
x86_64 has very clean espfix handling on paravirt: espfix64 is set
up in native_iret, so paravirt systems that override iret bypass
espfix64 automatically. This is robust and straightforward.
x86_32 is messier. espfix is set up before the IRET paravirt patch
point, so it can't be directly conditionalized on whether we use
native_iret. We also can't easily move it into native_iret without
regressing performance due to a bizarre consideration. Specifically,
on 64-bit kernels, the logic is:
	if (regs->ss & 0x4)
		setup_espfix;
On 32-bit kernels, the logic is:
	if ((regs->ss & 0x4) && (regs->cs & 0x3) == 3 &&
	    (regs->flags & X86_EFLAGS_VM) == 0)
		setup_espfix;
The performance of setup_espfix itself is essentially irrelevant, but
the comparison happens on every IRET so its performance matters. On
x86_64, there's no need for any registers except flags to implement
the comparison, so we fold the whole thing into native_iret. On
x86_32, we don't do that because we need a free register to
implement the comparison efficiently. We therefore do espfix setup
before restoring registers on x86_32.
This patch gets rid of the explicit paravirt_enabled check by
introducing X86_BUG_ESPFIX on 32-bit systems and using an ALTERNATIVE
to skip espfix on paravirt systems where iret != native_iret. This is
also messy, but it's at least in line with other things we do.
This improves espfix performance by removing a branch, but no one
cares. More importantly, it removes a paravirt_enabled user, which is
good because paravirt_enabled is ill-defined and is going away.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boris.ostrovsky@oracle.com
Cc: david.vrabel@citrix.com
Cc: konrad.wilk@oracle.com
Cc: lguest@lists.ozlabs.org
Cc: xen-devel@lists.xensource.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
#ifdef CONFIG_X86_32
/*
 * 64-bit kernels don't use X86_BUG_ESPFIX. Make the define conditional
 * to avoid confusion.
 */
#define X86_BUG_ESPFIX X86_BUG(9) /* "" IRET to 16-bit SS corrupts ESP/RSP high bits */
#endif
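The commit above replaces the paravirt_enabled() check with the X86_BUG_ESPFIX bit, so the entry code can patch the espfix test away via an ALTERNATIVE whenever iret != native_iret. A rough C-level sketch of the consumer side (the real 32-bit path is assembly using ALTERNATIVE; fixup_espfix() and demo_prepare_iret() are hypothetical stand-ins):

/*
 * Illustration only: what the IRET-to-user path conceptually checks once
 * X86_BUG_ESPFIX exists.  Kernels whose iret is not native_iret never set
 * the bug bit, so the whole test can be patched out.
 */
static void demo_prepare_iret(struct pt_regs *regs)
{
	if (static_cpu_has_bug(X86_BUG_ESPFIX) &&
	    (regs->ss & 0x4) &&
	    (regs->cs & 0x3) == 3 &&
	    !(regs->flags & X86_EFLAGS_VM))
		fixup_espfix(regs);		/* hypothetical helper */
}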
#define X86_BUG_NULL_SEG X86_BUG(10) /* Nulling a selector preserves the base */
#define X86_BUG_SWAPGS_FENCE X86_BUG(11) /* SWAPGS without input dep on GS */
#define X86_BUG_MONITOR X86_BUG(12) /* IPI required to wake up remote CPU */
#define X86_BUG_AMD_E400 X86_BUG(13) /* CPU is among the affected by Erratum 400 */
#define X86_BUG_CPU_MELTDOWN X86_BUG(14) /* CPU is affected by meltdown attack and needs kernel page table isolation */
#define X86_BUG_SPECTRE_V1 X86_BUG(15) /* CPU is affected by Spectre variant 1 attack with conditional branches */
#define X86_BUG_SPECTRE_V2 X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
#define X86_BUG_SPEC_STORE_BYPASS X86_BUG(17) /* CPU is affected by speculative store bypass attack */
#define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
#define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
#define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSBDS variant of BUG_MDS */
#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
#define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */
#define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */
#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
#endif /* _ASM_X86_CPUFEATURES_H */