License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where the license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If such a file was under a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was tagged "GPL-2.0". Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, the Linux-syscall-note was appended if any
GPL-family license was found in the file, or if the file had no
licensing in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, Kate, Philippe and Thomas logged over 70 hours of manual
review on the spreadsheet to determine the SPDX license identifiers to
apply to the source files, with confirmation in some cases by lawyers
working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is in part based on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, three files were found
to have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 files patched in the initial
version of this series, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 22:07:57 +08:00
|
|
|
/* SPDX-License-Identifier: GPL-2.0 */
|
2012-03-29 01:11:12 +08:00
|
|
|
#ifndef _ASM_X86_SPECIAL_INSNS_H
|
|
|
|
#define _ASM_X86_SPECIAL_INSNS_H
|
|
|
|
|
|
|
|
|
|
|
|
#ifdef __KERNEL__
|
|
|
|
|
2015-02-20 01:37:28 +08:00
|
|
|
#include <asm/nops.h>
|
2019-06-18 12:55:02 +08:00
|
|
|
#include <asm/processor-flags.h>
|
2020-03-05 06:32:15 +08:00
|
|
|
#include <linux/irqflags.h>
|
2019-06-18 12:55:02 +08:00
|
|
|
#include <linux/jump_label.h>
|
2015-02-20 01:37:28 +08:00
|
|
|
|
2012-03-29 01:11:12 +08:00
|
|
|
/*
|
x86/asm: Replace __force_order with a memory clobber
The CRn accessor functions use __force_order as a dummy operand to
prevent the compiler from reordering CRn reads/writes with respect to
each other.
The fact that the asm is volatile should be enough to prevent this:
volatile asm statements should be executed in program order. However GCC
4.9.x and 5.x have a bug that might result in reordering. This was fixed
in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x,
may reorder volatile asm statements with respect to each other.
There are some issues with __force_order as implemented:
- It is used only as an input operand for the write functions, and hence
doesn't do anything additional to prevent reordering writes.
- It allows memory accesses to be cached/reordered across write
functions, but CRn writes affect the semantics of memory accesses, so
this could be dangerous.
- __force_order is not actually defined in the kernel proper, but the
LLVM toolchain can in some cases require a definition: LLVM (as well
as GCC 4.9) requires it for PIE code, which is why the compressed
kernel has a definition, but also the clang integrated assembler may
consider the address of __force_order to be significant, resulting in
a reference that requires a definition.
Fix this by:
- Using a memory clobber for the write functions to additionally prevent
caching/reordering memory accesses across CRn writes.
- Using a dummy input operand with an arbitrary constant address for the
read functions, instead of a global variable. This will prevent reads
from being reordered across writes, while allowing memory loads to be
cached/reordered across CRn reads, which should be safe.
Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
Link: https://lkml.kernel.org/r/20200902232152.3709896-1-nivedita@alum.mit.edu
2020-09-03 07:21:52 +08:00
|
|
|
* The compiler should not reorder volatile asm statements with respect to each
|
|
|
|
* other: they should execute in program order. However GCC 4.9.x and 5.x have
|
|
|
|
* a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
|
|
|
|
* volatile asm. The write functions are not affected since they have memory
|
|
|
|
* clobbers preventing reordering. To prevent reads from being reordered with
|
|
|
|
* respect to writes, use a dummy memory operand.
|
2012-03-29 01:11:12 +08:00
|
|
|
*/
|
2020-09-03 07:21:52 +08:00
|
|
|
|
|
|
|
#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
|
2012-03-29 01:11:12 +08:00
|
|
|
|
2019-07-11 03:42:46 +08:00
|
|
|
void native_write_cr0(unsigned long val);
|
2019-06-18 12:55:02 +08:00
|
|
|
|
2012-03-29 01:11:12 +08:00
|
|
|
static inline unsigned long native_read_cr0(void)
|
|
|
|
{
|
|
|
|
unsigned long val;
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
|
2012-03-29 01:11:12 +08:00
|
|
|
return val;
|
|
|
|
}
|
|
|
|
|
2020-06-03 19:40:22 +08:00
|
|
|
static __always_inline unsigned long native_read_cr2(void)
|
2012-03-29 01:11:12 +08:00
|
|
|
{
|
|
|
|
unsigned long val;
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
|
2012-03-29 01:11:12 +08:00
|
|
|
return val;
|
|
|
|
}
|
|
|
|
|
2020-06-03 19:40:22 +08:00
|
|
|
static __always_inline void native_write_cr2(unsigned long val)
|
2012-03-29 01:11:12 +08:00
|
|
|
{
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
|
2012-03-29 01:11:12 +08:00
|
|
|
}
|
|
|
|
|
2017-06-13 01:26:14 +08:00
|
|
|
static inline unsigned long __native_read_cr3(void)
|
2012-03-29 01:11:12 +08:00
|
|
|
{
|
|
|
|
unsigned long val;
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
|
2012-03-29 01:11:12 +08:00
|
|
|
return val;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void native_write_cr3(unsigned long val)
|
|
|
|
{
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
|
2012-03-29 01:11:12 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long native_read_cr4(void)
|
|
|
|
{
|
|
|
|
unsigned long val;
|
|
|
|
#ifdef CONFIG_X86_32
|
2016-09-30 03:48:12 +08:00
|
|
|
/*
|
|
|
|
* This could fault if CR4 does not exist. Non-existent CR4
|
|
|
|
* is functionally equivalent to CR4 == 0. Keep it simple and pretend
|
|
|
|
* that CR4 == 0 on CPUs that don't have CR4.
|
|
|
|
*/
|
2012-03-29 01:11:12 +08:00
|
|
|
asm volatile("1: mov %%cr4, %0\n"
|
|
|
|
"2:\n"
|
|
|
|
_ASM_EXTABLE(1b, 2b)
|
2020-09-03 07:21:52 +08:00
|
|
|
: "=r" (val) : "0" (0), __FORCE_ORDER);
|
2012-03-29 01:11:12 +08:00
|
|
|
#else
|
2016-09-30 03:48:12 +08:00
|
|
|
/* CR4 always exists on x86_64. */
|
2020-09-03 07:21:52 +08:00
|
|
|
asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
|
2012-03-29 01:11:12 +08:00
|
|
|
#endif
|
|
|
|
return val;
|
|
|
|
}
|
|
|
|
|
2019-07-11 03:42:46 +08:00
|
|
|
void native_write_cr4(unsigned long val);
|
2012-03-29 01:11:12 +08:00
|
|
|
|
2016-02-13 05:02:15 +08:00
|
|
|
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
|
2019-04-04 00:41:41 +08:00
|
|
|
static inline u32 rdpkru(void)
|
2016-02-13 05:02:15 +08:00
|
|
|
{
|
|
|
|
u32 ecx = 0;
|
|
|
|
u32 edx, pkru;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* "rdpkru" instruction. Places PKRU contents into EAX,
|
|
|
|
* clears EDX and requires that ecx=0.
|
|
|
|
*/
|
|
|
|
asm volatile(".byte 0x0f,0x01,0xee\n\t"
|
|
|
|
: "=a" (pkru), "=d" (edx)
|
|
|
|
: "c" (ecx));
|
|
|
|
return pkru;
|
|
|
|
}
|
2016-03-22 16:51:17 +08:00
|
|
|
|
2019-04-04 00:41:41 +08:00
|
|
|
static inline void wrpkru(u32 pkru)
|
2016-03-22 16:51:17 +08:00
|
|
|
{
|
|
|
|
u32 ecx = 0, edx = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* "wrpkru" instruction. Loads the contents of EAX into PKRU,
|
|
|
|
* requires that ecx = edx = 0.
|
|
|
|
*/
|
|
|
|
asm volatile(".byte 0x0f,0x01,0xef\n\t"
|
|
|
|
: : "a" (pkru), "c"(ecx), "d"(edx));
|
|
|
|
}
|
2019-04-04 00:41:41 +08:00
|
|
|
|
2016-02-13 05:02:15 +08:00
|
|
|
#else
|
2019-04-04 00:41:41 +08:00
|
|
|
static inline u32 rdpkru(void)
|
2016-02-13 05:02:15 +08:00
|
|
|
{
|
|
|
|
return 0;
|
|
|
|
}
|
2016-03-22 16:51:17 +08:00
|
|
|
|
2021-06-23 20:02:23 +08:00
|
|
|
static inline void wrpkru(u32 pkru)
|
2016-03-22 16:51:17 +08:00
|
|
|
{
|
|
|
|
}
|
2016-02-13 05:02:15 +08:00
|
|
|
#endif
|
|
|
|
|
2012-03-29 01:11:12 +08:00
|
|
|
static inline void native_wbinvd(void)
|
|
|
|
{
|
|
|
|
asm volatile("wbinvd": : :"memory");
|
|
|
|
}

extern asmlinkage void asm_load_gs_index(unsigned int selector);

static inline void native_load_gs_index(unsigned int selector)
{
	unsigned long flags;

	local_irq_save(flags);
	asm_load_gs_index(selector);
	local_irq_restore(flags);
}

static inline unsigned long __read_cr4(void)
{
	return native_read_cr4();
}

#ifdef CONFIG_PARAVIRT_XXL
#include <asm/paravirt.h>
#else

static inline unsigned long read_cr0(void)
{
	return native_read_cr0();
}

static inline void write_cr0(unsigned long x)
{
	native_write_cr0(x);
}

static __always_inline unsigned long read_cr2(void)
{
	return native_read_cr2();
}

static __always_inline void write_cr2(unsigned long x)
{
	native_write_cr2(x);
}

/*
 * Careful!  CR3 contains more than just an address.  You probably want
 * read_cr3_pa() instead.
 */
static inline unsigned long __read_cr3(void)
{
	return __native_read_cr3();
}

static inline void write_cr3(unsigned long x)
{
	native_write_cr3(x);
}

static inline void __write_cr4(unsigned long x)
{
	native_write_cr4(x);
}

static inline void wbinvd(void)
{
	native_wbinvd();
}

#ifdef CONFIG_X86_64

static inline void load_gs_index(unsigned int selector)
{
	native_load_gs_index(selector);
}

#endif

#endif /* CONFIG_PARAVIRT_XXL */

static inline void clflush(volatile void *__p)
{
	asm volatile("clflush %0" : "+m" (*(volatile char __force *)__p));
}

static inline void clflushopt(volatile void *__p)
{
	alternative_io(".byte 0x3e; clflush %P0",
		       ".byte 0x66; clflush %P0",
		       X86_FEATURE_CLFLUSHOPT,
		       "+m" (*(volatile char __force *)__p));
}

static inline void clwb(volatile void *__p)
{
	volatile struct { char x[64]; } *p = __p;

	asm volatile(ALTERNATIVE_2(
		".byte 0x3e; clflush (%[pax])",
		".byte 0x66; clflush (%[pax])", /* clflushopt (%%rax) */
		X86_FEATURE_CLFLUSHOPT,
		".byte 0x66, 0x0f, 0xae, 0x30",  /* clwb (%%rax) */
		X86_FEATURE_CLWB)
		: [p] "+m" (*p)
		: [pax] "a" (p));
}

#define nop() asm volatile ("nop")

static inline void serialize(void)
{
	/* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */
	asm volatile(".byte 0xf, 0x1, 0xe8" ::: "memory");
}

/* The dst parameter must be 64-byte aligned */
static inline void movdir64b(void __iomem *dst, const void *src)
{
	const struct { char _[64]; } *__src = src;
	struct { char _[64]; } __iomem *__dst = dst;

	/*
	 * MOVDIR64B %(rdx), rax.
	 *
	 * Both __src and __dst must be memory constraints in order to tell the
	 * compiler that no other memory accesses should be reordered around
	 * this one.
	 *
	 * Also, both must be supplied as lvalues because this tells
	 * the compiler what the object is (its size) the instruction accesses.
	 * I.e., not the pointers but what they point to, thus the deref'ing '*'.
	 */
	asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
		     : "+m" (*__dst)
		     : "m" (*__src), "a" (__dst), "d" (__src));
}

/**
 * enqcmds - Enqueue a command in supervisor (CPL0) mode
 * @dst: destination, in MMIO space (must be 512-bit aligned)
 * @src: 512 bits memory operand
 *
 * The ENQCMDS instruction allows software to write a 512-bit command to
 * a 512-bit-aligned special MMIO region that supports the instruction.
 * A return status is loaded into the ZF flag in the RFLAGS register.
 * ZF = 0 equates to success, and ZF = 1 indicates retry or error.
 *
 * This function issues the ENQCMDS instruction to submit data from
 * kernel space to MMIO space, in a unit of 512 bits. Order of data access
 * is not guaranteed, nor is a memory barrier performed afterwards. It
 * returns 0 on success and -EAGAIN on failure.
 *
 * Warning: Do not use this helper unless your driver has checked that the
 * ENQCMDS instruction is supported on the platform and the device accepts
 * ENQCMDS.
 */
static inline int enqcmds(void __iomem *dst, const void *src)
{
	const struct { char _[64]; } *__src = src;
	struct { char _[64]; } __iomem *__dst = dst;
	int zf;

	/*
	 * ENQCMDS %(rdx), rax
	 *
	 * See movdir64b()'s comment on operand specification.
	 */
	asm volatile(".byte 0xf3, 0x0f, 0x38, 0xf8, 0x02, 0x66, 0x90"
		     CC_SET(z)
		     : CC_OUT(z) (zf), "+m" (*__dst)
		     : "m" (*__src), "a" (__dst), "d" (__src));

	/* Submission failure is indicated via EFLAGS.ZF=1 */
	if (zf)
		return -EAGAIN;

	return 0;
}

#endif /* __KERNEL__ */

#endif /* _ASM_X86_SPECIAL_INSNS_H */