// SPDX-License-Identifier: GPL-2.0
/*
 * FPU signal frame handling routines.
 */

#include <linux/compat.h>
#include <linux/cpu.h>
#include <linux/pagemap.h>

#include <asm/fpu/internal.h>
#include <asm/fpu/signal.h>
#include <asm/fpu/regset.h>
#include <asm/fpu/xstate.h>

#include <asm/sigframe.h>
#include <asm/trapnr.h>
#include <asm/trace/fpu.h>
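
/*
 * Templates for the sw_reserved area of the [f]xsave frame, filled in
 * once at boot by fpu__init_prepare_fx_sw_frame() below and copied into
 * each signal frame to describe the extended state layout to user space.
 */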
static struct _fpx_sw_bytes fx_sw_reserved __ro_after_init;
static struct _fpx_sw_bytes fx_sw_reserved_ia32 __ro_after_init;

/*
 * Check for the presence of extended state information in the
 * user fpstate pointer in the sigcontext.
 */
static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
					    struct _fpx_sw_bytes *fx_sw)
{
	int min_xstate_size = sizeof(struct fxregs_state) +
			      sizeof(struct xstate_header);
	void __user *fpstate = fxbuf;
	unsigned int magic2;

	if (__copy_from_user(fx_sw, &fxbuf->sw_reserved[0], sizeof(*fx_sw)))
		return false;

	/* Check for the first magic field and other error scenarios. */
	if (fx_sw->magic1 != FP_XSTATE_MAGIC1 ||
	    fx_sw->xstate_size < min_xstate_size ||
	    fx_sw->xstate_size > fpu_user_xstate_size ||
	    fx_sw->xstate_size > fx_sw->extended_size)
		goto setfx;

	/*
	 * Check for the presence of the second magic word at the end of
	 * the memory layout. This detects the case where the user just
	 * copied the legacy fpstate layout without copying the extended
	 * state information in the memory layout.
	 */
	if (__get_user(magic2, (__u32 __user *)(fpstate + fx_sw->xstate_size)))
		return false;

	if (likely(magic2 == FP_XSTATE_MAGIC2))
		return true;
setfx:
	trace_x86_fpu_xstate_check_failed(&current->thread.fpu);

	/* Set the parameters for fx only state */
	fx_sw->magic1 = 0;
	fx_sw->xstate_size = sizeof(struct fxregs_state);
	fx_sw->xfeatures = XFEATURE_MASK_FPSSE;
	return true;
}

/*
 * Signal frame handlers.
 */
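
/*
 * Write the legacy i387 fsave header into a 32-bit user frame. With
 * FXSR the content is converted from the kernel fxsave image and the
 * status word plus X86_FXSR_MAGIC are filled in; without FXSR only the
 * saved status word has to be mirrored into fp->status.
 */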
static inline bool save_fsave_header(struct task_struct *tsk, void __user *buf)
{
	if (use_fxsr()) {
		struct xregs_state *xsave = &tsk->thread.fpu.state.xsave;
		struct user_i387_ia32_struct env;
		struct _fpstate_32 __user *fp = buf;

		fpregs_lock();
		if (!test_thread_flag(TIF_NEED_FPU_LOAD))
			fxsave(&tsk->thread.fpu.state.fxsave);
		fpregs_unlock();

		convert_from_fxsr(&env, tsk);

		if (__copy_to_user(buf, &env, sizeof(env)) ||
		    __put_user(xsave->i387.swd, &fp->status) ||
		    __put_user(X86_FXSR_MAGIC, &fp->magic))
			return false;
	} else {
		struct fregs_state __user *fp = buf;
		u32 swd;

		if (__get_user(swd, &fp->swd) || __put_user(swd, &fp->status))
			return false;
	}

	return true;
}
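
/*
 * Finish an [f]xsave frame: copy the sw_reserved template into the
 * frame, append the FP_XSTATE_MAGIC2 sentinel behind the xstate image
 * and force the FP/SSE feature bits in the user-visible header.
 */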
static inline bool save_xstate_epilog(void __user *buf, int ia32_frame)
{
	struct xregs_state __user *x = buf;
	struct _fpx_sw_bytes *sw_bytes;
	u32 xfeatures;
	int err;

	/* Setup the bytes not touched by the [f]xsave and reserved for SW. */
	sw_bytes = ia32_frame ? &fx_sw_reserved_ia32 : &fx_sw_reserved;
	err = __copy_to_user(&x->i387.sw_reserved, sw_bytes, sizeof(*sw_bytes));

	if (!use_xsave())
		return !err;

	err |= __put_user(FP_XSTATE_MAGIC2,
			  (__u32 __user *)(buf + fpu_user_xstate_size));

	/*
	 * Read the xfeatures which we copied (directly from the cpu or
	 * from the state in task struct) to the user buffers.
	 */
	err |= __get_user(xfeatures, (__u32 __user *)&x->header.xfeatures);

	/*
	 * For legacy compatibility, we always set the FP/SSE bits in the
	 * bit vector while saving the state to the user context. This
	 * enables us to capture any changes (during sigreturn) to the
	 * FP/SSE bits by legacy applications which don't touch xfeatures
	 * in the xsave header.
	 *
	 * xsave aware apps can change the xfeatures in the xsave
	 * header as well as change any contents in the memory layout.
	 * xrestore as part of sigreturn will capture all the changes.
	 */
	xfeatures |= XFEATURE_MASK_FPSSE;

	err |= __put_user(xfeatures, (__u32 __user *)&x->header.xfeatures);

	return !err;
}
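
/*
 * Save the current FPU registers directly to the user frame, using the
 * widest save mechanism the CPU supports: XSAVE, FXSAVE or FNSAVE.
 */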
static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf)
{
	if (use_xsave())
		return xsave_to_user_sigframe(buf);
	if (use_fxsr())
		return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
	else
		return fnsave_to_user_sigframe((struct fregs_state __user *) buf);
}

/*
 * Save the fpu, extended register state to the user signal frame.
 *
 * 'buf_fx' is the 64-byte aligned pointer at which the [f|fx|x]save
 * state is copied.
 * 'buf' points to the 'buf_fx' or to the fsave header followed by 'buf_fx'.
 *
 * buf == buf_fx for 64-bit frames and 32-bit fsave frame.
 * buf != buf_fx for 32-bit frames with fxstate.
 *
 * Try to save it directly to the user frame with disabled page fault handler.
 * If this fails then do the slow path where the FPU state is first saved to
 * task's fpu->state and then copy it to the user frame pointed to by the
 * aligned pointer 'buf_fx'.
 *
 * If this is a 32-bit frame with fxstate, put a fsave header before
 * the aligned state at 'buf_fx'.
 *
 * For [f]xsave state, update the SW reserved fields in the [f]xsave frame
 * indicating the absence/presence of the extended state to the user.
 */
bool copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
{
	struct task_struct *tsk = current;
	int ia32_fxstate = (buf != buf_fx);
	int ret;

	ia32_fxstate &= (IS_ENABLED(CONFIG_X86_32) ||
			 IS_ENABLED(CONFIG_IA32_EMULATION));

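	/* No FPU on this CPU: let the math emulator provide the state. */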
	if (!static_cpu_has(X86_FEATURE_FPU)) {
		struct user_i387_ia32_struct fp;

		fpregs_soft_get(current, NULL, (struct membuf){.p = &fp,
						.left = sizeof(fp)});
		return !copy_to_user(buf, &fp, sizeof(fp));
	}

	if (!access_ok(buf, size))
		return false;

	if (use_xsave()) {
		struct xregs_state __user *xbuf = buf_fx;

		/*
		 * Clear the xsave header first, so that reserved fields are
		 * initialized to zero.
		 */
		if (__clear_user(&xbuf->header, sizeof(xbuf->header)))
			return false;
	}
retry:
	/*
	 * Load the FPU registers if they are not valid for the current task.
	 * With a valid FPU state we can attempt to save the state directly to
	 * userland's stack frame which will likely succeed. If it does not,
	 * resolve the fault in the user memory and try again.
	 */
	fpregs_lock();
	if (test_thread_flag(TIF_NEED_FPU_LOAD))
		fpregs_restore_userregs();

	pagefault_disable();
	ret = copy_fpregs_to_sigframe(buf_fx);
	pagefault_enable();
	fpregs_unlock();

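	/*
	 * The direct save faulted. __clear_user() writes zeroes and thereby
	 * demand-faults the sigframe pages in; if that succeeds, retry the
	 * fast path above.
	 */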
	if (ret) {
		if (!__clear_user(buf_fx, fpu_user_xstate_size))
			goto retry;
		return false;
	}

	/* Save the fsave header for the 32-bit frames. */
	if ((ia32_fxstate || !use_fxsr()) && !save_fsave_header(tsk, buf))
		return false;

	if (use_fxsr() && !save_xstate_epilog(buf_fx, ia32_fxstate))
		return false;

	return true;
}
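
/*
 * Restore the register state from the user frame with the instruction
 * matching the save format: XRSTOR (bringing user features absent from
 * the frame back to init state afterwards), FXRSTOR or FRSTOR.
 */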
static int __restore_fpregs_from_user(void __user *buf, u64 xrestore,
				      bool fx_only)
{
	if (use_xsave()) {
		u64 init_bv = xfeatures_mask_uabi() & ~xrestore;
		int ret;

		if (likely(!fx_only))
			ret = xrstor_from_user_sigframe(buf, xrestore);
		else
			ret = fxrstor_from_user_sigframe(buf);

		if (!ret && unlikely(init_bv))
			os_xrstor(&init_fpstate.xsave, init_bv);
		return ret;
	} else if (use_fxsr()) {
		return fxrstor_from_user_sigframe(buf);
	} else {
		return frstor_from_user_sigframe(buf);
	}
}

/*
 * Attempt to restore the FPU registers directly from user memory.
 * Pagefaults are handled and any errors returned are fatal.
 */
static int restore_fpregs_from_user(void __user *buf, u64 xrestore,
				    bool fx_only, unsigned int size)
{
	struct fpu *fpu = &current->thread.fpu;
	int ret;

retry:
	fpregs_lock();
	pagefault_disable();
	ret = __restore_fpregs_from_user(buf, xrestore, fx_only);
	pagefault_enable();

	if (unlikely(ret)) {
		/*
		 * The above did an FPU restore operation, restricted to
		 * the user portion of the registers, and failed, but the
		 * microcode might have modified the FPU registers
		 * nevertheless.
		 *
		 * If the FPU registers do not belong to current, then
		 * invalidate the FPU register state otherwise the task
		 * might preempt current and return to user space with
		 * corrupted FPU registers.
		 */
		if (test_thread_flag(TIF_NEED_FPU_LOAD))
			__cpu_invalidate_fpregs_state();
		fpregs_unlock();

		/* Try to handle #PF, but anything else is fatal. */
		if (ret != X86_TRAP_PF)
			return -EINVAL;

		ret = fault_in_pages_readable(buf, size);
		if (!ret)
			goto retry;
		return ret;
	}

	/*
	 * Restore supervisor states: previous context switch etc. has done
	 * XSAVES and saved the supervisor states in the kernel buffer from
	 * which they can be restored now.
	 *
	 * It would be optimal to handle this with a single XRSTORS, but
	 * this does not work because the rest of the FPU registers have
	 * been restored from a user buffer directly.
	 */
	if (test_thread_flag(TIF_NEED_FPU_LOAD) && xfeatures_mask_supervisor())
		os_xrstor(&fpu->state.xsave, xfeatures_mask_supervisor());

	fpregs_mark_activate();
	fpregs_unlock();
	return 0;
}
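
/*
 * Restore FPU state from a 64-bit or ia32 frame: take the
 * direct-from-user fast path when possible, otherwise stage the state
 * through the kernel buffer so the legacy fsave header can be folded in.
 */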
static bool __fpu_restore_sig(void __user *buf, void __user *buf_fx,
			      bool ia32_fxstate)
{
	int state_size = fpu_kernel_xstate_size;
	struct task_struct *tsk = current;
	struct fpu *fpu = &tsk->thread.fpu;
	struct user_i387_ia32_struct env;
	u64 user_xfeatures = 0;
	bool fx_only = false;
	bool success;

	if (use_xsave()) {
		struct _fpx_sw_bytes fx_sw_user;

		if (!check_xstate_in_sigframe(buf_fx, &fx_sw_user))
			return false;

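		/*
		 * magic1 is cleared by check_xstate_in_sigframe() when the
		 * frame carries no valid extended state, which restricts
		 * the restore below to the legacy FP/SSE area.
		 */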
		fx_only = !fx_sw_user.magic1;
		state_size = fx_sw_user.xstate_size;
		user_xfeatures = fx_sw_user.xfeatures;
	} else {
		user_xfeatures = XFEATURE_MASK_FPSSE;
	}

	if (likely(!ia32_fxstate)) {
		/*
		 * Attempt to restore the FPU registers directly from user
		 * memory. For that to succeed, the user access cannot cause page
		 * faults. If it does, fall back to the slow path below, going
		 * through the kernel buffer with the enabled pagefault handler.
		 */
		return !restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only,
						 state_size);
	}

	/*
	 * Copy the legacy state because the FP portion of the FX frame has
	 * to be ignored for histerical raisins. The legacy state is folded
	 * in once the larger state has been copied.
	 */
	if (__copy_from_user(&env, buf, sizeof(env)))
		return false;

	/*
	 * By setting TIF_NEED_FPU_LOAD it is ensured that our xstate is
	 * not modified on context switch and that the xstate is considered
	 * to be loaded again on return to userland (overriding last_cpu avoids
	 * the optimisation).
	 */
	fpregs_lock();
	if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
		/*
		 * If supervisor states are available then save the
		 * hardware state in current's fpstate so that the
		 * supervisor state is preserved. Save the full state for
		 * simplicity. There is no point in optimizing this by only
		 * saving the supervisor states and then shuffle them to
		 * the right place in memory. It's ia32 mode. Shrug.
		 */
		if (xfeatures_mask_supervisor())
			os_xsave(&fpu->state.xsave);
		set_thread_flag(TIF_NEED_FPU_LOAD);
	}
	__fpu_invalidate_fpregs_state(fpu);
	__cpu_invalidate_fpregs_state();
	fpregs_unlock();

	if (use_xsave() && !fx_only) {
		if (copy_sigframe_from_user_to_xstate(&fpu->state.xsave, buf_fx))
			return false;
	} else {
		if (__copy_from_user(&fpu->state.fxsave, buf_fx,
				     sizeof(fpu->state.fxsave)))
			return false;

		/* Reject invalid MXCSR values. */
		if (fpu->state.fxsave.mxcsr & ~mxcsr_feature_mask)
			return false;

		/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
		if (use_xsave())
			fpu->state.xsave.header.xfeatures |= XFEATURE_MASK_FPSSE;
	}

	/* Fold the legacy FP storage */
	convert_to_fxsr(&fpu->state.fxsave, &env);

	fpregs_lock();
	if (use_xsave()) {
		/*
		 * Remove all UABI feature bits not set in user_xfeatures
		 * from the memory xstate header which makes the full
		 * restore below bring them into init state. This works for
		 * fx_only mode as well because that has only FP and SSE
		 * set in user_xfeatures.
		 *
		 * Preserve supervisor states!
		 */
		u64 mask = user_xfeatures | xfeatures_mask_supervisor();

		fpu->state.xsave.header.xfeatures &= mask;
		success = !os_xrstor_safe(&fpu->state.xsave, xfeatures_mask_all);
	} else {
		success = !fxrstor_safe(&fpu->state.fxsave);
	}

	if (likely(success))
		fpregs_mark_activate();

	fpregs_unlock();
	return success;
}
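
/*
 * User-visible size of the [f]xsave image on the sigframe, including
 * the trailing FP_XSTATE_MAGIC2 word when XSAVE is in use.
 */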
static inline int xstate_sigframe_size(void)
{
	return use_xsave() ? fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE :
			fpu_user_xstate_size;
}

/*
 * Restore FPU state from a sigframe:
 */
bool fpu__restore_sig(void __user *buf, int ia32_frame)
{
	unsigned int size = xstate_sigframe_size();
	struct fpu *fpu = &current->thread.fpu;
	void __user *buf_fx = buf;
	bool ia32_fxstate = false;
	bool success = false;

	if (unlikely(!buf)) {
		fpu__clear_user_states(fpu);
		return true;
	}

	ia32_frame &= (IS_ENABLED(CONFIG_X86_32) ||
		       IS_ENABLED(CONFIG_IA32_EMULATION));

	/*
	 * Only FXSR enabled systems need the FX state quirk.
	 * FRSTOR does not need it and can use the fast path.
	 */
	if (ia32_frame && use_fxsr()) {
		buf_fx = buf + sizeof(struct fregs_state);
		size += sizeof(struct fregs_state);
		ia32_fxstate = true;
	}

	if (!access_ok(buf, size))
		goto out;

	if (!IS_ENABLED(CONFIG_X86_64) && !cpu_feature_enabled(X86_FEATURE_FPU)) {
		success = !fpregs_soft_set(current, NULL, 0,
					   sizeof(struct user_i387_ia32_struct),
					   NULL, buf);
	} else {
		success = __fpu_restore_sig(buf, buf_fx, ia32_fxstate);
	}

out:
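	/*
	 * A failed restore leaves the register contents in an unreliable
	 * state, so reset the user states to their init state.
	 */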
	if (unlikely(!success))
		fpu__clear_user_states(fpu);
	return success;
}
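
/*
 * Carve out room for the [f|fx|x]save frame below 'sp' on the user
 * stack. The frame is 64-byte aligned as XSAVE requires; a 32-bit
 * fxstate frame additionally gets the legacy fsave header in front.
 */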
unsigned long
fpu__alloc_mathframe(unsigned long sp, int ia32_frame,
		     unsigned long *buf_fx, unsigned long *size)
{
	unsigned long frame_size = xstate_sigframe_size();

	*buf_fx = sp = round_down(sp - frame_size, 64);
	if (ia32_frame && use_fxsr()) {
		frame_size += sizeof(struct fregs_state);
		sp -= sizeof(struct fregs_state);
	}

	*size = frame_size;

	return sp;
}

unsigned long fpu__get_fpstate_size(void)
{
	unsigned long ret = xstate_sigframe_size();

	/*
	 * This space is needed on (most) 32-bit kernels, or when a 32-bit
	 * app is running on a 64-bit kernel. To keep things simple, just
	 * assume the worst case and always include space for 'fregs_state',
	 * even for 64-bit apps on 64-bit kernels. This wastes a bit of
	 * space, but keeps the code simple.
	 */
	if ((IS_ENABLED(CONFIG_IA32_EMULATION) ||
	     IS_ENABLED(CONFIG_X86_32)) && use_fxsr())
		ret += sizeof(struct fregs_state);

	return ret;
}

/*
 * Prepare the SW reserved portion of the fxsave memory layout, indicating
 * the presence of the extended state information in the memory layout
 * pointed to by the fpstate pointer in the sigcontext.
 * This will be saved whenever the FP and extended state context is
 * saved on the user stack during the signal handler delivery to the user.
 */
void fpu__init_prepare_fx_sw_frame(void)
{
	int size = fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE;

	fx_sw_reserved.magic1 = FP_XSTATE_MAGIC1;
	fx_sw_reserved.extended_size = size;
	fx_sw_reserved.xfeatures = xfeatures_mask_uabi();
	fx_sw_reserved.xstate_size = fpu_user_xstate_size;

	if (IS_ENABLED(CONFIG_IA32_EMULATION) ||
	    IS_ENABLED(CONFIG_X86_32)) {
		int fsave_header_size = sizeof(struct fregs_state);

		fx_sw_reserved_ia32 = fx_sw_reserved;
		fx_sw_reserved_ia32.extended_size = size + fsave_header_size;
	}
}