x86/fpu: Simplify PTRACE_GETREGS code

ptrace() has interfaces that let a ptracer inspect a ptracee's register state.
This includes XSAVE state.  The ptrace() ABI includes a hardware-format XSAVE
buffer for both the SETREGS and GETREGS interfaces.
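
For reference (not part of this patch), this path is typically reached from
userspace via PTRACE_GETREGSET with the NT_X86_XSTATE regset.  A minimal,
illustrative tracer-side sketch, assuming a fixed 4096-byte buffer (a real
tracer would size it from CPUID leaf 0xD):

#include <elf.h>		/* NT_X86_XSTATE */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

static int dump_xstate(pid_t pid)
{
	static unsigned char xbuf[4096];	/* assumed size, see above */
	struct iovec iov = {
		.iov_base = xbuf,
		.iov_len  = sizeof(xbuf),
	};

	/* The tracee must already be ptrace-attached and stopped. */
	if (ptrace(PTRACE_GETREGSET, pid,
		   (void *)(unsigned long)NT_X86_XSTATE, &iov) == -1) {
		perror("PTRACE_GETREGSET");
		return -1;
	}

	/* The kernel updates iov_len to the number of bytes it wrote. */
	printf("got %zu bytes of XSAVE state\n", iov.iov_len);
	return 0;
}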

In the old days, the kernel buffer and the ptrace() ABI buffer were the
same boring non-compacted format.  But, since the advent of supervisor
states and the compacted format, the kernel buffer has diverged from the
format presented in the ABI.
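
For background (again, not from this patch): the two layouts can be told
apart from the XSAVE header at byte offset 512 of the buffer.  Bit 63 of
XCOMP_BV is set for the compacted XSAVES/XSAVEC layout and clear for the
non-compacted layout the ptrace ABI uses.  An illustrative helper:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define XSAVE_HDR_OFFSET	512
#define XCOMP_BV_COMPACTED	(1ULL << 63)

/* Hypothetical helper: inspect a raw XSAVE buffer's header. */
static bool xsave_buf_is_compacted(const void *xsave_buf)
{
	uint64_t xcomp_bv;

	/* XCOMP_BV is the second 64-bit word of the XSAVE header. */
	memcpy(&xcomp_bv, (const uint8_t *)xsave_buf + XSAVE_HDR_OFFSET + 8,
	       sizeof(xcomp_bv));
	return xcomp_bv & XCOMP_BV_COMPACTED;
}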

This leads to two paths in the kernel:
1. Effectively a verbatim copy_to_user() which just copies the kernel buffer
   out to userspace.  This is used when the kernel buffer is kept in the
   non-compacted form, which means that it shares a format with the ptrace
   ABI.
2. A one-state-at-a-time path: copy_xstate_to_kernel().  This is theoretically
   slower since it does a bunch of piecemeal copies.
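
A rough sketch of the difference between the two shapes (hypothetical helpers
and layout, not the kernel's actual code): path 1 is a single bulk copy that
is only valid when the kernel buffer already matches the ptrace ABI layout,
while path 2 rebuilds the ABI layout one feature at a time.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NR_XFEATURES	19			/* illustrative count */

struct xfeature_desc {				/* hypothetical descriptor */
	size_t abi_offset;			/* fixed non-compacted offset */
	size_t size;
	const void *src;			/* NULL: feature is in init state */
};

/* Path 1: verbatim copy, only valid for a non-compacted kernel buffer. */
static void copy_verbatim(void *dst, const void *kbuf, size_t len)
{
	memcpy(dst, kbuf, len);
}

/* Path 2: one state at a time, independent of the kernel-side layout. */
static void copy_piecemeal(uint8_t *dst, const struct xfeature_desc *desc)
{
	for (int i = 0; i < NR_XFEATURES; i++) {
		if (!desc[i].src)
			continue;	/* init-state feature: leave dst zeroed */
		memcpy(dst + desc[i].abi_offset, desc[i].src, desc[i].size);
	}
}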

Remove the verbatim copy case.  Speed probably does not matter in this path,
and the vast majority of new hardware will use the one-state-at-a-time path
anyway.  This ensures greater testing for the "slow" path.

This also makes enabling PKRU in this interface easier since a single path
can be patched instead of two.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210623121452.408457100@linutronix.de

commit 3a3351126e (parent 947f4947cf)
Author:    Dave Hansen, 2021-06-23 14:01:38 +02:00
Committer: Borislav Petkov
2 changed files with 6 additions and 24 deletions

--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -77,32 +77,14 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset,
 		struct membuf to)
 {
 	struct fpu *fpu = &target->thread.fpu;
-	struct xregs_state *xsave;
 
-	if (!boot_cpu_has(X86_FEATURE_XSAVE))
+	if (!cpu_feature_enabled(X86_FEATURE_XSAVE))
 		return -ENODEV;
 
-	xsave = &fpu->state.xsave;
-
 	fpu__prepare_read(fpu);
 
-	if (using_compacted_format()) {
-		copy_xstate_to_kernel(to, xsave);
-		return 0;
-	} else {
-		fpstate_sanitize_xstate(fpu);
-		/*
-		 * Copy the 48 bytes defined by the software into the xsave
-		 * area in the thread struct, so that we can copy the whole
-		 * area to user using one user_regset_copyout().
-		 */
-		memcpy(&xsave->i387.sw_reserved, xstate_fx_sw_bytes, sizeof(xstate_fx_sw_bytes));
-
-		/*
-		 * Copy the xstate memory layout.
-		 */
-		return membuf_write(&to, xsave, fpu_user_xstate_size);
-	}
+	copy_xstate_to_kernel(to, &fpu->state.xsave);
+	return 0;
 }
 
 int xstateregs_set(struct task_struct *target, const struct user_regset *regset,

--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -1069,11 +1069,11 @@ static void copy_feature(bool from_xstate, struct membuf *to, void *xstate,
 }
 
 /*
- * Convert from kernel XSAVES compacted format to standard format and copy
- * to a kernel-space ptrace buffer.
+ * Convert from kernel XSAVE or XSAVES compacted format to UABI
+ * non-compacted format and copy to a kernel-space ptrace buffer.
  *
  * It supports partial copy but pos always starts from zero. This is called
- * from xstateregs_get() and there we check the CPU has XSAVES.
+ * from xstateregs_get() and there we check the CPU has XSAVE.
  */
 void copy_xstate_to_kernel(struct membuf to, struct xregs_state *xsave)
 {