Merge series "spi: Various Cleanups" from Uwe Kleine-König <u.kleine-koenig@pengutronix.de>:

Hello,

while trying to understand how the spi framework makes use of the core
device driver stuff (to fix a deadlock) I found these simplifications
and improvements.

They are build-tested with allmodconfig on arm64, m68k, powerpc, riscv,
s390, sparc64 and x86_64.

Best regards
Uwe

Uwe Kleine-König (4):
  spi: Move comment about chipselect check to the right place
  spi: Remove unused function spi_busnum_to_master()
  spi: Reorder functions to simplify the next commit
  spi: Make several public functions private to spi.c

 Documentation/spi/spi-summary.rst |   8 -
 drivers/spi/spi.c                 | 237 ++++++++++++------------------
 include/linux/spi/spi.h           |  55 -------
 3 files changed, 95 insertions(+), 205 deletions(-)

base-commit: 9e1ff307c7
--
2.30.2
Commit a0ecee3201 by Mark Brown (2021-10-07 22:35:49 +01:00)
352 changed files with 4240 additions and 2519 deletions


@@ -31,11 +31,11 @@ properties:
   clocks:
     minItems: 1
-    maxItems: 3
+    maxItems: 7

   clock-names:
     minItems: 1
-    maxItems: 3
+    maxItems: 7

 required:
   - compatible
@@ -72,6 +72,32 @@ allOf:
           contains:
             enum:
               - qcom,sdm660-a2noc
+    then:
+      properties:
+        clocks:
+          items:
+            - description: Bus Clock.
+            - description: Bus A Clock.
+            - description: IPA Clock.
+            - description: UFS AXI Clock.
+            - description: Aggregate2 UFS AXI Clock.
+            - description: Aggregate2 USB3 AXI Clock.
+            - description: Config NoC USB2 AXI Clock.
+        clock-names:
+          items:
+            - const: bus
+            - const: bus_a
+            - const: ipa
+            - const: ufs_axi
+            - const: aggre2_ufs_axi
+            - const: aggre2_usb3_axi
+            - const: cfg_noc_usb2_axi
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
               - qcom,sdm660-bimc
               - qcom,sdm660-cnoc
               - qcom,sdm660-gnoc
@@ -91,6 +117,7 @@ examples:
   - |
     #include <dt-bindings/clock/qcom,rpmcc.h>
     #include <dt-bindings/clock/qcom,mmcc-sdm660.h>
+    #include <dt-bindings/clock/qcom,gcc-sdm660.h>

     bimc: interconnect@1008000 {
       compatible = "qcom,sdm660-bimc";
@@ -123,9 +150,20 @@ examples:
       compatible = "qcom,sdm660-a2noc";
       reg = <0x01704000 0xc100>;
       #interconnect-cells = <1>;
-      clock-names = "bus", "bus_a";
+      clock-names = "bus",
+                    "bus_a",
+                    "ipa",
+                    "ufs_axi",
+                    "aggre2_ufs_axi",
+                    "aggre2_usb3_axi",
+                    "cfg_noc_usb2_axi";
       clocks = <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
-               <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>;
+               <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>,
+               <&rpmcc RPM_SMD_IPA_CLK>,
+               <&gcc GCC_UFS_AXI_CLK>,
+               <&gcc GCC_AGGRE2_UFS_AXI_CLK>,
+               <&gcc GCC_AGGRE2_USB3_AXI_CLK>,
+               <&gcc GCC_CFG_NOC_USB2_AXI_CLK>;
     };

     mnoc: interconnect@1745000 {


@@ -132,20 +132,3 @@ On Family 17h and Family 18h CPUs, additional temperature sensors may report
 Core Complex Die (CCD) temperatures. Up to 8 such temperatures are reported
 as temp{3..10}_input, labeled Tccd{1..8}. Actual support depends on the CPU
 variant.
-
-Various Family 17h and 18h CPUs report voltage and current telemetry
-information. The following attributes may be reported.
-
-Attribute       Label   Description
-=============== ======= ================
-in0_input       Vcore   Core voltage
-in1_input       Vsoc    SoC voltage
-curr1_input     Icore   Core current
-curr2_input     Isoc    SoC current
-=============== ======= ================
-
-Current values are raw (unscaled) as reported by the CPU. Core current is
-reported as multiples of 1A / LSB. SoC is reported as multiples of 0.25A
-/ LSB. The real current is board specific. Reported currents should be seen
-as rough guidance, and should be scaled using sensors3.conf as appropriate
-for a given board.


@@ -336,14 +336,6 @@ certainly includes SPI devices hooked up through the card connectors!
 Non-static Configurations
 ^^^^^^^^^^^^^^^^^^^^^^^^^

-Developer boards often play by different rules than product boards, and one
-example is the potential need to hotplug SPI devices and/or controllers.
-
-For those cases you might need to use spi_busnum_to_master() to look
-up the spi bus master, and will likely need spi_new_device() to provide the
-board info based on the board that was hotplugged. Of course, you'd later
-call at least spi_unregister_device() when that board is removed.
-
 When Linux includes support for MMC/SD/SDIO/DataFlash cards through SPI, those
 configurations will also be dynamic. Fortunately, such devices all support
 basic device identification probes, so they should hotplug normally.
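The paragraph deleted above described the legacy hotplug flow built around
spi_busnum_to_master(), which patch 2 of this series removes as unused. For
orientation only — this is a made-up sketch, not code from the series; the
bus number and board-info values are invented — the old flow looked roughly
like this:

    #include <linux/spi/spi.h>

    /* Hypothetical hotplug handler: the bus number and board info are
     * illustrative values, not taken from the patch set. */
    static struct spi_device *hotplug_attach(void)
    {
            struct spi_board_info info = {
                    .modalias     = "spidev",   /* assumed driver binding */
                    .max_speed_hz = 1000000,
                    .chip_select  = 0,
            };
            struct spi_master *master = spi_busnum_to_master(1);

            if (!master)
                    return NULL;
            return spi_new_device(master, &info);   /* NULL on failure */
    }

    /* ... and when the hotplugged board goes away again: */
    static void hotplug_detach(struct spi_device *spi)
    {
            spi_unregister_device(spi);
    }

With no in-tree callers left, deleting the lookup helper (and this piece of
documentation) loses nothing.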


@@ -414,7 +414,8 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
 F:	drivers/acpi/pmic/

 ACPI THERMAL DRIVER
-M:	Zhang Rui <rui.zhang@intel.com>
+M:	Rafael J. Wysocki <rafael@kernel.org>
+R:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
 S:	Supported
 W:	https://01.org/linux-acpi
@@ -810,7 +811,7 @@ F:	Documentation/devicetree/bindings/dma/altr,msgdma.yaml
 F:	drivers/dma/altera-msgdma.c

 ALTERA PIO DRIVER
-M:	Joyce Ooi <joyce.ooi@intel.com>
+M:	Mun Yew Tham <mun.yew.tham@intel.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	drivers/gpio/gpio-altera.c
@@ -2961,7 +2962,7 @@ F:	crypto/async_tx/
 F:	include/linux/async_tx.h

 AT24 EEPROM DRIVER
-M:	Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux.git
@@ -3384,9 +3385,11 @@ F:	Documentation/networking/filter.rst
 F:	Documentation/userspace-api/ebpf/
 F:	arch/*/net/*
 F:	include/linux/bpf*
+F:	include/linux/btf*
 F:	include/linux/filter.h
 F:	include/trace/events/xdp.h
 F:	include/uapi/linux/bpf*
+F:	include/uapi/linux/btf*
 F:	include/uapi/linux/filter.h
 F:	kernel/bpf/
 F:	kernel/trace/bpf_trace.c
@@ -3820,7 +3823,6 @@ F:	drivers/scsi/mpi3mr/

 BROADCOM NETXTREME-E ROCE DRIVER
 M:	Selvin Xavier <selvin.xavier@broadcom.com>
-M:	Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
 W:	http://www.broadcom.com
@@ -4655,7 +4657,7 @@ W:	http://linux-cifs.samba.org/
 T:	git git://git.samba.org/sfrench/cifs-2.6.git
 F:	Documentation/admin-guide/cifs/
 F:	fs/cifs/
-F:	fs/cifs_common/
+F:	fs/smbfs_common/

 COMPACTPCI HOTPLUG CORE
 M:	Scott Murray <scott@spiteful.org>
@@ -7985,7 +7987,7 @@ F:	include/linux/gpio/regmap.h

 GPIO SUBSYSTEM
 M:	Linus Walleij <linus.walleij@linaro.org>
-M:	Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
@@ -10193,8 +10195,8 @@ M:	Hyunchul Lee <hyc.lee@gmail.com>
 L:	linux-cifs@vger.kernel.org
 S:	Maintained
 T:	git git://git.samba.org/ksmbd.git
-F:	fs/cifs_common/
 F:	fs/ksmbd/
+F:	fs/smbfs_common/

 KERNEL UNIT TESTING FRAMEWORK (KUnit)
 M:	Brendan Higgins <brendanhiggins@google.com>
@@ -11366,7 +11368,7 @@ F:	Documentation/devicetree/bindings/iio/proximity/maxbotix,mb1232.yaml
 F:	drivers/iio/proximity/mb1232.c

 MAXIM MAX77650 PMIC MFD DRIVER
-M:	Bartosz Golaszewski <bgolaszewski@baylibre.com>
+M:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/*/*max77650.yaml
@@ -17883,7 +17885,8 @@ M:	Olivier Moysan <olivier.moysan@foss.st.com>
 M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Maintained
-F:	Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml
+F:	Documentation/devicetree/bindings/iio/adc/st,stm32-dfsdm-adc.yaml
+F:	Documentation/devicetree/bindings/sound/st,stm32-*.yaml
 F:	sound/soc/stm/

 STM32 TIMER/LPTIMER DRIVERS
@@ -18547,13 +18550,14 @@ T:	git git://linuxtv.org/media_tree.git
 F:	drivers/media/radio/radio-raremono.c

 THERMAL
-M:	Zhang Rui <rui.zhang@intel.com>
+M:	Rafael J. Wysocki <rafael@kernel.org>
 M:	Daniel Lezcano <daniel.lezcano@linaro.org>
 R:	Amit Kucheria <amitk@kernel.org>
+R:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 Q:	https://patchwork.kernel.org/project/linux-pm/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/thermal/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git thermal
 F:	Documentation/devicetree/bindings/thermal/
 F:	drivers/thermal/
 F:	include/linux/cpu_cooling.h
@@ -18682,7 +18686,7 @@ F:	include/linux/clk/ti.h

 TI DAVINCI MACHINE SUPPORT
 M:	Sekhar Nori <nsekhar@ti.com>
-R:	Bartosz Golaszewski <bgolaszewski@baylibre.com>
+R:	Bartosz Golaszewski <brgl@bgdev.pl>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci.git


@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 15
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Opossums on Parade

 # *DOCUMENTATION*


@@ -54,7 +54,7 @@ $(obj)/kvm_nvhe.tmp.o: $(obj)/hyp.lds $(addprefix $(obj)/,$(hyp-obj)) FORCE
 # runtime. Because the hypervisor is part of the kernel binary, relocations
 # produce a kernel VA. We enumerate relocations targeting hyp at build time
 # and convert the kernel VAs at those positions to hyp VAs.
-$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel
+$(obj)/hyp-reloc.S: $(obj)/kvm_nvhe.tmp.o $(obj)/gen-hyprel FORCE
 	$(call if_changed,hyprel)

 # 5) Compile hyp-reloc.S and link it into the existing partially linked object.


@@ -50,9 +50,6 @@ static struct perf_guest_info_callbacks kvm_guest_cbs = {

 int kvm_perf_init(void)
 {
-	if (kvm_pmu_probe_pmuver() != ID_AA64DFR0_PMUVER_IMP_DEF && !is_protected_kvm_enabled())
-		static_branch_enable(&kvm_arm_pmu_available);
-
 	return perf_register_guest_info_callbacks(&kvm_guest_cbs);
 }


@@ -740,7 +740,14 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_create_perf_event(vcpu, select_idx);
 }

-int kvm_pmu_probe_pmuver(void)
+void kvm_host_pmu_init(struct arm_pmu *pmu)
+{
+	if (pmu->pmuver != 0 && pmu->pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+	    !kvm_arm_support_pmu_v3() && !is_protected_kvm_enabled())
+		static_branch_enable(&kvm_arm_pmu_available);
+}
+
+static int kvm_pmu_probe_pmuver(void)
 {
 	struct perf_event_attr attr = { };
 	struct perf_event *event;


@@ -15,7 +15,6 @@
 #include <asm/unistd.h>
 #include <asm/errno.h>
 #include <asm/setup.h>
-#include <asm/segment.h>
 #include <asm/traps.h>
 #include <asm/asm-offsets.h>
 #include <asm/entry.h>
@@ -25,7 +24,6 @@
 .globl system_call
 .globl resume
 .globl ret_from_exception
-.globl ret_from_signal
 .globl sys_call_table
 .globl bad_interrupt
 .globl inthandler1
@@ -59,8 +57,6 @@ do_trace:
 	subql	#4,%sp		/* dummy return address */
 	SAVE_SWITCH_STACK
 	jbsr	syscall_trace_leave
-
-ret_from_signal:
 	RESTORE_SWITCH_STACK
 	addql	#4,%sp
 	jra	ret_from_exception

View File

@@ -29,7 +29,6 @@ config M68K
 	select NO_DMA if !MMU && !COLDFIRE
 	select OLD_SIGACTION
 	select OLD_SIGSUSPEND3
-	select SET_FS
 	select UACCESS_MEMCPY if !MMU
 	select VIRT_TO_BUS
 	select ZONE_DMA


@@ -31,7 +31,6 @@
 #include <asm/thread_info.h>
 #include <asm/errno.h>
 #include <asm/setup.h>
-#include <asm/segment.h>
 #include <asm/asm-offsets.h>
 #include <asm/entry.h>
@@ -51,7 +50,6 @@ sw_usp:
 .globl system_call
 .globl resume
 .globl ret_from_exception
-.globl ret_from_signal
 .globl sys_call_table
 .globl inthandler
@@ -98,8 +96,6 @@ ENTRY(system_call)
 	subql	#4,%sp		/* dummy return address */
 	SAVE_SWITCH_STACK
 	jbsr	syscall_trace_leave
-
-ret_from_signal:
 	RESTORE_SWITCH_STACK
 	addql	#4,%sp


@@ -9,7 +9,6 @@
 #define __ASM_M68K_PROCESSOR_H

 #include <linux/thread_info.h>
-#include <asm/segment.h>
 #include <asm/fpu.h>
 #include <asm/ptrace.h>
@@ -75,11 +74,37 @@ static inline void wrusp(unsigned long usp)
 #define TASK_UNMAPPED_BASE	0
 #endif

+/* Address spaces (or Function Codes in Motorola lingo) */
+#define USER_DATA     1
+#define USER_PROGRAM  2
+#define SUPER_DATA    5
+#define SUPER_PROGRAM 6
+#define CPU_SPACE     7
+
+#ifdef CONFIG_CPU_HAS_ADDRESS_SPACES
+/*
+ * Set the SFC/DFC registers for special MM operations.  For most normal
+ * operation these remain set to USER_DATA for the uaccess routines.
+ */
+static inline void set_fc(unsigned long val)
+{
+	WARN_ON_ONCE(in_interrupt());
+
+	__asm__ __volatile__ ("movec %0,%/sfc\n\t"
+			      "movec %0,%/dfc\n\t"
+			      : /* no outputs */ : "r" (val) : "memory");
+}
+#else
+static inline void set_fc(unsigned long val)
+{
+}
+#endif /* CONFIG_CPU_HAS_ADDRESS_SPACES */
+
 struct thread_struct {
 	unsigned long  ksp;		/* kernel stack pointer */
 	unsigned long  usp;		/* user stack pointer */
 	unsigned short sr;		/* saved status register */
-	unsigned short fs;		/* saved fs (sfc, dfc) */
+	unsigned short fc;		/* saved fc (sfc, dfc) */
 	unsigned long  crp[2];		/* cpu root pointer */
 	unsigned long  esp0;		/* points to SR of stack frame */
 	unsigned long  faddr;		/* info about last fault */
@@ -92,7 +117,7 @@ struct thread_struct {
 #define INIT_THREAD  { \
 	.ksp	= sizeof(init_stack) + (unsigned long) init_stack, \
 	.sr	= PS_S, \
-	.fs	= __KERNEL_DS, \
+	.fc	= USER_DATA, \
 }

 /*
/* /*


@@ -1,59 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _M68K_SEGMENT_H
-#define _M68K_SEGMENT_H
-
-/* define constants */
-/* Address spaces (FC0-FC2) */
-#define USER_DATA     (1)
-#ifndef __USER_DS
-#define __USER_DS     (USER_DATA)
-#endif
-#define USER_PROGRAM  (2)
-#define SUPER_DATA    (5)
-#ifndef __KERNEL_DS
-#define __KERNEL_DS   (SUPER_DATA)
-#endif
-#define SUPER_PROGRAM (6)
-#define CPU_SPACE     (7)
-
-#ifndef __ASSEMBLY__
-
-typedef struct {
-	unsigned long seg;
-} mm_segment_t;
-
-#define MAKE_MM_SEG(s)	((mm_segment_t) { (s) })
-
-#ifdef CONFIG_CPU_HAS_ADDRESS_SPACES
-/*
- * Get/set the SFC/DFC registers for MOVES instructions
- */
-#define USER_DS		MAKE_MM_SEG(__USER_DS)
-#define KERNEL_DS	MAKE_MM_SEG(__KERNEL_DS)
-
-static inline mm_segment_t get_fs(void)
-{
-	mm_segment_t _v;
-	__asm__ ("movec %/dfc,%0":"=r" (_v.seg):);
-	return _v;
-}
-
-static inline void set_fs(mm_segment_t val)
-{
-	__asm__ __volatile__ ("movec %0,%/sfc\n\t"
-			      "movec %0,%/dfc\n\t"
-			      : /* no outputs */ : "r" (val.seg) : "memory");
-}
-
-#else
-#define USER_DS		MAKE_MM_SEG(TASK_SIZE)
-#define KERNEL_DS	MAKE_MM_SEG(0xFFFFFFFF)
-#define get_fs()	(current_thread_info()->addr_limit)
-#define set_fs(x)	(current_thread_info()->addr_limit = (x))
-#endif
-
-#define uaccess_kernel()	(get_fs().seg == KERNEL_DS.seg)
-
-#endif /* __ASSEMBLY__ */
-#endif /* _M68K_SEGMENT_H */


@@ -4,7 +4,6 @@

 #include <asm/types.h>
 #include <asm/page.h>
-#include <asm/segment.h>

 /*
  * On machines with 4k pages we default to an 8k thread size, though we
@@ -27,7 +26,6 @@
 struct thread_info {
 	struct task_struct	*task;		/* main task structure */
 	unsigned long		flags;
-	mm_segment_t		addr_limit;	/* thread address space */
 	int			preempt_count;	/* 0 => preemptable, <0 => BUG */
 	__u32			cpu;		/* should always be 0 on m68k */
 	unsigned long		tp_value;	/* thread pointer */
@@ -37,7 +35,6 @@ struct thread_info {
 #define INIT_THREAD_INFO(tsk) \
 { \
 	.task		= &tsk, \
-	.addr_limit	= KERNEL_DS, \
 	.preempt_count	= INIT_PREEMPT_COUNT, \
 }


@@ -13,13 +13,12 @@ static inline void flush_tlb_kernel_page(void *addr)
 	if (CPU_IS_COLDFIRE) {
 		mmu_write(MMUOR, MMUOR_CNL);
 	} else if (CPU_IS_040_OR_060) {
-		mm_segment_t old_fs = get_fs();
-		set_fs(KERNEL_DS);
+		set_fc(SUPER_DATA);
 		__asm__ __volatile__(".chip 68040\n\t"
 				     "pflush (%0)\n\t"
 				     ".chip 68k"
 				     : : "a" (addr));
-		set_fs(old_fs);
+		set_fc(USER_DATA);
 	} else if (CPU_IS_020_OR_030)
 		__asm__ __volatile__("pflush #4,#4,(%0)" : : "a" (addr));
 }
@@ -84,12 +83,8 @@ static inline void flush_tlb_mm(struct mm_struct *mm)

 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	if (vma->vm_mm == current->active_mm) {
-		mm_segment_t old_fs = force_uaccess_begin();
-
+	if (vma->vm_mm == current->active_mm)
 		__flush_tlb_one(addr);
-		force_uaccess_end(old_fs);
-	}
 }

 static inline void flush_tlb_range(struct vm_area_struct *vma,


@@ -267,6 +267,10 @@ struct frame {
 	} un;
 };

+#ifdef CONFIG_M68040
+asmlinkage void berr_040cleanup(struct frame *fp);
+#endif
+
 #endif /* __ASSEMBLY__ */

 #endif /* _M68K_TRAPS_H */


@@ -9,13 +9,16 @@
 */
 #include <linux/compiler.h>
 #include <linux/types.h>
-#include <asm/segment.h>
 #include <asm/extable.h>

 /* We let the MMU do all checking */
 static inline int access_ok(const void __user *addr,
 			    unsigned long size)
 {
+	/*
+	 * XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
+	 * for TASK_SIZE!
+	 */
 	return 1;
 }
@@ -35,12 +38,9 @@ static inline int access_ok(const void __user *addr,
 #define	MOVES	"move"
 #endif

-extern int __put_user_bad(void);
-extern int __get_user_bad(void);
-
-#define __put_user_asm(res, x, ptr, bwl, reg, err) \
+#define __put_user_asm(inst, res, x, ptr, bwl, reg, err) \
 asm volatile ("\n" \
-	"1:	"MOVES"."#bwl"	%2,%1\n" \
+	"1:	"inst"."#bwl"	%2,%1\n" \
 	"2:\n" \
 	"	.section .fixup,\"ax\"\n" \
 	"	.even\n" \
@@ -56,6 +56,31 @@ asm volatile ("\n" \
 	: "+d" (res), "=m" (*(ptr)) \
 	: #reg (x), "i" (err))

+#define __put_user_asm8(inst, res, x, ptr) \
+do { \
+	const void *__pu_ptr = (const void __force *)(ptr); \
+ \
+	asm volatile ("\n" \
+		"1:	"inst".l %2,(%1)+\n" \
+		"2:	"inst".l %R2,(%1)\n" \
+		"3:\n" \
+		"	.section .fixup,\"ax\"\n" \
+		"	.even\n" \
+		"10:	movel %3,%0\n" \
+		"	jra 3b\n" \
+		"	.previous\n" \
+		"\n" \
+		"	.section __ex_table,\"a\"\n" \
+		"	.align 4\n" \
+		"	.long 1b,10b\n" \
+		"	.long 2b,10b\n" \
+		"	.long 3b,10b\n" \
+		"	.previous" \
+		: "+d" (res), "+a" (__pu_ptr) \
+		: "r" (x), "i" (-EFAULT) \
+		: "memory"); \
+} while (0)
+
 /*
  * These are the main single-value transfer routines.  They automatically
  * use the right size if we just have the right pointer type.
@@ -68,51 +93,29 @@ asm volatile ("\n" \
 	__chk_user_ptr(ptr); \
 	switch (sizeof (*(ptr))) { \
 	case 1: \
-		__put_user_asm(__pu_err, __pu_val, ptr, b, d, -EFAULT); \
+		__put_user_asm(MOVES, __pu_err, __pu_val, ptr, b, d, -EFAULT); \
 		break; \
 	case 2: \
-		__put_user_asm(__pu_err, __pu_val, ptr, w, r, -EFAULT); \
+		__put_user_asm(MOVES, __pu_err, __pu_val, ptr, w, r, -EFAULT); \
 		break; \
 	case 4: \
-		__put_user_asm(__pu_err, __pu_val, ptr, l, r, -EFAULT); \
+		__put_user_asm(MOVES, __pu_err, __pu_val, ptr, l, r, -EFAULT); \
 		break; \
 	case 8: \
-	    { \
-		const void __user *__pu_ptr = (ptr); \
-		asm volatile ("\n" \
-			"1:	"MOVES".l	%2,(%1)+\n" \
-			"2:	"MOVES".l	%R2,(%1)\n" \
-			"3:\n" \
-			"	.section .fixup,\"ax\"\n" \
-			"	.even\n" \
-			"10:	movel %3,%0\n" \
-			"	jra 3b\n" \
-			"	.previous\n" \
-			"\n" \
-			"	.section __ex_table,\"a\"\n" \
-			"	.align 4\n" \
-			"	.long 1b,10b\n" \
-			"	.long 2b,10b\n" \
-			"	.long 3b,10b\n" \
-			"	.previous" \
-			: "+d" (__pu_err), "+a" (__pu_ptr) \
-			: "r" (__pu_val), "i" (-EFAULT) \
-			: "memory"); \
+		__put_user_asm8(MOVES, __pu_err, __pu_val, ptr); \
 		break; \
-	    } \
 	default: \
-		__pu_err = __put_user_bad(); \
-		break; \
+		BUILD_BUG(); \
 	} \
 	__pu_err; \
 })

 #define put_user(x, ptr)	__put_user(x, ptr)

-#define __get_user_asm(res, x, ptr, type, bwl, reg, err) ({ \
+#define __get_user_asm(inst, res, x, ptr, type, bwl, reg, err) ({ \
 	type __gu_val; \
 	asm volatile ("\n" \
-		"1:	"MOVES"."#bwl"	%2,%1\n" \
+		"1:	"inst"."#bwl"	%2,%1\n" \
 		"2:\n" \
 		"	.section .fixup,\"ax\"\n" \
 		"	.even\n" \
@@ -130,53 +133,57 @@ asm volatile ("\n" \
 	(x) = (__force typeof(*(ptr)))(__force unsigned long)__gu_val; \
 })

+#define __get_user_asm8(inst, res, x, ptr) \
+do { \
+	const void *__gu_ptr = (const void __force *)(ptr); \
+	union { \
+		u64 l; \
+		__typeof__(*(ptr)) t; \
+	} __gu_val; \
+ \
+	asm volatile ("\n" \
+		"1:	"inst".l (%2)+,%1\n" \
+		"2:	"inst".l (%2),%R1\n" \
+		"3:\n" \
+		"	.section .fixup,\"ax\"\n" \
+		"	.even\n" \
+		"10:	move.l	%3,%0\n" \
+		"	sub.l	%1,%1\n" \
+		"	sub.l	%R1,%R1\n" \
+		"	jra	3b\n" \
+		"	.previous\n" \
+		"\n" \
+		"	.section __ex_table,\"a\"\n" \
+		"	.align	4\n" \
+		"	.long	1b,10b\n" \
+		"	.long	2b,10b\n" \
+		"	.previous" \
+		: "+d" (res), "=&r" (__gu_val.l), \
+		  "+a" (__gu_ptr) \
+		: "i" (-EFAULT) \
+		: "memory"); \
+	(x) = __gu_val.t; \
+} while (0)
+
 #define __get_user(x, ptr) \
 ({ \
 	int __gu_err = 0; \
 	__chk_user_ptr(ptr); \
 	switch (sizeof(*(ptr))) { \
 	case 1: \
-		__get_user_asm(__gu_err, x, ptr, u8, b, d, -EFAULT); \
+		__get_user_asm(MOVES, __gu_err, x, ptr, u8, b, d, -EFAULT); \
 		break; \
 	case 2: \
-		__get_user_asm(__gu_err, x, ptr, u16, w, r, -EFAULT); \
+		__get_user_asm(MOVES, __gu_err, x, ptr, u16, w, r, -EFAULT); \
 		break; \
 	case 4: \
-		__get_user_asm(__gu_err, x, ptr, u32, l, r, -EFAULT); \
+		__get_user_asm(MOVES, __gu_err, x, ptr, u32, l, r, -EFAULT); \
 		break; \
-	case 8: { \
-		const void __user *__gu_ptr = (ptr); \
-		union { \
-			u64 l; \
-			__typeof__(*(ptr)) t; \
-		} __gu_val; \
-		asm volatile ("\n" \
-			"1:	"MOVES".l	(%2)+,%1\n" \
-			"2:	"MOVES".l	(%2),%R1\n" \
-			"3:\n" \
-			"	.section .fixup,\"ax\"\n" \
-			"	.even\n" \
-			"10:	move.l	%3,%0\n" \
-			"	sub.l	%1,%1\n" \
-			"	sub.l	%R1,%R1\n" \
-			"	jra	3b\n" \
-			"	.previous\n" \
-			"\n" \
-			"	.section __ex_table,\"a\"\n" \
-			"	.align	4\n" \
-			"	.long	1b,10b\n" \
-			"	.long	2b,10b\n" \
-			"	.previous" \
-			: "+d" (__gu_err), "=&r" (__gu_val.l), \
-			  "+a" (__gu_ptr) \
-			: "i" (-EFAULT) \
-			: "memory"); \
-		(x) = __gu_val.t; \
+	case 8: \
+		__get_user_asm8(MOVES, __gu_err, x, ptr); \
 		break; \
-	} \
 	default: \
-		__gu_err = __get_user_bad(); \
-		break; \
+		BUILD_BUG(); \
 	} \
 	__gu_err; \
 })
@@ -322,16 +329,19 @@ __constant_copy_to_user(void __user *to, const void *from, unsigned long n)

 	switch (n) {
 	case 1:
-		__put_user_asm(res, *(u8 *)from, (u8 __user *)to, b, d, 1);
+		__put_user_asm(MOVES, res, *(u8 *)from, (u8 __user *)to,
+				b, d, 1);
 		break;
 	case 2:
-		__put_user_asm(res, *(u16 *)from, (u16 __user *)to, w, r, 2);
+		__put_user_asm(MOVES, res, *(u16 *)from, (u16 __user *)to,
+				w, r, 2);
 		break;
 	case 3:
 		__constant_copy_to_user_asm(res, to, from, tmp, 3, w, b,);
 		break;
 	case 4:
-		__put_user_asm(res, *(u32 *)from, (u32 __user *)to, l, r, 4);
+		__put_user_asm(MOVES, res, *(u32 *)from, (u32 __user *)to,
+				l, r, 4);
 		break;
 	case 5:
 		__constant_copy_to_user_asm(res, to, from, tmp, 5, l, b,);
@@ -380,8 +390,65 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 #define INLINE_COPY_FROM_USER
 #define INLINE_COPY_TO_USER

-#define user_addr_max() \
-	(uaccess_kernel() ? ~0UL : TASK_SIZE)
+#define HAVE_GET_KERNEL_NOFAULT
+
+#define __get_kernel_nofault(dst, src, type, err_label) \
+do { \
+	type *__gk_dst = (type *)(dst); \
+	type *__gk_src = (type *)(src); \
+	int __gk_err = 0; \
+ \
+	switch (sizeof(type)) { \
+	case 1: \
+		__get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+				u8, b, d, -EFAULT); \
+		break; \
+	case 2: \
+		__get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+				u16, w, r, -EFAULT); \
+		break; \
+	case 4: \
+		__get_user_asm("move", __gk_err, *__gk_dst, __gk_src, \
+				u32, l, r, -EFAULT); \
+		break; \
+	case 8: \
+		__get_user_asm8("move", __gk_err, *__gk_dst, __gk_src); \
+		break; \
+	default: \
+		BUILD_BUG(); \
+	} \
+	if (unlikely(__gk_err)) \
+		goto err_label; \
+} while (0)
+
+#define __put_kernel_nofault(dst, src, type, err_label) \
+do { \
+	type __pk_src = *(type *)(src); \
+	type *__pk_dst = (type *)(dst); \
+	int __pk_err = 0; \
+ \
+	switch (sizeof(type)) { \
+	case 1: \
+		__put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+				b, d, -EFAULT); \
+		break; \
+	case 2: \
+		__put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+				w, r, -EFAULT); \
+		break; \
+	case 4: \
+		__put_user_asm("move", __pk_err, __pk_src, __pk_dst, \
+				l, r, -EFAULT); \
+		break; \
+	case 8: \
+		__put_user_asm8("move", __pk_err, __pk_src, __pk_dst); \
+		break; \
+	default: \
+		BUILD_BUG(); \
+	} \
+	if (unlikely(__pk_err)) \
+		goto err_label; \
+} while (0)

 extern long strncpy_from_user(char *dst, const char __user *src, long count);
 extern __must_check long strnlen_user(const char __user *str, long n);
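Defining HAVE_GET_KERNEL_NOFAULT lets the generic mm/maccess.c helpers expand
to the per-size cases above instead of faking a kernel address space with
set_fs()/KERNEL_DS. A minimal sketch of a caller — illustrative only, not part
of the patch:

    #include <linux/uaccess.h>

    /* Read a long from an arbitrary kernel address without oopsing;
     * returns 0 on success or -EFAULT if the access faulted. */
    static long peek_kernel_long(const void *addr, long *val)
    {
            return copy_from_kernel_nofault(val, addr, sizeof(*val));
    }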


@@ -31,7 +31,7 @@ int main(void)
 	DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
 	DEFINE(THREAD_USP, offsetof(struct thread_struct, usp));
 	DEFINE(THREAD_SR, offsetof(struct thread_struct, sr));
-	DEFINE(THREAD_FS, offsetof(struct thread_struct, fs));
+	DEFINE(THREAD_FC, offsetof(struct thread_struct, fc));
 	DEFINE(THREAD_CRP, offsetof(struct thread_struct, crp));
 	DEFINE(THREAD_ESP0, offsetof(struct thread_struct, esp0));
 	DEFINE(THREAD_FPREG, offsetof(struct thread_struct, fp));


@@ -36,7 +36,6 @@
 #include <linux/linkage.h>
 #include <asm/errno.h>
 #include <asm/setup.h>
-#include <asm/segment.h>
 #include <asm/traps.h>
 #include <asm/unistd.h>
 #include <asm/asm-offsets.h>
@@ -78,20 +77,38 @@ ENTRY(__sys_clone3)

 ENTRY(sys_sigreturn)
 	SAVE_SWITCH_STACK
-	movel	%sp,%sp@-		  | switch_stack pointer
-	pea	%sp@(SWITCH_STACK_SIZE+4) | pt_regs pointer
+	movel	%sp,%a1			  | switch_stack pointer
+	lea	%sp@(SWITCH_STACK_SIZE),%a0 | pt_regs pointer
+	lea	%sp@(-84),%sp		  | leave a gap
+	movel	%a1,%sp@-
+	movel	%a0,%sp@-
 	jbsr	do_sigreturn
-	addql	#8,%sp
-	RESTORE_SWITCH_STACK
-	rts
+	jra	1f			  | shared with rt_sigreturn()

 ENTRY(sys_rt_sigreturn)
 	SAVE_SWITCH_STACK
-	movel	%sp,%sp@-		  | switch_stack pointer
-	pea	%sp@(SWITCH_STACK_SIZE+4) | pt_regs pointer
+	movel	%sp,%a1			  | switch_stack pointer
+	lea	%sp@(SWITCH_STACK_SIZE),%a0 | pt_regs pointer
+	lea	%sp@(-84),%sp		  | leave a gap
+	movel	%a1,%sp@-
+	movel	%a0,%sp@-
+	| stack contents:
+	|   [original pt_regs address] [original switch_stack address]
+	|   [gap] [switch_stack] [pt_regs] [exception frame]
 	jbsr	do_rt_sigreturn
-	addql	#8,%sp
+
+1:
+	| stack contents now:
+	|   [original pt_regs address] [original switch_stack address]
+	|   [unused part of the gap] [moved switch_stack] [moved pt_regs]
+	|   [replacement exception frame]
+	| return value of do_{rt_,}sigreturn() points to moved switch_stack.
+
+	movel	%d0,%sp			  | discard the leftover junk
 	RESTORE_SWITCH_STACK
+	| stack contents now is just [syscall return address] [pt_regs] [frame]
+	| return pt_regs.d0
+	movel	%sp@(PT_OFF_D0+4),%d0
 	rts

 ENTRY(buserr)
@@ -182,25 +199,6 @@ do_trace_exit:
 	addql	#4,%sp
 	jra	.Lret_from_exception

-ENTRY(ret_from_signal)
-	movel	%curptr@(TASK_STACK),%a1
-	tstb	%a1@(TINFO_FLAGS+2)
-	jge	1f
-	jbsr	syscall_trace
-1:	RESTORE_SWITCH_STACK
-	addql	#4,%sp
-/* on 68040 complete pending writebacks if any */
-#ifdef CONFIG_M68040
-	bfextu	%sp@(PT_OFF_FORMATVEC){#0,#4},%d0
-	subql	#7,%d0				| bus error frame ?
-	jbne	1f
-	movel	%sp,%sp@-
-	jbsr	berr_040cleanup
-	addql	#4,%sp
-1:
-#endif
-	jra	.Lret_from_exception

 ENTRY(system_call)
 	SAVE_ALL_SYS
@@ -338,7 +336,7 @@ resume:
 	/* save fs (sfc,%dfc) (may be pointing to kernel memory) */
 	movec	%sfc,%d0
-	movew	%d0,%a0@(TASK_THREAD+THREAD_FS)
+	movew	%d0,%a0@(TASK_THREAD+THREAD_FC)

 	/* save usp */
 	/* it is better to use a movel here instead of a movew 8*) */
@@ -424,7 +422,7 @@ resume:
 	movel	%a0,%usp

 	/* restore fs (sfc,%dfc) */
-	movew	%a1@(TASK_THREAD+THREAD_FS),%a0
+	movew	%a1@(TASK_THREAD+THREAD_FC),%a0
 	movec	%a0,%sfc
 	movec	%a0,%dfc


@@ -92,7 +92,7 @@ void show_regs(struct pt_regs * regs)

 void flush_thread(void)
 {
-	current->thread.fs = __USER_DS;
+	current->thread.fc = USER_DATA;
 #ifdef CONFIG_FPU
 	if (!FPU_IS_EMU) {
 		unsigned long zero = 0;
@@ -155,7 +155,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp, unsigned long arg,
 	 * Must save the current SFC/DFC value, NOT the value when
 	 * the parent was last descheduled - RGH 10-08-96
 	 */
-	p->thread.fs = get_fs().seg;
+	p->thread.fc = USER_DATA;

 	if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
 		/* kernel thread */


@@ -447,7 +447,7 @@ static inline void save_fpu_state(struct sigcontext *sc, struct pt_regs *regs)

 	if (CPU_IS_060 ? sc->sc_fpstate[2] : sc->sc_fpstate[0]) {
 		fpu_version = sc->sc_fpstate[0];
-		if (CPU_IS_020_OR_030 &&
+		if (CPU_IS_020_OR_030 && !regs->stkadj &&
 		    regs->vector >= (VEC_FPBRUC * 4) &&
 		    regs->vector <= (VEC_FPNAN * 4)) {
 			/* Clear pending exception in 68882 idle frame */
@@ -510,7 +510,7 @@ static inline int rt_save_fpu_state(struct ucontext __user *uc, struct pt_regs *
 		if (!(CPU_IS_060 || CPU_IS_COLDFIRE))
 			context_size = fpstate[1];
 		fpu_version = fpstate[0];
-		if (CPU_IS_020_OR_030 &&
+		if (CPU_IS_020_OR_030 && !regs->stkadj &&
 		    regs->vector >= (VEC_FPBRUC * 4) &&
 		    regs->vector <= (VEC_FPNAN * 4)) {
 			/* Clear pending exception in 68882 idle frame */
@@ -641,56 +641,35 @@ static inline void siginfo_build_tests(void)
 static int mangle_kernel_stack(struct pt_regs *regs, int formatvec,
 			       void __user *fp)
 {
-	int fsize = frame_extra_sizes(formatvec >> 12);
-	if (fsize < 0) {
+	int extra = frame_extra_sizes(formatvec >> 12);
+	char buf[sizeof_field(struct frame, un)];
+
+	if (extra < 0) {
 		/*
 		 * user process trying to return with weird frame format
 		 */
 		pr_debug("user process returning with weird frame format\n");
-		return 1;
+		return -1;
 	}
-	if (!fsize) {
-		regs->format = formatvec >> 12;
-		regs->vector = formatvec & 0xfff;
-	} else {
-		struct switch_stack *sw = (struct switch_stack *)regs - 1;
-		/* yes, twice as much as max(sizeof(frame.un.fmt<x>)) */
-		unsigned long buf[sizeof_field(struct frame, un) / 2];
-
-		/* that'll make sure that expansion won't crap over data */
-		if (copy_from_user(buf + fsize / 4, fp, fsize))
-			return 1;
-
-		/* point of no return */
-		regs->format = formatvec >> 12;
-		regs->vector = formatvec & 0xfff;
-#define frame_offset (sizeof(struct pt_regs)+sizeof(struct switch_stack))
-		__asm__ __volatile__ (
-#ifdef CONFIG_COLDFIRE
-			 "   movel %0,%/sp\n\t"
-			 "   bra ret_from_signal\n"
-#else
-			 "   movel %0,%/a0\n\t"
-			 "   subl %1,%/a0\n\t"     /* make room on stack */
-			 "   movel %/a0,%/sp\n\t"  /* set stack pointer */
-			 /* move switch_stack and pt_regs */
-			 "1: movel %0@+,%/a0@+\n\t"
-			 "   dbra %2,1b\n\t"
-			 "   lea %/sp@(%c3),%/a0\n\t" /* add offset of fmt */
-			 "   lsrl  #2,%1\n\t"
-			 "   subql #1,%1\n\t"
-			 /* copy to the gap we'd made */
-			 "2: movel %4@+,%/a0@+\n\t"
-			 "   dbra %1,2b\n\t"
-			 "   bral ret_from_signal\n"
+	if (extra && copy_from_user(buf, fp, extra))
+		return -1;
+	regs->format = formatvec >> 12;
+	regs->vector = formatvec & 0xfff;
+	if (extra) {
+		void *p = (struct switch_stack *)regs - 1;
+		struct frame *new = (void *)regs - extra;
+		int size = sizeof(struct pt_regs)+sizeof(struct switch_stack);
+
+		memmove(p - extra, p, size);
+		memcpy(p - extra + size, buf, extra);
+		current->thread.esp0 = (unsigned long)&new->ptregs;
+#ifdef CONFIG_M68040
+		/* on 68040 complete pending writebacks if any */
+		if (new->ptregs.format == 7) // bus error frame
+			berr_040cleanup(new);
 #endif
-			 : /* no outputs, it doesn't ever return */
-			 : "a" (sw), "d" (fsize), "d" (frame_offset/4-1),
-			   "n" (frame_offset), "a" (buf + fsize/4)
-			 : "a0");
-#undef frame_offset
 	}
-	return 0;
+	return extra;
 }

 static inline int
@@ -698,7 +677,6 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *usc, void __u
 {
 	int formatvec;
 	struct sigcontext context;
-	int err = 0;

 	siginfo_build_tests();
@@ -707,7 +685,7 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *usc, void __u

 	/* get previous context */
 	if (copy_from_user(&context, usc, sizeof(context)))
-		goto badframe;
+		return -1;

 	/* restore passed registers */
 	regs->d0 = context.sc_d0;
@@ -720,15 +698,10 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *usc, void __u
 	wrusp(context.sc_usp);
 	formatvec = context.sc_formatvec;

-	err = restore_fpu_state(&context);
+	if (restore_fpu_state(&context))
+		return -1;

-	if (err || mangle_kernel_stack(regs, formatvec, fp))
-		goto badframe;
-
-	return 0;
-
-badframe:
-	return 1;
+	return mangle_kernel_stack(regs, formatvec, fp);
 }

 static inline int
@@ -745,7 +718,7 @@ rt_restore_ucontext(struct pt_regs *regs, struct switch_stack *sw,

 	err = __get_user(temp, &uc->uc_mcontext.version);
 	if (temp != MCONTEXT_VERSION)
-		goto badframe;
+		return -1;
 	/* restore passed registers */
 	err |= __get_user(regs->d0, &gregs[0]);
 	err |= __get_user(regs->d1, &gregs[1]);
@@ -774,22 +747,17 @@ rt_restore_ucontext(struct pt_regs *regs, struct switch_stack *sw,
 	err |= restore_altstack(&uc->uc_stack);

 	if (err)
-		goto badframe;
+		return -1;

-	if (mangle_kernel_stack(regs, temp, &uc->uc_extra))
-		goto badframe;
-
-	return 0;
-
-badframe:
-	return 1;
+	return mangle_kernel_stack(regs, temp, &uc->uc_extra);
 }

-asmlinkage int do_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
+asmlinkage void *do_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
 {
 	unsigned long usp = rdusp();
 	struct sigframe __user *frame = (struct sigframe __user *)(usp - 4);
 	sigset_t set;
+	int size;

 	if (!access_ok(frame, sizeof(*frame)))
 		goto badframe;
@@ -801,20 +769,22 @@ asmlinkage int do_sigreturn(struct pt_regs *regs, struct switch_stack *sw)

 	set_current_blocked(&set);

-	if (restore_sigcontext(regs, &frame->sc, frame + 1))
+	size = restore_sigcontext(regs, &frame->sc, frame + 1);
+	if (size < 0)
 		goto badframe;
-	return regs->d0;
+	return (void *)sw - size;

 badframe:
 	force_sig(SIGSEGV);
-	return 0;
+	return sw;
 }

-asmlinkage int do_rt_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
+asmlinkage void *do_rt_sigreturn(struct pt_regs *regs, struct switch_stack *sw)
 {
 	unsigned long usp = rdusp();
 	struct rt_sigframe __user *frame = (struct rt_sigframe __user *)(usp - 4);
 	sigset_t set;
+	int size;

 	if (!access_ok(frame, sizeof(*frame)))
 		goto badframe;
@@ -823,27 +793,34 @@ asmlinkage int do_rt_sigreturn(struct pt_regs *regs, struct switch_stack *sw)

 	set_current_blocked(&set);

-	if (rt_restore_ucontext(regs, sw, &frame->uc))
+	size = rt_restore_ucontext(regs, sw, &frame->uc);
+	if (size < 0)
 		goto badframe;
-	return regs->d0;
+	return (void *)sw - size;

 badframe:
 	force_sig(SIGSEGV);
-	return 0;
+	return sw;
+}
+
+static inline struct pt_regs *rte_regs(struct pt_regs *regs)
+{
+	return (void *)regs + regs->stkadj;
 }

 static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
 			     unsigned long mask)
 {
+	struct pt_regs *tregs = rte_regs(regs);
 	sc->sc_mask = mask;
 	sc->sc_usp = rdusp();
 	sc->sc_d0 = regs->d0;
 	sc->sc_d1 = regs->d1;
 	sc->sc_a0 = regs->a0;
 	sc->sc_a1 = regs->a1;
-	sc->sc_sr = regs->sr;
-	sc->sc_pc = regs->pc;
-	sc->sc_formatvec = regs->format << 12 | regs->vector;
+	sc->sc_sr = tregs->sr;
+	sc->sc_pc = tregs->pc;
+	sc->sc_formatvec = tregs->format << 12 | tregs->vector;
 	save_a5_state(sc, regs);
 	save_fpu_state(sc, regs);
 }
@@ -851,6 +828,7 @@ static void setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
 static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
 {
 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
+	struct pt_regs *tregs = rte_regs(regs);
 	greg_t __user *gregs = uc->uc_mcontext.gregs;
 	int err = 0;
@@ -871,9 +849,9 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
 	err |= __put_user(sw->a5, &gregs[13]);
 	err |= __put_user(sw->a6, &gregs[14]);
 	err |= __put_user(rdusp(), &gregs[15]);
-	err |= __put_user(regs->pc, &gregs[16]);
-	err |= __put_user(regs->sr, &gregs[17]);
-	err |= __put_user((regs->format << 12) | regs->vector, &uc->uc_formatvec);
+	err |= __put_user(tregs->pc, &gregs[16]);
+	err |= __put_user(tregs->sr, &gregs[17]);
+	err |= __put_user((tregs->format << 12) | tregs->vector, &uc->uc_formatvec);
 	err |= rt_save_fpu_state(uc, regs);
 	return err;
 }
@@ -890,13 +868,14 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
 			struct pt_regs *regs)
 {
 	struct sigframe __user *frame;
-	int fsize = frame_extra_sizes(regs->format);
+	struct pt_regs *tregs = rte_regs(regs);
+	int fsize = frame_extra_sizes(tregs->format);
 	struct sigcontext context;
 	int err = 0, sig = ksig->sig;

 	if (fsize < 0) {
 		pr_debug("setup_frame: Unknown frame format %#x\n",
-			 regs->format);
+			 tregs->format);
 		return -EFAULT;
 	}
@@ -907,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,

 	err |= __put_user(sig, &frame->sig);

-	err |= __put_user(regs->vector, &frame->code);
+	err |= __put_user(tregs->vector, &frame->code);
 	err |= __put_user(&frame->sc, &frame->psc);

 	if (_NSIG_WORDS > 1)
@@ -933,34 +912,28 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,

 	push_cache ((unsigned long) &frame->retcode);

-	/*
-	 * Set up registers for signal handler.  All the state we are about
-	 * to destroy is successfully copied to sigframe.
-	 */
-	wrusp ((unsigned long) frame);
-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
-	adjustformat(regs);
-
 	/*
 	 * This is subtle; if we build more than one sigframe, all but the
 	 * first one will see frame format 0 and have fsize == 0, so we won't
 	 * screw stkadj.
 	 */
-	if (fsize)
+	if (fsize) {
 		regs->stkadj = fsize;
-
-	/* Prepare to skip over the extra stuff in the exception frame.  */
-	if (regs->stkadj) {
-		struct pt_regs *tregs =
-			(struct pt_regs *)((ulong)regs + regs->stkadj);
+		tregs = rte_regs(regs);
 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
-		/* This must be copied with decreasing addresses to
-		   handle overlaps.  */
 		tregs->vector = 0;
 		tregs->format = 0;
-		tregs->pc = regs->pc;
 		tregs->sr = regs->sr;
 	}
+
+	/*
+	 * Set up registers for signal handler.  All the state we are about
+	 * to destroy is successfully copied to sigframe.
+	 */
+	wrusp ((unsigned long) frame);
+	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+	adjustformat(regs);
+
 	return 0;
 }

@@ -968,7 +941,8 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
 			   struct pt_regs *regs)
 {
 	struct rt_sigframe __user *frame;
-	int fsize = frame_extra_sizes(regs->format);
+	struct pt_regs *tregs = rte_regs(regs);
+	int fsize = frame_extra_sizes(tregs->format);
 	int err = 0, sig = ksig->sig;

 	if (fsize < 0) {
@@ -1018,34 +992,27 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,

 	push_cache ((unsigned long) &frame->retcode);

-	/*
-	 * Set up registers for signal handler.  All the state we are about
-	 * to destroy is successfully copied to sigframe.
-	 */
-	wrusp ((unsigned long) frame);
-	regs->pc = (unsigned long) ksig->ka.sa.sa_handler;
-	adjustformat(regs);
-
 	/*
 	 * This is subtle; if we build more than one sigframe, all but the
 	 * first one will see frame format 0 and have fsize == 0, so we won't
 	 * screw stkadj.
 	 */
-	if (fsize)
+	if (fsize) {
 		regs->stkadj = fsize;
-
-	/* Prepare to skip over the extra stuff in the exception frame.  */
-	if (regs->stkadj) {
-		struct pt_regs *tregs =
-			(struct pt_regs *)((ulong)regs + regs->stkadj);
+		tregs = rte_regs(regs);
 		pr_debug("Performing stackadjust=%04lx\n", regs->stkadj);
-		/* This must be copied with decreasing addresses to
-		   handle overlaps.  */
 		tregs->vector = 0;
 		tregs->format = 0;
-		tregs->pc = regs->pc;
 		tregs->sr = regs->sr;
 	}
+
+	/*
+	 * Set up registers for signal handler.  All the state we are about
+	 * to destroy is successfully copied to sigframe.
+	 */
+	wrusp ((unsigned long) frame);
+	tregs->pc = (unsigned long) ksig->ka.sa.sa_handler;
+	adjustformat(regs);
+
 	return 0;
 }


@@ -181,9 +181,8 @@ static inline void access_error060 (struct frame *fp)
 static inline unsigned long probe040(int iswrite, unsigned long addr, int wbs)
 {
 	unsigned long mmusr;
-	mm_segment_t old_fs = get_fs();

-	set_fs(MAKE_MM_SEG(wbs));
+	set_fc(wbs);

 	if (iswrite)
 		asm volatile (".chip 68040; ptestw (%0); .chip 68k" : : "a" (addr));
@@ -192,7 +191,7 @@ static inline unsigned long probe040(int iswrite, unsigned long addr, int wbs)

 	asm volatile (".chip 68040; movec %%mmusr,%0; .chip 68k" : "=r" (mmusr));

-	set_fs(old_fs);
+	set_fc(USER_DATA);

 	return mmusr;
 }
@@ -201,10 +200,8 @@ static inline int do_040writeback1(unsigned short wbs, unsigned long wba,
 				   unsigned long wbd)
 {
 	int res = 0;
-	mm_segment_t old_fs = get_fs();

-	/* set_fs can not be moved, otherwise put_user() may oops */
-	set_fs(MAKE_MM_SEG(wbs));
+	set_fc(wbs);

 	switch (wbs & WBSIZ_040) {
 	case BA_SIZE_BYTE:
@@ -218,9 +215,7 @@ static inline int do_040writeback1(unsigned short wbs, unsigned long wba,
 		break;
 	}

-	/* set_fs can not be moved, otherwise put_user() may oops */
-	set_fs(old_fs);
+	set_fc(USER_DATA);

 	pr_debug("do_040writeback1, res=%d\n", res);


@@ -18,7 +18,6 @@
 #include <linux/uaccess.h>

 #include <asm/io.h>
-#include <asm/segment.h>
 #include <asm/setup.h>
 #include <asm/macintosh.h>
 #include <asm/mac_via.h>


@@ -49,24 +49,7 @@ static unsigned long virt_to_phys_slow(unsigned long vaddr)
 		if (mmusr & MMU_R_040)
 			return (mmusr & PAGE_MASK) | (vaddr & ~PAGE_MASK);
 	} else {
-		unsigned short mmusr;
-		unsigned long *descaddr;
-
-		asm volatile ("ptestr %3,%2@,#7,%0\n\t"
-			      "pmove %%psr,%1"
-			      : "=a&" (descaddr), "=m" (mmusr)
-			      : "a" (vaddr), "d" (get_fs().seg));
-		if (mmusr & (MMU_I|MMU_B|MMU_L))
-			return 0;
-		descaddr = phys_to_virt((unsigned long)descaddr);
-		switch (mmusr & MMU_NUM) {
-		case 1:
-			return (*descaddr & 0xfe000000) | (vaddr & 0x01ffffff);
-		case 2:
-			return (*descaddr & 0xfffc0000) | (vaddr & 0x0003ffff);
-		case 3:
-			return (*descaddr & PAGE_MASK) | (vaddr & ~PAGE_MASK);
-		}
+		WARN_ON_ONCE(!CPU_IS_040_OR_060);
 	}
 	return 0;
 }
@@ -107,11 +90,9 @@ void flush_icache_user_range(unsigned long address, unsigned long endaddr)

 void flush_icache_range(unsigned long address, unsigned long endaddr)
 {
-	mm_segment_t old_fs = get_fs();
-
-	set_fs(KERNEL_DS);
+	set_fc(SUPER_DATA);
 	flush_icache_user_range(address, endaddr);
-	set_fs(old_fs);
+	set_fc(USER_DATA);
 }
 EXPORT_SYMBOL(flush_icache_range);


@@ -72,12 +72,6 @@ void __init paging_init(void)
 	if (!empty_zero_page)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 		      __func__, PAGE_SIZE, PAGE_SIZE);
-
-	/*
-	 * Set up SFC/DFC registers (user data space).
-	 */
-	set_fs (USER_DS);
-
 	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
 	free_area_init(max_zone_pfn);
 }


@@ -17,7 +17,6 @@
 #include <linux/vmalloc.h>

 #include <asm/setup.h>
-#include <asm/segment.h>
 #include <asm/page.h>
 #include <asm/io.h>
 #include <asm/tlbflush.h>


@@ -15,7 +15,6 @@
 #include <linux/gfp.h>

 #include <asm/setup.h>
-#include <asm/segment.h>
 #include <asm/page.h>
 #include <asm/traps.h>
 #include <asm/machdep.h>


@@ -467,7 +467,7 @@ void __init paging_init(void)
 	/*
 	 * Set up SFC/DFC registers
 	 */
-	set_fs(KERNEL_DS);
+	set_fc(USER_DATA);

 #ifdef DEBUG
 	printk ("before free_area_init\n");


@@ -31,7 +31,6 @@
 #include <asm/intersil.h>
 #include <asm/irq.h>
 #include <asm/sections.h>
-#include <asm/segment.h>
 #include <asm/sun3ints.h>

 char sun3_reserved_pmeg[SUN3_PMEGS_NUM];
@@ -89,7 +88,7 @@ void __init sun3_init(void)
 	sun3_reserved_pmeg[249] = 1;
 	sun3_reserved_pmeg[252] = 1;
 	sun3_reserved_pmeg[253] = 1;
-	set_fs(KERNEL_DS);
+	set_fc(USER_DATA);
 }

 /* Without this, Bad Things happen when something calls arch_reset. */


@@ -23,7 +23,6 @@
 #include <linux/uaccess.h>
 #include <asm/page.h>
 #include <asm/sun3mmu.h>
-#include <asm/segment.h>
 #include <asm/oplib.h>
 #include <asm/mmu_context.h>
 #include <asm/dvma.h>
@@ -191,14 +190,13 @@ void __init mmu_emu_init(unsigned long bootmem_end)
 	for(seg = 0; seg < PAGE_OFFSET; seg += SUN3_PMEG_SIZE)
 		sun3_put_segmap(seg, SUN3_INVALID_PMEG);

-	set_fs(MAKE_MM_SEG(3));
+	set_fc(3);
 	for(seg = 0; seg < 0x10000000; seg += SUN3_PMEG_SIZE) {
 		i = sun3_get_segmap(seg);
 		for(j = 1; j < CONTEXTS_NUM; j++)
 			(*(romvec->pv_setctxt))(j, (void *)seg, i);
 	}
-	set_fs(KERNEL_DS);
+	set_fc(USER_DATA);
 }


@@ -11,7 +11,6 @@
 #include <linux/sched.h>
 #include <linux/kernel_stat.h>
 #include <linux/interrupt.h>
-#include <asm/segment.h>
 #include <asm/intersil.h>
 #include <asm/oplib.h>
 #include <asm/sun3ints.h>


@@ -14,7 +14,6 @@
 #include <asm/traps.h>
 #include <asm/sun3xprom.h>
 #include <asm/idprom.h>
-#include <asm/segment.h>
 #include <asm/sun3ints.h>
 #include <asm/openprom.h>
 #include <asm/machines.h>


@@ -662,6 +662,11 @@ static void build_epilogue(struct jit_ctx *ctx)
 	((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative : func) : \
 	 func##_positive)

+static bool is_bad_offset(int b_off)
+{
+	return b_off > 0x1ffff || b_off < -0x20000;
+}
+
 static int build_body(struct jit_ctx *ctx)
 {
 	const struct bpf_prog *prog = ctx->skf;
@ -728,7 +733,10 @@ load_common:
/* Load return register on DS for failures */ /* Load return register on DS for failures */
emit_reg_move(r_ret, r_zero, ctx); emit_reg_move(r_ret, r_zero, ctx);
/* Return with error */ /* Return with error */
emit_b(b_imm(prog->len, ctx), ctx); b_off = b_imm(prog->len, ctx);
if (is_bad_offset(b_off))
return -E2BIG;
emit_b(b_off, ctx);
emit_nop(ctx); emit_nop(ctx);
break; break;
case BPF_LD | BPF_W | BPF_IND: case BPF_LD | BPF_W | BPF_IND:
@ -775,8 +783,10 @@ load_ind:
emit_jalr(MIPS_R_RA, r_s0, ctx); emit_jalr(MIPS_R_RA, r_s0, ctx);
emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */ emit_reg_move(MIPS_R_A0, r_skb, ctx); /* delay slot */
/* Check the error value */ /* Check the error value */
emit_bcond(MIPS_COND_NE, r_ret, 0, b_off = b_imm(prog->len, ctx);
b_imm(prog->len, ctx), ctx); if (is_bad_offset(b_off))
return -E2BIG;
emit_bcond(MIPS_COND_NE, r_ret, 0, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx); emit_reg_move(r_ret, r_zero, ctx);
/* We are good */ /* We are good */
/* X <- P[1:K] & 0xf */ /* X <- P[1:K] & 0xf */
@ -855,8 +865,10 @@ load_ind:
/* A /= X */ /* A /= X */
ctx->flags |= SEEN_X | SEEN_A; ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */ /* Check if r_X is zero */
emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off = b_imm(prog->len, ctx);
b_imm(prog->len, ctx), ctx); if (is_bad_offset(b_off))
return -E2BIG;
emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */ emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_div(r_A, r_X, ctx); emit_div(r_A, r_X, ctx);
break; break;
@ -864,8 +876,10 @@ load_ind:
/* A %= X */ /* A %= X */
ctx->flags |= SEEN_X | SEEN_A; ctx->flags |= SEEN_X | SEEN_A;
/* Check if r_X is zero */ /* Check if r_X is zero */
emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off = b_imm(prog->len, ctx);
b_imm(prog->len, ctx), ctx); if (is_bad_offset(b_off))
return -E2BIG;
emit_bcond(MIPS_COND_EQ, r_X, r_zero, b_off, ctx);
emit_load_imm(r_ret, 0, ctx); /* delay slot */ emit_load_imm(r_ret, 0, ctx); /* delay slot */
emit_mod(r_A, r_X, ctx); emit_mod(r_A, r_X, ctx);
break; break;
@ -926,7 +940,10 @@ load_ind:
break; break;
case BPF_JMP | BPF_JA: case BPF_JMP | BPF_JA:
/* pc += K */ /* pc += K */
emit_b(b_imm(i + k + 1, ctx), ctx); b_off = b_imm(i + k + 1, ctx);
if (is_bad_offset(b_off))
return -E2BIG;
emit_b(b_off, ctx);
emit_nop(ctx); emit_nop(ctx);
break; break;
case BPF_JMP | BPF_JEQ | BPF_K: case BPF_JMP | BPF_JEQ | BPF_K:
@ -1056,12 +1073,16 @@ jmp_cmp:
break; break;
case BPF_RET | BPF_A: case BPF_RET | BPF_A:
ctx->flags |= SEEN_A; ctx->flags |= SEEN_A;
if (i != prog->len - 1) if (i != prog->len - 1) {
/* /*
* If this is not the last instruction * If this is not the last instruction
* then jump to the epilogue * then jump to the epilogue
*/ */
emit_b(b_imm(prog->len, ctx), ctx); b_off = b_imm(prog->len, ctx);
if (is_bad_offset(b_off))
return -E2BIG;
emit_b(b_off, ctx);
}
emit_reg_move(r_ret, r_A, ctx); /* delay slot */ emit_reg_move(r_ret, r_A, ctx); /* delay slot */
break; break;
case BPF_RET | BPF_K: case BPF_RET | BPF_K:
@ -1075,7 +1096,10 @@ jmp_cmp:
* If this is not the last instruction * If this is not the last instruction
* then jump to the epilogue * then jump to the epilogue
*/ */
emit_b(b_imm(prog->len, ctx), ctx); b_off = b_imm(prog->len, ctx);
if (is_bad_offset(b_off))
return -E2BIG;
emit_b(b_off, ctx);
emit_nop(ctx); emit_nop(ctx);
} }
break; break;
@ -1133,8 +1157,10 @@ jmp_cmp:
/* Load *dev pointer */ /* Load *dev pointer */
emit_load_ptr(r_s0, r_skb, off, ctx); emit_load_ptr(r_s0, r_skb, off, ctx);
/* error (0) in the delay slot */ /* error (0) in the delay slot */
emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off = b_imm(prog->len, ctx);
b_imm(prog->len, ctx), ctx); if (is_bad_offset(b_off))
return -E2BIG;
emit_bcond(MIPS_COND_EQ, r_s0, r_zero, b_off, ctx);
emit_reg_move(r_ret, r_zero, ctx); emit_reg_move(r_ret, r_zero, ctx);
if (code == (BPF_ANC | SKF_AD_IFINDEX)) { if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
BUILD_BUG_ON(sizeof_field(struct net_device, ifindex) != 4); BUILD_BUG_ON(sizeof_field(struct net_device, ifindex) != 4);
@ -1244,7 +1270,10 @@ void bpf_jit_compile(struct bpf_prog *fp)
/* Generate the actual JIT code */ /* Generate the actual JIT code */
build_prologue(&ctx); build_prologue(&ctx);
build_body(&ctx); if (build_body(&ctx)) {
module_memfree(ctx.target);
goto out;
}
build_epilogue(&ctx); build_epilogue(&ctx);
/* Update the icache */ /* Update the icache */
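As an aside (not part of the change above): a MIPS I-type branch encodes a signed 16-bit word offset, so the reachable window in byte offsets is [-0x20000, 0x1ffff]. Emitting a branch to a target outside that window would silently corrupt control flow, which is why every emit_b()/emit_bcond() site now validates the offset and fails the JIT with -E2BIG, letting the kernel fall back to the interpreter. A minimal standalone sketch of the check (hypothetical helper name):

	#include <stdbool.h>

	/* byte offset must still fit in 16 bits after the >> 2 word shift */
	static bool fits_mips_branch(int byte_off)
	{
		return byte_off >= -0x20000 && byte_off <= 0x1ffff;
	}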


@@ -3,9 +3,10 @@
 config EARLY_PRINTK
 	bool "Activate early kernel debugging"
 	default y
+	depends on TTY
 	select SERIAL_CORE_CONSOLE
 	help
-	  Enable early printk on console
+	  Enable early printk on console.
 	  This is useful for kernel debugging when your machine crashes very
 	  early before the console code is initialized.
 	  You should normally say N here, unless you want to debug such a crash.


@@ -149,8 +149,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
 void __init setup_arch(char **cmdline_p)
 {
-	int dram_start;
-
 	console_verbose();

 	memory_start = memblock_start_of_DRAM();


@@ -419,13 +419,13 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
 static void __set_cpu_idle(struct kvm_vcpu *vcpu)
 {
 	kvm_s390_set_cpuflags(vcpu, CPUSTAT_WAIT);
-	set_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+	set_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
 }

 static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
 {
 	kvm_s390_clear_cpuflags(vcpu, CPUSTAT_WAIT);
-	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
 }

 static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)


@@ -4066,7 +4066,7 @@ static int vcpu_pre_run(struct kvm_vcpu *vcpu)
 		kvm_s390_patch_guest_per_regs(vcpu);
 	}

-	clear_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.gisa_int.kicked_mask);
+	clear_bit(vcpu->vcpu_idx, vcpu->kvm->arch.gisa_int.kicked_mask);

 	vcpu->arch.sie_block->icptcode = 0;
 	cpuflags = atomic_read(&vcpu->arch.sie_block->cpuflags);


@@ -79,7 +79,7 @@ static inline int is_vcpu_stopped(struct kvm_vcpu *vcpu)
 static inline int is_vcpu_idle(struct kvm_vcpu *vcpu)
 {
-	return test_bit(kvm_vcpu_get_idx(vcpu), vcpu->kvm->arch.idle_mask);
+	return test_bit(vcpu->vcpu_idx, vcpu->kvm->arch.idle_mask);
 }

 static inline int kvm_is_ucontrol(struct kvm *kvm)


@@ -367,10 +367,11 @@ SYM_FUNC_START(sm4_aesni_avx_crypt8)
 	 * %rdx: src (1..8 blocks)
 	 * %rcx: num blocks (1..8)
 	 */
-	FRAME_BEGIN
 	cmpq $5, %rcx;
 	jb sm4_aesni_avx_crypt4;
+
+	FRAME_BEGIN
 	vmovdqu (0 * 16)(%rdx), RA0;
 	vmovdqu (1 * 16)(%rdx), RA1;
 	vmovdqu (2 * 16)(%rdx), RA2;


@@ -2465,6 +2465,7 @@ static int x86_pmu_event_init(struct perf_event *event)
 	if (err) {
 		if (event->destroy)
 			event->destroy(event);
+		event->destroy = NULL;
 	}

 	if (READ_ONCE(x86_pmu.attr_rdpmc) &&


@@ -263,6 +263,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
 	INTEL_EVENT_CONSTRAINT_RANGE(0xa8, 0xb0, 0xf),
 	INTEL_EVENT_CONSTRAINT_RANGE(0xb7, 0xbd, 0xf),
 	INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xe6, 0xf),
+	INTEL_EVENT_CONSTRAINT(0xef, 0xf),
 	INTEL_EVENT_CONSTRAINT_RANGE(0xf0, 0xf4, 0xf),
 	EVENT_CONSTRAINT_END
 };


@@ -46,7 +46,7 @@ struct kvm_page_track_notifier_node {
 			    struct kvm_page_track_notifier_node *node);
 };

-void kvm_page_track_init(struct kvm *kvm);
+int kvm_page_track_init(struct kvm *kvm);
 void kvm_page_track_cleanup(struct kvm *kvm);

 void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);


@@ -2,6 +2,20 @@
 #ifndef _ASM_X86_KVM_CLOCK_H
 #define _ASM_X86_KVM_CLOCK_H

+#include <linux/percpu.h>
+
 extern struct clocksource kvm_clock;

+DECLARE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+
+static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
+{
+	return &this_cpu_read(hv_clock_per_cpu)->pvti;
+}
+
+static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
+{
+	return this_cpu_read(hv_clock_per_cpu);
+}
+
 #endif /* _ASM_X86_KVM_CLOCK_H */


@@ -49,18 +49,9 @@ early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
 static struct pvclock_vsyscall_time_info
 			hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
 static struct pvclock_wall_clock wall_clock __bss_decrypted;
-static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
 static struct pvclock_vsyscall_time_info *hvclock_mem;
-
-static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
-{
-	return &this_cpu_read(hv_clock_per_cpu)->pvti;
-}
-
-static inline struct pvclock_vsyscall_time_info *this_cpu_hvclock(void)
-{
-	return this_cpu_read(hv_clock_per_cpu);
-}
+DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+EXPORT_PER_CPU_SYMBOL_GPL(hv_clock_per_cpu);

 /*
  * The wallclock is the time of day when we booted. Since then, some time may
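The two hunks above follow the usual split for sharing a per-CPU variable: the .c file keeps the single DEFINE_PER_CPU plus the export, while the header carries DECLARE_PER_CPU and the inline accessors, so other translation units can read the variable without owning its storage. A reduced sketch of the pattern, with hypothetical names:

	/* foo.h: declaration and accessor */
	DECLARE_PER_CPU(int, foo_count);

	static inline int this_cpu_foo(void)
	{
		return this_cpu_read(foo_count);
	}

	/* foo.c: single definition, exported for modules */
	DEFINE_PER_CPU(int, foo_count);
	EXPORT_PER_CPU_SYMBOL_GPL(foo_count);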


@@ -65,8 +65,8 @@ static inline struct kvm_cpuid_entry2 *cpuid_entry2_find(
 	for (i = 0; i < nent; i++) {
 		e = &entries[i];

-		if (e->function == function && (e->index == index ||
-		    !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
+		if (e->function == function &&
+		    (!(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX) || e->index == index))
 			return e;
 	}


@@ -435,7 +435,6 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
 	__FOP_RET(#op)

 asm(".pushsection .fixup, \"ax\"\n"
-    ".global kvm_fastop_exception \n"
    "kvm_fastop_exception: xor %esi, %esi; ret\n"
    ".popsection");

@@ -4206,7 +4205,7 @@ static int check_rdtsc(struct x86_emulate_ctxt *ctxt)
 	u64 cr4 = ctxt->ops->get_cr(ctxt, 4);

 	if (cr4 & X86_CR4_TSD && ctxt->ops->cpl(ctxt))
-		return emulate_ud(ctxt);
+		return emulate_gp(ctxt, 0);

 	return X86EMUL_CONTINUE;
 }


@@ -939,7 +939,7 @@ static int kvm_hv_vcpu_init(struct kvm_vcpu *vcpu)
 	for (i = 0; i < ARRAY_SIZE(hv_vcpu->stimer); i++)
 		stimer_init(&hv_vcpu->stimer[i], i);

-	hv_vcpu->vp_index = kvm_vcpu_get_idx(vcpu);
+	hv_vcpu->vp_index = vcpu->vcpu_idx;

 	return 0;
 }

@@ -1444,7 +1444,6 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
 	switch (msr) {
 	case HV_X64_MSR_VP_INDEX: {
 		struct kvm_hv *hv = to_kvm_hv(vcpu->kvm);
-		int vcpu_idx = kvm_vcpu_get_idx(vcpu);
 		u32 new_vp_index = (u32)data;

 		if (!host || new_vp_index >= KVM_MAX_VCPUS)
@@ -1459,9 +1458,9 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
 		 * VP index is changing, adjust num_mismatched_vp_indexes if
 		 * it now matches or no longer matches vcpu_idx.
 		 */
-		if (hv_vcpu->vp_index == vcpu_idx)
+		if (hv_vcpu->vp_index == vcpu->vcpu_idx)
 			atomic_inc(&hv->num_mismatched_vp_indexes);
-		else if (new_vp_index == vcpu_idx)
+		else if (new_vp_index == vcpu->vcpu_idx)
 			atomic_dec(&hv->num_mismatched_vp_indexes);

 		hv_vcpu->vp_index = new_vp_index;


@@ -83,7 +83,7 @@ static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

-	return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
+	return hv_vcpu ? hv_vcpu->vp_index : vcpu->vcpu_idx;
 }

 int kvm_hv_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host);


@@ -319,8 +319,8 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 	unsigned index;
 	bool mask_before, mask_after;
 	union kvm_ioapic_redirect_entry *e;
-	unsigned long vcpu_bitmap;
 	int old_remote_irr, old_delivery_status, old_dest_id, old_dest_mode;
+	DECLARE_BITMAP(vcpu_bitmap, KVM_MAX_VCPUS);

 	switch (ioapic->ioregsel) {
 	case IOAPIC_REG_VERSION:
@@ -384,9 +384,9 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 			irq.shorthand = APIC_DEST_NOSHORT;
 			irq.dest_id = e->fields.dest_id;
 			irq.msi_redir_hint = false;
-			bitmap_zero(&vcpu_bitmap, 16);
+			bitmap_zero(vcpu_bitmap, KVM_MAX_VCPUS);
 			kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
-						 &vcpu_bitmap);
+						 vcpu_bitmap);
 			if (old_dest_mode != e->fields.dest_mode ||
 			    old_dest_id != e->fields.dest_id) {
 				/*
@@ -399,10 +399,10 @@ static void ioapic_write_indirect(struct kvm_ioapic *ioapic, u32 val)
 					kvm_lapic_irq_dest_mode(
 						!!e->fields.dest_mode);
 				kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq,
-							 &vcpu_bitmap);
+							 vcpu_bitmap);
 			}
 			kvm_make_scan_ioapic_request_mask(ioapic->kvm,
-							  &vcpu_bitmap);
+							  vcpu_bitmap);
 		} else {
 			kvm_make_scan_ioapic_request(ioapic->kvm);
 		}
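For context on the fix above: a bare unsigned long holds only BITS_PER_LONG bits, so passing its address as a bitmap sized for KVM_MAX_VCPUS overruns the stack as soon as a bit index exceeds 63. DECLARE_BITMAP(name, bits) instead expands to an unsigned long array with enough elements. A small sketch (hypothetical size):

	#define MY_MAX_CPUS 288

	DECLARE_BITMAP(mask, MY_MAX_CPUS);	/* unsigned long mask[BITS_TO_LONGS(288)] */

	bitmap_zero(mask, MY_MAX_CPUS);
	set_bit(200, mask);	/* fine here; would corrupt the stack with a single long */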


@@ -2027,8 +2027,8 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 	} while (!sp->unsync_children);
 }

-static void mmu_sync_children(struct kvm_vcpu *vcpu,
-			      struct kvm_mmu_page *parent)
+static int mmu_sync_children(struct kvm_vcpu *vcpu,
+			     struct kvm_mmu_page *parent, bool can_yield)
 {
 	int i;
 	struct kvm_mmu_page *sp;
@@ -2055,12 +2055,18 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 		}

 		if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
 			kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+			if (!can_yield) {
+				kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
+				return -EINTR;
+			}
+
 			cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
 			flush = false;
 		}
 	}

 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+	return 0;
 }

 static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
@@ -2146,9 +2152,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 		}

-		if (sp->unsync_children)
-			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-
 		__clear_sp_write_flooding_count(sp);

trace_get_page:
@@ -3684,7 +3687,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		write_lock(&vcpu->kvm->mmu_lock);
 		kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);

-		mmu_sync_children(vcpu, sp);
+		mmu_sync_children(vcpu, sp, true);

 		kvm_mmu_audit(vcpu, AUDIT_POST_SYNC);
 		write_unlock(&vcpu->kvm->mmu_lock);
@@ -3700,7 +3703,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		if (IS_VALID_PAE_ROOT(root)) {
 			root &= PT64_BASE_ADDR_MASK;
 			sp = to_shadow_page(root);
-			mmu_sync_children(vcpu, sp);
+			mmu_sync_children(vcpu, sp, true);
 		}
 	}
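Reduced to its shape, the mmu_sync_children() change above is a cooperative-yield loop that either drops a contended rwlock, or, when the caller cannot tolerate releasing it, defers the rest of the work and bails out. A sketch under that assumption (more_work() and do_one_step() are hypothetical):

	static int sync_loop(struct kvm_vcpu *vcpu, bool can_yield)
	{
		while (more_work(vcpu)) {
			do_one_step(vcpu);
			if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
				if (!can_yield) {
					/* let the request machinery finish the job later */
					kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
					return -EINTR;
				}
				cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
			}
		}
		return 0;
	}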


@@ -164,13 +164,13 @@ void kvm_page_track_cleanup(struct kvm *kvm)
 	cleanup_srcu_struct(&head->track_srcu);
 }

-void kvm_page_track_init(struct kvm *kvm)
+int kvm_page_track_init(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_head *head;

 	head = &kvm->arch.track_notifier_head;
-	init_srcu_struct(&head->track_srcu);
 	INIT_HLIST_HEAD(&head->track_notifier_list);
+	return init_srcu_struct(&head->track_srcu);
 }

 /*


@@ -707,8 +707,27 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 		if (!is_shadow_present_pte(*it.sptep)) {
 			table_gfn = gw->table_gfn[it.level - 2];
 			access = gw->pt_access[it.level - 2];
-			sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
-					      false, access);
+			sp = kvm_mmu_get_page(vcpu, table_gfn, addr,
+					      it.level-1, false, access);
+
+			/*
+			 * We must synchronize the pagetable before linking it
+			 * because the guest doesn't need to flush tlb when
+			 * the gpte is changed from non-present to present.
+			 * Otherwise, the guest may use the wrong mapping.
+			 *
+			 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
+			 * synchronized it transiently via kvm_sync_page().
+			 *
+			 * For higher level pagetable, we synchronize it via
+			 * the slower mmu_sync_children().  If it needs to
+			 * break, some progress has been made; return
+			 * RET_PF_RETRY and retry on the next #PF.
+			 * KVM_REQ_MMU_SYNC is not necessary but it
+			 * expedites the process.
+			 */
+			if (sp->unsync_children &&
+			    mmu_sync_children(vcpu, sp, false))
+				return RET_PF_RETRY;
 		}

 		/*
@@ -1047,14 +1066,6 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  * Using the cached information from sp->gfns is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
- *
- * Note:
- *   We should flush all tlbs if spte is dropped even though guest is
- *   responsible for it. Since if we don't, kvm_mmu_notifier_invalidate_page
- *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
- *   used by guest then tlbs are not flushed, so guest is allowed to access the
- *   freed pages.
- *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -1107,13 +1118,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return 0;

 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			/*
-			 * Update spte before increasing tlbs_dirty to make
-			 * sure no tlb flush is lost after spte is zapped; see
-			 * the comments in kvm_flush_remote_tlbs().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}

@@ -1128,12 +1133,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
-			/*
-			 * The same as above where we are doing
-			 * prefetch_invalid_gpte().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}


@@ -545,7 +545,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
 		(svm->nested.ctl.int_ctl & int_ctl_vmcb12_bits) |
 		(svm->vmcb01.ptr->control.int_ctl & int_ctl_vmcb01_bits);

-	svm->vmcb->control.virt_ext = svm->nested.ctl.virt_ext;
 	svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
 	svm->vmcb->control.int_state = svm->nested.ctl.int_state;
 	svm->vmcb->control.event_inj = svm->nested.ctl.event_inj;
@@ -579,7 +578,7 @@ static void nested_svm_copy_common_state(struct vmcb *from_vmcb, struct vmcb *to
 }

 int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
-			 struct vmcb *vmcb12)
+			 struct vmcb *vmcb12, bool from_vmrun)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	int ret;
@@ -609,13 +608,16 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
 	nested_vmcb02_prepare_save(svm, vmcb12);

 	ret = nested_svm_load_cr3(&svm->vcpu, vmcb12->save.cr3,
-				  nested_npt_enabled(svm), true);
+				  nested_npt_enabled(svm), from_vmrun);
 	if (ret)
 		return ret;

 	if (!npt_enabled)
 		vcpu->arch.mmu->inject_page_fault = svm_inject_page_fault_nested;

+	if (!from_vmrun)
+		kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+
 	svm_set_gif(svm, true);

 	return 0;
@@ -681,7 +683,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 	svm->nested.nested_run_pending = 1;

-	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12))
+	if (enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, true))
 		goto out_exit_err;

 	if (nested_svm_vmrun_msrpm(svm))


@@ -595,43 +595,50 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
 	return 0;
 }

+static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
+				    int *error)
+{
+	struct sev_data_launch_update_vmsa vmsa;
+	struct vcpu_svm *svm = to_svm(vcpu);
+	int ret;
+
+	/* Perform some pre-encryption checks against the VMSA */
+	ret = sev_es_sync_vmsa(svm);
+	if (ret)
+		return ret;
+
+	/*
+	 * The LAUNCH_UPDATE_VMSA command will perform in-place encryption of
+	 * the VMSA memory content (i.e it will write the same memory region
+	 * with the guest's key), so invalidate it first.
+	 */
+	clflush_cache_range(svm->vmsa, PAGE_SIZE);
+
+	vmsa.reserved = 0;
+	vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+	vmsa.address = __sme_pa(svm->vmsa);
+	vmsa.len = PAGE_SIZE;
+	return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
+}
+
 static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
-	struct sev_data_launch_update_vmsa vmsa;
 	struct kvm_vcpu *vcpu;
 	int i, ret;

 	if (!sev_es_guest(kvm))
 		return -ENOTTY;

-	vmsa.reserved = 0;
-
 	kvm_for_each_vcpu(i, vcpu, kvm) {
-		struct vcpu_svm *svm = to_svm(vcpu);
-
-		/* Perform some pre-encryption checks against the VMSA */
-		ret = sev_es_sync_vmsa(svm);
+		ret = mutex_lock_killable(&vcpu->mutex);
 		if (ret)
 			return ret;

-		/*
-		 * The LAUNCH_UPDATE_VMSA command will perform in-place
-		 * encryption of the VMSA memory content (i.e it will write
-		 * the same memory region with the guest's key), so invalidate
-		 * it first.
-		 */
-		clflush_cache_range(svm->vmsa, PAGE_SIZE);
+		ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

-		vmsa.handle = sev->handle;
-		vmsa.address = __sme_pa(svm->vmsa);
-		vmsa.len = PAGE_SIZE;
-		ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa,
-				    &argp->error);
+		mutex_unlock(&vcpu->mutex);
 		if (ret)
 			return ret;
-
-		svm->vcpu.arch.guest_state_protected = true;
 	}

 	return 0;
@@ -1397,8 +1404,10 @@ static int sev_receive_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	/* Bind ASID to this guest */
 	ret = sev_bind_asid(kvm, start.handle, error);
-	if (ret)
+	if (ret) {
+		sev_decommission(start.handle);
 		goto e_free_session;
+	}

 	params.handle = start.handle;
 	if (copy_to_user((void __user *)(uintptr_t)argp->data,
@@ -1464,7 +1473,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	/* Pin guest memory */
 	guest_page = sev_pin_memory(kvm, params.guest_uaddr & PAGE_MASK,
-				    PAGE_SIZE, &n, 0);
+				    PAGE_SIZE, &n, 1);
 	if (IS_ERR(guest_page)) {
 		ret = PTR_ERR(guest_page);
 		goto e_free_trans;
@@ -1501,6 +1510,20 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
 }

+static bool cmd_allowed_from_miror(u32 cmd_id)
+{
+	/*
+	 * Allow mirrors VM to call KVM_SEV_LAUNCH_UPDATE_VMSA to enable SEV-ES
+	 * active mirror VMs. Also allow the debugging and status commands.
+	 */
+	if (cmd_id == KVM_SEV_LAUNCH_UPDATE_VMSA ||
+	    cmd_id == KVM_SEV_GUEST_STATUS || cmd_id == KVM_SEV_DBG_DECRYPT ||
+	    cmd_id == KVM_SEV_DBG_ENCRYPT)
+		return true;
+
+	return false;
+}
+
 int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_sev_cmd sev_cmd;
@@ -1517,8 +1540,9 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
 	mutex_lock(&kvm->lock);

-	/* enc_context_owner handles all memory enc operations */
-	if (is_mirroring_enc_context(kvm)) {
+	/* Only the enc_context_owner handles some memory enc operations. */
+	if (is_mirroring_enc_context(kvm) &&
+	    !cmd_allowed_from_miror(sev_cmd.id)) {
 		r = -EINVAL;
 		goto out;
 	}
@@ -1715,8 +1739,7 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
 {
 	struct file *source_kvm_file;
 	struct kvm *source_kvm;
-	struct kvm_sev_info *mirror_sev;
-	unsigned int asid;
+	struct kvm_sev_info source_sev, *mirror_sev;
 	int ret;

 	source_kvm_file = fget(source_fd);
@@ -1739,7 +1762,8 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
 		goto e_source_unlock;
 	}

-	asid = to_kvm_svm(source_kvm)->sev_info.asid;
+	memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info,
+	       sizeof(source_sev));

 	/*
 	 * The mirror kvm holds an enc_context_owner ref so its asid can't
@@ -1759,8 +1783,16 @@ int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd)
 	/* Set enc_context_owner and copy its encryption context over */
 	mirror_sev = &to_kvm_svm(kvm)->sev_info;
 	mirror_sev->enc_context_owner = source_kvm;
-	mirror_sev->asid = asid;
 	mirror_sev->active = true;
+	mirror_sev->asid = source_sev.asid;
+	mirror_sev->fd = source_sev.fd;
+	mirror_sev->es_active = source_sev.es_active;
+	mirror_sev->handle = source_sev.handle;
+	/*
+	 * Do not copy ap_jump_table. Since the mirror does not share the same
+	 * KVM contexts as the original, and they may have different
+	 * memory-views.
+	 */

 	mutex_unlock(&kvm->lock);
 	return 0;
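The locking pattern introduced in sev_launch_update_vmsa() above is worth noting: each vCPU's mutex is taken with mutex_lock_killable() so a fatal signal can interrupt the whole-VM walk, and the mutex is always dropped before the per-vCPU error is acted on. In outline (do_vcpu_work() is a hypothetical stand-in for __sev_launch_update_vmsa()):

	kvm_for_each_vcpu(i, vcpu, kvm) {
		ret = mutex_lock_killable(&vcpu->mutex);
		if (ret)
			return ret;		/* interrupted by a fatal signal */

		ret = do_vcpu_work(kvm, vcpu);

		mutex_unlock(&vcpu->mutex);	/* unlock before bailing out */
		if (ret)
			return ret;
	}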


@@ -1566,6 +1566,8 @@ static void svm_clear_vintr(struct vcpu_svm *svm)
 		svm->vmcb->control.int_ctl |= svm->nested.ctl.int_ctl &
 			V_IRQ_INJECTION_BITS_MASK;
+
+		svm->vmcb->control.int_vector = svm->nested.ctl.int_vector;
 	}

 	vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
@@ -2222,6 +2224,10 @@ static int gp_interception(struct kvm_vcpu *vcpu)
 	if (error_code)
 		goto reinject;

+	/* All SVM instructions expect page aligned RAX */
+	if (svm->vmcb->save.rax & ~PAGE_MASK)
+		goto reinject;
+
 	/* Decode the instruction for usage later */
 	if (x86_decode_emulated_instruction(vcpu, 0, NULL, 0) != EMULATION_OK)
 		goto reinject;
@@ -4285,43 +4291,44 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	struct kvm_host_map map_save;
 	int ret;

-	if (is_guest_mode(vcpu)) {
-		/* FED8h - SVM Guest */
-		put_smstate(u64, smstate, 0x7ed8, 1);
-		/* FEE0h - SVM Guest VMCB Physical Address */
-		put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
-
-		svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
-		svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
-		svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
-
-		ret = nested_svm_vmexit(svm);
-		if (ret)
-			return ret;
-
-		/*
-		 * KVM uses VMCB01 to store L1 host state while L2 runs but
-		 * VMCB01 is going to be used during SMM and thus the state will
-		 * be lost. Temporary save non-VMLOAD/VMSAVE state to the host save
-		 * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
-		 * format of the area is identical to guest save area offsetted
-		 * by 0x400 (matches the offset of 'struct vmcb_save_area'
-		 * within 'struct vmcb'). Note: HSAVE area may also be used by
-		 * L1 hypervisor to save additional host context (e.g. KVM does
-		 * that, see svm_prepare_guest_switch()) which must be
-		 * preserved.
-		 */
-		if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
-				 &map_save) == -EINVAL)
-			return 1;
-
-		BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
-
-		svm_copy_vmrun_state(map_save.hva + 0x400,
-				     &svm->vmcb01.ptr->save);
-
-		kvm_vcpu_unmap(vcpu, &map_save, true);
-	}
+	if (!is_guest_mode(vcpu))
+		return 0;
+
+	/* FED8h - SVM Guest */
+	put_smstate(u64, smstate, 0x7ed8, 1);
+	/* FEE0h - SVM Guest VMCB Physical Address */
+	put_smstate(u64, smstate, 0x7ee0, svm->nested.vmcb12_gpa);
+
+	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
+	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
+	svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+
+	ret = nested_svm_vmexit(svm);
+	if (ret)
+		return ret;
+
+	/*
+	 * KVM uses VMCB01 to store L1 host state while L2 runs but
+	 * VMCB01 is going to be used during SMM and thus the state will
+	 * be lost. Temporary save non-VMLOAD/VMSAVE state to the host save
+	 * area pointed to by MSR_VM_HSAVE_PA. APM guarantees that the
+	 * format of the area is identical to guest save area offsetted
+	 * by 0x400 (matches the offset of 'struct vmcb_save_area'
+	 * within 'struct vmcb'). Note: HSAVE area may also be used by
+	 * L1 hypervisor to save additional host context (e.g. KVM does
+	 * that, see svm_prepare_guest_switch()) which must be
+	 * preserved.
+	 */
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
+			 &map_save) == -EINVAL)
+		return 1;
+
+	BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
+
+	svm_copy_vmrun_state(map_save.hva + 0x400,
+			     &svm->vmcb01.ptr->save);
+
+	kvm_vcpu_unmap(vcpu, &map_save, true);

 	return 0;
 }
@@ -4329,50 +4336,54 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct kvm_host_map map, map_save;
-	int ret = 0;
+	u64 saved_efer, vmcb12_gpa;
+	struct vmcb *vmcb12;
+	int ret;

-	if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
-		u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
-		u64 guest = GET_SMSTATE(u64, smstate, 0x7ed8);
-		u64 vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
-		struct vmcb *vmcb12;
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_LM))
+		return 0;

-		if (guest) {
-			if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
-				return 1;
-
-			if (!(saved_efer & EFER_SVME))
-				return 1;
-
-			if (kvm_vcpu_map(vcpu,
-					 gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
-				return 1;
-
-			if (svm_allocate_nested(svm))
-				return 1;
-
-			vmcb12 = map.hva;
-
-			nested_load_control_from_vmcb12(svm, &vmcb12->control);
-
-			ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12);
-			kvm_vcpu_unmap(vcpu, &map, true);
-
-			/*
-			 * Restore L1 host state from L1 HSAVE area as VMCB01 was
-			 * used during SMM (see svm_enter_smm())
-			 */
-			if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr),
-					 &map_save) == -EINVAL)
-				return 1;
-
-			svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
-					     map_save.hva + 0x400);
-
-			kvm_vcpu_unmap(vcpu, &map_save, true);
-		}
-	}
+	/* Non-zero if SMI arrived while vCPU was in guest mode. */
+	if (!GET_SMSTATE(u64, smstate, 0x7ed8))
+		return 0;
+
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+		return 1;
+
+	saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0);
+	if (!(saved_efer & EFER_SVME))
+		return 1;
+
+	vmcb12_gpa = GET_SMSTATE(u64, smstate, 0x7ee0);
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map) == -EINVAL)
+		return 1;
+
+	ret = 1;
+	if (kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.hsave_msr), &map_save) == -EINVAL)
+		goto unmap_map;
+
+	if (svm_allocate_nested(svm))
+		goto unmap_save;
+
+	/*
+	 * Restore L1 host state from L1 HSAVE area as VMCB01 was
+	 * used during SMM (see svm_enter_smm())
+	 */
+	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, map_save.hva + 0x400);
+
+	/*
+	 * Enter the nested guest now
+	 */
+	vmcb12 = map.hva;
+	nested_load_control_from_vmcb12(svm, &vmcb12->control);
+	ret = enter_svm_guest_mode(vcpu, vmcb12_gpa, vmcb12, false);
+
+unmap_save:
+	kvm_vcpu_unmap(vcpu, &map_save, true);
+unmap_map:
+	kvm_vcpu_unmap(vcpu, &map, true);

 	return ret;
 }
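The svm_leave_smm() rewrite above converts nested conditionals into the kernel's usual early-return plus goto-unwind shape, so every acquired resource has exactly one release site. A minimal sketch of that idiom (all helpers hypothetical):

	static int setup_two(void)
	{
		int ret;

		if (acquire_a())
			return 1;

		ret = 1;
		if (acquire_b())
			goto put_a;

		ret = use_both();	/* the actual work */

		release_b();
	put_a:
		release_a();
		return ret;
	}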


@@ -459,7 +459,8 @@ static inline bool nested_exit_on_nmi(struct vcpu_svm *svm)
 	return vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_NMI);
 }

-int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb_gpa, struct vmcb *vmcb12);
+int enter_svm_guest_mode(struct kvm_vcpu *vcpu,
+			 u64 vmcb_gpa, struct vmcb *vmcb12, bool from_vmrun);
 void svm_leave_nested(struct vcpu_svm *svm);
 void svm_free_nested(struct vcpu_svm *svm);
 int svm_allocate_nested(struct vcpu_svm *svm);


@@ -353,14 +353,20 @@ void nested_evmcs_filter_control_msr(u32 msr_index, u64 *pdata)
 	switch (msr_index) {
 	case MSR_IA32_VMX_EXIT_CTLS:
 	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
-		ctl_high &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+		ctl_high &= ~EVMCS1_UNSUPPORTED_VMEXIT_CTRL;
 		break;
 	case MSR_IA32_VMX_ENTRY_CTLS:
 	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
-		ctl_high &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+		ctl_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL;
 		break;
 	case MSR_IA32_VMX_PROCBASED_CTLS2:
-		ctl_high &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+		ctl_high &= ~EVMCS1_UNSUPPORTED_2NDEXEC;
+		break;
+	case MSR_IA32_VMX_PINBASED_CTLS:
+		ctl_high &= ~EVMCS1_UNSUPPORTED_PINCTRL;
+		break;
+	case MSR_IA32_VMX_VMFUNC:
+		ctl_low &= ~EVMCS1_UNSUPPORTED_VMFUNC;
 		break;
 	}


@@ -2583,8 +2583,13 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	 * Guest state is invalid and unrestricted guest is disabled,
 	 * which means L1 attempted VMEntry to L2 with invalid state.
 	 * Fail the VMEntry.
+	 *
+	 * However when force loading the guest state (SMM exit or
+	 * loading nested state after migration, it is possible to
+	 * have invalid guest state now, which will be later fixed by
+	 * restoring L2 register state
 	 */
-	if (CC(!vmx_guest_state_valid(vcpu))) {
+	if (CC(from_vmentry && !vmx_guest_state_valid(vcpu))) {
 		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
 	}
@@ -4351,6 +4356,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
 				vmcs12->vm_exit_msr_load_count))
 		nested_vmx_abort(vcpu, VMX_ABORT_LOAD_HOST_MSR_FAIL);
+
+	to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
 }

 static inline u64 nested_vmx_get_vmcs01_guest_efer(struct vcpu_vmx *vmx)
@@ -4899,14 +4906,7 @@ out_vmcs02:
 	return -ENOMEM;
 }

-/*
- * Emulate the VMXON instruction.
- * Currently, we just remember that VMX is active, and do not save or even
- * inspect the argument to VMXON (the so-called "VMXON pointer") because we
- * do not currently need to store anything in that guest-allocated memory
- * region. Consequently, VMCLEAR and VMPTRLD also do not verify that the their
- * argument is different from the VMXON pointer (which the spec says they do).
- */
+/* Emulate the VMXON instruction. */
 static int handle_vmon(struct kvm_vcpu *vcpu)
 {
 	int ret;
@@ -5903,6 +5903,12 @@ static bool nested_vmx_l0_wants_exit(struct kvm_vcpu *vcpu,
 	case EXIT_REASON_VMFUNC:
 		/* VM functions are emulated through L2->L0 vmexits. */
 		return true;
+	case EXIT_REASON_BUS_LOCK:
+		/*
+		 * At present, bus lock VM exit is never exposed to L1.
+		 * Handle L2's bus locks in L0 directly.
+		 */
+		return true;
 	default:
 		break;
 	}


@@ -1323,7 +1323,7 @@ static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
 	vmx_prepare_switch_to_host(to_vmx(vcpu));
 }

-static bool emulation_required(struct kvm_vcpu *vcpu)
+bool vmx_emulation_required(struct kvm_vcpu *vcpu)
 {
 	return emulate_invalid_guest_state && !vmx_guest_state_valid(vcpu);
 }
@@ -1367,7 +1367,7 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	vmcs_writel(GUEST_RFLAGS, rflags);

 	if ((old_rflags ^ vmx->rflags) & X86_EFLAGS_VM)
-		vmx->emulation_required = emulation_required(vcpu);
+		vmx->emulation_required = vmx_emulation_required(vcpu);
 }

 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
@@ -1837,10 +1837,11 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 				    &msr_info->data))
 			return 1;
 		/*
-		 * Enlightened VMCS v1 doesn't have certain fields, but buggy
-		 * Hyper-V versions are still trying to use corresponding
-		 * features when they are exposed. Filter out the essential
-		 * minimum.
+		 * Enlightened VMCS v1 doesn't have certain VMCS fields but
+		 * instead of just ignoring the features, different Hyper-V
+		 * versions are either trying to use them and fail or do some
+		 * sanity checking and refuse to boot. Filter all unsupported
+		 * features out.
 		 */
 		if (!msr_info->host_initiated &&
 		    vmx->nested.enlightened_vmcs_enabled)
@@ -3077,7 +3078,7 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	}

 	/* depends on vcpu->arch.cr0 to be set to a new value */
-	vmx->emulation_required = emulation_required(vcpu);
+	vmx->emulation_required = vmx_emulation_required(vcpu);
 }

 static int vmx_get_max_tdp_level(void)
@@ -3330,7 +3331,7 @@ static void vmx_set_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int
 {
 	__vmx_set_segment(vcpu, var, seg);

-	to_vmx(vcpu)->emulation_required = emulation_required(vcpu);
+	to_vmx(vcpu)->emulation_required = vmx_emulation_required(vcpu);
 }

 static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
@@ -6621,10 +6622,24 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		     vmx->loaded_vmcs->soft_vnmi_blocked))
 		vmx->loaded_vmcs->entry_time = ktime_get();

-	/* Don't enter VMX if guest state is invalid, let the exit handler
-	   start emulation until we arrive back to a valid state */
-	if (vmx->emulation_required)
+	/*
+	 * Don't enter VMX if guest state is invalid, let the exit handler
+	 * start emulation until we arrive back to a valid state.  Synthesize a
+	 * consistency check VM-Exit due to invalid guest state and bail.
+	 */
+	if (unlikely(vmx->emulation_required)) {
+		/* We don't emulate invalid state of a nested guest */
+		vmx->fail = is_guest_mode(vcpu);
+
+		vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
+		vmx->exit_reason.failed_vmentry = 1;
+		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_1);
+		vmx->exit_qualification = ENTRY_FAIL_DEFAULT;
+		kvm_register_mark_available(vcpu, VCPU_EXREG_EXIT_INFO_2);
+		vmx->exit_intr_info = 0;
 		return EXIT_FASTPATH_NONE;
+	}

 	trace_kvm_entry(vcpu);
@@ -6833,7 +6848,7 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 	 */
 	tsx_ctrl = vmx_find_uret_msr(vmx, MSR_IA32_TSX_CTRL);
 	if (tsx_ctrl)
-		vmx->guest_uret_msrs[i].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
+		tsx_ctrl->mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
 	}

 	err = alloc_loaded_vmcs(&vmx->vmcs01);


@@ -248,12 +248,8 @@ struct vcpu_vmx {
 	 * only loaded into hardware when necessary, e.g. SYSCALL #UDs outside
 	 * of 64-bit mode or if EFER.SCE=1, thus the SYSCALL MSRs don't need to
 	 * be loaded into hardware if those conditions aren't met.
-	 * nr_active_uret_msrs tracks the number of MSRs that need to be loaded
-	 * into hardware when running the guest. guest_uret_msrs[] is resorted
-	 * whenever the number of "active" uret MSRs is modified.
 	 */
 	struct vmx_uret_msr   guest_uret_msrs[MAX_NR_USER_RETURN_MSRS];
-	int                   nr_active_uret_msrs;
 	bool                  guest_uret_msrs_loaded;
 #ifdef CONFIG_X86_64
 	u64                   msr_host_kernel_gs_base;
@@ -359,6 +355,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu);
 void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel,
 			unsigned long fs_base, unsigned long gs_base);
 int vmx_get_cpl(struct kvm_vcpu *vcpu);
+bool vmx_emulation_required(struct kvm_vcpu *vcpu);
 unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
 void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags);
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu);


@@ -1332,6 +1332,13 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+
+	MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
+	MSR_K7_PERFCTR0, MSR_K7_PERFCTR1, MSR_K7_PERFCTR2, MSR_K7_PERFCTR3,
+	MSR_F15H_PERF_CTL0, MSR_F15H_PERF_CTL1, MSR_F15H_PERF_CTL2,
+	MSR_F15H_PERF_CTL3, MSR_F15H_PERF_CTL4, MSR_F15H_PERF_CTL5,
+	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
+	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
 };

 static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
@@ -2969,7 +2976,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 					offsetof(struct compat_vcpu_info, time));
 	if (vcpu->xen.vcpu_time_info_set)
 		kvm_setup_pvclock_page(v, &vcpu->xen.vcpu_time_info_cache, 0);
-	if (v == kvm_get_vcpu(v->kvm, 0))
+	if (!v->vcpu_idx)
 		kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
 	return 0;
 }
@@ -7658,6 +7665,13 @@ static void kvm_smm_changed(struct kvm_vcpu *vcpu, bool entering_smm)
 		/* Process a latched INIT or SMI, if any.  */
 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+		/*
+		 * Even if KVM_SET_SREGS2 loaded PDPTRs out of band,
+		 * on SMM exit we still need to reload them from
+		 * guest memory
+		 */
+		vcpu->arch.pdptrs_from_userspace = false;
 	}

 	kvm_mmu_reset_context(vcpu);
@@ -10652,6 +10666,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	int r;

 	vcpu->arch.last_vmentry_cpu = -1;
+	vcpu->arch.regs_avail = ~0;
+	vcpu->arch.regs_dirty = ~0;

 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
@@ -10893,6 +10909,9 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	kvm_rip_write(vcpu, 0xfff0);

+	vcpu->arch.cr3 = 0;
+	kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
+
 	/*
 	 * CR0.CD/NW are set on RESET, preserved on INIT.  Note, some versions
 	 * of Intel's SDM list CD/NW as being set on INIT, but they contradict
@@ -11139,9 +11158,15 @@ void kvm_arch_free_vm(struct kvm *kvm)
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
+	int ret;
+
 	if (type)
 		return -EINVAL;

+	ret = kvm_page_track_init(kvm);
+	if (ret)
+		return ret;
+
 	INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list);
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
@@ -11174,7 +11199,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm_apicv_init(kvm);
 	kvm_hv_init_vm(kvm);
-	kvm_page_track_init(kvm);
 	kvm_mmu_init_vm(kvm);
 	kvm_xen_init_vm(kvm);


@@ -1341,9 +1341,10 @@ st: if (is_imm8(insn->off))
 			if (insn->imm == (BPF_AND | BPF_FETCH) ||
 			    insn->imm == (BPF_OR | BPF_FETCH) ||
 			    insn->imm == (BPF_XOR | BPF_FETCH)) {
-				u8 *branch_target;
 				bool is64 = BPF_SIZE(insn->code) == BPF_DW;
 				u32 real_src_reg = src_reg;
+				u32 real_dst_reg = dst_reg;
+				u8 *branch_target;

 				/*
 				 * Can't be implemented with a single x86 insn.
@@ -1354,11 +1355,13 @@ st: if (is_imm8(insn->off))
 				emit_mov_reg(&prog, true, BPF_REG_AX, BPF_REG_0);
 				if (src_reg == BPF_REG_0)
 					real_src_reg = BPF_REG_AX;
+				if (dst_reg == BPF_REG_0)
+					real_dst_reg = BPF_REG_AX;

 				branch_target = prog;
 				/* Load old value */
 				emit_ldx(&prog, BPF_SIZE(insn->code),
-					 BPF_REG_0, dst_reg, insn->off);
+					 BPF_REG_0, real_dst_reg, insn->off);
 				/*
 				 * Perform the (commutative) operation locally,
 				 * put the result in the AUX_REG.
@@ -1369,7 +1372,8 @@ st: if (is_imm8(insn->off))
 					  add_2reg(0xC0, AUX_REG, real_src_reg));
 				/* Attempt to swap in new value */
 				err = emit_atomic(&prog, BPF_CMPXCHG,
-						  dst_reg, AUX_REG, insn->off,
+						  real_dst_reg, AUX_REG,
+						  insn->off,
 						  BPF_SIZE(insn->code));
 				if (WARN_ON(err))
 					return err;
@@ -1383,11 +1387,10 @@ st: if (is_imm8(insn->off))
 				/* Restore R0 after clobbering RAX */
 				emit_mov_reg(&prog, true, BPF_REG_0, BPF_REG_AX);
 				break;
-
 			}

 			err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
 					  insn->off, BPF_SIZE(insn->code));
 			if (err)
 				return err;
 			break;
@@ -1744,7 +1747,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 }

 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_prog *p, int stack_size, bool mod_ret)
+			   struct bpf_prog *p, int stack_size, bool save_ret)
 {
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
@@ -1777,11 +1780,15 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	if (emit_call(&prog, p->bpf_func, prog))
 		return -EINVAL;

-	/* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
+	/*
+	 * BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
 	 * of the previous call which is then passed on the stack to
 	 * the next BPF program.
+	 *
+	 * BPF_TRAMP_FENTRY trampoline may need to return the return
+	 * value of BPF_PROG_TYPE_STRUCT_OPS prog.
 	 */
-	if (mod_ret)
+	if (save_ret)
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);

 	/* replace 2 nops with JE insn, since jmp target is known */
@@ -1828,13 +1835,15 @@ static int emit_cond_near_jump(u8 **pprog, void *func, void *ip, u8 jmp_cond)
 }

 static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
-		      struct bpf_tramp_progs *tp, int stack_size)
+		      struct bpf_tramp_progs *tp, int stack_size,
+		      bool save_ret)
 {
 	int i;
 	u8 *prog = *pprog;

 	for (i = 0; i < tp->nr_progs; i++) {
-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
+		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size,
+				    save_ret))
 			return -EINVAL;
 	}
 	*pprog = prog;
@@ -1877,6 +1886,23 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 	return 0;
 }

+static bool is_valid_bpf_tramp_flags(unsigned int flags)
+{
+	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
+	    (flags & BPF_TRAMP_F_SKIP_FRAME))
+		return false;
+
+	/*
+	 * BPF_TRAMP_F_RET_FENTRY_RET is only used by bpf_struct_ops,
+	 * and it must be used alone.
+	 */
+	if ((flags & BPF_TRAMP_F_RET_FENTRY_RET) &&
+	    (flags & ~BPF_TRAMP_F_RET_FENTRY_RET))
+		return false;
+
+	return true;
+}
+
 /* Example:
  * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
  * its 'struct btf_func_model' will be nr_args=2
@@ -1949,17 +1975,19 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
 	u8 **branches = NULL;
 	u8 *prog;
+	bool save_ret;

 	/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
 	if (nr_args > 6)
 		return -ENOTSUPP;

-	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
-	    (flags & BPF_TRAMP_F_SKIP_FRAME))
+	if (!is_valid_bpf_tramp_flags(flags))
 		return -EINVAL;

-	if (flags & BPF_TRAMP_F_CALL_ORIG)
-		stack_size += 8; /* room for return value of orig_call */
+	/* room for return value of orig_call or fentry prog */
+	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
+	if (save_ret)
+		stack_size += 8;

 	if (flags & BPF_TRAMP_F_IP_ARG)
 		stack_size += 8; /* room for IP address argument */
@@ -2005,7 +2033,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}

 	if (fentry->nr_progs)
-		if (invoke_bpf(m, &prog, fentry, stack_size))
+		if (invoke_bpf(m, &prog, fentry, stack_size,
+			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
 			return -EINVAL;

 	if (fmod_ret->nr_progs) {
@@ -2052,7 +2081,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}

 	if (fexit->nr_progs)
-		if (invoke_bpf(m, &prog, fexit, stack_size)) {
+		if (invoke_bpf(m, &prog, fexit, stack_size, false)) {
 			ret = -EINVAL;
 			goto cleanup;
 		}
@@ -2072,9 +2101,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 			ret = -EINVAL;
 			goto cleanup;
 		}
-		/* restore original return value back into RAX */
-		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
 	}

+	/* restore return value of orig_call or fentry prog back into RAX */
+	if (save_ret)
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+
 	EMIT1(0x5B); /* pop rbx */
 	EMIT1(0xC9); /* leave */
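Net effect of the trampoline changes above: one stack slot at RBP-8 is reserved whenever any return value must survive to the epilogue, either the traced function's (BPF_TRAMP_F_CALL_ORIG) or, new here, a fentry program's (BPF_TRAMP_F_RET_FENTRY_RET, used by bpf_struct_ops). The bookkeeping reduces to this sketch, with the emit sequence elided:

	bool save_ret = flags & (BPF_TRAMP_F_CALL_ORIG |
				 BPF_TRAMP_F_RET_FENTRY_RET);

	if (save_ret)
		stack_size += 8;	/* slot for the preserved RAX */

	/* ... prologue, fentry progs, optional orig_call, fexit progs ... */

	if (save_ret)			/* reload it right before the epilogue */
		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);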


@@ -2662,15 +2662,6 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
 	 * are likely to increase the throughput.
 	 */
 	bfqq->new_bfqq = new_bfqq;
-	/*
-	 * The above assignment schedules the following redirections:
-	 * each time some I/O for bfqq arrives, the process that
-	 * generated that I/O is disassociated from bfqq and
-	 * associated with new_bfqq. Here we increases new_bfqq->ref
-	 * in advance, adding the number of processes that are
-	 * expected to be associated with new_bfqq as they happen to
-	 * issue I/O.
-	 */
 	new_bfqq->ref += process_refs;
 	return new_bfqq;
 }
@@ -2733,10 +2724,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
 	struct bfq_queue *in_service_bfqq, *new_bfqq;

-	/* if a merge has already been setup, then proceed with that first */
-	if (bfqq->new_bfqq)
-		return bfqq->new_bfqq;
-
 	/*
 	 * Check delayed stable merge for rotational or non-queueing
 	 * devs. For this branch to be executed, bfqq must not be
@@ -2838,6 +2825,9 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	if (bfq_too_late_for_merging(bfqq))
 		return NULL;

+	if (bfqq->new_bfqq)
+		return bfqq->new_bfqq;
+
 	if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
 		return NULL;


@@ -3007,6 +3007,18 @@ static int acpi_nfit_register_region(struct acpi_nfit_desc *acpi_desc,
 		ndr_desc->target_node = NUMA_NO_NODE;
 	}
 
+	/* Fallback to address based numa information if node lookup failed */
+	if (ndr_desc->numa_node == NUMA_NO_NODE) {
+		ndr_desc->numa_node = memory_add_physaddr_to_nid(spa->address);
+		dev_info(acpi_desc->dev, "changing numa node from %d to %d for nfit region [%pa-%pa]",
+			NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
+	}
+
+	if (ndr_desc->target_node == NUMA_NO_NODE) {
+		ndr_desc->target_node = phys_to_target_node(spa->address);
+		dev_info(acpi_desc->dev, "changing target node from %d to %d for nfit region [%pa-%pa]",
+			NUMA_NO_NODE, ndr_desc->numa_node, &res.start, &res.end);
+	}
+
 	/*
 	 * Persistence domain bits are hierarchical, if
 	 * ACPI_NFIT_CAPABILITY_CACHE_FLUSH is set then


@@ -95,12 +95,29 @@ int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup)
 	list_add(&link->s_hook, &sup->consumers);
 	list_add(&link->c_hook, &con->suppliers);
+	pr_debug("%pfwP Linked as a fwnode consumer to %pfwP\n",
+		 con, sup);
 out:
 	mutex_unlock(&fwnode_link_lock);
 
 	return ret;
 }
 
+/**
+ * __fwnode_link_del - Delete a link between two fwnode_handles.
+ * @link: the fwnode_link to be deleted
+ *
+ * The fwnode_link_lock needs to be held when this function is called.
+ */
+static void __fwnode_link_del(struct fwnode_link *link)
+{
+	pr_debug("%pfwP Dropping the fwnode link to %pfwP\n",
+		 link->consumer, link->supplier);
+	list_del(&link->s_hook);
+	list_del(&link->c_hook);
+	kfree(link);
+}
+
 /**
  * fwnode_links_purge_suppliers - Delete all supplier links of fwnode_handle.
  * @fwnode: fwnode whose supplier links need to be deleted
@@ -112,11 +129,8 @@ static void fwnode_links_purge_suppliers(struct fwnode_handle *fwnode)
 	struct fwnode_link *link, *tmp;
 
 	mutex_lock(&fwnode_link_lock);
-	list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook) {
-		list_del(&link->s_hook);
-		list_del(&link->c_hook);
-		kfree(link);
-	}
+	list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook)
+		__fwnode_link_del(link);
 	mutex_unlock(&fwnode_link_lock);
 }
 
@@ -131,11 +145,8 @@ static void fwnode_links_purge_consumers(struct fwnode_handle *fwnode)
 	struct fwnode_link *link, *tmp;
 
 	mutex_lock(&fwnode_link_lock);
-	list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook) {
-		list_del(&link->s_hook);
-		list_del(&link->c_hook);
-		kfree(link);
-	}
+	list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook)
+		__fwnode_link_del(link);
 	mutex_unlock(&fwnode_link_lock);
 }
 
@@ -975,6 +986,7 @@ int device_links_check_suppliers(struct device *dev)
 {
 	struct device_link *link;
 	int ret = 0;
+	struct fwnode_handle *sup_fw;
 
 	/*
 	 * Device waiting for supplier to become available is not allowed to
@@ -983,10 +995,11 @@ int device_links_check_suppliers(struct device *dev)
 	mutex_lock(&fwnode_link_lock);
 	if (dev->fwnode && !list_empty(&dev->fwnode->suppliers) &&
 	    !fw_devlink_is_permissive()) {
-		dev_dbg(dev, "probe deferral - wait for supplier %pfwP\n",
-			list_first_entry(&dev->fwnode->suppliers,
-					 struct fwnode_link,
-					 c_hook)->supplier);
+		sup_fw = list_first_entry(&dev->fwnode->suppliers,
+					  struct fwnode_link,
+					  c_hook)->supplier;
+		dev_err_probe(dev, -EPROBE_DEFER, "wait for supplier %pfwP\n",
+			      sup_fw);
 		mutex_unlock(&fwnode_link_lock);
 		return -EPROBE_DEFER;
 	}
@@ -1001,8 +1014,9 @@ int device_links_check_suppliers(struct device *dev)
 		if (link->status != DL_STATE_AVAILABLE &&
 		    !(link->flags & DL_FLAG_SYNC_STATE_ONLY)) {
 			device_links_missing_supplier(dev);
-			dev_dbg(dev, "probe deferral - supplier %s not ready\n",
-				dev_name(link->supplier));
+			dev_err_probe(dev, -EPROBE_DEFER,
+				      "supplier %s not ready\n",
+				      dev_name(link->supplier));
 			ret = -EPROBE_DEFER;
 			break;
 		}
@@ -1722,6 +1736,25 @@ static int fw_devlink_create_devlink(struct device *con,
 	struct device *sup_dev;
 	int ret = 0;
 
+	/*
+	 * In some cases, a device P might also be a supplier to its child node
+	 * C. However, this would defer the probe of C until the probe of P
+	 * completes successfully. This is perfectly fine in the device driver
+	 * model. device_add() doesn't guarantee probe completion of the device
+	 * by the time it returns.
+	 *
+	 * However, there are a few drivers that assume C will finish probing
+	 * as soon as it's added and before P finishes probing. So, we provide
+	 * a flag to let fw_devlink know not to delay the probe of C until the
+	 * probe of P completes successfully.
+	 *
+	 * When such a flag is set, we can't create device links where P is the
+	 * supplier of C as that would delay the probe of C.
+	 */
+	if (sup_handle->flags & FWNODE_FLAG_NEEDS_CHILD_BOUND_ON_ADD &&
+	    fwnode_is_ancestor_of(sup_handle, con->fwnode))
+		return -EINVAL;
+
 	sup_dev = get_dev_from_fwnode(sup_handle);
 	if (sup_dev) {
 		/*
@@ -1772,14 +1805,21 @@ static int fw_devlink_create_devlink(struct device *con,
 	 * be broken by applying logic. Check for these types of cycles and
 	 * break them so that devices in the cycle probe properly.
 	 *
-	 * If the supplier's parent is dependent on the consumer, then
-	 * the consumer-supplier dependency is a false dependency. So,
-	 * treat it as an invalid link.
+	 * If the supplier's parent is dependent on the consumer, then the
+	 * consumer and supplier have a cyclic dependency. Since fw_devlink
+	 * can't tell which of the inferred dependencies are incorrect, don't
+	 * enforce probe ordering between any of the devices in this cyclic
+	 * dependency. Do this by relaxing all the fw_devlink device links in
+	 * this cycle and by treating the fwnode link between the consumer and
+	 * the supplier as an invalid dependency.
 	 */
 	sup_dev = fwnode_get_next_parent_dev(sup_handle);
 	if (sup_dev && device_is_dependent(con, sup_dev)) {
-		dev_dbg(con, "Not linking to %pfwP - False link\n",
-			sup_handle);
+		dev_info(con, "Fixing up cyclic dependency with %pfwP (%s)\n",
+			 sup_handle, dev_name(sup_dev));
+		device_links_write_lock();
+		fw_devlink_relax_cycle(con, sup_dev);
+		device_links_write_unlock();
 		ret = -EINVAL;
 	} else {
 		/*
@@ -1858,9 +1898,7 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
 		if (!own_link || ret == -EAGAIN)
 			continue;
 
-		list_del(&link->s_hook);
-		list_del(&link->c_hook);
-		kfree(link);
+		__fwnode_link_del(link);
 	}
 }
 
@@ -1912,9 +1950,7 @@ static void __fw_devlink_link_to_suppliers(struct device *dev,
 		if (!own_link || ret == -EAGAIN)
 			continue;
 
-		list_del(&link->s_hook);
-		list_del(&link->c_hook);
-		kfree(link);
+		__fwnode_link_del(link);
 
 		/* If no device link was created, nothing more to do. */
 		if (ret)
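
The repeated three-step teardown (two list_del() calls plus kfree()) is exactly what __fwnode_link_del() centralizes above. A stripped-down user-space sketch of the same pattern, using a minimal doubly-linked list and hypothetical types:

#include <stdlib.h>

struct list_node {
	struct list_node *prev, *next;
};

static void list_init(struct list_node *n) { n->prev = n->next = n; }

static void list_add(struct list_node *n, struct list_node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del(struct list_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

/* A link sits on two lists at once: the supplier's consumers list
 * (via s_hook) and the consumer's suppliers list (via c_hook). */
struct fw_link {
	struct list_node s_hook;
	struct list_node c_hook;
};

/* One helper owns the full teardown, so no caller can forget a step. */
static void link_del(struct fw_link *link)
{
	list_del(&link->s_hook);
	list_del(&link->c_hook);
	free(link);
}

int main(void)
{
	struct list_node consumers, suppliers;
	struct fw_link *l = malloc(sizeof(*l));

	if (!l)
		return 1;
	list_init(&consumers);
	list_init(&suppliers);
	list_add(&l->s_hook, &consumers);
	list_add(&l->c_hook, &suppliers);
	link_del(l);
	return consumers.next == &consumers && suppliers.next == &suppliers ? 0 : 1;
}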


@@ -97,13 +97,18 @@ struct nbd_config {
 	atomic_t recv_threads;
 	wait_queue_head_t recv_wq;
-	loff_t blksize;
+	unsigned int blksize_bits;
 	loff_t bytesize;
 #if IS_ENABLED(CONFIG_DEBUG_FS)
 	struct dentry *dbg_dir;
 #endif
 };
 
+static inline unsigned int nbd_blksize(struct nbd_config *config)
+{
+	return 1u << config->blksize_bits;
+}
+
 struct nbd_device {
 	struct blk_mq_tag_set tag_set;
 
@@ -146,7 +151,7 @@ static struct dentry *nbd_dbg_dir;
 
 #define NBD_MAGIC 0x68797548
 
-#define NBD_DEF_BLKSIZE 1024
+#define NBD_DEF_BLKSIZE_BITS 10
 
 static unsigned int nbds_max = 16;
 static int max_part = 16;
@@ -317,12 +322,12 @@ static int nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		loff_t blksize)
 {
 	if (!blksize)
-		blksize = NBD_DEF_BLKSIZE;
+		blksize = 1u << NBD_DEF_BLKSIZE_BITS;
 
 	if (blksize < 512 || blksize > PAGE_SIZE || !is_power_of_2(blksize))
 		return -EINVAL;
 
 	nbd->config->bytesize = bytesize;
-	nbd->config->blksize = blksize;
+	nbd->config->blksize_bits = __ffs(blksize);
 
 	if (!nbd->task_recv)
 		return 0;
@@ -1337,7 +1342,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	return nbd_set_size(nbd, config->bytesize, config->blksize);
+	return nbd_set_size(nbd, config->bytesize, nbd_blksize(config));
 }
 
 static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
@@ -1406,11 +1411,11 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 	case NBD_SET_BLKSIZE:
 		return nbd_set_size(nbd, config->bytesize, arg);
 	case NBD_SET_SIZE:
-		return nbd_set_size(nbd, arg, config->blksize);
+		return nbd_set_size(nbd, arg, nbd_blksize(config));
 	case NBD_SET_SIZE_BLOCKS:
-		if (check_mul_overflow((loff_t)arg, config->blksize, &bytesize))
+		if (check_shl_overflow(arg, config->blksize_bits, &bytesize))
 			return -EINVAL;
-		return nbd_set_size(nbd, bytesize, config->blksize);
+		return nbd_set_size(nbd, bytesize, nbd_blksize(config));
 	case NBD_SET_TIMEOUT:
 		nbd_set_cmd_timeout(nbd, arg);
 		return 0;
@@ -1476,7 +1481,7 @@ static struct nbd_config *nbd_alloc_config(void)
 	atomic_set(&config->recv_threads, 0);
 	init_waitqueue_head(&config->recv_wq);
 	init_waitqueue_head(&config->conn_wait);
-	config->blksize = NBD_DEF_BLKSIZE;
+	config->blksize_bits = NBD_DEF_BLKSIZE_BITS;
 	atomic_set(&config->live_connections, 0);
 	try_module_get(THIS_MODULE);
 	return config;
@@ -1604,7 +1609,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
 	debugfs_create_file("tasks", 0444, dir, nbd, &nbd_dbg_tasks_fops);
 	debugfs_create_u64("size_bytes", 0444, dir, &config->bytesize);
 	debugfs_create_u32("timeout", 0444, dir, &nbd->tag_set.timeout);
-	debugfs_create_u64("blocksize", 0444, dir, &config->blksize);
+	debugfs_create_u32("blocksize_bits", 0444, dir, &config->blksize_bits);
 	debugfs_create_file("flags", 0444, dir, nbd, &nbd_dbg_flags_fops);
 
 	return 0;
@@ -1826,7 +1831,7 @@ nbd_device_policy[NBD_DEVICE_ATTR_MAX + 1] = {
 static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 {
 	struct nbd_config *config = nbd->config;
-	u64 bsize = config->blksize;
+	u64 bsize = nbd_blksize(config);
 	u64 bytes = config->bytesize;
 
 	if (info->attrs[NBD_ATTR_SIZE_BYTES])
@@ -1835,7 +1840,7 @@ static int nbd_genl_size_set(struct genl_info *info, struct nbd_device *nbd)
 	if (info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES])
 		bsize = nla_get_u64(info->attrs[NBD_ATTR_BLOCK_SIZE_BYTES]);
 
-	if (bytes != config->bytesize || bsize != config->blksize)
+	if (bytes != config->bytesize || bsize != nbd_blksize(config))
 		return nbd_set_size(nbd, bytes, bsize);
 	return 0;
 }
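
Storing the block size as a bit count is safe because nbd_set_size() only accepts powers of two between 512 and PAGE_SIZE. A user-space sketch of the round trip and the shift-overflow guard (check_shl_overflow() is the kernel helper; the sketch open-codes an equivalent):

#include <assert.h>
#include <stdint.h>

/* log2 of a power of two; the kernel uses __ffs() for this. */
static unsigned int blksize_to_bits(uint64_t blksize)
{
	unsigned int bits = 0;

	while ((1ULL << bits) < blksize)
		bits++;
	return bits;
}

/* Open-coded equivalent of check_shl_overflow(blocks, bits, &bytes). */
static int shl_overflows(uint64_t blocks, unsigned int bits, uint64_t *bytes)
{
	if (bits >= 64 || (blocks && (blocks >> (64 - bits))))
		return 1;	/* would overflow */
	*bytes = blocks << bits;
	return 0;
}

int main(void)
{
	uint64_t bytes;
	unsigned int bits = blksize_to_bits(1024);

	assert(bits == 10);
	assert((1ULL << bits) == 1024);
	assert(!shl_overflows(8, bits, &bytes) && bytes == 8192);
	assert(shl_overflows(UINT64_MAX, bits, &bytes));
	return 0;
}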


@@ -778,7 +778,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
 			    in_place ? DMA_BIDIRECTIONAL
 				     : DMA_TO_DEVICE);
 	if (ret)
-		goto e_ctx;
+		goto e_aad;
 
 	if (in_place) {
 		dst = src;
@@ -863,7 +863,7 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
 	op.u.aes.size = 0;
 	ret = cmd_q->ccp->vdata->perform->aes(&op);
 	if (ret)
-		goto e_dst;
+		goto e_final_wa;
 
 	if (aes->action == CCP_AES_ACTION_ENCRYPT) {
 		/* Put the ciphered tag after the ciphertext. */
@@ -873,17 +873,19 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
 		ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
 					   DMA_BIDIRECTIONAL);
 		if (ret)
-			goto e_tag;
+			goto e_final_wa;
 		ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
-		if (ret)
-			goto e_tag;
+		if (ret) {
+			ccp_dm_free(&tag);
+			goto e_final_wa;
+		}
 
 		ret = crypto_memneq(tag.address, final_wa.address,
 				    authsize) ? -EBADMSG : 0;
 		ccp_dm_free(&tag);
 	}
 
-e_tag:
+e_final_wa:
 	ccp_dm_free(&final_wa);
 
 e_dst:
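
The relabeling from e_tag to e_final_wa restores the goto-unwind invariant: each label frees exactly what is live at the point the jump fires, and nothing more. A generic user-space sketch of that convention, with hypothetical resources:

#include <stdlib.h>

static int work(void *buf) { return buf ? 0 : -1; }

static int do_op(void)
{
	int ret = -1;
	void *aad, *dst, *final;

	aad = malloc(16);
	if (!aad)
		return -1;

	dst = malloc(16);
	if (!dst)
		goto e_aad;	/* only aad is live here */

	final = malloc(16);
	if (!final)
		goto e_dst;	/* aad and dst are live */

	if (work(final) < 0)
		goto e_final;	/* all three are live */
	ret = 0;

e_final:
	free(final);
e_dst:
	free(dst);
e_aad:
	free(aad);
	return ret;
}

int main(void) { return do_op(); }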


@@ -468,15 +468,8 @@ static int pca953x_gpio_get_value(struct gpio_chip *gc, unsigned off)
 	mutex_lock(&chip->i2c_lock);
 	ret = regmap_read(chip->regmap, inreg, &reg_val);
 	mutex_unlock(&chip->i2c_lock);
-	if (ret < 0) {
-		/*
-		 * NOTE:
-		 * diagnostic already emitted; that's all we should
-		 * do unless gpio_*_value_cansleep() calls become different
-		 * from their nonsleeping siblings (and report faults).
-		 */
-		return 0;
-	}
+	if (ret < 0)
+		return ret;
 
 	return !!(reg_val & bit);
 }


@@ -689,6 +689,7 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
 	struct device_node *pctlnp = of_get_parent(np);
 	struct pinctrl_dev *pctldev = NULL;
 	struct rockchip_pin_bank *bank = NULL;
+	struct rockchip_pin_output_deferred *cfg;
 	static int gpio;
 	int id, ret;
 
@@ -716,12 +717,33 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
 	if (ret)
 		return ret;
 
+	/*
+	 * Prevent clashes with a deferred output setting
+	 * being added right at this moment.
+	 */
+	mutex_lock(&bank->deferred_lock);
+
 	ret = rockchip_gpiolib_register(bank);
 	if (ret) {
 		clk_disable_unprepare(bank->clk);
+		mutex_unlock(&bank->deferred_lock);
 		return ret;
 	}
 
+	while (!list_empty(&bank->deferred_output)) {
+		cfg = list_first_entry(&bank->deferred_output,
+				       struct rockchip_pin_output_deferred, head);
+		list_del(&cfg->head);
+
+		ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
+		if (ret)
+			dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin, cfg->arg);
+
+		kfree(cfg);
+	}
+
+	mutex_unlock(&bank->deferred_lock);
+
 	platform_set_drvdata(pdev, bank);
 	dev_info(dev, "probed %pOF\n", np);
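
The probe path drains requests that were queued before the gpiochip existed, and it holds deferred_lock across registration plus drain so no new entry can slip in between. A user-space sketch of the same drain-under-lock pattern with pthreads and hypothetical types:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical deferred request, queued before the device was ready. */
struct deferred {
	struct deferred *next;
	unsigned int pin, arg;
};

static pthread_mutex_t deferred_lock = PTHREAD_MUTEX_INITIALIZER;
static struct deferred *deferred_head;

static void queue_deferred(unsigned int pin, unsigned int arg)
{
	struct deferred *cfg = malloc(sizeof(*cfg));

	if (!cfg)
		return;
	cfg->pin = pin;
	cfg->arg = arg;
	pthread_mutex_lock(&deferred_lock);
	cfg->next = deferred_head;
	deferred_head = cfg;
	pthread_mutex_unlock(&deferred_lock);
}

/* Called once the device is registered: hold the lock across the whole
 * drain so a concurrent queue_deferred() cannot race in unseen. */
static void drain_deferred(void)
{
	pthread_mutex_lock(&deferred_lock);
	while (deferred_head) {
		struct deferred *cfg = deferred_head;

		deferred_head = cfg->next;
		printf("apply pin %u = %u\n", cfg->pin, cfg->arg);
		free(cfg);
	}
	pthread_mutex_unlock(&deferred_lock);
}

int main(void)
{
	queue_deferred(3, 1);
	queue_deferred(7, 0);
	drain_deferred();
	return 0;
}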


@@ -837,6 +837,28 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
 	return 0;
 }
 
+/* Mirrors the is_displayable check in radeonsi's gfx6_compute_surface */
+static int check_tiling_flags_gfx6(struct amdgpu_framebuffer *afb)
+{
+	u64 micro_tile_mode;
+
+	/* Zero swizzle mode means linear */
+	if (AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0)
+		return 0;
+
+	micro_tile_mode = AMDGPU_TILING_GET(afb->tiling_flags, MICRO_TILE_MODE);
+	switch (micro_tile_mode) {
+	case 0: /* DISPLAY */
+	case 3: /* RENDER */
+		return 0;
+	default:
+		drm_dbg_kms(afb->base.dev,
+			    "Micro tile mode %llu not supported for scanout\n",
+			    micro_tile_mode);
+		return -EINVAL;
+	}
+}
+
 static void get_block_dimensions(unsigned int block_log2, unsigned int cpp,
 				 unsigned int *width, unsigned int *height)
 {
@@ -1103,6 +1125,7 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
 				    const struct drm_mode_fb_cmd2 *mode_cmd,
 				    struct drm_gem_object *obj)
 {
+	struct amdgpu_device *adev = drm_to_adev(dev);
 	int ret, i;
 
 	/*
@@ -1122,6 +1145,14 @@ int amdgpu_display_framebuffer_init(struct drm_device *dev,
 	if (ret)
 		return ret;
 
+	if (!dev->mode_config.allow_fb_modifiers) {
+		drm_WARN_ONCE(dev, adev->family >= AMDGPU_FAMILY_AI,
+			      "GFX9+ requires FB check based on format modifier\n");
+		ret = check_tiling_flags_gfx6(rfb);
+		if (ret)
+			return ret;
+	}
+
 	if (dev->mode_config.allow_fb_modifiers &&
 	    !(rfb->base.flags & DRM_MODE_FB_MODIFIERS)) {
 		ret = convert_tiling_flags_to_modifier(rfb);
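
check_tiling_flags_gfx6() depends on AMDGPU_TILING_GET() to pull named fields out of a packed u64. A generic mask-and-shift sketch in the same macro style, with illustrative field positions (not the real amdgpu layout, which is defined in the uapi amdgpu_drm.h):

#include <assert.h>
#include <stdint.h>

/* Illustrative field layout only. */
#define TILING_SWIZZLE_MODE_SHIFT	0
#define TILING_SWIZZLE_MODE_MASK	0x1f
#define TILING_MICRO_TILE_MODE_SHIFT	5
#define TILING_MICRO_TILE_MODE_MASK	0x7

/* Token-pasting keeps the call sites readable: GET(flags, FIELD). */
#define TILING_GET(flags, field) \
	(((flags) >> TILING_##field##_SHIFT) & TILING_##field##_MASK)

int main(void)
{
	uint64_t flags = (3ULL << TILING_MICRO_TILE_MODE_SHIFT) |
			 (9ULL << TILING_SWIZZLE_MODE_SHIFT);

	assert(TILING_GET(flags, SWIZZLE_MODE) == 9);
	assert(TILING_GET(flags, MICRO_TILE_MODE) == 3);
	return 0;
}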


@@ -3599,7 +3599,7 @@ static int gfx_v9_0_mqd_init(struct amdgpu_ring *ring)
 
 	/* set static priority for a queue/ring */
 	gfx_v9_0_mqd_set_priority(ring, mqd);
-	mqd->cp_hqd_quantum = RREG32(mmCP_HQD_QUANTUM);
+	mqd->cp_hqd_quantum = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);
 
 	/* map_queues packet doesn't need activate the queue,
 	 * so only kiq need set this field.


@@ -1098,6 +1098,8 @@ static int gmc_v10_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	gmc_v10_0_gart_disable(adev);
+
 	if (amdgpu_sriov_vf(adev)) {
 		/* full access mode, so don't touch any GMC register */
 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
@@ -1106,7 +1108,6 @@ static int gmc_v10_0_hw_fini(void *handle)
 
 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
-	gmc_v10_0_gart_disable(adev);
 
 	return 0;
 }


@@ -1794,6 +1794,8 @@ static int gmc_v9_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	gmc_v9_0_gart_disable(adev);
+
 	if (amdgpu_sriov_vf(adev)) {
 		/* full access mode, so don't touch any GMC register */
 		DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
@@ -1802,7 +1804,6 @@ static int gmc_v9_0_hw_fini(void *handle)
 
 	amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
 	amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
-	gmc_v9_0_gart_disable(adev);
 
 	return 0;
 }


@@ -868,6 +868,12 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
 			msleep(1000);
 	}
 
+	/* TODO: check whether can submit a doorbell request to raise
+	 * a doorbell fence to exit gfxoff.
+	 */
+	if (adev->in_s0ix)
+		amdgpu_gfx_off_ctrl(adev, false);
+
 	sdma_v5_2_soft_reset(adev);
 	/* unhalt the MEs */
 	sdma_v5_2_enable(adev, true);
@@ -876,6 +882,8 @@ static int sdma_v5_2_start(struct amdgpu_device *adev)
 
 	/* start the gfx rings and rlc compute queues */
 	r = sdma_v5_2_gfx_resume(adev);
+	if (adev->in_s0ix)
+		amdgpu_gfx_off_ctrl(adev, true);
 	if (r)
 		return r;
 	r = sdma_v5_2_rlc_resume(adev);


@@ -1115,6 +1115,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
 
 	init_data.asic_id.pci_revision_id = adev->pdev->revision;
 	init_data.asic_id.hw_internal_rev = adev->external_rev_id;
+	init_data.asic_id.chip_id = adev->pdev->device;
 
 	init_data.asic_id.vram_width = adev->gmc.vram_width;
 	/* TODO: initialize init_data.asic_id.vram_type here!!!! */
@@ -1719,6 +1720,7 @@ static int dm_late_init(void *handle)
 		linear_lut[i] = 0xFFFF * i / 15;
 
 	params.set = 0;
+	params.backlight_ramping_override = false;
 	params.backlight_ramping_start = 0xCCCC;
 	params.backlight_ramping_reduction = 0xCCCCCCCC;
 	params.backlight_lut_array_size = 16;


@@ -1826,14 +1826,13 @@ bool perform_link_training_with_retries(
 		if (panel_mode == DP_PANEL_MODE_EDP) {
 			struct cp_psp *cp_psp = &stream->ctx->cp_psp;
 
-			if (cp_psp && cp_psp->funcs.enable_assr) {
-				if (!cp_psp->funcs.enable_assr(cp_psp->handle, link)) {
-					/* since eDP implies ASSR on, change panel
-					 * mode to disable ASSR
-					 */
-					panel_mode = DP_PANEL_MODE_DEFAULT;
-				}
-			}
+			if (cp_psp && cp_psp->funcs.enable_assr)
+				/* ASSR is bound to fail with unsigned PSP
+				 * verstage used during devlopment phase.
+				 * Report and continue with eDP panel mode to
+				 * perform eDP link training with right settings
+				 */
+				cp_psp->funcs.enable_assr(cp_psp->handle, link);
 		}
 #endif


@@ -793,7 +793,6 @@ static int exynos5433_decon_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct decon_context *ctx;
-	struct resource *res;
 	int ret;
 	int i;
 
@@ -818,8 +817,7 @@ static int exynos5433_decon_probe(struct platform_device *pdev)
 		ctx->clks[i] = clk;
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ctx->addr = devm_ioremap_resource(dev, res);
+	ctx->addr = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ctx->addr))
 		return PTR_ERR(ctx->addr);
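
This and the remaining Exynos conversions below are mechanical: devm_platform_ioremap_resource(pdev, index) performs, in effect, the platform_get_resource() plus devm_ioremap_resource() sequence itself. A sketch of the wrapper's shape (the authoritative version lives in drivers/base/platform.c; kernel context assumed):

/* Sketch of the helper these conversions rely on. */
void __iomem *devm_platform_ioremap_resource(struct platform_device *pdev,
					     unsigned int index)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, index);
	return devm_ioremap_resource(&pdev->dev, res);
}

The payoff is one fewer local variable per probe function and a single place that handles a missing or invalid resource.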


@@ -1738,7 +1738,6 @@ static const struct component_ops exynos_dsi_component_ops = {
 static int exynos_dsi_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct resource *res;
 	struct exynos_dsi *dsi;
 	int ret, i;
 
@@ -1789,8 +1788,7 @@ static int exynos_dsi_probe(struct platform_device *pdev)
 		}
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	dsi->reg_base = devm_ioremap_resource(dev, res);
+	dsi->reg_base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(dsi->reg_base))
 		return PTR_ERR(dsi->reg_base);


@@ -85,7 +85,6 @@ struct fimc_scaler {
 /*
  * A structure of fimc context.
  *
- * @regs_res: register resources.
  * @regs: memory mapped io registers.
  * @lock: locking of operations.
  * @clocks: fimc clocks.
@@ -103,7 +102,6 @@ struct fimc_context {
 	struct exynos_drm_ipp_formats *formats;
 	unsigned int num_formats;
 
-	struct resource *regs_res;
 	void __iomem *regs;
 	spinlock_t lock;
 	struct clk *clocks[FIMC_CLKS_MAX];
@@ -1327,8 +1325,7 @@ static int fimc_probe(struct platform_device *pdev)
 	ctx->num_formats = num_formats;
 
 	/* resource memory */
-	ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ctx->regs = devm_ioremap_resource(dev, ctx->regs_res);
+	ctx->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ctx->regs))
 		return PTR_ERR(ctx->regs);


@@ -1202,9 +1202,7 @@ static int fimd_probe(struct platform_device *pdev)
 		return PTR_ERR(ctx->lcd_clk);
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
-	ctx->regs = devm_ioremap_resource(dev, res);
+	ctx->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ctx->regs))
 		return PTR_ERR(ctx->regs);


@@ -1449,7 +1449,6 @@ static const struct component_ops g2d_component_ops = {
 static int g2d_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct resource *res;
 	struct g2d_data *g2d;
 	int ret;
 
@@ -1491,9 +1490,7 @@ static int g2d_probe(struct platform_device *pdev)
 	clear_bit(G2D_BIT_SUSPEND_RUNQUEUE, &g2d->flags);
 	clear_bit(G2D_BIT_ENGINE_BUSY, &g2d->flags);
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
-	g2d->regs = devm_ioremap_resource(dev, res);
+	g2d->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(g2d->regs)) {
 		ret = PTR_ERR(g2d->regs);
 		goto err_put_clk;


@@ -86,7 +86,6 @@ struct gsc_scaler {
 /*
  * A structure of gsc context.
  *
- * @regs_res: register resources.
 * @regs: memory mapped io registers.
 * @gsc_clk: gsc gate clock.
 * @sc: scaler infomations.
@@ -103,7 +102,6 @@ struct gsc_context {
 	struct exynos_drm_ipp_formats *formats;
 	unsigned int num_formats;
 
-	struct resource *regs_res;
 	void __iomem *regs;
 	const char **clk_names;
 	struct clk *clocks[GSC_MAX_CLOCKS];
@@ -1272,9 +1270,7 @@ static int gsc_probe(struct platform_device *pdev)
 		}
 	}
 
-	/* resource memory */
-	ctx->regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	ctx->regs = devm_ioremap_resource(dev, ctx->regs_res);
+	ctx->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(ctx->regs))
 		return PTR_ERR(ctx->regs);


@@ -278,7 +278,6 @@ static const struct component_ops rotator_component_ops = {
 static int rotator_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct resource *regs_res;
 	struct rot_context *rot;
 	const struct rot_variant *variant;
 	int irq;
@@ -292,8 +291,7 @@ static int rotator_probe(struct platform_device *pdev)
 	rot->formats = variant->formats;
 	rot->num_formats = variant->num_formats;
 	rot->dev = dev;
-	regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	rot->regs = devm_ioremap_resource(dev, regs_res);
+	rot->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(rot->regs))
 		return PTR_ERR(rot->regs);


@@ -485,7 +485,6 @@ static const struct component_ops scaler_component_ops = {
 static int scaler_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct resource *regs_res;
 	struct scaler_context *scaler;
 	int irq;
 	int ret, i;
@@ -498,8 +497,7 @@ static int scaler_probe(struct platform_device *pdev)
 		(struct scaler_data *)of_device_get_match_data(dev);
 
 	scaler->dev = dev;
-	regs_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	scaler->regs = devm_ioremap_resource(dev, regs_res);
+	scaler->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(scaler->regs))
 		return PTR_ERR(scaler->regs);


@@ -1957,7 +1957,6 @@ static int hdmi_probe(struct platform_device *pdev)
 	struct hdmi_audio_infoframe *audio_infoframe;
 	struct device *dev = &pdev->dev;
 	struct hdmi_context *hdata;
-	struct resource *res;
 	int ret;
 
 	hdata = devm_kzalloc(dev, sizeof(struct hdmi_context), GFP_KERNEL);
@@ -1979,8 +1978,7 @@ static int hdmi_probe(struct platform_device *pdev)
 		return ret;
 	}
 
-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	hdata->regs = devm_ioremap_resource(dev, res);
+	hdata->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(hdata->regs)) {
 		ret = PTR_ERR(hdata->regs);
 		return ret;


@@ -362,8 +362,9 @@ static int __intel_context_active(struct i915_active *active)
 	return 0;
 }
 
-static int sw_fence_dummy_notify(struct i915_sw_fence *sf,
-				 enum i915_sw_fence_notify state)
+static int __i915_sw_fence_call
+sw_fence_dummy_notify(struct i915_sw_fence *sf,
+		      enum i915_sw_fence_notify state)
 {
 	return NOTIFY_DONE;
 }


@@ -882,8 +882,6 @@ void intel_rps_park(struct intel_rps *rps)
 	if (!intel_rps_is_enabled(rps))
 		return;
 
-	GEM_BUG_ON(atomic_read(&rps->num_waiters));
-
 	if (!intel_rps_clear_active(rps))
 		return;


@@ -102,11 +102,11 @@ static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
 *  |   +-------+--------------------------------------------------------------+
 *  |   |   7:0 | NUM_DWORDS = length (in dwords) of the embedded HXG message  |
 *  +---+-------+--------------------------------------------------------------+
- *  | 1 |  31:0 |  +--------------------------------------------------------+  |
- *  +---+-------+  |                                                        |  |
- *  |...|       |  |  Embedded `HXG Message`_                               |  |
- *  +---+-------+  |                                                        |  |
- *  | n |  31:0 |  +--------------------------------------------------------+  |
+ *  | 1 |  31:0 |                                                              |
+ *  +---+-------+                                                              |
+ *  |...|       |                  [Embedded `HXG Message`_]                   |
+ *  +---+-------+                                                              |
+ *  | n |  31:0 |                                                              |
 *  +---+-------+--------------------------------------------------------------+
 */


@@ -38,11 +38,11 @@
 *  +---+-------+--------------------------------------------------------------+
 *  |   | Bits  | Description                                                  |
 *  +===+=======+==============================================================+
- *  | 0 |  31:0 |  +--------------------------------------------------------+  |
- *  +---+-------+  |                                                        |  |
- *  |...|       |  |  Embedded `HXG Message`_                               |  |
- *  +---+-------+  |                                                        |  |
- *  | n |  31:0 |  +--------------------------------------------------------+  |
+ *  | 0 |  31:0 |                                                              |
+ *  +---+-------+                                                              |
+ *  |...|       |                  [Embedded `HXG Message`_]                   |
+ *  +---+-------+                                                              |
+ *  | n |  31:0 |                                                              |
 *  +---+-------+--------------------------------------------------------------+
 */


@@ -576,7 +576,7 @@ retry:
 
 			/* No one is going to touch shadow bb from now on. */
 			i915_gem_object_flush_map(bb->obj);
-			i915_gem_object_unlock(bb->obj);
+			i915_gem_ww_ctx_fini(&ww);
 		}
 	}
 	return 0;
@@ -630,7 +630,7 @@ retry:
 		return ret;
 	}
 
-	i915_gem_object_unlock(wa_ctx->indirect_ctx.obj);
+	i915_gem_ww_ctx_fini(&ww);
 
 	/* FIXME: we are not tracking our pinned VMA leaving it
 	 * up to the core to fix up the stray pin_count upon


@@ -829,8 +829,6 @@ static void __i915_request_ctor(void *arg)
 	i915_sw_fence_init(&rq->submit, submit_notify);
 	i915_sw_fence_init(&rq->semaphore, semaphore_notify);
 
-	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock, 0, 0);
-
 	rq->capture_list = NULL;
 
 	init_llist_head(&rq->execute_cb);
@@ -905,17 +903,12 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	rq->ring = ce->ring;
 	rq->execution_mask = ce->engine->mask;
 
-	kref_init(&rq->fence.refcount);
-	rq->fence.flags = 0;
-	rq->fence.error = 0;
-	INIT_LIST_HEAD(&rq->fence.cb_list);
-
 	ret = intel_timeline_get_seqno(tl, rq, &seqno);
 	if (ret)
 		goto err_free;
 
-	rq->fence.context = tl->fence_context;
-	rq->fence.seqno = seqno;
+	dma_fence_init(&rq->fence, &i915_fence_ops, &rq->lock,
+		       tl->fence_context, seqno);
 
 	RCU_INIT_POINTER(rq->timeline, tl);
 	rq->hwsp_seqno = tl->hwsp_seqno;
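
Dropping the partial open-coded initialization (kref_init(), flags, error, cb_list) in favour of a single dma_fence_init() call once context and seqno are known removes a hand-rolled constructor that could drift from the real one. The signature being relied on, as declared in include/linux/dma-fence.h:

void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
		    spinlock_t *lock, u64 context, u64 seqno);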


@@ -1845,7 +1845,6 @@ tegra_crtc_update_memory_bandwidth(struct drm_crtc *crtc,
 				   bool prepare_bandwidth_transition)
 {
 	const struct tegra_plane_state *old_tegra_state, *new_tegra_state;
-	const struct tegra_dc_state *old_dc_state, *new_dc_state;
 	u32 i, new_avg_bw, old_avg_bw, new_peak_bw, old_peak_bw;
 	const struct drm_plane_state *old_plane_state;
 	const struct drm_crtc_state *old_crtc_state;
@@ -1858,8 +1857,6 @@ tegra_crtc_update_memory_bandwidth(struct drm_crtc *crtc,
 		return;
 
 	old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
-	old_dc_state = to_const_dc_state(old_crtc_state);
-	new_dc_state = to_const_dc_state(crtc->state);
 
 	if (!crtc->state->active) {
 		if (!old_crtc_state->active)


@@ -35,12 +35,6 @@ static inline struct tegra_dc_state *to_dc_state(struct drm_crtc_state *state)
 	return NULL;
 }
 
-static inline const struct tegra_dc_state *
-to_const_dc_state(const struct drm_crtc_state *state)
-{
-	return to_dc_state((struct drm_crtc_state *)state);
-}
-
 struct tegra_dc_stats {
 	unsigned long frames;
 	unsigned long vblank;


@@ -222,7 +222,7 @@ int tegra_drm_ioctl_channel_map(struct drm_device *drm, void *data, struct drm_f
 		mapping->iova = sg_dma_address(mapping->sgt->sgl);
 	}
 
-	mapping->iova_end = mapping->iova + host1x_to_tegra_bo(mapping->bo)->size;
+	mapping->iova_end = mapping->iova + host1x_to_tegra_bo(mapping->bo)->gem.size;
 
 	err = xa_alloc(&context->mappings, &args->mapping, mapping, XA_LIMIT(1, U32_MAX),
 		       GFP_KERNEL);


@@ -15,7 +15,7 @@
 #include "intr.h"
 #include "syncpt.h"
 
-DEFINE_SPINLOCK(lock);
+static DEFINE_SPINLOCK(lock);
 
 struct host1x_syncpt_fence {
 	struct dma_fence base;
@@ -152,8 +152,10 @@ struct dma_fence *host1x_fence_create(struct host1x_syncpt *sp, u32 threshold)
 		return ERR_PTR(-ENOMEM);
 
 	fence->waiter = kzalloc(sizeof(*fence->waiter), GFP_KERNEL);
-	if (!fence->waiter)
+	if (!fence->waiter) {
+		kfree(fence);
 		return ERR_PTR(-ENOMEM);
+	}
 
 	fence->sp = sp;
 	fence->threshold = threshold;


@@ -255,13 +255,13 @@ static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *i
 	if (!privdata->cl_data)
 		return -ENOMEM;
 
-	rc = devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
+	mp2_select_ops(privdata);
+
+	rc = amd_sfh_hid_client_init(privdata);
 	if (rc)
 		return rc;
 
-	mp2_select_ops(privdata);
-
-	return amd_sfh_hid_client_init(privdata);
+	return devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);
 }
 
 static int __maybe_unused amd_mp2_pci_resume(struct device *dev)
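
The reorder follows the usual devm rule of thumb: set state up first and register its teardown last, so an early error return cannot run a teardown against half-initialized state. Shape of the fixed flow (kernel context assumed; setup_state() is a hypothetical stand-in for the two init calls above):

	rc = setup_state(privdata);	/* mp2_select_ops + client init */
	if (rc)
		return rc;		/* nothing registered yet: no unwind runs */

	/* If registration itself fails, devm_add_action_or_reset() invokes
	 * amd_mp2_pci_remove(privdata) immediately before returning. */
	return devm_add_action_or_reset(&pdev->dev, amd_mp2_pci_remove, privdata);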


@@ -336,12 +336,19 @@ static int apple_event(struct hid_device *hdev, struct hid_field *field,
 
 /*
 * MacBook JIS keyboard has wrong logical maximum
+ * Magic Keyboard JIS has wrong logical maximum
 */
 static __u8 *apple_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 		unsigned int *rsize)
 {
 	struct apple_sc *asc = hid_get_drvdata(hdev);
 
+	if(*rsize >=71 && rdesc[70] == 0x65 && rdesc[64] == 0x65) {
+		hid_info(hdev,
+			 "fixing up Magic Keyboard JIS report descriptor\n");
+		rdesc[64] = rdesc[70] = 0xe7;
+	}
+
 	if ((asc->quirks & APPLE_RDESC_JIS) && *rsize >= 60 &&
 			rdesc[53] == 0x65 && rdesc[59] == 0x65) {
 		hid_info(hdev,
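
The fixup is a bounds-checked two-byte patch of the HID report descriptor. A user-space sketch with the offsets and values from the diff:

#include <assert.h>
#include <stdio.h>

/* Patch rdesc[64] and rdesc[70] from 0x65 to 0xe7, as in the diff,
 * but only when the descriptor is long enough and both bytes match. */
static void fixup_jis(unsigned char *rdesc, unsigned int rsize)
{
	if (rsize >= 71 && rdesc[70] == 0x65 && rdesc[64] == 0x65) {
		printf("fixing up Magic Keyboard JIS report descriptor\n");
		rdesc[64] = rdesc[70] = 0xe7;
	}
}

int main(void)
{
	unsigned char rdesc[71] = { 0 };

	rdesc[64] = rdesc[70] = 0x65;
	fixup_jis(rdesc, sizeof(rdesc));
	assert(rdesc[64] == 0xe7 && rdesc[70] == 0xe7);
	return 0;
}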


@@ -56,15 +56,22 @@ static int betopff_init(struct hid_device *hid)
 {
 	struct betopff_device *betopff;
 	struct hid_report *report;
-	struct hid_input *hidinput =
-		list_first_entry(&hid->inputs, struct hid_input, list);
+	struct hid_input *hidinput;
 	struct list_head *report_list =
 			&hid->report_enum[HID_OUTPUT_REPORT].report_list;
-	struct input_dev *dev = hidinput->input;
+	struct input_dev *dev;
 	int field_count = 0;
 	int error;
 	int i, j;
 
+	if (list_empty(&hid->inputs)) {
+		hid_err(hid, "no inputs found\n");
+		return -ENODEV;
+	}
+
+	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
+	dev = hidinput->input;
+
 	if (list_empty(report_list)) {
 		hid_err(hid, "no output reports found\n");
 		return -ENODEV;


@@ -198,7 +198,9 @@ static int u2fzero_rng_read(struct hwrng *rng, void *data,
 	}
 
 	ret = u2fzero_recv(dev, &req, &resp);
-	if (ret < 0)
+
+	/* ignore errors or packets without data */
+	if (ret < offsetof(struct u2f_hid_msg, init.data))
 		return 0;
 
 	/* only take the minimum amount of data it is safe to take */
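
The new bound folds the error check and the too-short-packet check into one comparison: anything shorter than the header up to init.data carries no usable entropy. A user-space sketch with a hypothetical message layout, keeping the comparison in signed space so negative error codes are rejected as well:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout standing in for struct u2f_hid_msg. */
struct hid_msg {
	uint32_t cid;
	struct {
		uint8_t cmd;
		uint8_t bcnth, bcntl;
		uint8_t data[57];
	} init;
};

/* Returns how many payload bytes of a received message are usable. */
static size_t usable_bytes(int ret)
{
	/* Cast so a negative ret (an error) fails the comparison too. */
	if (ret < (int)offsetof(struct hid_msg, init.data))
		return 0;
	return (size_t)ret - offsetof(struct hid_msg, init.data);
}

int main(void)
{
	assert(usable_bytes(-110) == 0);	/* error code	  */
	assert(usable_bytes(4) == 0);		/* headers only	  */
	assert(usable_bytes((int)offsetof(struct hid_msg, init.data) + 8) == 8);
	return 0;
}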


@@ -4746,6 +4746,12 @@ static const struct wacom_features wacom_features_0x393 =
 	{ "Wacom Intuos Pro S", 31920, 19950, 8191, 63,
 	  INTUOSP2S_BT, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 7,
 	  .touch_max = 10 };
+static const struct wacom_features wacom_features_0x3c6 =
+	{ "Wacom Intuos BT S", 15200, 9500, 4095, 63,
+	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
+static const struct wacom_features wacom_features_0x3c8 =
+	{ "Wacom Intuos BT M", 21600, 13500, 4095, 63,
+	  INTUOSHT3_BT, WACOM_INTUOS_RES, WACOM_INTUOS_RES, 4 };
 
 static const struct wacom_features wacom_features_HID_ANY_ID =
 	{ "Wacom HID", .type = HID_GENERIC, .oVid = HID_ANY_ID, .oPid = HID_ANY_ID };
@@ -4919,6 +4925,8 @@ const struct hid_device_id wacom_ids[] = {
 	{ USB_DEVICE_WACOM(0x37A) },
 	{ USB_DEVICE_WACOM(0x37B) },
 	{ BT_DEVICE_WACOM(0x393) },
+	{ BT_DEVICE_WACOM(0x3c6) },
+	{ BT_DEVICE_WACOM(0x3c8) },
 	{ USB_DEVICE_WACOM(0x4001) },
 	{ USB_DEVICE_WACOM(0x4004) },
 	{ USB_DEVICE_WACOM(0x5000) },
