Merge tag 'spi-nor/for-6.3' into mtd/next

SPI NOR changes:

* Small fixes in the core and the Spansion driver.

-----BEGIN PGP SIGNATURE-----

iQEzBAABCgAdFiEEHUIqys8OyG1eHf7fS1VPR6WNFOkFAmPgyGUACgkQS1VPR6WN
FOkv+AgAmqp68REEM7SicPy2dIy6fdLFOLjma9CSiWwGTMEJRM60ZBNFCJcAuUjD
1Sf+TCZEkBHc2crYCun5SqBErOA9oQCxI4nVtKZEQ9RyBklz/e5DsLONsVlLKrJU
lwemyeZ2tAV8023iBdjCi1nJ831eRmYipQMIEvr2xbOP/G95Ccc/wG6vKeMDi2QR
BxWdPD4XXvIVRY923nvnz9kK65QiqEQASJ8Rpf/AIYw+C/oukQFql8J3SMMs9kLH
DLagTnyhTx8qMd0V7Z6OVA3Ljf+bMd5gI7Z2fFQhErKo1mvT1Jw13sDQwxGGs+XS
9yxomEDKX+d/LdWEoZaSWFYwHVEixQ==
=fazc
-----END PGP SIGNATURE-----

Merged by Miquel Raynal, 2023-02-23 10:27:32 +01:00
commit 27121864ab
105 changed files with 1126 additions and 717 deletions

.gitignore

@ -39,6 +39,7 @@
*.o.* *.o.*
*.patch *.patch
*.rmeta *.rmeta
*.rpm
*.rsi *.rsi
*.s *.s
*.so *.so


@ -422,6 +422,7 @@ Tony Luck <tony.luck@intel.com>
TripleX Chung <xxx.phy@gmail.com> <triplex@zh-kernel.org> TripleX Chung <xxx.phy@gmail.com> <triplex@zh-kernel.org>
TripleX Chung <xxx.phy@gmail.com> <zhongyu@18mail.cn> TripleX Chung <xxx.phy@gmail.com> <zhongyu@18mail.cn>
Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com> Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com>
Tudor Ambarus <tudor.ambarus@linaro.org> <tudor.ambarus@microchip.com>
Tycho Andersen <tycho@tycho.pizza> <tycho@tycho.ws> Tycho Andersen <tycho@tycho.pizza> <tycho@tycho.ws>
Tzung-Bi Shih <tzungbi@kernel.org> <tzungbi@google.com> Tzung-Bi Shih <tzungbi@kernel.org> <tzungbi@google.com>
Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de> Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Atmel Advanced Encryption Standard (AES) HW cryptographic accelerator title: Atmel Advanced Encryption Standard (AES) HW cryptographic accelerator
maintainers: maintainers:
- Tudor Ambarus <tudor.ambarus@microchip.com> - Tudor Ambarus <tudor.ambarus@linaro.org>
properties: properties:
compatible: compatible:


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Atmel Secure Hash Algorithm (SHA) HW cryptographic accelerator title: Atmel Secure Hash Algorithm (SHA) HW cryptographic accelerator
maintainers: maintainers:
- Tudor Ambarus <tudor.ambarus@microchip.com> - Tudor Ambarus <tudor.ambarus@linaro.org>
properties: properties:
compatible: compatible:


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Atmel Triple Data Encryption Standard (TDES) HW cryptographic accelerator title: Atmel Triple Data Encryption Standard (TDES) HW cryptographic accelerator
maintainers: maintainers:
- Tudor Ambarus <tudor.ambarus@microchip.com> - Tudor Ambarus <tudor.ambarus@linaro.org>
properties: properties:
compatible: compatible:


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Atmel SPI device title: Atmel SPI device
maintainers: maintainers:
- Tudor Ambarus <tudor.ambarus@microchip.com> - Tudor Ambarus <tudor.ambarus@linaro.org>
allOf: allOf:
- $ref: spi-controller.yaml# - $ref: spi-controller.yaml#


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Atmel Quad Serial Peripheral Interface (QSPI) title: Atmel Quad Serial Peripheral Interface (QSPI)
maintainers: maintainers:
- Tudor Ambarus <tudor.ambarus@microchip.com> - Tudor Ambarus <tudor.ambarus@linaro.org>
allOf: allOf:
- $ref: spi-controller.yaml# - $ref: spi-controller.yaml#


@ -104,3 +104,4 @@ to do something different in the near future.
../riscv/patch-acceptance ../riscv/patch-acceptance
../driver-api/media/maintainer-entry-profile ../driver-api/media/maintainer-entry-profile
../driver-api/vfio-pci-device-specific-driver-acceptance ../driver-api/vfio-pci-device-specific-driver-acceptance
../nvme/feature-and-quirk-policy


@ -0,0 +1,77 @@
.. SPDX-License-Identifier: GPL-2.0
=======================================
Linux NVMe feature and quirk policy
=======================================
This file explains the policy used to decide what is supported by the
Linux NVMe driver and what is not.
Introduction
============
NVM Express is an open collection of standards and information.
The Linux NVMe host driver in drivers/nvme/host/ supports devices
implementing the NVM Express (NVMe) family of specifications, which
currently consists of a number of documents:
- the NVMe Base specification
- various Command Set specifications (e.g. NVM Command Set)
- various Transport specifications (e.g. PCIe, Fibre Channel, RDMA, TCP)
- the NVMe Management Interface specification
See https://nvmexpress.org/developers/ for the NVMe specifications.
Supported features
==================
NVMe is a large suite of specifications, and contains features that are only
useful or suitable for specific use-cases. It is important to note that Linux
does not aim to implement every feature in the specification. Every additional
feature implemented introduces more code, more maintenance and potentially more
bugs. Hence there is an inherent tradeoff between functionality and
maintainability of the NVMe host driver.
Any feature implemented in the Linux NVMe host driver must meet the
following requirements:
1. The feature is specified in a release version of an official NVMe
specification, or in a ratified Technical Proposal (TP) that is
available on the NVMe website. Alternatively, if it is not directly
related to the on-wire protocol, it must not contradict any of the
NVMe specifications.
2. Does not conflict with the Linux architecture, nor the design of the
NVMe host driver.
3. Has a clear, indisputable value-proposition and a wide consensus across
the community.
Vendor specific extensions are generally not supported in the NVMe host
driver.
It is strongly recommended to work with the Linux NVMe and block layer
maintainers and get feedback on specification changes that are intended
to be used by the Linux NVMe host driver in order to avoid conflict at a
later stage.
Quirks
======
Sometimes implementations of open standards fail to correctly implement parts
of the standards. Linux uses identifier-based quirks to work around such
implementation bugs. The intent of quirks is to deal with widely available
hardware, usually consumer, which Linux users can't use without these quirks.
Typically these implementations are either untested or only superficially
tested with Linux by the hardware manufacturer.
The Linux NVMe maintainers decide ad hoc whether to quirk implementations
based on the impact of the problem on Linux users and on how it affects
maintainability of the driver. In general, quirks are a last resort, used
only when no firmware updates or other workarounds are available from the
vendor.
Quirks will not be added to the Linux kernel for hardware that isn't available
on the mass market. Hardware that fails qualification for enterprise Linux
distributions, ChromeOS, Android or other consumers of the Linux kernel
should be fixed before it is shipped instead of relying on Linux quirks.
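
As a concrete illustration of what an identifier-based quirk looks like in
practice, the sketch below shows the general shape of an entry in the PCI
NVMe driver's device ID table (drivers/nvme/host/pci.c). The vendor/device
IDs are made up for illustration; the NVME_QUIRK_* flags are existing flags
from drivers/nvme/host/nvme.h.

/*
 * Illustrative sketch only, not taken from this merge: a hypothetical
 * nvme_id_table[] entry that applies quirks to one specific
 * (made-up) vendor/device ID.
 */
{ PCI_DEVICE(0x1bee, 0xbeef),	/* hypothetical buggy consumer SSD */
	.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
		       NVME_QUIRK_IGNORE_DEV_SUBNQN, },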


@ -5343,9 +5343,9 @@ KVM_XEN_ATTR_TYPE_SHARED_INFO
32 vCPUs in the shared_info page, KVM does not automatically do so 32 vCPUs in the shared_info page, KVM does not automatically do so
and instead requires that KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO be used and instead requires that KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO be used
explicitly even when the vcpu_info for a given vCPU resides at the explicitly even when the vcpu_info for a given vCPU resides at the
"default" location in the shared_info page. This is because KVM is "default" location in the shared_info page. This is because KVM may
not aware of the Xen CPU id which is used as the index into the not be aware of the Xen CPU id which is used as the index into the
vcpu_info[] array, so cannot know the correct default location. vcpu_info[] array, so may not know the correct default location.
Note that the shared info page may be constantly written to by KVM; Note that the shared info page may be constantly written to by KVM;
it contains the event channel bitmap used to deliver interrupts to it contains the event channel bitmap used to deliver interrupts to
@ -5356,23 +5356,29 @@ KVM_XEN_ATTR_TYPE_SHARED_INFO
any vCPU has been running or any event channel interrupts can be any vCPU has been running or any event channel interrupts can be
routed to the guest. routed to the guest.
Setting the gfn to KVM_XEN_INVALID_GFN will disable the shared info
page.
KVM_XEN_ATTR_TYPE_UPCALL_VECTOR KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
Sets the exception vector used to deliver Xen event channel upcalls. Sets the exception vector used to deliver Xen event channel upcalls.
This is the HVM-wide vector injected directly by the hypervisor This is the HVM-wide vector injected directly by the hypervisor
(not through the local APIC), typically configured by a guest via (not through the local APIC), typically configured by a guest via
HVM_PARAM_CALLBACK_IRQ. HVM_PARAM_CALLBACK_IRQ. This can be disabled again (e.g. for guest
SHUTDOWN_soft_reset) by setting it to zero.
KVM_XEN_ATTR_TYPE_EVTCHN KVM_XEN_ATTR_TYPE_EVTCHN
This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It configures support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It configures
an outbound port number for interception of EVTCHNOP_send requests an outbound port number for interception of EVTCHNOP_send requests
from the guest. A given sending port number may be directed back from the guest. A given sending port number may be directed back to
to a specified vCPU (by APIC ID) / port / priority on the guest, a specified vCPU (by APIC ID) / port / priority on the guest, or to
or to trigger events on an eventfd. The vCPU and priority can be trigger events on an eventfd. The vCPU and priority can be changed
changed by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call, by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call, but other
but other fields cannot change for a given sending port. A port fields cannot change for a given sending port. A port mapping is
mapping is removed by using KVM_XEN_EVTCHN_DEASSIGN in the flags removed by using KVM_XEN_EVTCHN_DEASSIGN in the flags field. Passing
field. KVM_XEN_EVTCHN_RESET in the flags field removes all interception of
outbound event channels. The values of the flags field are mutually
exclusive and cannot be combined as a bitmask.
KVM_XEN_ATTR_TYPE_XEN_VERSION KVM_XEN_ATTR_TYPE_XEN_VERSION
This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
@ -5388,7 +5394,7 @@ KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG
support for KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG. It enables the support for KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG. It enables the
XEN_RUNSTATE_UPDATE flag which allows guest vCPUs to safely read XEN_RUNSTATE_UPDATE flag which allows guest vCPUs to safely read
other vCPUs' vcpu_runstate_info. Xen guests enable this feature via other vCPUs' vcpu_runstate_info. Xen guests enable this feature via
the VM_ASST_TYPE_runstate_update_flag of the HYPERVISOR_vm_assist the VMASST_TYPE_runstate_update_flag of the HYPERVISOR_vm_assist
hypercall. hypercall.
4.127 KVM_XEN_HVM_GET_ATTR 4.127 KVM_XEN_HVM_GET_ATTR
@ -5446,15 +5452,18 @@ KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO
As with the shared_info page for the VM, the corresponding page may be As with the shared_info page for the VM, the corresponding page may be
dirtied at any time if event channel interrupt delivery is enabled, so dirtied at any time if event channel interrupt delivery is enabled, so
userspace should always assume that the page is dirty without relying userspace should always assume that the page is dirty without relying
on dirty logging. on dirty logging. Setting the gpa to KVM_XEN_INVALID_GPA will disable
the vcpu_info.
KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO
Sets the guest physical address of an additional pvclock structure Sets the guest physical address of an additional pvclock structure
for a given vCPU. This is typically used for guest vsyscall support. for a given vCPU. This is typically used for guest vsyscall support.
Setting the gpa to KVM_XEN_INVALID_GPA will disable the structure.
KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR
Sets the guest physical address of the vcpu_runstate_info for a given Sets the guest physical address of the vcpu_runstate_info for a given
vCPU. This is how a Xen guest tracks CPU state such as steal time. vCPU. This is how a Xen guest tracks CPU state such as steal time.
Setting the gpa to KVM_XEN_INVALID_GPA will disable the runstate area.
KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT
Sets the runstate (RUNSTATE_running/_runnable/_blocked/_offline) of Sets the runstate (RUNSTATE_running/_runnable/_blocked/_offline) of
@ -5487,7 +5496,8 @@ KVM_XEN_VCPU_ATTR_TYPE_TIMER
This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It sets the support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It sets the
event channel port/priority for the VIRQ_TIMER of the vCPU, as well event channel port/priority for the VIRQ_TIMER of the vCPU, as well
as allowing a pending timer to be saved/restored. as allowing a pending timer to be saved/restored. Setting the timer
port to zero disables kernel handling of the singleshot timer.
KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR
This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
@ -5495,7 +5505,8 @@ KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR
per-vCPU local APIC upcall vector, configured by a Xen guest with per-vCPU local APIC upcall vector, configured by a Xen guest with
the HVMOP_set_evtchn_upcall_vector hypercall. This is typically the HVMOP_set_evtchn_upcall_vector hypercall. This is typically
used by Windows guests, and is distinct from the HVM-wide upcall used by Windows guests, and is distinct from the HVM-wide upcall
vector configured with HVM_PARAM_CALLBACK_IRQ. vector configured with HVM_PARAM_CALLBACK_IRQ. It is disabled by
setting the vector to zero.
4.129 KVM_XEN_VCPU_GET_ATTR 4.129 KVM_XEN_VCPU_GET_ATTR
@ -6577,11 +6588,6 @@ Please note that the kernel is allowed to use the kvm_run structure as the
primary storage for certain register types. Therefore, the kernel may use the primary storage for certain register types. Therefore, the kernel may use the
values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set. values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
::
};
6. Capabilities that can be enabled on vCPUs 6. Capabilities that can be enabled on vCPUs
============================================ ============================================
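
To make the KVM_XEN_ATTR_TYPE_SHARED_INFO description above concrete, here
is a minimal userspace sketch (not taken from this merge) of disabling the
shared_info page by setting the gfn to KVM_XEN_INVALID_GFN via the existing
KVM_XEN_HVM_SET_ATTR ioctl on the VM file descriptor.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: disable the Xen shared_info page for a VM, as described above. */
static int xen_disable_shared_info(int vm_fd)
{
	struct kvm_xen_hvm_attr ha = {
		.type = KVM_XEN_ATTR_TYPE_SHARED_INFO,
		.u.shared_info.gfn = KVM_XEN_INVALID_GFN,
	};

	return ioctl(vm_fd, KVM_XEN_HVM_SET_ATTR, &ha);
}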


@ -16,17 +16,26 @@ The acquisition orders for mutexes are as follows:
- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
them together is quite rare. them together is quite rare.
- Unlike kvm->slots_lock, kvm->slots_arch_lock is released before
synchronize_srcu(&kvm->srcu). Therefore kvm->slots_arch_lock
can be taken inside a kvm->srcu read-side critical section,
while kvm->slots_lock cannot.
- kvm->mn_active_invalidate_count ensures that pairs of - kvm->mn_active_invalidate_count ensures that pairs of
invalidate_range_start() and invalidate_range_end() callbacks invalidate_range_start() and invalidate_range_end() callbacks
use the same memslots array. kvm->slots_lock and kvm->slots_arch_lock use the same memslots array. kvm->slots_lock and kvm->slots_arch_lock
are taken on the waiting side in install_new_memslots, so MMU notifiers are taken on the waiting side in install_new_memslots, so MMU notifiers
must not take either kvm->slots_lock or kvm->slots_arch_lock. must not take either kvm->slots_lock or kvm->slots_arch_lock.
For SRCU:
- ``synchronize_srcu(&kvm->srcu)`` is called _inside_
the kvm->slots_lock critical section, therefore kvm->slots_lock
cannot be taken inside a kvm->srcu read-side critical section.
Instead, kvm->slots_arch_lock is released before the call
to ``synchronize_srcu()`` and _can_ be taken inside a
kvm->srcu read-side critical section.
- kvm->lock is taken inside kvm->srcu, therefore
``synchronize_srcu(&kvm->srcu)`` cannot be called inside
a kvm->lock critical section. If you cannot delay the
call until after kvm->lock is released, use ``call_srcu``.
On x86: On x86:
- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock - vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock
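
The call_srcu() guidance added above can be illustrated with a short sketch
(struct foo, its list/rcu members, and foo_free_cb are hypothetical, not
code from this merge): an object still reachable by kvm->srcu readers is
unlinked while kvm->lock is held, and its freeing is deferred with
call_srcu() instead of blocking in synchronize_srcu().

static void foo_free_cb(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct foo, rcu));
}

static void foo_remove(struct kvm *kvm, struct foo *foo)
{
	mutex_lock(&kvm->lock);
	list_del_rcu(&foo->list);	/* readers walk the list under kvm->srcu */
	/* synchronize_srcu(&kvm->srcu) must not be called under kvm->lock; defer instead. */
	call_srcu(&kvm->srcu, &foo->rcu, foo_free_cb);
	mutex_unlock(&kvm->lock);
}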


@ -11468,7 +11468,7 @@ F: arch/x86/kvm/hyperv.*
F: arch/x86/kvm/kvm_onhyperv.* F: arch/x86/kvm/kvm_onhyperv.*
F: arch/x86/kvm/svm/hyperv.* F: arch/x86/kvm/svm/hyperv.*
F: arch/x86/kvm/svm/svm_onhyperv.* F: arch/x86/kvm/svm/svm_onhyperv.*
F: arch/x86/kvm/vmx/evmcs.* F: arch/x86/kvm/vmx/hyperv.*
KVM X86 Xen (KVM/Xen) KVM X86 Xen (KVM/Xen)
M: David Woodhouse <dwmw2@infradead.org> M: David Woodhouse <dwmw2@infradead.org>
@ -13620,7 +13620,7 @@ F: arch/microblaze/
MICROCHIP AT91 DMA DRIVERS MICROCHIP AT91 DMA DRIVERS
M: Ludovic Desroches <ludovic.desroches@microchip.com> M: Ludovic Desroches <ludovic.desroches@microchip.com>
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: dmaengine@vger.kernel.org L: dmaengine@vger.kernel.org
S: Supported S: Supported
@ -13665,7 +13665,7 @@ F: Documentation/devicetree/bindings/media/microchip,csi2dc.yaml
F: drivers/media/platform/microchip/microchip-csi2dc.c F: drivers/media/platform/microchip/microchip-csi2dc.c
MICROCHIP ECC DRIVER MICROCHIP ECC DRIVER
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@linaro.org>
L: linux-crypto@vger.kernel.org L: linux-crypto@vger.kernel.org
S: Maintained S: Maintained
F: drivers/crypto/atmel-ecc.* F: drivers/crypto/atmel-ecc.*
@ -13762,7 +13762,7 @@ S: Maintained
F: drivers/mmc/host/atmel-mci.c F: drivers/mmc/host/atmel-mci.c
MICROCHIP NAND DRIVER MICROCHIP NAND DRIVER
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@linaro.org>
L: linux-mtd@lists.infradead.org L: linux-mtd@lists.infradead.org
S: Supported S: Supported
F: Documentation/devicetree/bindings/mtd/atmel-nand.txt F: Documentation/devicetree/bindings/mtd/atmel-nand.txt
@ -13814,7 +13814,7 @@ S: Supported
F: drivers/power/reset/at91-sama5d2_shdwc.c F: drivers/power/reset/at91-sama5d2_shdwc.c
MICROCHIP SPI DRIVER MICROCHIP SPI DRIVER
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@linaro.org>
S: Supported S: Supported
F: drivers/spi/spi-atmel.* F: drivers/spi/spi-atmel.*
@ -14916,6 +14916,7 @@ L: linux-nvme@lists.infradead.org
S: Supported S: Supported
W: http://git.infradead.org/nvme.git W: http://git.infradead.org/nvme.git
T: git://git.infradead.org/nvme.git T: git://git.infradead.org/nvme.git
F: Documentation/nvme/
F: drivers/nvme/host/ F: drivers/nvme/host/
F: drivers/nvme/common/ F: drivers/nvme/common/
F: include/linux/nvme* F: include/linux/nvme*
@ -19664,7 +19665,7 @@ F: drivers/clk/spear/
F: drivers/pinctrl/spear/ F: drivers/pinctrl/spear/
SPI NOR SUBSYSTEM SPI NOR SUBSYSTEM
M: Tudor Ambarus <tudor.ambarus@microchip.com> M: Tudor Ambarus <tudor.ambarus@linaro.org>
M: Pratyush Yadav <pratyush@kernel.org> M: Pratyush Yadav <pratyush@kernel.org>
R: Michael Walle <michael@walle.cc> R: Michael Walle <michael@walle.cc>
L: linux-mtd@lists.infradead.org L: linux-mtd@lists.infradead.org


@ -2,7 +2,7 @@
VERSION = 6 VERSION = 6
PATCHLEVEL = 2 PATCHLEVEL = 2
SUBLEVEL = 0 SUBLEVEL = 0
EXTRAVERSION = -rc1 EXTRAVERSION = -rc2
NAME = Hurr durr I'ma ninja sloth NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION* # *DOCUMENTATION*
@ -297,7 +297,7 @@ no-compiler-targets := $(no-dot-config-targets) install dtbs_install \
headers_install modules_install kernelrelease image_name headers_install modules_install kernelrelease image_name
no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease \ no-sync-config-targets := $(no-dot-config-targets) %install kernelrelease \
image_name image_name
single-targets := %.a %.i %.rsi %.ko %.lds %.ll %.lst %.mod %.o %.s %.symtypes %/ single-targets := %.a %.i %.ko %.lds %.ll %.lst %.mod %.o %.rsi %.s %.symtypes %/
config-build := config-build :=
mixed-build := mixed-build :=


@ -1387,7 +1387,7 @@ static int __init amd_core_pmu_init(void)
* numbered counter following it. * numbered counter following it.
*/ */
for (i = 0; i < x86_pmu.num_counters - 1; i += 2) for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
even_ctr_mask |= 1 << i; even_ctr_mask |= BIT_ULL(i);
pair_constraint = (struct event_constraint) pair_constraint = (struct event_constraint)
__EVENT_CONSTRAINT(0, even_ctr_mask, 0, __EVENT_CONSTRAINT(0, even_ctr_mask, 0,


@ -119,7 +119,7 @@ static bool is_coretext(const struct core_text *ct, void *addr)
return within_module_coretext(addr); return within_module_coretext(addr);
} }
static __init_or_module bool skip_addr(void *dest) static bool skip_addr(void *dest)
{ {
if (dest == error_entry) if (dest == error_entry)
return true; return true;
@ -181,7 +181,7 @@ static const u8 nops[] = {
0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90,
}; };
static __init_or_module void *patch_dest(void *dest, bool direct) static void *patch_dest(void *dest, bool direct)
{ {
unsigned int tsize = SKL_TMPL_SIZE; unsigned int tsize = SKL_TMPL_SIZE;
u8 *pad = dest - tsize; u8 *pad = dest - tsize;


@ -37,6 +37,7 @@
#include <linux/extable.h> #include <linux/extable.h>
#include <linux/kdebug.h> #include <linux/kdebug.h>
#include <linux/kallsyms.h> #include <linux/kallsyms.h>
#include <linux/kgdb.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/kasan.h> #include <linux/kasan.h>
#include <linux/moduleloader.h> #include <linux/moduleloader.h>
@ -281,12 +282,15 @@ static int can_probe(unsigned long paddr)
if (ret < 0) if (ret < 0)
return 0; return 0;
#ifdef CONFIG_KGDB
/* /*
* Another debugging subsystem might insert this breakpoint. * If there is a dynamically installed kgdb sw breakpoint,
* In that case, we can't recover it. * this function should not be probed.
*/ */
if (insn.opcode.bytes[0] == INT3_INSN_OPCODE) if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
kgdb_has_hit_break(addr))
return 0; return 0;
#endif
addr += insn.length; addr += insn.length;
} }


@ -15,6 +15,7 @@
#include <linux/extable.h> #include <linux/extable.h>
#include <linux/kdebug.h> #include <linux/kdebug.h>
#include <linux/kallsyms.h> #include <linux/kallsyms.h>
#include <linux/kgdb.h>
#include <linux/ftrace.h> #include <linux/ftrace.h>
#include <linux/objtool.h> #include <linux/objtool.h>
#include <linux/pgtable.h> #include <linux/pgtable.h>
@ -279,19 +280,6 @@ static int insn_is_indirect_jump(struct insn *insn)
return ret; return ret;
} }
static bool is_padding_int3(unsigned long addr, unsigned long eaddr)
{
unsigned char ops;
for (; addr < eaddr; addr++) {
if (get_kernel_nofault(ops, (void *)addr) < 0 ||
ops != INT3_INSN_OPCODE)
return false;
}
return true;
}
/* Decode whole function to ensure any instructions don't jump into target */ /* Decode whole function to ensure any instructions don't jump into target */
static int can_optimize(unsigned long paddr) static int can_optimize(unsigned long paddr)
{ {
@ -334,15 +322,15 @@ static int can_optimize(unsigned long paddr)
ret = insn_decode_kernel(&insn, (void *)recovered_insn); ret = insn_decode_kernel(&insn, (void *)recovered_insn);
if (ret < 0) if (ret < 0)
return 0; return 0;
#ifdef CONFIG_KGDB
/* /*
* In the case of detecting unknown breakpoint, this could be * If there is a dynamically installed kgdb sw breakpoint,
* a padding INT3 between functions. Let's check that all the * this function should not be probed.
* rest of the bytes are also INT3.
*/ */
if (insn.opcode.bytes[0] == INT3_INSN_OPCODE) if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
return is_padding_int3(addr, paddr - offset + size) ? 1 : 0; kgdb_has_hit_break(addr))
return 0;
#endif
/* Recover address */ /* Recover address */
insn.kaddr = (void *)addr; insn.kaddr = (void *)addr;
insn.next_byte = (void *)(addr + insn.length); insn.next_byte = (void *)(addr + insn.length);


@ -1769,6 +1769,7 @@ static bool hv_is_vp_in_sparse_set(u32 vp_id, u64 valid_bank_mask, u64 sparse_ba
} }
struct kvm_hv_hcall { struct kvm_hv_hcall {
/* Hypercall input data */
u64 param; u64 param;
u64 ingpa; u64 ingpa;
u64 outgpa; u64 outgpa;
@ -1779,12 +1780,21 @@ struct kvm_hv_hcall {
bool fast; bool fast;
bool rep; bool rep;
sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS]; sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
/*
* Current read offset when KVM reads hypercall input data gradually,
* either offset in bytes from 'ingpa' for regular hypercalls or the
* number of already consumed 'XMM halves' for 'fast' hypercalls.
*/
union {
gpa_t data_offset;
int consumed_xmm_halves;
};
}; };
static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc, static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
u16 orig_cnt, u16 cnt_cap, u64 *data, u16 orig_cnt, u16 cnt_cap, u64 *data)
int consumed_xmm_halves, gpa_t offset)
{ {
/* /*
* Preserve the original count when ignoring entries via a "cap", KVM * Preserve the original count when ignoring entries via a "cap", KVM
@ -1799,11 +1809,11 @@ static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
* Each XMM holds two sparse banks, but do not count halves that * Each XMM holds two sparse banks, but do not count halves that
* have already been consumed for hypercall parameters. * have already been consumed for hypercall parameters.
*/ */
if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves) if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - hc->consumed_xmm_halves)
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
for (i = 0; i < cnt; i++) { for (i = 0; i < cnt; i++) {
j = i + consumed_xmm_halves; j = i + hc->consumed_xmm_halves;
if (j % 2) if (j % 2)
data[i] = sse128_hi(hc->xmm[j / 2]); data[i] = sse128_hi(hc->xmm[j / 2]);
else else
@ -1812,27 +1822,24 @@ static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
return 0; return 0;
} }
return kvm_read_guest(kvm, hc->ingpa + offset, data, return kvm_read_guest(kvm, hc->ingpa + hc->data_offset, data,
cnt * sizeof(*data)); cnt * sizeof(*data));
} }
static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc, static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
u64 *sparse_banks, int consumed_xmm_halves, u64 *sparse_banks)
gpa_t offset)
{ {
if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS) if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS)
return -EINVAL; return -EINVAL;
/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */ /* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS, return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS,
sparse_banks, consumed_xmm_halves, offset); sparse_banks);
} }
static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[], static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[])
int consumed_xmm_halves, gpa_t offset)
{ {
return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt, return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt, entries);
entries, consumed_xmm_halves, offset);
} }
static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu, static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
@ -1926,8 +1933,6 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
struct kvm_vcpu *v; struct kvm_vcpu *v;
unsigned long i; unsigned long i;
bool all_cpus; bool all_cpus;
int consumed_xmm_halves = 0;
gpa_t data_offset;
/* /*
* The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS * The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS
@ -1955,12 +1960,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
flush.address_space = hc->ingpa; flush.address_space = hc->ingpa;
flush.flags = hc->outgpa; flush.flags = hc->outgpa;
flush.processor_mask = sse128_lo(hc->xmm[0]); flush.processor_mask = sse128_lo(hc->xmm[0]);
consumed_xmm_halves = 1; hc->consumed_xmm_halves = 1;
} else { } else {
if (unlikely(kvm_read_guest(kvm, hc->ingpa, if (unlikely(kvm_read_guest(kvm, hc->ingpa,
&flush, sizeof(flush)))) &flush, sizeof(flush))))
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
data_offset = sizeof(flush); hc->data_offset = sizeof(flush);
} }
trace_kvm_hv_flush_tlb(flush.processor_mask, trace_kvm_hv_flush_tlb(flush.processor_mask,
@ -1985,12 +1990,12 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
flush_ex.flags = hc->outgpa; flush_ex.flags = hc->outgpa;
memcpy(&flush_ex.hv_vp_set, memcpy(&flush_ex.hv_vp_set,
&hc->xmm[0], sizeof(hc->xmm[0])); &hc->xmm[0], sizeof(hc->xmm[0]));
consumed_xmm_halves = 2; hc->consumed_xmm_halves = 2;
} else { } else {
if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex, if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
sizeof(flush_ex)))) sizeof(flush_ex))))
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
data_offset = sizeof(flush_ex); hc->data_offset = sizeof(flush_ex);
} }
trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask, trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
@ -2009,8 +2014,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
if (!hc->var_cnt) if (!hc->var_cnt)
goto ret_success; goto ret_success;
if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks))
consumed_xmm_halves, data_offset))
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
} }
@ -2021,8 +2025,10 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
* consumed_xmm_halves to make sure TLB flush entries are read * consumed_xmm_halves to make sure TLB flush entries are read
* from the correct offset. * from the correct offset.
*/ */
data_offset += hc->var_cnt * sizeof(sparse_banks[0]); if (hc->fast)
consumed_xmm_halves += hc->var_cnt; hc->consumed_xmm_halves += hc->var_cnt;
else
hc->data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
} }
if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE || if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
@ -2030,8 +2036,7 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) { hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
tlb_flush_entries = NULL; tlb_flush_entries = NULL;
} else { } else {
if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries, if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries))
consumed_xmm_halves, data_offset))
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
tlb_flush_entries = __tlb_flush_entries; tlb_flush_entries = __tlb_flush_entries;
} }
@ -2180,9 +2185,13 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
if (!hc->var_cnt) if (!hc->var_cnt)
goto ret_success; goto ret_success;
if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 1, if (!hc->fast)
offsetof(struct hv_send_ipi_ex, hc->data_offset = offsetof(struct hv_send_ipi_ex,
vp_set.bank_contents))) vp_set.bank_contents);
else
hc->consumed_xmm_halves = 1;
if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks))
return HV_STATUS_INVALID_HYPERCALL_INPUT; return HV_STATUS_INVALID_HYPERCALL_INPUT;
} }


@ -426,8 +426,9 @@ void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu,
kvm_set_msi_irq(vcpu->kvm, entry, &irq); kvm_set_msi_irq(vcpu->kvm, entry, &irq);
if (irq.trig_mode && if (irq.trig_mode &&
kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT, (kvm_apic_match_dest(vcpu, NULL, APIC_DEST_NOSHORT,
irq.dest_id, irq.dest_mode)) irq.dest_id, irq.dest_mode) ||
kvm_apic_pending_eoi(vcpu, irq.vector)))
__set_bit(irq.vector, ioapic_handled_vectors); __set_bit(irq.vector, ioapic_handled_vectors);
} }
} }


@ -188,11 +188,11 @@ static inline bool lapic_in_kernel(struct kvm_vcpu *vcpu)
extern struct static_key_false_deferred apic_hw_disabled; extern struct static_key_false_deferred apic_hw_disabled;
static inline int kvm_apic_hw_enabled(struct kvm_lapic *apic) static inline bool kvm_apic_hw_enabled(struct kvm_lapic *apic)
{ {
if (static_branch_unlikely(&apic_hw_disabled.key)) if (static_branch_unlikely(&apic_hw_disabled.key))
return apic->vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE; return apic->vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE;
return MSR_IA32_APICBASE_ENABLE; return true;
} }
extern struct static_key_false_deferred apic_sw_disabled; extern struct static_key_false_deferred apic_sw_disabled;


@ -363,7 +363,7 @@ static __always_inline bool is_rsvd_spte(struct rsvd_bits_validate *rsvd_check,
* A shadow-present leaf SPTE may be non-writable for 4 possible reasons: * A shadow-present leaf SPTE may be non-writable for 4 possible reasons:
* *
* 1. To intercept writes for dirty logging. KVM write-protects huge pages * 1. To intercept writes for dirty logging. KVM write-protects huge pages
* so that they can be split be split down into the dirty logging * so that they can be split down into the dirty logging
* granularity (4KiB) whenever the guest writes to them. KVM also * granularity (4KiB) whenever the guest writes to them. KVM also
* write-protects 4KiB pages so that writes can be recorded in the dirty log * write-protects 4KiB pages so that writes can be recorded in the dirty log
* (e.g. if not using PML). SPTEs are write-protected for dirty logging * (e.g. if not using PML). SPTEs are write-protected for dirty logging


@ -1074,7 +1074,9 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
int ret = RET_PF_FIXED; int ret = RET_PF_FIXED;
bool wrprot = false; bool wrprot = false;
WARN_ON(sp->role.level != fault->goal_level); if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
return RET_PF_RETRY;
if (unlikely(!fault->slot)) if (unlikely(!fault->slot))
new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL); new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
else else
@ -1173,9 +1175,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (fault->nx_huge_page_workaround_enabled) if (fault->nx_huge_page_workaround_enabled)
disallowed_hugepage_adjust(fault, iter.old_spte, iter.level); disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
if (iter.level == fault->goal_level)
break;
/* /*
* If SPTE has been frozen by another thread, just give up and * If SPTE has been frozen by another thread, just give up and
* retry, avoiding unnecessary page table allocation and free. * retry, avoiding unnecessary page table allocation and free.
@ -1183,6 +1182,9 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (is_removed_spte(iter.old_spte)) if (is_removed_spte(iter.old_spte))
goto retry; goto retry;
if (iter.level == fault->goal_level)
goto map_target_level;
/* Step down into the lower level page table if it exists. */ /* Step down into the lower level page table if it exists. */
if (is_shadow_present_pte(iter.old_spte) && if (is_shadow_present_pte(iter.old_spte) &&
!is_large_pte(iter.old_spte)) !is_large_pte(iter.old_spte))
@ -1203,8 +1205,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
r = tdp_mmu_link_sp(kvm, &iter, sp, true); r = tdp_mmu_link_sp(kvm, &iter, sp, true);
/* /*
* Also force the guest to retry the access if the upper level SPTEs * Force the guest to retry if installing an upper level SPTE
* aren't in place. * failed, e.g. because a different task modified the SPTE.
*/ */
if (r) { if (r) {
tdp_mmu_free_sp(sp); tdp_mmu_free_sp(sp);
@ -1214,11 +1216,20 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
if (fault->huge_page_disallowed && if (fault->huge_page_disallowed &&
fault->req_level >= iter.level) { fault->req_level >= iter.level) {
spin_lock(&kvm->arch.tdp_mmu_pages_lock); spin_lock(&kvm->arch.tdp_mmu_pages_lock);
track_possible_nx_huge_page(kvm, sp); if (sp->nx_huge_page_disallowed)
track_possible_nx_huge_page(kvm, sp);
spin_unlock(&kvm->arch.tdp_mmu_pages_lock); spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
} }
} }
/*
* The walk aborted before reaching the target level, e.g. because the
* iterator detected an upper level SPTE was frozen during traversal.
*/
WARN_ON_ONCE(iter.level == fault->goal_level);
goto retry;
map_target_level:
ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter); ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter);
retry: retry:


@ -238,7 +238,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
return false; return false;
/* recalibrate sample period and check if it's accepted by perf core */ /* recalibrate sample period and check if it's accepted by perf core */
if (perf_event_period(pmc->perf_event, if (is_sampling_event(pmc->perf_event) &&
perf_event_period(pmc->perf_event,
get_sample_period(pmc, pmc->counter))) get_sample_period(pmc, pmc->counter)))
return false; return false;


@ -140,7 +140,8 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
static inline void pmc_update_sample_period(struct kvm_pmc *pmc) static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
{ {
if (!pmc->perf_event || pmc->is_paused) if (!pmc->perf_event || pmc->is_paused ||
!is_sampling_event(pmc->perf_event))
return; return;
perf_event_period(pmc->perf_event, perf_event_period(pmc->perf_event,


@ -5296,10 +5296,19 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
if (vmptr == vmx->nested.current_vmptr) if (vmptr == vmx->nested.current_vmptr)
nested_release_vmcs12(vcpu); nested_release_vmcs12(vcpu);
kvm_vcpu_write_guest(vcpu, /*
vmptr + offsetof(struct vmcs12, * Silently ignore memory errors on VMCLEAR, Intel's pseudocode
launch_state), * for VMCLEAR includes a "ensure that data for VMCS referenced
&zero, sizeof(zero)); * by the operand is in memory" clause that guards writes to
* memory, i.e. doing nothing for I/O is architecturally valid.
*
* FIXME: Suppress failures if and only if no memslot is found,
* i.e. exit to userspace if __copy_to_user() fails.
*/
(void)kvm_vcpu_write_guest(vcpu,
vmptr + offsetof(struct vmcs12,
launch_state),
&zero, sizeof(zero));
} else if (vmx->nested.hv_evmcs && vmptr == vmx->nested.hv_evmcs_vmptr) { } else if (vmx->nested.hv_evmcs && vmptr == vmx->nested.hv_evmcs_vmptr) {
nested_release_evmcs(vcpu); nested_release_evmcs(vcpu);
} }
@ -6873,7 +6882,8 @@ void nested_vmx_setup_ctls_msrs(struct vmcs_config *vmcs_conf, u32 ept_caps)
SECONDARY_EXEC_ENABLE_INVPCID | SECONDARY_EXEC_ENABLE_INVPCID |
SECONDARY_EXEC_RDSEED_EXITING | SECONDARY_EXEC_RDSEED_EXITING |
SECONDARY_EXEC_XSAVES | SECONDARY_EXEC_XSAVES |
SECONDARY_EXEC_TSC_SCALING; SECONDARY_EXEC_TSC_SCALING |
SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
/* /*
* We can emulate "VMCS shadowing," even if the hardware * We can emulate "VMCS shadowing," even if the hardware


@ -4459,6 +4459,13 @@ vmx_adjust_secondary_exec_control(struct vcpu_vmx *vmx, u32 *exec_control,
* controls for features that are/aren't exposed to the guest. * controls for features that are/aren't exposed to the guest.
*/ */
if (nested) { if (nested) {
/*
* All features that can be added to or removed from VMX MSRs must
* be supported in the first place for nested virtualization.
*/
if (WARN_ON_ONCE(!(vmcs_config.nested.secondary_ctls_high & control)))
enabled = false;
if (enabled) if (enabled)
vmx->nested.msrs.secondary_ctls_high |= control; vmx->nested.msrs.secondary_ctls_high |= control;
else else


@ -13132,6 +13132,9 @@ int kvm_handle_memory_failure(struct kvm_vcpu *vcpu, int r,
struct x86_exception *e) struct x86_exception *e)
{ {
if (r == X86EMUL_PROPAGATE_FAULT) { if (r == X86EMUL_PROPAGATE_FAULT) {
if (KVM_BUG_ON(!e, vcpu->kvm))
return -EIO;
kvm_inject_emulated_page_fault(vcpu, e); kvm_inject_emulated_page_fault(vcpu, e);
return 1; return 1;
} }


@ -41,7 +41,7 @@ static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
int ret = 0; int ret = 0;
int idx = srcu_read_lock(&kvm->srcu); int idx = srcu_read_lock(&kvm->srcu);
if (gfn == GPA_INVALID) { if (gfn == KVM_XEN_INVALID_GFN) {
kvm_gpc_deactivate(gpc); kvm_gpc_deactivate(gpc);
goto out; goto out;
} }
@ -659,7 +659,7 @@ int kvm_xen_hvm_get_attr(struct kvm *kvm, struct kvm_xen_hvm_attr *data)
if (kvm->arch.xen.shinfo_cache.active) if (kvm->arch.xen.shinfo_cache.active)
data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_cache.gpa); data->u.shared_info.gfn = gpa_to_gfn(kvm->arch.xen.shinfo_cache.gpa);
else else
data->u.shared_info.gfn = GPA_INVALID; data->u.shared_info.gfn = KVM_XEN_INVALID_GFN;
r = 0; r = 0;
break; break;
@ -705,7 +705,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
BUILD_BUG_ON(offsetof(struct vcpu_info, time) != BUILD_BUG_ON(offsetof(struct vcpu_info, time) !=
offsetof(struct compat_vcpu_info, time)); offsetof(struct compat_vcpu_info, time));
if (data->u.gpa == GPA_INVALID) { if (data->u.gpa == KVM_XEN_INVALID_GPA) {
kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache); kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_info_cache);
r = 0; r = 0;
break; break;
@ -719,7 +719,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
break; break;
case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO: case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO:
if (data->u.gpa == GPA_INVALID) { if (data->u.gpa == KVM_XEN_INVALID_GPA) {
kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache); kvm_gpc_deactivate(&vcpu->arch.xen.vcpu_time_info_cache);
r = 0; r = 0;
break; break;
@ -739,7 +739,7 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
r = -EOPNOTSUPP; r = -EOPNOTSUPP;
break; break;
} }
if (data->u.gpa == GPA_INVALID) { if (data->u.gpa == KVM_XEN_INVALID_GPA) {
r = 0; r = 0;
deactivate_out: deactivate_out:
kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache); kvm_gpc_deactivate(&vcpu->arch.xen.runstate_cache);
@ -937,7 +937,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
if (vcpu->arch.xen.vcpu_info_cache.active) if (vcpu->arch.xen.vcpu_info_cache.active)
data->u.gpa = vcpu->arch.xen.vcpu_info_cache.gpa; data->u.gpa = vcpu->arch.xen.vcpu_info_cache.gpa;
else else
data->u.gpa = GPA_INVALID; data->u.gpa = KVM_XEN_INVALID_GPA;
r = 0; r = 0;
break; break;
@ -945,7 +945,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
if (vcpu->arch.xen.vcpu_time_info_cache.active) if (vcpu->arch.xen.vcpu_time_info_cache.active)
data->u.gpa = vcpu->arch.xen.vcpu_time_info_cache.gpa; data->u.gpa = vcpu->arch.xen.vcpu_time_info_cache.gpa;
else else
data->u.gpa = GPA_INVALID; data->u.gpa = KVM_XEN_INVALID_GPA;
r = 0; r = 0;
break; break;
@ -1069,6 +1069,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
u8 blob_size = lm ? kvm->arch.xen_hvm_config.blob_size_64 u8 blob_size = lm ? kvm->arch.xen_hvm_config.blob_size_64
: kvm->arch.xen_hvm_config.blob_size_32; : kvm->arch.xen_hvm_config.blob_size_32;
u8 *page; u8 *page;
int ret;
if (page_num >= blob_size) if (page_num >= blob_size)
return 1; return 1;
@ -1079,10 +1080,10 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data)
if (IS_ERR(page)) if (IS_ERR(page))
return PTR_ERR(page); return PTR_ERR(page);
if (kvm_vcpu_write_guest(vcpu, page_addr, page, PAGE_SIZE)) { ret = kvm_vcpu_write_guest(vcpu, page_addr, page, PAGE_SIZE);
kfree(page); kfree(page);
if (ret)
return 1; return 1;
}
} }
return 0; return 0;
} }
@ -1183,30 +1184,22 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode, static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
u64 param, u64 *r) u64 param, u64 *r)
{ {
int idx, i;
struct sched_poll sched_poll; struct sched_poll sched_poll;
evtchn_port_t port, *ports; evtchn_port_t port, *ports;
gpa_t gpa; struct x86_exception e;
int i;
if (!lapic_in_kernel(vcpu) || if (!lapic_in_kernel(vcpu) ||
!(vcpu->kvm->arch.xen_hvm_config.flags & KVM_XEN_HVM_CONFIG_EVTCHN_SEND)) !(vcpu->kvm->arch.xen_hvm_config.flags & KVM_XEN_HVM_CONFIG_EVTCHN_SEND))
return false; return false;
idx = srcu_read_lock(&vcpu->kvm->srcu);
gpa = kvm_mmu_gva_to_gpa_system(vcpu, param, NULL);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (!gpa) {
*r = -EFAULT;
return true;
}
if (IS_ENABLED(CONFIG_64BIT) && !longmode) { if (IS_ENABLED(CONFIG_64BIT) && !longmode) {
struct compat_sched_poll sp32; struct compat_sched_poll sp32;
/* Sanity check that the compat struct definition is correct */ /* Sanity check that the compat struct definition is correct */
BUILD_BUG_ON(sizeof(sp32) != 16); BUILD_BUG_ON(sizeof(sp32) != 16);
if (kvm_vcpu_read_guest(vcpu, gpa, &sp32, sizeof(sp32))) { if (kvm_read_guest_virt(vcpu, param, &sp32, sizeof(sp32), &e)) {
*r = -EFAULT; *r = -EFAULT;
return true; return true;
} }
@ -1220,8 +1213,8 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
sched_poll.nr_ports = sp32.nr_ports; sched_poll.nr_ports = sp32.nr_ports;
sched_poll.timeout = sp32.timeout; sched_poll.timeout = sp32.timeout;
} else { } else {
if (kvm_vcpu_read_guest(vcpu, gpa, &sched_poll, if (kvm_read_guest_virt(vcpu, param, &sched_poll,
sizeof(sched_poll))) { sizeof(sched_poll), &e)) {
*r = -EFAULT; *r = -EFAULT;
return true; return true;
} }
@ -1243,18 +1236,13 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
} else } else
ports = &port; ports = &port;
for (i = 0; i < sched_poll.nr_ports; i++) { if (kvm_read_guest_virt(vcpu, (gva_t)sched_poll.ports, ports,
idx = srcu_read_lock(&vcpu->kvm->srcu); sched_poll.nr_ports * sizeof(*ports), &e)) {
gpa = kvm_mmu_gva_to_gpa_system(vcpu, *r = -EFAULT;
(gva_t)(sched_poll.ports + i), return true;
NULL); }
srcu_read_unlock(&vcpu->kvm->srcu, idx);
if (!gpa || kvm_vcpu_read_guest(vcpu, gpa, for (i = 0; i < sched_poll.nr_ports; i++) {
&ports[i], sizeof(port))) {
*r = -EFAULT;
goto out;
}
if (ports[i] >= max_evtchn_port(vcpu->kvm)) { if (ports[i] >= max_evtchn_port(vcpu->kvm)) {
*r = -EINVAL; *r = -EINVAL;
goto out; goto out;
@ -1330,9 +1318,8 @@ static bool kvm_xen_hcall_vcpu_op(struct kvm_vcpu *vcpu, bool longmode, int cmd,
int vcpu_id, u64 param, u64 *r) int vcpu_id, u64 param, u64 *r)
{ {
struct vcpu_set_singleshot_timer oneshot; struct vcpu_set_singleshot_timer oneshot;
struct x86_exception e;
s64 delta; s64 delta;
gpa_t gpa;
int idx;
if (!kvm_xen_timer_enabled(vcpu)) if (!kvm_xen_timer_enabled(vcpu))
return false; return false;
@ -1343,9 +1330,6 @@ static bool kvm_xen_hcall_vcpu_op(struct kvm_vcpu *vcpu, bool longmode, int cmd,
*r = -EINVAL; *r = -EINVAL;
return true; return true;
} }
idx = srcu_read_lock(&vcpu->kvm->srcu);
gpa = kvm_mmu_gva_to_gpa_system(vcpu, param, NULL);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
/* /*
* The only difference for 32-bit compat is the 4 bytes of * The only difference for 32-bit compat is the 4 bytes of
@ -1363,9 +1347,8 @@ static bool kvm_xen_hcall_vcpu_op(struct kvm_vcpu *vcpu, bool longmode, int cmd,
BUILD_BUG_ON(sizeof_field(struct compat_vcpu_set_singleshot_timer, flags) != BUILD_BUG_ON(sizeof_field(struct compat_vcpu_set_singleshot_timer, flags) !=
sizeof_field(struct vcpu_set_singleshot_timer, flags)); sizeof_field(struct vcpu_set_singleshot_timer, flags));
if (!gpa || if (kvm_read_guest_virt(vcpu, param, &oneshot, longmode ? sizeof(oneshot) :
kvm_vcpu_read_guest(vcpu, gpa, &oneshot, longmode ? sizeof(oneshot) : sizeof(struct compat_vcpu_set_singleshot_timer), &e)) {
sizeof(struct compat_vcpu_set_singleshot_timer))) {
*r = -EFAULT; *r = -EFAULT;
return true; return true;
} }
@ -1825,20 +1808,20 @@ static int kvm_xen_eventfd_update(struct kvm *kvm,
{ {
u32 port = data->u.evtchn.send_port; u32 port = data->u.evtchn.send_port;
struct evtchnfd *evtchnfd; struct evtchnfd *evtchnfd;
int ret;
if (!port || port >= max_evtchn_port(kvm)) /* Protect writes to evtchnfd as well as the idr lookup. */
return -EINVAL;
mutex_lock(&kvm->lock); mutex_lock(&kvm->lock);
evtchnfd = idr_find(&kvm->arch.xen.evtchn_ports, port); evtchnfd = idr_find(&kvm->arch.xen.evtchn_ports, port);
mutex_unlock(&kvm->lock);
ret = -ENOENT;
if (!evtchnfd) if (!evtchnfd)
return -ENOENT; goto out_unlock;
/* For an UPDATE, nothing may change except the priority/vcpu */ /* For an UPDATE, nothing may change except the priority/vcpu */
ret = -EINVAL;
if (evtchnfd->type != data->u.evtchn.type) if (evtchnfd->type != data->u.evtchn.type)
return -EINVAL; goto out_unlock;
/* /*
* Port cannot change, and if it's zero that was an eventfd * Port cannot change, and if it's zero that was an eventfd
@ -1846,20 +1829,21 @@ static int kvm_xen_eventfd_update(struct kvm *kvm,
*/ */
if (!evtchnfd->deliver.port.port || if (!evtchnfd->deliver.port.port ||
evtchnfd->deliver.port.port != data->u.evtchn.deliver.port.port) evtchnfd->deliver.port.port != data->u.evtchn.deliver.port.port)
return -EINVAL; goto out_unlock;
/* We only support 2 level event channels for now */ /* We only support 2 level event channels for now */
if (data->u.evtchn.deliver.port.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL) if (data->u.evtchn.deliver.port.priority != KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL)
return -EINVAL; goto out_unlock;
mutex_lock(&kvm->lock);
evtchnfd->deliver.port.priority = data->u.evtchn.deliver.port.priority; evtchnfd->deliver.port.priority = data->u.evtchn.deliver.port.priority;
if (evtchnfd->deliver.port.vcpu_id != data->u.evtchn.deliver.port.vcpu) { if (evtchnfd->deliver.port.vcpu_id != data->u.evtchn.deliver.port.vcpu) {
evtchnfd->deliver.port.vcpu_id = data->u.evtchn.deliver.port.vcpu; evtchnfd->deliver.port.vcpu_id = data->u.evtchn.deliver.port.vcpu;
evtchnfd->deliver.port.vcpu_idx = -1; evtchnfd->deliver.port.vcpu_idx = -1;
} }
ret = 0;
out_unlock:
mutex_unlock(&kvm->lock); mutex_unlock(&kvm->lock);
return 0; return ret;
} }
/* /*
@ -1871,12 +1855,9 @@ static int kvm_xen_eventfd_assign(struct kvm *kvm,
{ {
u32 port = data->u.evtchn.send_port; u32 port = data->u.evtchn.send_port;
struct eventfd_ctx *eventfd = NULL; struct eventfd_ctx *eventfd = NULL;
struct evtchnfd *evtchnfd = NULL; struct evtchnfd *evtchnfd;
int ret = -EINVAL; int ret = -EINVAL;
if (!port || port >= max_evtchn_port(kvm))
return -EINVAL;
evtchnfd = kzalloc(sizeof(struct evtchnfd), GFP_KERNEL); evtchnfd = kzalloc(sizeof(struct evtchnfd), GFP_KERNEL);
if (!evtchnfd) if (!evtchnfd)
return -ENOMEM; return -ENOMEM;
@ -1952,8 +1933,7 @@ static int kvm_xen_eventfd_deassign(struct kvm *kvm, u32 port)
if (!evtchnfd) if (!evtchnfd)
return -ENOENT; return -ENOENT;
if (kvm) synchronize_srcu(&kvm->srcu);
synchronize_srcu(&kvm->srcu);
if (!evtchnfd->deliver.port.port) if (!evtchnfd->deliver.port.port)
eventfd_ctx_put(evtchnfd->deliver.eventfd.ctx); eventfd_ctx_put(evtchnfd->deliver.eventfd.ctx);
kfree(evtchnfd); kfree(evtchnfd);
@ -1962,18 +1942,42 @@ static int kvm_xen_eventfd_deassign(struct kvm *kvm, u32 port)
static int kvm_xen_eventfd_reset(struct kvm *kvm) static int kvm_xen_eventfd_reset(struct kvm *kvm)
{ {
struct evtchnfd *evtchnfd; struct evtchnfd *evtchnfd, **all_evtchnfds;
int i; int i;
int n = 0;
mutex_lock(&kvm->lock); mutex_lock(&kvm->lock);
/*
* Because synchronize_srcu() cannot be called inside the
* critical section, first collect all the evtchnfd objects
* in an array as they are removed from evtchn_ports.
*/
idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i)
n++;
all_evtchnfds = kmalloc_array(n, sizeof(struct evtchnfd *), GFP_KERNEL);
if (!all_evtchnfds) {
mutex_unlock(&kvm->lock);
return -ENOMEM;
}
n = 0;
idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) { idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) {
all_evtchnfds[n++] = evtchnfd;
idr_remove(&kvm->arch.xen.evtchn_ports, evtchnfd->send_port); idr_remove(&kvm->arch.xen.evtchn_ports, evtchnfd->send_port);
synchronize_srcu(&kvm->srcu); }
mutex_unlock(&kvm->lock);
synchronize_srcu(&kvm->srcu);
while (n--) {
evtchnfd = all_evtchnfds[n];
if (!evtchnfd->deliver.port.port) if (!evtchnfd->deliver.port.port)
eventfd_ctx_put(evtchnfd->deliver.eventfd.ctx); eventfd_ctx_put(evtchnfd->deliver.eventfd.ctx);
kfree(evtchnfd); kfree(evtchnfd);
} }
mutex_unlock(&kvm->lock); kfree(all_evtchnfds);
return 0; return 0;
} }
@ -2002,20 +2006,22 @@ static bool kvm_xen_hcall_evtchn_send(struct kvm_vcpu *vcpu, u64 param, u64 *r)
{ {
struct evtchnfd *evtchnfd; struct evtchnfd *evtchnfd;
struct evtchn_send send; struct evtchn_send send;
gpa_t gpa; struct x86_exception e;
int idx;
idx = srcu_read_lock(&vcpu->kvm->srcu); /* Sanity check: this structure is the same for 32-bit and 64-bit */
gpa = kvm_mmu_gva_to_gpa_system(vcpu, param, NULL); BUILD_BUG_ON(sizeof(send) != 4);
srcu_read_unlock(&vcpu->kvm->srcu, idx); if (kvm_read_guest_virt(vcpu, param, &send, sizeof(send), &e)) {
if (!gpa || kvm_vcpu_read_guest(vcpu, gpa, &send, sizeof(send))) {
*r = -EFAULT; *r = -EFAULT;
return true; return true;
} }
/* The evtchn_ports idr is protected by vcpu->kvm->srcu */ /*
* evtchnfd is protected by kvm->srcu; the idr lookup instead
* is protected by RCU.
*/
rcu_read_lock();
evtchnfd = idr_find(&vcpu->kvm->arch.xen.evtchn_ports, send.port); evtchnfd = idr_find(&vcpu->kvm->arch.xen.evtchn_ports, send.port);
rcu_read_unlock();
if (!evtchnfd) if (!evtchnfd)
return false; return false;


@ -5317,8 +5317,8 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&bfqd->lock, flags); spin_lock_irqsave(&bfqd->lock, flags);
bfq_exit_bfqq(bfqd, bfqq);
bic_set_bfqq(bic, NULL, is_sync); bic_set_bfqq(bic, NULL, is_sync);
bfq_exit_bfqq(bfqd, bfqq);
spin_unlock_irqrestore(&bfqd->lock, flags); spin_unlock_irqrestore(&bfqd->lock, flags);
} }
} }


@ -70,11 +70,7 @@ module_param(device_id_scheme, bool, 0444);
static int only_lcd = -1; static int only_lcd = -1;
module_param(only_lcd, int, 0444); module_param(only_lcd, int, 0444);
/* static int register_backlight_delay;
* Display probing is known to take up to 5 seconds, so delay the fallback
* backlight registration by 5 seconds + 3 seconds for some extra margin.
*/
static int register_backlight_delay = 8;
module_param(register_backlight_delay, int, 0444); module_param(register_backlight_delay, int, 0444);
MODULE_PARM_DESC(register_backlight_delay, MODULE_PARM_DESC(register_backlight_delay,
"Delay in seconds before doing fallback (non GPU driver triggered) " "Delay in seconds before doing fallback (non GPU driver triggered) "
@ -2176,6 +2172,17 @@ static bool should_check_lcd_flag(void)
return false; return false;
} }
/*
* At least one graphics driver has reported that no LCD is connected
* via the native interface. Cancel the registration for the fallback acpi_video0.
* If another driver still deems this necessary, it can explicitly register it.
*/
void acpi_video_report_nolcd(void)
{
cancel_delayed_work(&video_bus_register_backlight_work);
}
EXPORT_SYMBOL(acpi_video_report_nolcd);
int acpi_video_register(void) int acpi_video_register(void)
{ {
int ret = 0; int ret = 0;
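
As a usage note for the new helper added above: a native GPU driver that has
probed its display hardware and found no internal panel can report that fact
so the delayed fallback registration is cancelled. The snippet below is a
hypothetical caller, not code from this merge; 'native_lcd_found' is a
made-up flag for illustration.

	/* Hypothetical caller in a native GPU driver's probe path. */
	if (!native_lcd_found)
		acpi_video_report_nolcd();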


@ -432,10 +432,24 @@ static const struct dmi_system_id asus_laptop[] = {
DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"), DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
}, },
}, },
{
.ident = "Asus ExpertBook B2502",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_BOARD_NAME, "B2502CBA"),
},
},
{ } { }
}; };
static const struct dmi_system_id lenovo_82ra[] = { static const struct dmi_system_id lenovo_laptop[] = {
{
.ident = "LENOVO IdeaPad Flex 5 14ALC7",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "82R9"),
},
},
{ {
.ident = "LENOVO IdeaPad Flex 5 16ALC7", .ident = "LENOVO IdeaPad Flex 5 16ALC7",
.matches = { .matches = {
@ -446,6 +460,17 @@ static const struct dmi_system_id lenovo_82ra[] = {
{ } { }
}; };
static const struct dmi_system_id schenker_gm_rg[] = {
{
.ident = "XMG CORE 15 (M22)",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
DMI_MATCH(DMI_BOARD_NAME, "GMxRGxx"),
},
},
{ }
};
struct irq_override_cmp { struct irq_override_cmp {
const struct dmi_system_id *system; const struct dmi_system_id *system;
unsigned char irq; unsigned char irq;
@ -458,8 +483,9 @@ struct irq_override_cmp {
static const struct irq_override_cmp override_table[] = { static const struct irq_override_cmp override_table[] = {
{ medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false }, { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false }, { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
{ lenovo_82ra, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true }, { lenovo_laptop, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
{ lenovo_82ra, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true }, { lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
{ schenker_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
}; };
static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity, static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,


@ -34,6 +34,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/platform_data/x86/nvidia-wmi-ec-backlight.h> #include <linux/platform_data/x86/nvidia-wmi-ec-backlight.h>
#include <linux/pnp.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include <acpi/video.h> #include <acpi/video.h>
@ -105,6 +106,26 @@ static bool nvidia_wmi_ec_supported(void)
} }
#endif #endif
static bool apple_gmux_backlight_present(void)
{
struct acpi_device *adev;
struct device *dev;
adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1);
if (!adev)
return false;
dev = acpi_get_first_physical_node(adev);
if (!dev)
return false;
/*
* drivers/platform/x86/apple-gmux.c only supports old style
* Apple GMUX with an IO-resource.
*/
return pnp_get_resource(to_pnp_dev(dev), IORESOURCE_IO, 0) != NULL;
}
/* Force to use vendor driver when the ACPI device is known to be /* Force to use vendor driver when the ACPI device is known to be
* buggy */ * buggy */
static int video_detect_force_vendor(const struct dmi_system_id *d) static int video_detect_force_vendor(const struct dmi_system_id *d)
@ -767,7 +788,7 @@ static enum acpi_backlight_type __acpi_video_get_backlight_type(bool native)
if (nvidia_wmi_ec_present) if (nvidia_wmi_ec_present)
return acpi_backlight_nvidia_wmi_ec; return acpi_backlight_nvidia_wmi_ec;
if (apple_gmux_present()) if (apple_gmux_backlight_present())
return acpi_backlight_apple_gmux; return acpi_backlight_apple_gmux;
/* Use ACPI video if available, except when native should be preferred. */ /* Use ACPI video if available, except when native should be preferred. */


@ -28,10 +28,6 @@ static bool sleep_no_lps0 __read_mostly;
module_param(sleep_no_lps0, bool, 0644); module_param(sleep_no_lps0, bool, 0644);
MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface"); MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface");
static bool prefer_microsoft_dsm_guid __read_mostly;
module_param(prefer_microsoft_dsm_guid, bool, 0644);
MODULE_PARM_DESC(prefer_microsoft_dsm_guid, "Prefer using Microsoft GUID in LPS0 device _DSM evaluation");
static const struct acpi_device_id lps0_device_ids[] = { static const struct acpi_device_id lps0_device_ids[] = {
{"PNP0D80", }, {"PNP0D80", },
{"", }, {"", },
@ -369,27 +365,15 @@ out:
} }
struct amd_lps0_hid_device_data { struct amd_lps0_hid_device_data {
const unsigned int rev_id;
const bool check_off_by_one; const bool check_off_by_one;
const bool prefer_amd_guid;
}; };
static const struct amd_lps0_hid_device_data amd_picasso = { static const struct amd_lps0_hid_device_data amd_picasso = {
.rev_id = 0,
.check_off_by_one = true, .check_off_by_one = true,
.prefer_amd_guid = false,
}; };
static const struct amd_lps0_hid_device_data amd_cezanne = { static const struct amd_lps0_hid_device_data amd_cezanne = {
.rev_id = 0,
.check_off_by_one = false, .check_off_by_one = false,
.prefer_amd_guid = false,
};
static const struct amd_lps0_hid_device_data amd_rembrandt = {
.rev_id = 2,
.check_off_by_one = false,
.prefer_amd_guid = true,
}; };
static const struct acpi_device_id amd_hid_ids[] = { static const struct acpi_device_id amd_hid_ids[] = {
@ -397,69 +381,27 @@ static const struct acpi_device_id amd_hid_ids[] = {
{"AMD0005", (kernel_ulong_t)&amd_picasso, }, {"AMD0005", (kernel_ulong_t)&amd_picasso, },
{"AMDI0005", (kernel_ulong_t)&amd_picasso, }, {"AMDI0005", (kernel_ulong_t)&amd_picasso, },
{"AMDI0006", (kernel_ulong_t)&amd_cezanne, }, {"AMDI0006", (kernel_ulong_t)&amd_cezanne, },
{"AMDI0007", (kernel_ulong_t)&amd_rembrandt, },
{} {}
}; };
static int lps0_prefer_microsoft(const struct dmi_system_id *id) static int lps0_prefer_amd(const struct dmi_system_id *id)
{ {
pr_debug("Preferring Microsoft GUID.\n"); pr_debug("Using AMD GUID w/ _REV 2.\n");
prefer_microsoft_dsm_guid = true; rev_id = 2;
return 0; return 0;
} }
static const struct dmi_system_id s2idle_dmi_table[] __initconst = { static const struct dmi_system_id s2idle_dmi_table[] __initconst = {
{ {
/* /*
* ASUS TUF Gaming A17 FA707RE * AMD Rembrandt based HP EliteBook 835/845/865 G9
* https://bugzilla.kernel.org/show_bug.cgi?id=216101 * Contains specialized AML in AMD/_REV 2 path to avoid
* triggering a bug in Qualcomm WLAN firmware. This may be
* removed in the future if that firmware is fixed.
*/ */
.callback = lps0_prefer_microsoft, .callback = lps0_prefer_amd,
.matches = { .matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
DMI_MATCH(DMI_PRODUCT_NAME, "ASUS TUF Gaming A17"), DMI_MATCH(DMI_BOARD_NAME, "8990"),
},
},
{
/* ASUS ROG Zephyrus G14 (2022) */
.callback = lps0_prefer_microsoft,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_NAME, "ROG Zephyrus G14 GA402"),
},
},
{
/*
* Lenovo Yoga Slim 7 Pro X 14ARH7
* https://bugzilla.kernel.org/show_bug.cgi?id=216473 : 82V2
* https://bugzilla.kernel.org/show_bug.cgi?id=216438 : 82TL
*/
.callback = lps0_prefer_microsoft,
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_NAME, "82"),
},
},
{
/*
* ASUSTeK COMPUTER INC. ROG Flow X13 GV301RE_GV301RE
* https://gitlab.freedesktop.org/drm/amd/-/issues/2148
*/
.callback = lps0_prefer_microsoft,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow X13 GV301"),
},
},
{
/*
* ASUSTeK COMPUTER INC. ROG Flow X16 GV601RW_GV601RW
* https://gitlab.freedesktop.org/drm/amd/-/issues/2148
*/
.callback = lps0_prefer_microsoft,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow X16 GV601"),
}, },
}, },
{} {}
@ -484,16 +426,14 @@ static int lps0_device_attach(struct acpi_device *adev,
if (dev_id->id[0]) if (dev_id->id[0])
data = (const struct amd_lps0_hid_device_data *) dev_id->driver_data; data = (const struct amd_lps0_hid_device_data *) dev_id->driver_data;
else else
data = &amd_rembrandt; data = &amd_cezanne;
rev_id = data->rev_id;
lps0_dsm_func_mask = validate_dsm(adev->handle, lps0_dsm_func_mask = validate_dsm(adev->handle,
ACPI_LPS0_DSM_UUID_AMD, rev_id, &lps0_dsm_guid); ACPI_LPS0_DSM_UUID_AMD, rev_id, &lps0_dsm_guid);
if (lps0_dsm_func_mask > 0x3 && data->check_off_by_one) { if (lps0_dsm_func_mask > 0x3 && data->check_off_by_one) {
lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1; lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n", acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
ACPI_LPS0_DSM_UUID_AMD, lps0_dsm_func_mask); ACPI_LPS0_DSM_UUID_AMD, lps0_dsm_func_mask);
} else if (lps0_dsm_func_mask_microsoft > 0 && data->prefer_amd_guid && } else if (lps0_dsm_func_mask_microsoft > 0 && rev_id) {
!prefer_microsoft_dsm_guid) {
lps0_dsm_func_mask_microsoft = -EINVAL; lps0_dsm_func_mask_microsoft = -EINVAL;
acpi_handle_debug(adev->handle, "_DSM Using AMD method\n"); acpi_handle_debug(adev->handle, "_DSM Using AMD method\n");
} }
@ -501,8 +441,7 @@ static int lps0_device_attach(struct acpi_device *adev,
rev_id = 1; rev_id = 1;
lps0_dsm_func_mask = validate_dsm(adev->handle, lps0_dsm_func_mask = validate_dsm(adev->handle,
ACPI_LPS0_DSM_UUID, rev_id, &lps0_dsm_guid); ACPI_LPS0_DSM_UUID, rev_id, &lps0_dsm_guid);
if (!prefer_microsoft_dsm_guid) lps0_dsm_func_mask_microsoft = -EINVAL;
lps0_dsm_func_mask_microsoft = -EINVAL;
} }
if (lps0_dsm_func_mask < 0 && lps0_dsm_func_mask_microsoft < 0) if (lps0_dsm_func_mask < 0 && lps0_dsm_func_mask_microsoft < 0)


@ -83,6 +83,7 @@ enum board_ids {
static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent); static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static void ahci_remove_one(struct pci_dev *dev); static void ahci_remove_one(struct pci_dev *dev);
static void ahci_shutdown_one(struct pci_dev *dev); static void ahci_shutdown_one(struct pci_dev *dev);
static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hpriv);
static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class, static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
unsigned long deadline); unsigned long deadline);
static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class, static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
@ -676,6 +677,25 @@ static void ahci_pci_save_initial_config(struct pci_dev *pdev,
ahci_save_initial_config(&pdev->dev, hpriv); ahci_save_initial_config(&pdev->dev, hpriv);
} }
static int ahci_pci_reset_controller(struct ata_host *host)
{
struct pci_dev *pdev = to_pci_dev(host->dev);
struct ahci_host_priv *hpriv = host->private_data;
int rc;
rc = ahci_reset_controller(host);
if (rc)
return rc;
/*
* If platform firmware failed to enable ports, try to enable
* them here.
*/
ahci_intel_pcs_quirk(pdev, hpriv);
return 0;
}
static void ahci_pci_init_controller(struct ata_host *host) static void ahci_pci_init_controller(struct ata_host *host)
{ {
struct ahci_host_priv *hpriv = host->private_data; struct ahci_host_priv *hpriv = host->private_data;
@ -870,7 +890,7 @@ static int ahci_pci_device_runtime_resume(struct device *dev)
struct ata_host *host = pci_get_drvdata(pdev); struct ata_host *host = pci_get_drvdata(pdev);
int rc; int rc;
rc = ahci_reset_controller(host); rc = ahci_pci_reset_controller(host);
if (rc) if (rc)
return rc; return rc;
ahci_pci_init_controller(host); ahci_pci_init_controller(host);
@ -906,7 +926,7 @@ static int ahci_pci_device_resume(struct device *dev)
ahci_mcp89_apple_enable(pdev); ahci_mcp89_apple_enable(pdev);
if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) { if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) {
rc = ahci_reset_controller(host); rc = ahci_pci_reset_controller(host);
if (rc) if (rc)
return rc; return rc;
@ -1784,12 +1804,6 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* save initial config */ /* save initial config */
ahci_pci_save_initial_config(pdev, hpriv); ahci_pci_save_initial_config(pdev, hpriv);
/*
* If platform firmware failed to enable ports, try to enable
* them here.
*/
ahci_intel_pcs_quirk(pdev, hpriv);
/* prepare host */ /* prepare host */
if (hpriv->cap & HOST_CAP_NCQ) { if (hpriv->cap & HOST_CAP_NCQ) {
pi.flags |= ATA_FLAG_NCQ; pi.flags |= ATA_FLAG_NCQ;
@ -1899,7 +1913,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (rc) if (rc)
return rc; return rc;
rc = ahci_reset_controller(host); rc = ahci_pci_reset_controller(host);
if (rc) if (rc)
return rc; return rc;


@ -3,7 +3,7 @@
* Microchip / Atmel ECC (I2C) driver. * Microchip / Atmel ECC (I2C) driver.
* *
* Copyright (c) 2017, Microchip Technology Inc. * Copyright (c) 2017, Microchip Technology Inc.
* Author: Tudor Ambarus <tudor.ambarus@microchip.com> * Author: Tudor Ambarus
*/ */
#include <linux/delay.h> #include <linux/delay.h>
@ -411,6 +411,6 @@ static void __exit atmel_ecc_exit(void)
module_init(atmel_ecc_init); module_init(atmel_ecc_init);
module_exit(atmel_ecc_exit); module_exit(atmel_ecc_exit);
MODULE_AUTHOR("Tudor Ambarus <tudor.ambarus@microchip.com>"); MODULE_AUTHOR("Tudor Ambarus");
MODULE_DESCRIPTION("Microchip / Atmel ECC (I2C) driver"); MODULE_DESCRIPTION("Microchip / Atmel ECC (I2C) driver");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");


@ -3,7 +3,7 @@
* Microchip / Atmel ECC (I2C) driver. * Microchip / Atmel ECC (I2C) driver.
* *
* Copyright (c) 2017, Microchip Technology Inc. * Copyright (c) 2017, Microchip Technology Inc.
* Author: Tudor Ambarus <tudor.ambarus@microchip.com> * Author: Tudor Ambarus
*/ */
#include <linux/bitrev.h> #include <linux/bitrev.h>
@ -390,6 +390,6 @@ static void __exit atmel_i2c_exit(void)
module_init(atmel_i2c_init); module_init(atmel_i2c_init);
module_exit(atmel_i2c_exit); module_exit(atmel_i2c_exit);
MODULE_AUTHOR("Tudor Ambarus <tudor.ambarus@microchip.com>"); MODULE_AUTHOR("Tudor Ambarus");
MODULE_DESCRIPTION("Microchip / Atmel ECC (I2C) driver"); MODULE_DESCRIPTION("Microchip / Atmel ECC (I2C) driver");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */ /* SPDX-License-Identifier: GPL-2.0 */
/* /*
* Copyright (c) 2017, Microchip Technology Inc. * Copyright (c) 2017, Microchip Technology Inc.
* Author: Tudor Ambarus <tudor.ambarus@microchip.com> * Author: Tudor Ambarus
*/ */
#ifndef __ATMEL_I2C_H__ #ifndef __ATMEL_I2C_H__


@ -4361,6 +4361,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
amdgpu_set_panel_orientation(&aconnector->base); amdgpu_set_panel_orientation(&aconnector->base);
} }
/* If we didn't find a panel, notify the acpi video detection */
if (dm->adev->flags & AMD_IS_APU && dm->num_of_edps == 0)
acpi_video_report_nolcd();
/* Software is initialized. Now we can register interrupt handlers. */ /* Software is initialized. Now we can register interrupt handlers. */
switch (adev->asic_type) { switch (adev->asic_type) {
#if defined(CONFIG_DRM_AMD_DC_SI) #if defined(CONFIG_DRM_AMD_DC_SI)


@ -41,9 +41,11 @@
#include "i915_drv.h" #include "i915_drv.h"
#include "i915_reg.h" #include "i915_reg.h"
#include "intel_de.h"
#include "intel_display_types.h" #include "intel_display_types.h"
#include "intel_dsi.h" #include "intel_dsi.h"
#include "intel_dsi_vbt.h" #include "intel_dsi_vbt.h"
#include "intel_gmbus_regs.h"
#include "vlv_dsi.h" #include "vlv_dsi.h"
#include "vlv_dsi_regs.h" #include "vlv_dsi_regs.h"
#include "vlv_sideband.h" #include "vlv_sideband.h"
@ -377,6 +379,85 @@ static void icl_exec_gpio(struct intel_connector *connector,
drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n"); drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n");
} }
enum {
MIPI_RESET_1 = 0,
MIPI_AVDD_EN_1,
MIPI_BKLT_EN_1,
MIPI_AVEE_EN_1,
MIPI_VIO_EN_1,
MIPI_RESET_2,
MIPI_AVDD_EN_2,
MIPI_BKLT_EN_2,
MIPI_AVEE_EN_2,
MIPI_VIO_EN_2,
};
static void icl_native_gpio_set_value(struct drm_i915_private *dev_priv,
int gpio, bool value)
{
int index;
if (drm_WARN_ON(&dev_priv->drm, DISPLAY_VER(dev_priv) == 11 && gpio >= MIPI_RESET_2))
return;
switch (gpio) {
case MIPI_RESET_1:
case MIPI_RESET_2:
index = gpio == MIPI_RESET_1 ? HPD_PORT_A : HPD_PORT_B;
/*
* Disable HPD to set the pin to output, and set output
* value. The HPD pin should not be enabled for DSI anyway,
* assuming the board design and VBT are sane, and the pin isn't
* used by a non-DSI encoder.
*
* The locking protects against concurrent SHOTPLUG_CTL_DDI
* modifications in irq setup and handling.
*/
spin_lock_irq(&dev_priv->irq_lock);
intel_de_rmw(dev_priv, SHOTPLUG_CTL_DDI,
SHOTPLUG_CTL_DDI_HPD_ENABLE(index) |
SHOTPLUG_CTL_DDI_HPD_OUTPUT_DATA(index),
value ? SHOTPLUG_CTL_DDI_HPD_OUTPUT_DATA(index) : 0);
spin_unlock_irq(&dev_priv->irq_lock);
break;
case MIPI_AVDD_EN_1:
case MIPI_AVDD_EN_2:
index = gpio == MIPI_AVDD_EN_1 ? 0 : 1;
intel_de_rmw(dev_priv, PP_CONTROL(index), PANEL_POWER_ON,
value ? PANEL_POWER_ON : 0);
break;
case MIPI_BKLT_EN_1:
case MIPI_BKLT_EN_2:
index = gpio == MIPI_BKLT_EN_1 ? 0 : 1;
intel_de_rmw(dev_priv, PP_CONTROL(index), EDP_BLC_ENABLE,
value ? EDP_BLC_ENABLE : 0);
break;
case MIPI_AVEE_EN_1:
case MIPI_AVEE_EN_2:
index = gpio == MIPI_AVEE_EN_1 ? 1 : 2;
intel_de_rmw(dev_priv, GPIO(dev_priv, index),
GPIO_CLOCK_VAL_OUT,
GPIO_CLOCK_DIR_MASK | GPIO_CLOCK_DIR_OUT |
GPIO_CLOCK_VAL_MASK | (value ? GPIO_CLOCK_VAL_OUT : 0));
break;
case MIPI_VIO_EN_1:
case MIPI_VIO_EN_2:
index = gpio == MIPI_VIO_EN_1 ? 1 : 2;
intel_de_rmw(dev_priv, GPIO(dev_priv, index),
GPIO_DATA_VAL_OUT,
GPIO_DATA_DIR_MASK | GPIO_DATA_DIR_OUT |
GPIO_DATA_VAL_MASK | (value ? GPIO_DATA_VAL_OUT : 0));
break;
default:
MISSING_CASE(gpio);
}
}
static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data) static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
{ {
struct drm_device *dev = intel_dsi->base.base.dev; struct drm_device *dev = intel_dsi->base.base.dev;
@ -384,8 +465,7 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
struct intel_connector *connector = intel_dsi->attached_connector; struct intel_connector *connector = intel_dsi->attached_connector;
u8 gpio_source, gpio_index = 0, gpio_number; u8 gpio_source, gpio_index = 0, gpio_number;
bool value; bool value;
bool native = DISPLAY_VER(dev_priv) >= 11;
drm_dbg_kms(&dev_priv->drm, "\n");
if (connector->panel.vbt.dsi.seq_version >= 3) if (connector->panel.vbt.dsi.seq_version >= 3)
gpio_index = *data++; gpio_index = *data++;
@ -398,10 +478,18 @@ static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)
else else
gpio_source = 0; gpio_source = 0;
if (connector->panel.vbt.dsi.seq_version >= 4 && *data & BIT(1))
native = false;
/* pull up/down */ /* pull up/down */
value = *data++ & 1; value = *data++ & 1;
if (DISPLAY_VER(dev_priv) >= 11) drm_dbg_kms(&dev_priv->drm, "GPIO index %u, number %u, source %u, native %s, set to %s\n",
gpio_index, gpio_number, gpio_source, str_yes_no(native), str_on_off(value));
if (native)
icl_native_gpio_set_value(dev_priv, gpio_number, value);
else if (DISPLAY_VER(dev_priv) >= 11)
icl_exec_gpio(connector, gpio_source, gpio_index, value); icl_exec_gpio(connector, gpio_source, gpio_index, value);
else if (IS_VALLEYVIEW(dev_priv)) else if (IS_VALLEYVIEW(dev_priv))
vlv_exec_gpio(connector, gpio_source, gpio_number, value); vlv_exec_gpio(connector, gpio_source, gpio_number, value);

View File

@ -730,37 +730,74 @@ static int eb_reserve(struct i915_execbuffer *eb)
bool unpinned; bool unpinned;
/* /*
* Attempt to pin all of the buffers into the GTT. * We have one or more buffers that we couldn't bind, which could be due to
* This is done in 2 phases: * various reasons. To resolve this we have 4 passes, with every next
* level turning the screws tighter:
* *
* 1. Unbind all objects that do not match the GTT constraints for * 0. Unbind all objects that do not match the GTT constraints for the
* the execbuffer (fenceable, mappable, alignment etc). * execbuffer (fenceable, mappable, alignment etc). Bind all new
* 2. Bind new objects. * objects. This avoids unnecessary unbinding of later objects in order
* to make room for the earlier objects *unless* we need to defragment.
* *
* This avoid unnecessary unbinding of later objects in order to make * 1. Reorder the buffers, where objects with the most restrictive
* room for the earlier objects *unless* we need to defragment. * placement requirements go first (ignoring fixed location buffers for
* now). For example, objects needing the mappable aperture (the first
* 256M of GTT), should go first vs objects that can be placed just
* about anywhere. Repeat the previous pass.
* *
* Defragmenting is skipped if all objects are pinned at a fixed location. * 2. Consider buffers that are pinned at a fixed location. Also try to
* evict the entire VM this time, leaving only objects that we were
* unable to lock. Try again to bind the buffers. (still using the new
* buffer order).
*
* 3. We likely have object lock contention for one or more stubborn
* objects in the VM, for which we need to evict to make forward
* progress (perhaps we are fighting the shrinker?). When evicting the
* VM this time around, anything that we can't lock we now track using
* the busy_bo, using the full lock (after dropping the vm->mutex to
* prevent deadlocks), instead of trylock. We then continue to evict the
* VM, this time with the stubborn object locked, which we can now
* hopefully unbind (if still bound in the VM). Repeat until the VM is
* evicted. Finally we should be able to bind everything.
*/ */
for (pass = 0; pass <= 2; pass++) { for (pass = 0; pass <= 3; pass++) {
int pin_flags = PIN_USER | PIN_VALIDATE; int pin_flags = PIN_USER | PIN_VALIDATE;
if (pass == 0) if (pass == 0)
pin_flags |= PIN_NONBLOCK; pin_flags |= PIN_NONBLOCK;
if (pass >= 1) if (pass >= 1)
unpinned = eb_unbind(eb, pass == 2); unpinned = eb_unbind(eb, pass >= 2);
if (pass == 2) { if (pass == 2) {
err = mutex_lock_interruptible(&eb->context->vm->mutex); err = mutex_lock_interruptible(&eb->context->vm->mutex);
if (!err) { if (!err) {
err = i915_gem_evict_vm(eb->context->vm, &eb->ww); err = i915_gem_evict_vm(eb->context->vm, &eb->ww, NULL);
mutex_unlock(&eb->context->vm->mutex); mutex_unlock(&eb->context->vm->mutex);
} }
if (err) if (err)
return err; return err;
} }
if (pass == 3) {
retry:
err = mutex_lock_interruptible(&eb->context->vm->mutex);
if (!err) {
struct drm_i915_gem_object *busy_bo = NULL;
err = i915_gem_evict_vm(eb->context->vm, &eb->ww, &busy_bo);
mutex_unlock(&eb->context->vm->mutex);
if (err && busy_bo) {
err = i915_gem_object_lock(busy_bo, &eb->ww);
i915_gem_object_put(busy_bo);
if (!err)
goto retry;
}
}
if (err)
return err;
}
list_for_each_entry(ev, &eb->unbound, bind_link) { list_for_each_entry(ev, &eb->unbound, bind_link) {
err = eb_reserve_vma(eb, ev, pin_flags); err = eb_reserve_vma(eb, ev, pin_flags);
if (err) if (err)


@ -369,7 +369,7 @@ retry:
if (vma == ERR_PTR(-ENOSPC)) { if (vma == ERR_PTR(-ENOSPC)) {
ret = mutex_lock_interruptible(&ggtt->vm.mutex); ret = mutex_lock_interruptible(&ggtt->vm.mutex);
if (!ret) { if (!ret) {
ret = i915_gem_evict_vm(&ggtt->vm, &ww); ret = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
mutex_unlock(&ggtt->vm.mutex); mutex_unlock(&ggtt->vm.mutex);
} }
if (ret) if (ret)


@ -1109,9 +1109,15 @@ static void mmio_invalidate_full(struct intel_gt *gt)
continue; continue;
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) { if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) {
u32 val = BIT(engine->instance);
if (engine->class == VIDEO_DECODE_CLASS ||
engine->class == VIDEO_ENHANCEMENT_CLASS ||
engine->class == COMPUTE_CLASS)
val = _MASKED_BIT_ENABLE(val);
intel_gt_mcr_multicast_write_fw(gt, intel_gt_mcr_multicast_write_fw(gt,
xehp_regs[engine->class], xehp_regs[engine->class],
BIT(engine->instance)); val);
} else { } else {
rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num); rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
if (!i915_mmio_reg_offset(rb.reg)) if (!i915_mmio_reg_offset(rb.reg))
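
For the video, video-enhancement and compute classes the TLB-invalidation register written here is a masked register: the top 16 bits select which of the bottom 16 bits the write is allowed to change, which is why the plain BIT(engine->instance) is now wrapped in _MASKED_BIT_ENABLE(). A small stand-alone illustration of that encoding (the macro body below mirrors i915's convention from memory and is only a sketch):

#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1u << (n))
#define MASKED_BIT_ENABLE(a)	(((a) << 16) | (a))	/* shape of i915's _MASKED_BIT_ENABLE() */

int main(void)
{
	uint32_t val = MASKED_BIT_ENABLE(BIT(2));	/* e.g. engine instance 2 */

	assert(val == 0x00040004);	/* mask half selects bit 2, value half sets it */
	return 0;
}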


@ -545,6 +545,32 @@ static int check_ccs_header(struct intel_gt *gt,
return 0; return 0;
} }
static int try_firmware_load(struct intel_uc_fw *uc_fw, const struct firmware **fw)
{
struct intel_gt *gt = __uc_fw_to_gt(uc_fw);
struct device *dev = gt->i915->drm.dev;
int err;
err = firmware_request_nowarn(fw, uc_fw->file_selected.path, dev);
if (err)
return err;
if ((*fw)->size > INTEL_UC_RSVD_GGTT_PER_FW) {
drm_err(&gt->i915->drm,
"%s firmware %s: size (%zuKB) exceeds max supported size (%uKB)\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
(*fw)->size / SZ_1K, INTEL_UC_RSVD_GGTT_PER_FW / SZ_1K);
/* try to find another blob to load */
release_firmware(*fw);
*fw = NULL;
return -ENOENT;
}
return 0;
}
/** /**
* intel_uc_fw_fetch - fetch uC firmware * intel_uc_fw_fetch - fetch uC firmware
* @uc_fw: uC firmware * @uc_fw: uC firmware
@ -558,7 +584,6 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
struct intel_gt *gt = __uc_fw_to_gt(uc_fw); struct intel_gt *gt = __uc_fw_to_gt(uc_fw);
struct drm_i915_private *i915 = gt->i915; struct drm_i915_private *i915 = gt->i915;
struct intel_uc_fw_file file_ideal; struct intel_uc_fw_file file_ideal;
struct device *dev = i915->drm.dev;
struct drm_i915_gem_object *obj; struct drm_i915_gem_object *obj;
const struct firmware *fw = NULL; const struct firmware *fw = NULL;
bool old_ver = false; bool old_ver = false;
@ -574,20 +599,9 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
__force_fw_fetch_failures(uc_fw, -EINVAL); __force_fw_fetch_failures(uc_fw, -EINVAL);
__force_fw_fetch_failures(uc_fw, -ESTALE); __force_fw_fetch_failures(uc_fw, -ESTALE);
err = firmware_request_nowarn(&fw, uc_fw->file_selected.path, dev); err = try_firmware_load(uc_fw, &fw);
memcpy(&file_ideal, &uc_fw->file_wanted, sizeof(file_ideal)); memcpy(&file_ideal, &uc_fw->file_wanted, sizeof(file_ideal));
if (!err && fw->size > INTEL_UC_RSVD_GGTT_PER_FW) {
drm_err(&i915->drm,
"%s firmware %s: size (%zuKB) exceeds max supported size (%uKB)\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
fw->size / SZ_1K, INTEL_UC_RSVD_GGTT_PER_FW / SZ_1K);
/* try to find another blob to load */
release_firmware(fw);
err = -ENOENT;
}
/* Any error is terminal if overriding. Don't bother searching for older versions */ /* Any error is terminal if overriding. Don't bother searching for older versions */
if (err && intel_uc_fw_is_overridden(uc_fw)) if (err && intel_uc_fw_is_overridden(uc_fw))
goto fail; goto fail;
@ -608,7 +622,7 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
break; break;
} }
err = firmware_request_nowarn(&fw, uc_fw->file_selected.path, dev); err = try_firmware_load(uc_fw, &fw);
} }
if (err) if (err)


@ -416,6 +416,11 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
* @vm: Address space to cleanse * @vm: Address space to cleanse
* @ww: An optional struct i915_gem_ww_ctx. If not NULL, i915_gem_evict_vm * @ww: An optional struct i915_gem_ww_ctx. If not NULL, i915_gem_evict_vm
* will be able to evict vma's locked by the ww as well. * will be able to evict vma's locked by the ww as well.
* @busy_bo: Optional pointer to struct drm_i915_gem_object. If not NULL, then
* in the event i915_gem_evict_vm() is unable to trylock an object for eviction,
* then @busy_bo will point to it. -EBUSY is also returned. The caller must drop
* the vm->mutex, before trying again to acquire the contended lock. The caller
* also owns a reference to the object.
* *
* This function evicts all vmas from a vm. * This function evicts all vmas from a vm.
* *
@ -425,7 +430,8 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
* To clarify: This is for freeing up virtual address space, not for freeing * To clarify: This is for freeing up virtual address space, not for freeing
* memory in e.g. the shrinker. * memory in e.g. the shrinker.
*/ */
int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww) int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww,
struct drm_i915_gem_object **busy_bo)
{ {
int ret = 0; int ret = 0;
@ -457,15 +463,22 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)
* the resv is shared among multiple objects, we still * the resv is shared among multiple objects, we still
* need the object ref. * need the object ref.
*/ */
if (dying_vma(vma) || if (!i915_gem_object_get_rcu(vma->obj) ||
(ww && (dma_resv_locking_ctx(vma->obj->base.resv) == &ww->ctx))) { (ww && (dma_resv_locking_ctx(vma->obj->base.resv) == &ww->ctx))) {
__i915_vma_pin(vma); __i915_vma_pin(vma);
list_add(&vma->evict_link, &locked_eviction_list); list_add(&vma->evict_link, &locked_eviction_list);
continue; continue;
} }
if (!i915_gem_object_trylock(vma->obj, ww)) if (!i915_gem_object_trylock(vma->obj, ww)) {
if (busy_bo) {
*busy_bo = vma->obj; /* holds ref */
ret = -EBUSY;
break;
}
i915_gem_object_put(vma->obj);
continue; continue;
}
__i915_vma_pin(vma); __i915_vma_pin(vma);
list_add(&vma->evict_link, &eviction_list); list_add(&vma->evict_link, &eviction_list);
@ -473,25 +486,29 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)
if (list_empty(&eviction_list) && list_empty(&locked_eviction_list)) if (list_empty(&eviction_list) && list_empty(&locked_eviction_list))
break; break;
ret = 0;
/* Unbind locked objects first, before unlocking the eviction_list */ /* Unbind locked objects first, before unlocking the eviction_list */
list_for_each_entry_safe(vma, vn, &locked_eviction_list, evict_link) { list_for_each_entry_safe(vma, vn, &locked_eviction_list, evict_link) {
__i915_vma_unpin(vma); __i915_vma_unpin(vma);
if (ret == 0) if (ret == 0) {
ret = __i915_vma_unbind(vma); ret = __i915_vma_unbind(vma);
if (ret != -EINTR) /* "Get me out of here!" */ if (ret != -EINTR) /* "Get me out of here!" */
ret = 0; ret = 0;
}
if (!dying_vma(vma))
i915_gem_object_put(vma->obj);
} }
list_for_each_entry_safe(vma, vn, &eviction_list, evict_link) { list_for_each_entry_safe(vma, vn, &eviction_list, evict_link) {
__i915_vma_unpin(vma); __i915_vma_unpin(vma);
if (ret == 0) if (ret == 0) {
ret = __i915_vma_unbind(vma); ret = __i915_vma_unbind(vma);
if (ret != -EINTR) /* "Get me out of here!" */ if (ret != -EINTR) /* "Get me out of here!" */
ret = 0; ret = 0;
}
i915_gem_object_unlock(vma->obj); i915_gem_object_unlock(vma->obj);
i915_gem_object_put(vma->obj);
} }
} while (ret == 0); } while (ret == 0);


@ -11,6 +11,7 @@
struct drm_mm_node; struct drm_mm_node;
struct i915_address_space; struct i915_address_space;
struct i915_gem_ww_ctx; struct i915_gem_ww_ctx;
struct drm_i915_gem_object;
int __must_check i915_gem_evict_something(struct i915_address_space *vm, int __must_check i915_gem_evict_something(struct i915_address_space *vm,
struct i915_gem_ww_ctx *ww, struct i915_gem_ww_ctx *ww,
@ -23,6 +24,7 @@ int __must_check i915_gem_evict_for_node(struct i915_address_space *vm,
struct drm_mm_node *node, struct drm_mm_node *node,
unsigned int flags); unsigned int flags);
int i915_gem_evict_vm(struct i915_address_space *vm, int i915_gem_evict_vm(struct i915_address_space *vm,
struct i915_gem_ww_ctx *ww); struct i915_gem_ww_ctx *ww,
struct drm_i915_gem_object **busy_bo);
#endif /* __I915_GEM_EVICT_H__ */ #endif /* __I915_GEM_EVICT_H__ */


@ -1974,7 +1974,10 @@ static void icp_irq_handler(struct drm_i915_private *dev_priv, u32 pch_iir)
if (ddi_hotplug_trigger) { if (ddi_hotplug_trigger) {
u32 dig_hotplug_reg; u32 dig_hotplug_reg;
/* Locking due to DSI native GPIO sequences */
spin_lock(&dev_priv->irq_lock);
dig_hotplug_reg = intel_uncore_rmw(&dev_priv->uncore, SHOTPLUG_CTL_DDI, 0, 0); dig_hotplug_reg = intel_uncore_rmw(&dev_priv->uncore, SHOTPLUG_CTL_DDI, 0, 0);
spin_unlock(&dev_priv->irq_lock);
intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask, intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask,
ddi_hotplug_trigger, dig_hotplug_reg, ddi_hotplug_trigger, dig_hotplug_reg,


@ -1129,7 +1129,6 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = {
{} {}
}; };
__maybe_unused
static const struct intel_device_info mtl_info = { static const struct intel_device_info mtl_info = {
XE_HP_FEATURES, XE_HP_FEATURES,
XE_LPDP_FEATURES, XE_LPDP_FEATURES,


@ -5988,6 +5988,7 @@
#define SHOTPLUG_CTL_DDI _MMIO(0xc4030) #define SHOTPLUG_CTL_DDI _MMIO(0xc4030)
#define SHOTPLUG_CTL_DDI_HPD_ENABLE(hpd_pin) (0x8 << (_HPD_PIN_DDI(hpd_pin) * 4)) #define SHOTPLUG_CTL_DDI_HPD_ENABLE(hpd_pin) (0x8 << (_HPD_PIN_DDI(hpd_pin) * 4))
#define SHOTPLUG_CTL_DDI_HPD_OUTPUT_DATA(hpd_pin) (0x4 << (_HPD_PIN_DDI(hpd_pin) * 4))
#define SHOTPLUG_CTL_DDI_HPD_STATUS_MASK(hpd_pin) (0x3 << (_HPD_PIN_DDI(hpd_pin) * 4)) #define SHOTPLUG_CTL_DDI_HPD_STATUS_MASK(hpd_pin) (0x3 << (_HPD_PIN_DDI(hpd_pin) * 4))
#define SHOTPLUG_CTL_DDI_HPD_NO_DETECT(hpd_pin) (0x0 << (_HPD_PIN_DDI(hpd_pin) * 4)) #define SHOTPLUG_CTL_DDI_HPD_NO_DETECT(hpd_pin) (0x0 << (_HPD_PIN_DDI(hpd_pin) * 4))
#define SHOTPLUG_CTL_DDI_HPD_SHORT_DETECT(hpd_pin) (0x1 << (_HPD_PIN_DDI(hpd_pin) * 4)) #define SHOTPLUG_CTL_DDI_HPD_SHORT_DETECT(hpd_pin) (0x1 << (_HPD_PIN_DDI(hpd_pin) * 4))


@ -1566,7 +1566,7 @@ static int __i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
* locked objects when called from execbuf when pinning * locked objects when called from execbuf when pinning
* is removed. This would probably regress badly. * is removed. This would probably regress badly.
*/ */
i915_gem_evict_vm(vm, NULL); i915_gem_evict_vm(vm, NULL, NULL);
mutex_unlock(&vm->mutex); mutex_unlock(&vm->mutex);
} }
} while (1); } while (1);


@ -344,7 +344,7 @@ static int igt_evict_vm(void *arg)
/* Everything is pinned, nothing should happen */ /* Everything is pinned, nothing should happen */
mutex_lock(&ggtt->vm.mutex); mutex_lock(&ggtt->vm.mutex);
err = i915_gem_evict_vm(&ggtt->vm, NULL); err = i915_gem_evict_vm(&ggtt->vm, NULL, NULL);
mutex_unlock(&ggtt->vm.mutex); mutex_unlock(&ggtt->vm.mutex);
if (err) { if (err) {
pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n", pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
@ -356,7 +356,7 @@ static int igt_evict_vm(void *arg)
for_i915_gem_ww(&ww, err, false) { for_i915_gem_ww(&ww, err, false) {
mutex_lock(&ggtt->vm.mutex); mutex_lock(&ggtt->vm.mutex);
err = i915_gem_evict_vm(&ggtt->vm, &ww); err = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
mutex_unlock(&ggtt->vm.mutex); mutex_unlock(&ggtt->vm.mutex);
} }


@ -50,7 +50,7 @@ static int scpart_scan_partmap(struct mtd_info *master, loff_t partmap_offs,
int cnt = 0; int cnt = 0;
int res = 0; int res = 0;
int res2; int res2;
loff_t offs; uint32_t offs;
size_t retlen; size_t retlen;
struct sc_part_desc *pdesc = NULL; struct sc_part_desc *pdesc = NULL;
struct sc_part_desc *tmpdesc; struct sc_part_desc *tmpdesc;


@ -91,7 +91,7 @@ static int mtd_parser_tplink_safeloader_parse(struct mtd_info *mtd,
buf = mtd_parser_tplink_safeloader_read_table(mtd); buf = mtd_parser_tplink_safeloader_read_table(mtd);
if (!buf) { if (!buf) {
err = -ENOENT; err = -ENOENT;
goto err_out; goto err_free_parts;
} }
for (idx = 0, offset = TPLINK_SAFELOADER_DATA_OFFSET; for (idx = 0, offset = TPLINK_SAFELOADER_DATA_OFFSET;
@ -118,6 +118,8 @@ static int mtd_parser_tplink_safeloader_parse(struct mtd_info *mtd,
err_free: err_free:
for (idx -= 1; idx >= 0; idx--) for (idx -= 1; idx >= 0; idx--)
kfree(parts[idx].name); kfree(parts[idx].name);
err_free_parts:
kfree(parts);
err_out: err_out:
return err; return err;
}; };
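
The fix adds a second unwind label so that a failed table read releases the partition array allocated just before it; previously that allocation leaked on the err_out path. The general shape of this goto-based cleanup, with purely hypothetical allocation steps standing in for the parser's buffers, is:

#include <stdlib.h>

static int parse_example(void)
{
	char *parts, *buf;
	int err = 0;

	parts = calloc(8, 64);			/* stands in for the partition array */
	if (!parts)
		return -1;

	buf = malloc(64);			/* stands in for reading the table */
	if (!buf) {
		err = -1;
		goto err_free_parts;		/* free only what we already own */
	}

	free(buf);
	free(parts);
	return 0;

err_free_parts:
	free(parts);
	return err;
}

int main(void)
{
	return parse_example() ? 1 : 0;
}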


@ -9,18 +9,18 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/module.h> #include <linux/delay.h>
#include <linux/device.h> #include <linux/device.h>
#include <linux/mutex.h>
#include <linux/math64.h> #include <linux/math64.h>
#include <linux/sizes.h> #include <linux/module.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h> #include <linux/mtd/mtd.h>
#include <linux/mtd/spi-nor.h>
#include <linux/mutex.h>
#include <linux/of_platform.h> #include <linux/of_platform.h>
#include <linux/sched/task_stack.h> #include <linux/sched/task_stack.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/spi/flash.h> #include <linux/spi/flash.h>
#include <linux/mtd/spi-nor.h>
#include "core.h" #include "core.h"
@ -2025,6 +2025,15 @@ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
erase->size_mask = (1 << erase->size_shift) - 1; erase->size_mask = (1 << erase->size_shift) - 1;
} }
/**
* spi_nor_mask_erase_type() - mask out a SPI NOR erase type
* @erase: pointer to a structure that describes a SPI NOR erase type
*/
void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase)
{
erase->size = 0;
}
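
A zero erase size is what the core already treats as "erase type not supported", so the new helper replaces the previous idiom of calling spi_nor_set_erase_type(erase, 0, 0xFF); the sfdp.c hunks further down switch both call sites over. A tiny user-space sketch, with a stripped-down stand-in for struct spi_nor_erase_type, shows the idea:

#include <stdint.h>
#include <stdio.h>

struct erase_type {		/* reduced stand-in for struct spi_nor_erase_type */
	uint32_t size;
	uint8_t opcode;
};

static void mask_erase_type(struct erase_type *erase)
{
	erase->size = 0;	/* size == 0 marks the erase type as unsupported */
}

int main(void)
{
	struct erase_type et = { .size = 4096, .opcode = 0x20 };

	mask_erase_type(&et);
	printf("supported: %s\n", et.size ? "yes" : "no");	/* prints "no" */
	return 0;
}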
/** /**
* spi_nor_init_uniform_erase_map() - Initialize uniform erase map * spi_nor_init_uniform_erase_map() - Initialize uniform erase map
* @map: the erase map of the SPI NOR * @map: the erase map of the SPI NOR


@ -529,33 +529,30 @@ struct flash_info {
const struct spi_nor_fixups *fixups; const struct spi_nor_fixups *fixups;
}; };
#define SPI_NOR_ID_2ITEMS(_id) ((_id) >> 8) & 0xff, (_id) & 0xff
#define SPI_NOR_ID_3ITEMS(_id) ((_id) >> 16) & 0xff, SPI_NOR_ID_2ITEMS(_id)
#define SPI_NOR_ID(_jedec_id, _ext_id) \
.id = { SPI_NOR_ID_3ITEMS(_jedec_id), SPI_NOR_ID_2ITEMS(_ext_id) }, \
.id_len = !(_jedec_id) ? 0 : (3 + ((_ext_id) ? 2 : 0))
#define SPI_NOR_ID6(_jedec_id, _ext_id) \
.id = { SPI_NOR_ID_3ITEMS(_jedec_id), SPI_NOR_ID_3ITEMS(_ext_id) }, \
.id_len = 6
#define SPI_NOR_GEOMETRY(_sector_size, _n_sectors) \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = 256
/* Used when the "_ext_id" is two bytes at most */ /* Used when the "_ext_id" is two bytes at most */
#define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors) \ #define INFO(_jedec_id, _ext_id, _sector_size, _n_sectors) \
.id = { \ SPI_NOR_ID((_jedec_id), (_ext_id)), \
((_jedec_id) >> 16) & 0xff, \ SPI_NOR_GEOMETRY((_sector_size), (_n_sectors)),
((_jedec_id) >> 8) & 0xff, \
(_jedec_id) & 0xff, \
((_ext_id) >> 8) & 0xff, \
(_ext_id) & 0xff, \
}, \
.id_len = (!(_jedec_id) ? 0 : (3 + ((_ext_id) ? 2 : 0))), \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = 256, \
#define INFO6(_jedec_id, _ext_id, _sector_size, _n_sectors) \ #define INFO6(_jedec_id, _ext_id, _sector_size, _n_sectors) \
.id = { \ SPI_NOR_ID6((_jedec_id), (_ext_id)), \
((_jedec_id) >> 16) & 0xff, \ SPI_NOR_GEOMETRY((_sector_size), (_n_sectors)),
((_jedec_id) >> 8) & 0xff, \
(_jedec_id) & 0xff, \
((_ext_id) >> 16) & 0xff, \
((_ext_id) >> 8) & 0xff, \
(_ext_id) & 0xff, \
}, \
.id_len = 6, \
.sector_size = (_sector_size), \
.n_sectors = (_n_sectors), \
.page_size = 256, \
#define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_nbytes) \ #define CAT25_INFO(_sector_size, _n_sectors, _page_size, _addr_nbytes) \
.sector_size = (_sector_size), \ .sector_size = (_sector_size), \
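
The new SPI_NOR_ID_2ITEMS()/SPI_NOR_ID_3ITEMS() helpers only split a packed JEDEC id into individual bytes, most significant byte first, so INFO() and INFO6() keep producing the exact .id arrays the old open-coded initializers did. A quick stand-alone check of the byte order (0xc22019 is just an arbitrary three-byte id used for illustration):

#include <stdint.h>
#include <stdio.h>

#define SPI_NOR_ID_2ITEMS(_id) ((_id) >> 8) & 0xff, (_id) & 0xff
#define SPI_NOR_ID_3ITEMS(_id) ((_id) >> 16) & 0xff, SPI_NOR_ID_2ITEMS(_id)

int main(void)
{
	const uint8_t id[] = { SPI_NOR_ID_3ITEMS(0xc22019) };

	printf("%02x %02x %02x\n", id[0], id[1], id[2]);	/* prints "c2 20 19" */
	return 0;
}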
@ -684,6 +681,7 @@ void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode,
void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size, void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
u8 opcode); u8 opcode);
void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase);
struct spi_nor_erase_region * struct spi_nor_erase_region *
spi_nor_region_next(struct spi_nor_erase_region *region); spi_nor_region_next(struct spi_nor_erase_region *region);
void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map, void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,


@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0 // SPDX-License-Identifier: GPL-2.0
#include <linux/debugfs.h>
#include <linux/mtd/spi-nor.h> #include <linux/mtd/spi-nor.h>
#include <linux/spi/spi.h> #include <linux/spi/spi.h>
#include <linux/spi/spi-mem.h> #include <linux/spi/spi-mem.h>
#include <linux/debugfs.h>
#include "core.h" #include "core.h"


@ -18,7 +18,7 @@ is25lp256_post_bfpt_fixups(struct spi_nor *nor,
* BFPT_DWORD1_ADDRESS_BYTES_3_ONLY. * BFPT_DWORD1_ADDRESS_BYTES_3_ONLY.
* Overwrite the number of address bytes advertised by the BFPT. * Overwrite the number of address bytes advertised by the BFPT.
*/ */
if ((bfpt->dwords[BFPT_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) == if ((bfpt->dwords[SFDP_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) ==
BFPT_DWORD1_ADDRESS_BYTES_3_ONLY) BFPT_DWORD1_ADDRESS_BYTES_3_ONLY)
nor->params->addr_nbytes = 4; nor->params->addr_nbytes = 4;


@ -22,7 +22,7 @@ mx25l25635_post_bfpt_fixups(struct spi_nor *nor,
* seems that the F version advertises support for Fast Read 4-4-4 in * seems that the F version advertises support for Fast Read 4-4-4 in
* its BFPT table. * its BFPT table.
*/ */
if (bfpt->dwords[BFPT_DWORD(5)] & BFPT_DWORD5_FAST_READ_4_4_4) if (bfpt->dwords[SFDP_DWORD(5)] & BFPT_DWORD5_FAST_READ_4_4_4)
nor->flags |= SNOR_F_4B_OPCODES; nor->flags |= SNOR_F_4B_OPCODES;
return 0; return 0;


@ -5,9 +5,9 @@
*/ */
#include <linux/bitfield.h> #include <linux/bitfield.h>
#include <linux/mtd/spi-nor.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/sort.h> #include <linux/sort.h>
#include <linux/mtd/spi-nor.h>
#include "core.h" #include "core.h"
@ -242,64 +242,64 @@ static const struct sfdp_bfpt_read sfdp_bfpt_reads[] = {
/* Fast Read 1-1-2 */ /* Fast Read 1-1-2 */
{ {
SNOR_HWCAPS_READ_1_1_2, SNOR_HWCAPS_READ_1_1_2,
BFPT_DWORD(1), BIT(16), /* Supported bit */ SFDP_DWORD(1), BIT(16), /* Supported bit */
BFPT_DWORD(4), 0, /* Settings */ SFDP_DWORD(4), 0, /* Settings */
SNOR_PROTO_1_1_2, SNOR_PROTO_1_1_2,
}, },
/* Fast Read 1-2-2 */ /* Fast Read 1-2-2 */
{ {
SNOR_HWCAPS_READ_1_2_2, SNOR_HWCAPS_READ_1_2_2,
BFPT_DWORD(1), BIT(20), /* Supported bit */ SFDP_DWORD(1), BIT(20), /* Supported bit */
BFPT_DWORD(4), 16, /* Settings */ SFDP_DWORD(4), 16, /* Settings */
SNOR_PROTO_1_2_2, SNOR_PROTO_1_2_2,
}, },
/* Fast Read 2-2-2 */ /* Fast Read 2-2-2 */
{ {
SNOR_HWCAPS_READ_2_2_2, SNOR_HWCAPS_READ_2_2_2,
BFPT_DWORD(5), BIT(0), /* Supported bit */ SFDP_DWORD(5), BIT(0), /* Supported bit */
BFPT_DWORD(6), 16, /* Settings */ SFDP_DWORD(6), 16, /* Settings */
SNOR_PROTO_2_2_2, SNOR_PROTO_2_2_2,
}, },
/* Fast Read 1-1-4 */ /* Fast Read 1-1-4 */
{ {
SNOR_HWCAPS_READ_1_1_4, SNOR_HWCAPS_READ_1_1_4,
BFPT_DWORD(1), BIT(22), /* Supported bit */ SFDP_DWORD(1), BIT(22), /* Supported bit */
BFPT_DWORD(3), 16, /* Settings */ SFDP_DWORD(3), 16, /* Settings */
SNOR_PROTO_1_1_4, SNOR_PROTO_1_1_4,
}, },
/* Fast Read 1-4-4 */ /* Fast Read 1-4-4 */
{ {
SNOR_HWCAPS_READ_1_4_4, SNOR_HWCAPS_READ_1_4_4,
BFPT_DWORD(1), BIT(21), /* Supported bit */ SFDP_DWORD(1), BIT(21), /* Supported bit */
BFPT_DWORD(3), 0, /* Settings */ SFDP_DWORD(3), 0, /* Settings */
SNOR_PROTO_1_4_4, SNOR_PROTO_1_4_4,
}, },
/* Fast Read 4-4-4 */ /* Fast Read 4-4-4 */
{ {
SNOR_HWCAPS_READ_4_4_4, SNOR_HWCAPS_READ_4_4_4,
BFPT_DWORD(5), BIT(4), /* Supported bit */ SFDP_DWORD(5), BIT(4), /* Supported bit */
BFPT_DWORD(7), 16, /* Settings */ SFDP_DWORD(7), 16, /* Settings */
SNOR_PROTO_4_4_4, SNOR_PROTO_4_4_4,
}, },
}; };
static const struct sfdp_bfpt_erase sfdp_bfpt_erases[] = { static const struct sfdp_bfpt_erase sfdp_bfpt_erases[] = {
/* Erase Type 1 in DWORD8 bits[15:0] */ /* Erase Type 1 in DWORD8 bits[15:0] */
{BFPT_DWORD(8), 0}, {SFDP_DWORD(8), 0},
/* Erase Type 2 in DWORD8 bits[31:16] */ /* Erase Type 2 in DWORD8 bits[31:16] */
{BFPT_DWORD(8), 16}, {SFDP_DWORD(8), 16},
/* Erase Type 3 in DWORD9 bits[15:0] */ /* Erase Type 3 in DWORD9 bits[15:0] */
{BFPT_DWORD(9), 0}, {SFDP_DWORD(9), 0},
/* Erase Type 4 in DWORD9 bits[31:16] */ /* Erase Type 4 in DWORD9 bits[31:16] */
{BFPT_DWORD(9), 16}, {SFDP_DWORD(9), 16},
}; };
/** /**
@ -458,7 +458,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
le32_to_cpu_array(bfpt.dwords, BFPT_DWORD_MAX); le32_to_cpu_array(bfpt.dwords, BFPT_DWORD_MAX);
/* Number of address bytes. */ /* Number of address bytes. */
switch (bfpt.dwords[BFPT_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) { switch (bfpt.dwords[SFDP_DWORD(1)] & BFPT_DWORD1_ADDRESS_BYTES_MASK) {
case BFPT_DWORD1_ADDRESS_BYTES_3_ONLY: case BFPT_DWORD1_ADDRESS_BYTES_3_ONLY:
case BFPT_DWORD1_ADDRESS_BYTES_3_OR_4: case BFPT_DWORD1_ADDRESS_BYTES_3_OR_4:
params->addr_nbytes = 3; params->addr_nbytes = 3;
@ -475,7 +475,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
} }
/* Flash Memory Density (in bits). */ /* Flash Memory Density (in bits). */
val = bfpt.dwords[BFPT_DWORD(2)]; val = bfpt.dwords[SFDP_DWORD(2)];
if (val & BIT(31)) { if (val & BIT(31)) {
val &= ~BIT(31); val &= ~BIT(31);
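
DWORD 2 of the BFPT gives the density in bits, and the hunk only shows the bit-31 branch. Per JESD216 the two encodings are: bit 31 clear means the field holds the density minus one, bit 31 set means the low bits hold an exponent N for a density of 2^N bits; the core then shifts right by three to get bytes. A small worked decode, under that interpretation, is:

#include <stdint.h>
#include <stdio.h>

/* Decode BFPT DWORD 2 into a size in bytes (JESD216 density field). */
static uint64_t bfpt_density_to_bytes(uint32_t dword2)
{
	uint64_t bits;

	if (dword2 & (1u << 31))
		bits = 1ULL << (dword2 & ~(1u << 31));	/* 2^N bits */
	else
		bits = (uint64_t)dword2 + 1;		/* density - 1 is stored */

	return bits >> 3;
}

int main(void)
{
	/* 0x0FFFFFFF -> 256 Mbit -> 33554432 bytes; 0x80000020 -> 2^32 bit -> 536870912 bytes */
	printf("%llu\n", (unsigned long long)bfpt_density_to_bytes(0x0FFFFFFF));
	printf("%llu\n", (unsigned long long)bfpt_density_to_bytes(0x80000020));
	return 0;
}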
@ -555,13 +555,13 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt); return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt);
/* Page size: this field specifies 'N' so the page size = 2^N bytes. */ /* Page size: this field specifies 'N' so the page size = 2^N bytes. */
val = bfpt.dwords[BFPT_DWORD(11)]; val = bfpt.dwords[SFDP_DWORD(11)];
val &= BFPT_DWORD11_PAGE_SIZE_MASK; val &= BFPT_DWORD11_PAGE_SIZE_MASK;
val >>= BFPT_DWORD11_PAGE_SIZE_SHIFT; val >>= BFPT_DWORD11_PAGE_SIZE_SHIFT;
params->page_size = 1U << val; params->page_size = 1U << val;
/* Quad Enable Requirements. */ /* Quad Enable Requirements. */
switch (bfpt.dwords[BFPT_DWORD(15)] & BFPT_DWORD15_QER_MASK) { switch (bfpt.dwords[SFDP_DWORD(15)] & BFPT_DWORD15_QER_MASK) {
case BFPT_DWORD15_QER_NONE: case BFPT_DWORD15_QER_NONE:
params->quad_enable = NULL; params->quad_enable = NULL;
break; break;
@ -608,7 +608,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
} }
/* Soft Reset support. */ /* Soft Reset support. */
if (bfpt.dwords[BFPT_DWORD(16)] & BFPT_DWORD16_SWRST_EN_RST) if (bfpt.dwords[SFDP_DWORD(16)] & BFPT_DWORD16_SWRST_EN_RST)
nor->flags |= SNOR_F_SOFT_RESET; nor->flags |= SNOR_F_SOFT_RESET;
/* Stop here if not JESD216 rev C or later. */ /* Stop here if not JESD216 rev C or later. */
@ -616,7 +616,7 @@ static int spi_nor_parse_bfpt(struct spi_nor *nor,
return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt); return spi_nor_post_bfpt_fixups(nor, bfpt_header, &bfpt);
/* 8D-8D-8D command extension. */ /* 8D-8D-8D command extension. */
switch (bfpt.dwords[BFPT_DWORD(18)] & BFPT_DWORD18_CMD_EXT_MASK) { switch (bfpt.dwords[SFDP_DWORD(18)] & BFPT_DWORD18_CMD_EXT_MASK) {
case BFPT_DWORD18_CMD_EXT_REP: case BFPT_DWORD18_CMD_EXT_REP:
nor->cmd_ext_type = SPI_NOR_EXT_REPEAT; nor->cmd_ext_type = SPI_NOR_EXT_REPEAT;
break; break;
@ -875,7 +875,7 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
*/ */
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
if (!(regions_erase_type & BIT(erase[i].idx))) if (!(regions_erase_type & BIT(erase[i].idx)))
spi_nor_set_erase_type(&erase[i], 0, 0xFF); spi_nor_mask_erase_type(&erase[i]);
return 0; return 0;
} }
@ -1004,7 +1004,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
discard_hwcaps |= read->hwcaps; discard_hwcaps |= read->hwcaps;
if ((params->hwcaps.mask & read->hwcaps) && if ((params->hwcaps.mask & read->hwcaps) &&
(dwords[0] & read->supported_bit)) (dwords[SFDP_DWORD(1)] & read->supported_bit))
read_hwcaps |= read->hwcaps; read_hwcaps |= read->hwcaps;
} }
@ -1023,7 +1023,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
* authority for specifying Page Program support. * authority for specifying Page Program support.
*/ */
discard_hwcaps |= program->hwcaps; discard_hwcaps |= program->hwcaps;
if (dwords[0] & program->supported_bit) if (dwords[SFDP_DWORD(1)] & program->supported_bit)
pp_hwcaps |= program->hwcaps; pp_hwcaps |= program->hwcaps;
} }
@ -1035,7 +1035,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) { for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) {
const struct sfdp_4bait *erase = &erases[i]; const struct sfdp_4bait *erase = &erases[i];
if (dwords[0] & erase->supported_bit) if (dwords[SFDP_DWORD(1)] & erase->supported_bit)
erase_mask |= BIT(i); erase_mask |= BIT(i);
} }
@ -1086,10 +1086,10 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) { for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) {
if (erase_mask & BIT(i)) if (erase_mask & BIT(i))
erase_type[i].opcode = (dwords[1] >> erase_type[i].opcode = (dwords[SFDP_DWORD(2)] >>
erase_type[i].idx * 8) & 0xFF; erase_type[i].idx * 8) & 0xFF;
else else
spi_nor_set_erase_type(&erase_type[i], 0u, 0xFF); spi_nor_mask_erase_type(&erase_type[i]);
} }
/* /*
@ -1145,15 +1145,15 @@ static int spi_nor_parse_profile1(struct spi_nor *nor,
le32_to_cpu_array(dwords, profile1_header->length); le32_to_cpu_array(dwords, profile1_header->length);
/* Get 8D-8D-8D fast read opcode and dummy cycles. */ /* Get 8D-8D-8D fast read opcode and dummy cycles. */
opcode = FIELD_GET(PROFILE1_DWORD1_RD_FAST_CMD, dwords[0]); opcode = FIELD_GET(PROFILE1_DWORD1_RD_FAST_CMD, dwords[SFDP_DWORD(1)]);
/* Set the Read Status Register dummy cycles and dummy address bytes. */ /* Set the Read Status Register dummy cycles and dummy address bytes. */
if (dwords[0] & PROFILE1_DWORD1_RDSR_DUMMY) if (dwords[SFDP_DWORD(1)] & PROFILE1_DWORD1_RDSR_DUMMY)
nor->params->rdsr_dummy = 8; nor->params->rdsr_dummy = 8;
else else
nor->params->rdsr_dummy = 4; nor->params->rdsr_dummy = 4;
if (dwords[0] & PROFILE1_DWORD1_RDSR_ADDR_BYTES) if (dwords[SFDP_DWORD(1)] & PROFILE1_DWORD1_RDSR_ADDR_BYTES)
nor->params->rdsr_addr_nbytes = 4; nor->params->rdsr_addr_nbytes = 4;
else else
nor->params->rdsr_addr_nbytes = 0; nor->params->rdsr_addr_nbytes = 0;
@ -1167,13 +1167,16 @@ static int spi_nor_parse_profile1(struct spi_nor *nor,
* Default to PROFILE1_DUMMY_DEFAULT if we don't find anything, and let * Default to PROFILE1_DUMMY_DEFAULT if we don't find anything, and let
* flashes set the correct value if needed in their fixup hooks. * flashes set the correct value if needed in their fixup hooks.
*/ */
dummy = FIELD_GET(PROFILE1_DWORD4_DUMMY_200MHZ, dwords[3]); dummy = FIELD_GET(PROFILE1_DWORD4_DUMMY_200MHZ, dwords[SFDP_DWORD(4)]);
if (!dummy) if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_166MHZ, dwords[4]); dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_166MHZ,
dwords[SFDP_DWORD(5)]);
if (!dummy) if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_133MHZ, dwords[4]); dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_133MHZ,
dwords[SFDP_DWORD(5)]);
if (!dummy) if (!dummy)
dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_100MHZ, dwords[4]); dummy = FIELD_GET(PROFILE1_DWORD5_DUMMY_100MHZ,
dwords[SFDP_DWORD(5)]);
if (!dummy) if (!dummy)
dev_dbg(nor->dev, dev_dbg(nor->dev,
"Can't find dummy cycles from Profile 1.0 table\n"); "Can't find dummy cycles from Profile 1.0 table\n");
@ -1228,7 +1231,8 @@ static int spi_nor_parse_sccr(struct spi_nor *nor,
le32_to_cpu_array(dwords, sccr_header->length); le32_to_cpu_array(dwords, sccr_header->length);
if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[22])) if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE,
dwords[SFDP_DWORD(22)]))
nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE; nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;
out: out:


@ -13,13 +13,12 @@
#define SFDP_JESD216A_MINOR 5 #define SFDP_JESD216A_MINOR 5
#define SFDP_JESD216B_MINOR 6 #define SFDP_JESD216B_MINOR 6
/* SFDP DWORDS are indexed from 1 but C arrays are indexed from 0. */
#define SFDP_DWORD(i) ((i) - 1)
/* Basic Flash Parameter Table */ /* Basic Flash Parameter Table */
/* /* JESD216 rev D defines a Basic Flash Parameter Table of 20 DWORDs. */
* JESD216 rev D defines a Basic Flash Parameter Table of 20 DWORDs.
* They are indexed from 1 but C arrays are indexed from 0.
*/
#define BFPT_DWORD(i) ((i) - 1)
#define BFPT_DWORD_MAX 20 #define BFPT_DWORD_MAX 20
struct sfdp_bfpt { struct sfdp_bfpt {
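
Renaming BFPT_DWORD() to the more general SFDP_DWORD() does not change the arithmetic: SFDP tables number their DWORDs from 1, so the macro simply subtracts one to get the C array index, and the sfdp.c hunks above now use it for the 4BAIT, Profile 1.0 and SCCR tables as well as the BFPT. The off-by-one is easy to see in isolation:

#include <assert.h>
#include <stdint.h>

#define SFDP_DWORD(i) ((i) - 1)

int main(void)
{
	uint32_t dwords[20] = { 0 };

	dwords[SFDP_DWORD(1)] = 0xdeadbeef;	/* "DWORD 1" of an SFDP table... */
	assert(dwords[0] == 0xdeadbeefu);	/* ...is element 0 of the C array */
	return 0;
}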


@ -15,14 +15,19 @@
#define SPINOR_OP_RD_ANY_REG 0x65 /* Read any register */ #define SPINOR_OP_RD_ANY_REG 0x65 /* Read any register */
#define SPINOR_OP_WR_ANY_REG 0x71 /* Write any register */ #define SPINOR_OP_WR_ANY_REG 0x71 /* Write any register */
#define SPINOR_REG_CYPRESS_CFR1V 0x00800002 #define SPINOR_REG_CYPRESS_CFR1V 0x00800002
#define SPINOR_REG_CYPRESS_CFR1V_QUAD_EN BIT(1) /* Quad Enable */ #define SPINOR_REG_CYPRESS_CFR1_QUAD_EN BIT(1) /* Quad Enable */
#define SPINOR_REG_CYPRESS_CFR2V 0x00800003 #define SPINOR_REG_CYPRESS_CFR2V 0x00800003
#define SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24 0xb #define SPINOR_REG_CYPRESS_CFR2_MEMLAT_11_24 0xb
#define SPINOR_REG_CYPRESS_CFR3V 0x00800004 #define SPINOR_REG_CYPRESS_CFR3V 0x00800004
#define SPINOR_REG_CYPRESS_CFR3V_PGSZ BIT(4) /* Page size. */ #define SPINOR_REG_CYPRESS_CFR3_PGSZ BIT(4) /* Page size. */
#define SPINOR_REG_CYPRESS_CFR5V 0x00800006 #define SPINOR_REG_CYPRESS_CFR5V 0x00800006
#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN 0x3 #define SPINOR_REG_CYPRESS_CFR5_BIT6 BIT(6)
#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS 0 #define SPINOR_REG_CYPRESS_CFR5_DDR BIT(1)
#define SPINOR_REG_CYPRESS_CFR5_OPI BIT(0)
#define SPINOR_REG_CYPRESS_CFR5_OCT_DTR_EN \
(SPINOR_REG_CYPRESS_CFR5_BIT6 | SPINOR_REG_CYPRESS_CFR5_DDR | \
SPINOR_REG_CYPRESS_CFR5_OPI)
#define SPINOR_REG_CYPRESS_CFR5_OCT_DTR_DS SPINOR_REG_CYPRESS_CFR5_BIT6
#define SPINOR_OP_CYPRESS_RD_FAST 0xee #define SPINOR_OP_CYPRESS_RD_FAST 0xee
/* Cypress SPI NOR flash operations. */ /* Cypress SPI NOR flash operations. */
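
Breaking SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN/_DS into named bits also changes the values written to CFR5: bit 6 is now kept set in both the enable and the disable word, while only the DDR and OPI bits differ between the two (the hunk does not say why bit 6 must stay set, presumably a reserved or fixed bit in this register). The resulting values, under the BIT() definitions shown above, are easy to verify:

#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1u << (n))

#define CFR5_BIT6		BIT(6)
#define CFR5_DDR		BIT(1)
#define CFR5_OPI		BIT(0)
#define CFR5_OCT_DTR_EN		(CFR5_BIT6 | CFR5_DDR | CFR5_OPI)
#define CFR5_OCT_DTR_DS		CFR5_BIT6

int main(void)
{
	assert(CFR5_OCT_DTR_EN == 0x43);	/* was the bare literal 0x3 */
	assert(CFR5_OCT_DTR_DS == 0x40);	/* was the bare literal 0 */
	return 0;
}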
@ -52,7 +57,7 @@ static int cypress_nor_octal_dtr_en(struct spi_nor *nor)
u8 addr_mode_nbytes = nor->params->addr_mode_nbytes; u8 addr_mode_nbytes = nor->params->addr_mode_nbytes;
/* Use 24 dummy cycles for memory array reads. */ /* Use 24 dummy cycles for memory array reads. */
*buf = SPINOR_REG_CYPRESS_CFR2V_MEMLAT_11_24; *buf = SPINOR_REG_CYPRESS_CFR2_MEMLAT_11_24;
op = (struct spi_mem_op) op = (struct spi_mem_op)
CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes,
SPINOR_REG_CYPRESS_CFR2V, 1, buf); SPINOR_REG_CYPRESS_CFR2V, 1, buf);
@ -64,7 +69,7 @@ static int cypress_nor_octal_dtr_en(struct spi_nor *nor)
nor->read_dummy = 24; nor->read_dummy = 24;
/* Set the octal and DTR enable bits. */ /* Set the octal and DTR enable bits. */
buf[0] = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN; buf[0] = SPINOR_REG_CYPRESS_CFR5_OCT_DTR_EN;
op = (struct spi_mem_op) op = (struct spi_mem_op)
CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes,
SPINOR_REG_CYPRESS_CFR5V, 1, buf); SPINOR_REG_CYPRESS_CFR5V, 1, buf);
@ -98,7 +103,7 @@ static int cypress_nor_octal_dtr_dis(struct spi_nor *nor)
* in 8D-8D-8D mode. Since there is no register at the next location, * in 8D-8D-8D mode. Since there is no register at the next location,
* just initialize the value to 0 and let the transaction go on. * just initialize the value to 0 and let the transaction go on.
*/ */
buf[0] = SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS; buf[0] = SPINOR_REG_CYPRESS_CFR5_OCT_DTR_DS;
buf[1] = 0; buf[1] = 0;
op = (struct spi_mem_op) op = (struct spi_mem_op)
CYPRESS_NOR_WR_ANY_REG_OP(nor->addr_nbytes, CYPRESS_NOR_WR_ANY_REG_OP(nor->addr_nbytes,
@ -150,11 +155,11 @@ static int cypress_nor_quad_enable_volatile(struct spi_nor *nor)
if (ret) if (ret)
return ret; return ret;
if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR1V_QUAD_EN) if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR1_QUAD_EN)
return 0; return 0;
/* Update the Quad Enable bit. */ /* Update the Quad Enable bit. */
nor->bouncebuf[0] |= SPINOR_REG_CYPRESS_CFR1V_QUAD_EN; nor->bouncebuf[0] |= SPINOR_REG_CYPRESS_CFR1_QUAD_EN;
op = (struct spi_mem_op) op = (struct spi_mem_op)
CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes, CYPRESS_NOR_WR_ANY_REG_OP(addr_mode_nbytes,
SPINOR_REG_CYPRESS_CFR1V, 1, SPINOR_REG_CYPRESS_CFR1V, 1,
@ -205,7 +210,7 @@ static int cypress_nor_set_page_size(struct spi_nor *nor)
if (ret) if (ret)
return ret; return ret;
if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3V_PGSZ) if (nor->bouncebuf[0] & SPINOR_REG_CYPRESS_CFR3_PGSZ)
nor->params->page_size = 512; nor->params->page_size = 512;
else else
nor->params->page_size = 256; nor->params->page_size = 256;


@ -953,7 +953,7 @@ int nvme_auth_init_ctrl(struct nvme_ctrl *ctrl)
goto err_free_dhchap_secret; goto err_free_dhchap_secret;
if (!ctrl->opts->dhchap_secret && !ctrl->opts->dhchap_ctrl_secret) if (!ctrl->opts->dhchap_secret && !ctrl->opts->dhchap_ctrl_secret)
return ret; return 0;
ctrl->dhchap_ctxs = kvcalloc(ctrl_max_dhchaps(ctrl), ctrl->dhchap_ctxs = kvcalloc(ctrl_max_dhchaps(ctrl),
sizeof(*chap), GFP_KERNEL); sizeof(*chap), GFP_KERNEL);


@ -1074,6 +1074,18 @@ static u32 nvme_known_admin_effects(u8 opcode)
return 0; return 0;
} }
static u32 nvme_known_nvm_effects(u8 opcode)
{
switch (opcode) {
case nvme_cmd_write:
case nvme_cmd_write_zeroes:
case nvme_cmd_write_uncor:
return NVME_CMD_EFFECTS_LBCC;
default:
return 0;
}
}
u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode) u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)
{ {
u32 effects = 0; u32 effects = 0;
@ -1081,16 +1093,24 @@ u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)
if (ns) { if (ns) {
if (ns->head->effects) if (ns->head->effects)
effects = le32_to_cpu(ns->head->effects->iocs[opcode]); effects = le32_to_cpu(ns->head->effects->iocs[opcode]);
if (ns->head->ids.csi == NVME_CAP_CSS_NVM)
effects |= nvme_known_nvm_effects(opcode);
if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC)) if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))
dev_warn_once(ctrl->device, dev_warn_once(ctrl->device,
"IO command:%02x has unhandled effects:%08x\n", "IO command:%02x has unusual effects:%08x\n",
opcode, effects); opcode, effects);
return 0;
}
if (ctrl->effects) /*
effects = le32_to_cpu(ctrl->effects->acs[opcode]); * NVME_CMD_EFFECTS_CSE_MASK causes a freeze all I/O queues,
effects |= nvme_known_admin_effects(opcode); * which would deadlock when done on an I/O command. Note that
* We already warn about an unusual effect above.
*/
effects &= ~NVME_CMD_EFFECTS_CSE_MASK;
} else {
if (ctrl->effects)
effects = le32_to_cpu(ctrl->effects->acs[opcode]);
effects |= nvme_known_admin_effects(opcode);
}
return effects; return effects;
} }
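
The new nvme_known_nvm_effects() helper gives write-type I/O opcodes an implicit "logical block content change" effect even when the controller's log does not report one, and the CSE bits are masked off for I/O commands because, per the comment above, freezing all queues from an I/O command would deadlock. A compact sketch of the opcode-to-effects mapping, using illustrative opcode and flag values rather than the real <linux/nvme.h> definitions:

#include <stdint.h>
#include <stdio.h>

/* Illustrative values; linux/nvme.h defines the real opcodes and flags. */
enum { CMD_READ = 0x02, CMD_WRITE = 0x01, CMD_WRITE_UNCOR = 0x04,
       CMD_WRITE_ZEROES = 0x08 };
#define EFFECT_CSUPP 0x1u   /* command supported */
#define EFFECT_LBCC  0x2u   /* logical block content change */

static uint32_t known_nvm_effects(uint8_t opcode)
{
    switch (opcode) {
    case CMD_WRITE:
    case CMD_WRITE_ZEROES:
    case CMD_WRITE_UNCOR:
        return EFFECT_LBCC;   /* these always rewrite user data */
    default:
        return 0;
    }
}

int main(void)
{
    /* Effects reported by the controller's log page, if any... */
    uint32_t effects = EFFECT_CSUPP;
    /* ...are OR-ed with what the host already knows about the opcode. */
    effects |= known_nvm_effects(CMD_WRITE);
    printf("write effects: %#x\n", effects); /* 0x3 */
    return 0;
}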
@ -4926,7 +4946,7 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
memset(set, 0, sizeof(*set)); memset(set, 0, sizeof(*set));
set->ops = ops; set->ops = ops;
set->queue_depth = ctrl->sqsize + 1; set->queue_depth = min_t(unsigned, ctrl->sqsize, BLK_MQ_MAX_DEPTH - 1);
/* /*
* Some Apple controllers requires tags to be unique across admin and * Some Apple controllers requires tags to be unique across admin and
* the (only) I/O queue, so reserve the first 32 tags of the I/O queue. * the (only) I/O queue, so reserve the first 32 tags of the I/O queue.


@ -11,6 +11,8 @@
static bool nvme_cmd_allowed(struct nvme_ns *ns, struct nvme_command *c, static bool nvme_cmd_allowed(struct nvme_ns *ns, struct nvme_command *c,
fmode_t mode) fmode_t mode)
{ {
u32 effects;
if (capable(CAP_SYS_ADMIN)) if (capable(CAP_SYS_ADMIN))
return true; return true;
@ -43,11 +45,29 @@ static bool nvme_cmd_allowed(struct nvme_ns *ns, struct nvme_command *c,
} }
/* /*
* Only allow I/O commands that transfer data to the controller if the * Check if the controller provides a Commands Supported and Effects log
* special file is open for writing, but always allow I/O commands that * and marks this command as supported. If not reject unprivileged
* transfer data from the controller. * passthrough.
*/ */
if (nvme_is_write(c)) effects = nvme_command_effects(ns->ctrl, ns, c->common.opcode);
if (!(effects & NVME_CMD_EFFECTS_CSUPP))
return false;
/*
* Don't allow passthrough for command that have intrusive (or unknown)
* effects.
*/
if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |
NVME_CMD_EFFECTS_UUID_SEL |
NVME_CMD_EFFECTS_SCOPE_MASK))
return false;
/*
* Only allow I/O commands that transfer data to the controller or that
* change the logical block contents if the file descriptor is open for
* writing.
*/
if (nvme_is_write(c) || (effects & NVME_CMD_EFFECTS_LBCC))
return mode & FMODE_WRITE; return mode & FMODE_WRITE;
return true; return true;
} }
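
The rewritten check gates unprivileged passthrough on the effects mask: the command must be marked supported, must not carry intrusive or unknown effects, and anything that writes to the controller or changes logical block contents requires the file to be open for writing. A standalone sketch of that policy with assumed flag values (the real ones live in linux/nvme.h):

#include <stdbool.h>
#include <stdint.h>

/* Assumed flag layout for illustration only. */
#define EFF_CSUPP      0x00000001u
#define EFF_LBCC       0x00000002u
#define EFF_UUID_SEL   0x00080000u
#define EFF_SCOPE_MASK 0xfff00000u

static bool cmd_allowed(uint32_t effects, bool is_write, bool opened_for_write)
{
    /* Reject commands the controller does not advertise as supported. */
    if (!(effects & EFF_CSUPP))
        return false;

    /* Reject commands with intrusive or unknown effects. */
    if (effects & ~(EFF_CSUPP | EFF_LBCC | EFF_UUID_SEL | EFF_SCOPE_MASK))
        return false;

    /* Data-out or content-changing commands need a writable fd. */
    if (is_write || (effects & EFF_LBCC))
        return opened_for_write;

    return true;
}

int main(void)
{
    return cmd_allowed(EFF_CSUPP | EFF_LBCC, false, true) ? 0 : 1;
}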


@ -893,7 +893,7 @@ static inline void nvme_trace_bio_complete(struct request *req)
{ {
struct nvme_ns *ns = req->q->queuedata; struct nvme_ns *ns = req->q->queuedata;
if (req->cmd_flags & REQ_NVME_MPATH) if ((req->cmd_flags & REQ_NVME_MPATH) && req->bio)
trace_block_bio_complete(ns->head->disk->queue, req->bio); trace_block_bio_complete(ns->head->disk->queue, req->bio);
} }


@ -36,7 +36,7 @@
#define SQ_SIZE(q) ((q)->q_depth << (q)->sqes) #define SQ_SIZE(q) ((q)->q_depth << (q)->sqes)
#define CQ_SIZE(q) ((q)->q_depth * sizeof(struct nvme_completion)) #define CQ_SIZE(q) ((q)->q_depth * sizeof(struct nvme_completion))
#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct nvme_sgl_desc)) #define SGES_PER_PAGE (NVME_CTRL_PAGE_SIZE / sizeof(struct nvme_sgl_desc))
/* /*
* These can be higher, but we need to ensure that any command doesn't * These can be higher, but we need to ensure that any command doesn't
@ -144,9 +144,9 @@ struct nvme_dev {
mempool_t *iod_mempool; mempool_t *iod_mempool;
/* shadow doorbell buffer support: */ /* shadow doorbell buffer support: */
u32 *dbbuf_dbs; __le32 *dbbuf_dbs;
dma_addr_t dbbuf_dbs_dma_addr; dma_addr_t dbbuf_dbs_dma_addr;
u32 *dbbuf_eis; __le32 *dbbuf_eis;
dma_addr_t dbbuf_eis_dma_addr; dma_addr_t dbbuf_eis_dma_addr;
/* host memory buffer support: */ /* host memory buffer support: */
@ -208,10 +208,10 @@ struct nvme_queue {
#define NVMEQ_SQ_CMB 1 #define NVMEQ_SQ_CMB 1
#define NVMEQ_DELETE_ERROR 2 #define NVMEQ_DELETE_ERROR 2
#define NVMEQ_POLLED 3 #define NVMEQ_POLLED 3
u32 *dbbuf_sq_db; __le32 *dbbuf_sq_db;
u32 *dbbuf_cq_db; __le32 *dbbuf_cq_db;
u32 *dbbuf_sq_ei; __le32 *dbbuf_sq_ei;
u32 *dbbuf_cq_ei; __le32 *dbbuf_cq_ei;
struct completion delete_done; struct completion delete_done;
}; };
@ -343,11 +343,11 @@ static inline int nvme_dbbuf_need_event(u16 event_idx, u16 new_idx, u16 old)
} }
/* Update dbbuf and return true if an MMIO is required */ /* Update dbbuf and return true if an MMIO is required */
static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db, static bool nvme_dbbuf_update_and_check_event(u16 value, __le32 *dbbuf_db,
volatile u32 *dbbuf_ei) volatile __le32 *dbbuf_ei)
{ {
if (dbbuf_db) { if (dbbuf_db) {
u16 old_value; u16 old_value, event_idx;
/* /*
* Ensure that the queue is written before updating * Ensure that the queue is written before updating
@ -355,8 +355,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
*/ */
wmb(); wmb();
old_value = *dbbuf_db; old_value = le32_to_cpu(*dbbuf_db);
*dbbuf_db = value; *dbbuf_db = cpu_to_le32(value);
/* /*
* Ensure that the doorbell is updated before reading the event * Ensure that the doorbell is updated before reading the event
@ -366,7 +366,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
*/ */
mb(); mb();
if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value)) event_idx = le32_to_cpu(*dbbuf_ei);
if (!nvme_dbbuf_need_event(event_idx, value, old_value))
return false; return false;
} }
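
The shadow doorbell buffer is shared with the device, so the diff retypes it as __le32 and converts on every access (a no-op on little-endian hosts, a byte swap on big-endian ones); the update-then-check-event sequence then decides whether an MMIO doorbell write is still needed. A host-side sketch of that sequence, assuming a little-endian host (identity byte-swap helpers) and using a plain C11 fence where the driver issues mb(); the wraparound-safe event-index test is the usual formulation and is assumed here for illustration:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Little-endian host assumed, so the conversions are identities here. */
static uint32_t cpu_to_le32(uint32_t v) { return v; }
static uint32_t le32_to_cpu(uint32_t v) { return v; }

/* Wraparound-safe "does the device still want an MMIO kick?" test. */
static bool need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
{
    return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}

static bool update_and_check_event(uint16_t value, uint32_t *db, uint32_t *ei)
{
    uint16_t old_value, event_idx;

    old_value = (uint16_t)le32_to_cpu(*db);
    *db = cpu_to_le32(value);
    /* The driver uses mb() so the doorbell store is visible before the
     * event index is read back. */
    atomic_thread_fence(memory_order_seq_cst);
    event_idx = (uint16_t)le32_to_cpu(*ei);
    return need_event(event_idx, value, old_value);
}

int main(void)
{
    uint32_t db = 0, ei = 0;
    printf("kick needed: %d\n", update_and_check_event(1, &db, &ei));
    return 0;
}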
@ -380,9 +381,9 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
*/ */
static int nvme_pci_npages_prp(void) static int nvme_pci_npages_prp(void)
{ {
unsigned nprps = DIV_ROUND_UP(NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE, unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;
NVME_CTRL_PAGE_SIZE); unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);
return DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8); return DIV_ROUND_UP(8 * nprps, NVME_CTRL_PAGE_SIZE - 8);
} }
/* /*
@ -392,7 +393,7 @@ static int nvme_pci_npages_prp(void)
static int nvme_pci_npages_sgl(void) static int nvme_pci_npages_sgl(void)
{ {
return DIV_ROUND_UP(NVME_MAX_SEGS * sizeof(struct nvme_sgl_desc), return DIV_ROUND_UP(NVME_MAX_SEGS * sizeof(struct nvme_sgl_desc),
PAGE_SIZE); NVME_CTRL_PAGE_SIZE);
} }
static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
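
nvme_pci_npages_prp() now sizes the PRP list pool in units of the controller page size rather than the host PAGE_SIZE, and covers the worst case of the maximum transfer starting at an unaligned offset; each list page holds 8-byte entries with the last slot chaining to the next page, hence the "- 8". A sketch of that arithmetic with assumed limits (the driver's NVME_MAX_KB_SZ and NVME_CTRL_PAGE_SIZE are the authoritative values):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Assumed values for illustration; the driver defines the real ones. */
#define CTRL_PAGE_SIZE 4096u
#define MAX_KB_SZ      4096u   /* max transfer size in KiB, assumed */

static unsigned npages_prp(void)
{
    unsigned max_bytes = MAX_KB_SZ * 1024 + CTRL_PAGE_SIZE;
    unsigned nprps = DIV_ROUND_UP(max_bytes, CTRL_PAGE_SIZE);

    /* Each PRP entry is 8 bytes; the last slot of a list page chains
     * to the next list page, so only CTRL_PAGE_SIZE - 8 bytes are usable. */
    return DIV_ROUND_UP(8 * nprps, CTRL_PAGE_SIZE - 8);
}

int main(void)
{
    printf("PRP list pages needed per command: %u\n", npages_prp());
    return 0;
}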
@ -708,7 +709,7 @@ static void nvme_pci_sgl_set_seg(struct nvme_sgl_desc *sge,
sge->length = cpu_to_le32(entries * sizeof(*sge)); sge->length = cpu_to_le32(entries * sizeof(*sge));
sge->type = NVME_SGL_FMT_LAST_SEG_DESC << 4; sge->type = NVME_SGL_FMT_LAST_SEG_DESC << 4;
} else { } else {
sge->length = cpu_to_le32(PAGE_SIZE); sge->length = cpu_to_le32(NVME_CTRL_PAGE_SIZE);
sge->type = NVME_SGL_FMT_SEG_DESC << 4; sge->type = NVME_SGL_FMT_SEG_DESC << 4;
} }
} }
@ -2332,10 +2333,12 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
if (dev->cmb_use_sqes) { if (dev->cmb_use_sqes) {
result = nvme_cmb_qdepth(dev, nr_io_queues, result = nvme_cmb_qdepth(dev, nr_io_queues,
sizeof(struct nvme_command)); sizeof(struct nvme_command));
if (result > 0) if (result > 0) {
dev->q_depth = result; dev->q_depth = result;
else dev->ctrl.sqsize = result - 1;
} else {
dev->cmb_use_sqes = false; dev->cmb_use_sqes = false;
}
} }
do { do {
@ -2536,7 +2539,6 @@ static int nvme_pci_enable(struct nvme_dev *dev)
dev->q_depth = min_t(u32, NVME_CAP_MQES(dev->ctrl.cap) + 1, dev->q_depth = min_t(u32, NVME_CAP_MQES(dev->ctrl.cap) + 1,
io_queue_depth); io_queue_depth);
dev->ctrl.sqsize = dev->q_depth - 1; /* 0's based queue depth */
dev->db_stride = 1 << NVME_CAP_STRIDE(dev->ctrl.cap); dev->db_stride = 1 << NVME_CAP_STRIDE(dev->ctrl.cap);
dev->dbs = dev->bar + 4096; dev->dbs = dev->bar + 4096;
@ -2577,7 +2579,7 @@ static int nvme_pci_enable(struct nvme_dev *dev)
dev_warn(dev->ctrl.device, "IO queue depth clamped to %d\n", dev_warn(dev->ctrl.device, "IO queue depth clamped to %d\n",
dev->q_depth); dev->q_depth);
} }
dev->ctrl.sqsize = dev->q_depth - 1; /* 0's based queue depth */
nvme_map_cmb(dev); nvme_map_cmb(dev);
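
sqsize is a 0's-based count, so it has to be recomputed from q_depth after every place that can shrink q_depth (the CMB queue-depth calculation and the depth-clamping quirk above), and the I/O tag set is then sized from sqsize bounded by the block layer maximum. A small sketch of that relationship, with an assumed stand-in for BLK_MQ_MAX_DEPTH:

#include <stdio.h>

#define MAX_TAG_DEPTH 10240u   /* stand-in for BLK_MQ_MAX_DEPTH, assumed */

static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

int main(void)
{
    unsigned cap_mqes = 1023;          /* from the controller, 0's based */
    unsigned q_depth = cap_mqes + 1;   /* 1's based queue depth */

    /* Device quirks may clamp q_depth further... */
    q_depth = min_u(q_depth, 64);

    /* ...so derive the 0's based sqsize only afterwards. */
    unsigned sqsize = q_depth - 1;

    /* Tag set depth comes from sqsize, bounded by the block layer max. */
    unsigned tags = min_u(sqsize, MAX_TAG_DEPTH - 1);

    printf("q_depth=%u sqsize=%u tags=%u\n", q_depth, sqsize, tags);
    return 0;
}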


@ -164,26 +164,31 @@ out:
static void nvmet_get_cmd_effects_nvm(struct nvme_effects_log *log) static void nvmet_get_cmd_effects_nvm(struct nvme_effects_log *log)
{ {
log->acs[nvme_admin_get_log_page] = cpu_to_le32(1 << 0); log->acs[nvme_admin_get_log_page] =
log->acs[nvme_admin_identify] = cpu_to_le32(1 << 0); log->acs[nvme_admin_identify] =
log->acs[nvme_admin_abort_cmd] = cpu_to_le32(1 << 0); log->acs[nvme_admin_abort_cmd] =
log->acs[nvme_admin_set_features] = cpu_to_le32(1 << 0); log->acs[nvme_admin_set_features] =
log->acs[nvme_admin_get_features] = cpu_to_le32(1 << 0); log->acs[nvme_admin_get_features] =
log->acs[nvme_admin_async_event] = cpu_to_le32(1 << 0); log->acs[nvme_admin_async_event] =
log->acs[nvme_admin_keep_alive] = cpu_to_le32(1 << 0); log->acs[nvme_admin_keep_alive] =
cpu_to_le32(NVME_CMD_EFFECTS_CSUPP);
log->iocs[nvme_cmd_read] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_read] =
log->iocs[nvme_cmd_write] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_flush] =
log->iocs[nvme_cmd_flush] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_dsm] =
log->iocs[nvme_cmd_dsm] = cpu_to_le32(1 << 0); cpu_to_le32(NVME_CMD_EFFECTS_CSUPP);
log->iocs[nvme_cmd_write_zeroes] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_write] =
log->iocs[nvme_cmd_write_zeroes] =
cpu_to_le32(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC);
} }
static void nvmet_get_cmd_effects_zns(struct nvme_effects_log *log) static void nvmet_get_cmd_effects_zns(struct nvme_effects_log *log)
{ {
log->iocs[nvme_cmd_zone_append] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_zone_append] =
log->iocs[nvme_cmd_zone_mgmt_send] = cpu_to_le32(1 << 0); log->iocs[nvme_cmd_zone_mgmt_send] =
log->iocs[nvme_cmd_zone_mgmt_recv] = cpu_to_le32(1 << 0); cpu_to_le32(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC);
log->iocs[nvme_cmd_zone_mgmt_recv] =
cpu_to_le32(NVME_CMD_EFFECTS_CSUPP);
} }
static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req) static void nvmet_execute_get_log_cmd_effects_ns(struct nvmet_req *req)
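
The target-side log fill now groups opcodes by their effects value and sets each group with one chained assignment, keeping the CSUPP-only and CSUPP|LBCC groups visually distinct. The pattern, sketched with illustrative flag values (the opcode numbers mirror common NVMe assignments but are labeled here only for the example):

#include <stdint.h>
#include <stdio.h>

#define EFFECT_CSUPP 0x1u
#define EFFECT_LBCC  0x2u

enum { CMD_FLUSH = 0x00, CMD_WRITE = 0x01, CMD_READ = 0x02,
       CMD_WRITE_ZEROES = 0x08, CMD_DSM = 0x09 };

int main(void)
{
    uint32_t iocs[256] = { 0 };

    /* Read-like commands: supported, no content change. */
    iocs[CMD_READ] =
    iocs[CMD_FLUSH] =
    iocs[CMD_DSM] =
        EFFECT_CSUPP;

    /* Write-like commands additionally change logical block contents. */
    iocs[CMD_WRITE] =
    iocs[CMD_WRITE_ZEROES] =
        EFFECT_CSUPP | EFFECT_LBCC;

    printf("write effects %#x, read effects %#x\n",
           iocs[CMD_WRITE], iocs[CMD_READ]);
    return 0;
}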


@ -334,14 +334,13 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
} }
/* /*
* If there are effects for the command we are about to execute, or * If a command needs post-execution fixups, or there are any
* an end_req function we need to use nvme_execute_passthru_rq() * non-trivial effects, make sure to execute the command synchronously
* synchronously in a work item seeing the end_req function and * in a workqueue so that nvme_passthru_end gets called.
* nvme_passthru_end() can't be called in the request done callback
* which is typically in interrupt context.
*/ */
effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode); effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode);
if (req->p.use_workqueue || effects) { if (req->p.use_workqueue ||
(effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))) {
INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work); INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);
req->p.rq = rq; req->p.rq = rq;
queue_work(nvmet_wq, &req->p.work); queue_work(nvmet_wq, &req->p.work);


@ -53,6 +53,7 @@ enum acpi_backlight_type {
}; };
#if IS_ENABLED(CONFIG_ACPI_VIDEO) #if IS_ENABLED(CONFIG_ACPI_VIDEO)
extern void acpi_video_report_nolcd(void);
extern int acpi_video_register(void); extern int acpi_video_register(void);
extern void acpi_video_unregister(void); extern void acpi_video_unregister(void);
extern void acpi_video_register_backlight(void); extern void acpi_video_register_backlight(void);
@ -69,6 +70,7 @@ extern int acpi_video_get_levels(struct acpi_device *device,
struct acpi_video_device_brightness **dev_br, struct acpi_video_device_brightness **dev_br,
int *pmax_level); int *pmax_level);
#else #else
static inline void acpi_video_report_nolcd(void) { return; };
static inline int acpi_video_register(void) { return -ENODEV; } static inline int acpi_video_register(void) { return -ENODEV; }
static inline void acpi_video_unregister(void) { return; } static inline void acpi_video_unregister(void) { return; }
static inline void acpi_video_register_backlight(void) { return; } static inline void acpi_video_register_backlight(void) { return; }
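
The new acpi_video_report_nolcd() declaration follows the usual header pattern: a real prototype when the driver is enabled and an empty static inline stub otherwise, so callers never need an #ifdef of their own. Sketched below with a hypothetical config symbol and function name:

#include <stdio.h>

/* Hypothetical config symbol standing in for IS_ENABLED(CONFIG_ACPI_VIDEO);
 * uncommenting it would require linking a real implementation. */
/* #define CONFIG_FROBNICATOR */

#ifdef CONFIG_FROBNICATOR
void frobnicator_report(void);                  /* provided by the driver */
#else
static inline void frobnicator_report(void) { } /* stub: compiles away */
#endif

int main(void)
{
    /* Callers need no #ifdef of their own. */
    frobnicator_report();
    printf("builds cleanly whether or not the driver is enabled\n");
    return 0;
}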


@ -891,7 +891,12 @@
#define PRINTK_INDEX #define PRINTK_INDEX
#endif #endif
/*
* Discard .note.GNU-stack, which is emitted as PROGBITS by the compiler.
* Otherwise, the type of .notes section would become PROGBITS instead of NOTES.
*/
#define NOTES \ #define NOTES \
/DISCARD/ : { *(.note.GNU-stack) } \
.notes : AT(ADDR(.notes) - LOAD_OFFSET) { \ .notes : AT(ADDR(.notes) - LOAD_OFFSET) { \
BOUNDED_SECTION_BY(.note.*, _notes) \ BOUNDED_SECTION_BY(.note.*, _notes) \
} NOTES_HEADERS \ } NOTES_HEADERS \


@ -7,7 +7,6 @@
#define __LINUX_MTD_SPI_NOR_H #define __LINUX_MTD_SPI_NOR_H
#include <linux/bitops.h> #include <linux/bitops.h>
#include <linux/mtd/cfi.h>
#include <linux/mtd/mtd.h> #include <linux/mtd/mtd.h>
#include <linux/spi/spi-mem.h> #include <linux/spi/spi-mem.h>


@ -7,6 +7,7 @@
#ifndef _LINUX_NVME_H #ifndef _LINUX_NVME_H
#define _LINUX_NVME_H #define _LINUX_NVME_H
#include <linux/bits.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/uuid.h> #include <linux/uuid.h>
@ -639,8 +640,9 @@ enum {
NVME_CMD_EFFECTS_NCC = 1 << 2, NVME_CMD_EFFECTS_NCC = 1 << 2,
NVME_CMD_EFFECTS_NIC = 1 << 3, NVME_CMD_EFFECTS_NIC = 1 << 3,
NVME_CMD_EFFECTS_CCC = 1 << 4, NVME_CMD_EFFECTS_CCC = 1 << 4,
NVME_CMD_EFFECTS_CSE_MASK = 3 << 16, NVME_CMD_EFFECTS_CSE_MASK = GENMASK(18, 16),
NVME_CMD_EFFECTS_UUID_SEL = 1 << 19, NVME_CMD_EFFECTS_UUID_SEL = 1 << 19,
NVME_CMD_EFFECTS_SCOPE_MASK = GENMASK(31, 20),
}; };
struct nvme_effects_log { struct nvme_effects_log {


@ -10,7 +10,15 @@
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/types.h> #include <linux/types.h>
/*
* this file is shared with liburing and that has to autodetect
* if linux/time_types.h is available or not, it can
* define UAPI_LINUX_IO_URING_H_SKIP_LINUX_TIME_TYPES_H
* if linux/time_types.h is not available
*/
#ifndef UAPI_LINUX_IO_URING_H_SKIP_LINUX_TIME_TYPES_H
#include <linux/time_types.h> #include <linux/time_types.h>
#endif
#ifdef __cplusplus #ifdef __cplusplus
extern "C" { extern "C" {
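
The guard lets liburing, which bundles this header, opt out of <linux/time_types.h> on systems where it is missing by defining the macro before the include. A self-contained sketch of the same opt-out pattern, with the "optional dependency" and the macro name invented for the example:

#include <stdio.h>

/* A consumer lacking the optional header would do:
 *     #define EXAMPLE_SKIP_OPTIONAL_DEP
 * before including the header (inlined here for brevity).
 */

/* ---- start of "header" ---- */
#ifndef EXAMPLE_SKIP_OPTIONAL_DEP
#include <time.h>   /* stands in for <linux/time_types.h> */
#define HAVE_OPTIONAL_DEP 1
#else
#define HAVE_OPTIONAL_DEP 0
#endif
/* ---- end of "header" ---- */

int main(void)
{
    printf("optional dependency available: %d\n", HAVE_OPTIONAL_DEP);
    return 0;
}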


@ -1767,6 +1767,7 @@ struct kvm_xen_hvm_attr {
__u8 runstate_update_flag; __u8 runstate_update_flag;
struct { struct {
__u64 gfn; __u64 gfn;
#define KVM_XEN_INVALID_GFN ((__u64)-1)
} shared_info; } shared_info;
struct { struct {
__u32 send_port; __u32 send_port;
@ -1798,6 +1799,7 @@ struct kvm_xen_hvm_attr {
} u; } u;
}; };
/* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */
#define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0 #define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0
#define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1 #define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1
@ -1823,6 +1825,7 @@ struct kvm_xen_vcpu_attr {
__u16 pad[3]; __u16 pad[3];
union { union {
__u64 gpa; __u64 gpa;
#define KVM_XEN_INVALID_GPA ((__u64)-1)
__u64 pad[8]; __u64 pad[8];
struct { struct {
__u64 state; __u64 state;


@ -288,24 +288,23 @@ int io_sync_cancel(struct io_ring_ctx *ctx, void __user *arg)
ret = __io_sync_cancel(current->io_uring, &cd, sc.fd); ret = __io_sync_cancel(current->io_uring, &cd, sc.fd);
mutex_unlock(&ctx->uring_lock);
if (ret != -EALREADY) if (ret != -EALREADY)
break; break;
mutex_unlock(&ctx->uring_lock);
ret = io_run_task_work_sig(ctx); ret = io_run_task_work_sig(ctx);
if (ret < 0) { if (ret < 0)
mutex_lock(&ctx->uring_lock);
break; break;
}
ret = schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS); ret = schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS);
mutex_lock(&ctx->uring_lock);
if (!ret) { if (!ret) {
ret = -ETIME; ret = -ETIME;
break; break;
} }
mutex_lock(&ctx->uring_lock);
} while (1); } while (1);
finish_wait(&ctx->cq_wait, &wait); finish_wait(&ctx->cq_wait, &wait);
mutex_lock(&ctx->uring_lock);
if (ret == -ENOENT || ret > 0) if (ret == -ENOENT || ret > 0)
ret = 0; ret = 0;


@ -677,16 +677,20 @@ static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx)
io_cq_unlock_post(ctx); io_cq_unlock_post(ctx);
} }
static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
{
/* iopoll syncs against uring_lock, not completion_lock */
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_lock(&ctx->uring_lock);
__io_cqring_overflow_flush(ctx);
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_unlock(&ctx->uring_lock);
}
static void io_cqring_overflow_flush(struct io_ring_ctx *ctx) static void io_cqring_overflow_flush(struct io_ring_ctx *ctx)
{ {
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) { if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq))
/* iopoll syncs against uring_lock, not completion_lock */ io_cqring_do_overflow_flush(ctx);
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_lock(&ctx->uring_lock);
__io_cqring_overflow_flush(ctx);
if (ctx->flags & IORING_SETUP_IOPOLL)
mutex_unlock(&ctx->uring_lock);
}
} }
void __io_put_task(struct task_struct *task, int nr) void __io_put_task(struct task_struct *task, int nr)
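
Factoring out io_cqring_do_overflow_flush() keeps the conditional uring_lock handling, needed only for IOPOLL rings which synchronise on that lock rather than completion_lock, in one place, and io_cqring_wait() can now leave its wait entry before flushing. A generic user-space sketch of the conditional-locking helper shape, using pthreads:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct ring {
    pthread_mutex_t lock;
    bool needs_lock;    /* analogous to IORING_SETUP_IOPOLL */
    int overflowed;
};

static void flush_overflow_locked(struct ring *r)
{
    /* Caller guarantees whatever locking this path requires. */
    r->overflowed = 0;
}

static void flush_overflow(struct ring *r)
{
    if (r->needs_lock)
        pthread_mutex_lock(&r->lock);
    flush_overflow_locked(r);
    if (r->needs_lock)
        pthread_mutex_unlock(&r->lock);
}

int main(void)
{
    struct ring r = { PTHREAD_MUTEX_INITIALIZER, true, 5 };

    if (r.overflowed)   /* only take the lock when there is work to do */
        flush_overflow(&r);
    printf("overflowed=%d\n", r.overflowed);
    return 0;
}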
@ -2549,7 +2553,10 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
trace_io_uring_cqring_wait(ctx, min_events); trace_io_uring_cqring_wait(ctx, min_events);
do { do {
io_cqring_overflow_flush(ctx); if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
finish_wait(&ctx->cq_wait, &iowq.wq);
io_cqring_do_overflow_flush(ctx);
}
prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq, prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
TASK_INTERRUPTIBLE); TASK_INTERRUPTIBLE);
ret = io_cqring_wait_schedule(ctx, &iowq, timeout); ret = io_cqring_wait_schedule(ctx, &iowq, timeout);
@ -4013,8 +4020,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
return -EEXIST; return -EEXIST;
if (ctx->restricted) { if (ctx->restricted) {
if (opcode >= IORING_REGISTER_LAST)
return -EINVAL;
opcode = array_index_nospec(opcode, IORING_REGISTER_LAST); opcode = array_index_nospec(opcode, IORING_REGISTER_LAST);
if (!test_bit(opcode, ctx->restrictions.register_op)) if (!test_bit(opcode, ctx->restrictions.register_op))
return -EACCES; return -EACCES;
@ -4170,6 +4175,9 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
long ret = -EBADF; long ret = -EBADF;
struct fd f; struct fd f;
if (opcode >= IORING_REGISTER_LAST)
return -EINVAL;
f = fdget(fd); f = fdget(fd);
if (!f.file) if (!f.file)
return -EBADF; return -EBADF;


@ -380,7 +380,6 @@ enum event_type_t {
/* /*
* perf_sched_events : >0 events exist * perf_sched_events : >0 events exist
* perf_cgroup_events: >0 per-cpu cgroup events exist on this cpu
*/ */
static void perf_sched_delayed(struct work_struct *work); static void perf_sched_delayed(struct work_struct *work);
@ -389,7 +388,6 @@ static DECLARE_DELAYED_WORK(perf_sched_work, perf_sched_delayed);
static DEFINE_MUTEX(perf_sched_mutex); static DEFINE_MUTEX(perf_sched_mutex);
static atomic_t perf_sched_count; static atomic_t perf_sched_count;
static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events); static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);
static atomic_t nr_mmap_events __read_mostly; static atomic_t nr_mmap_events __read_mostly;
@ -844,9 +842,16 @@ static void perf_cgroup_switch(struct task_struct *task)
struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
struct perf_cgroup *cgrp; struct perf_cgroup *cgrp;
cgrp = perf_cgroup_from_task(task, NULL); /*
* cpuctx->cgrp is set when the first cgroup event enabled,
* and is cleared when the last cgroup event disabled.
*/
if (READ_ONCE(cpuctx->cgrp) == NULL)
return;
WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0); WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
cgrp = perf_cgroup_from_task(task, NULL);
if (READ_ONCE(cpuctx->cgrp) == cgrp) if (READ_ONCE(cpuctx->cgrp) == cgrp)
return; return;
@ -3631,8 +3636,7 @@ void __perf_event_task_sched_out(struct task_struct *task,
* to check if we have to switch out PMU state. * to check if we have to switch out PMU state.
* cgroup event are system-wide mode only * cgroup event are system-wide mode only
*/ */
if (atomic_read(this_cpu_ptr(&perf_cgroup_events))) perf_cgroup_switch(next);
perf_cgroup_switch(next);
} }
static bool perf_less_group_idx(const void *l, const void *r) static bool perf_less_group_idx(const void *l, const void *r)
@ -4974,15 +4978,6 @@ static void unaccount_pmu_sb_event(struct perf_event *event)
detach_sb_event(event); detach_sb_event(event);
} }
static void unaccount_event_cpu(struct perf_event *event, int cpu)
{
if (event->parent)
return;
if (is_cgroup_event(event))
atomic_dec(&per_cpu(perf_cgroup_events, cpu));
}
#ifdef CONFIG_NO_HZ_FULL #ifdef CONFIG_NO_HZ_FULL
static DEFINE_SPINLOCK(nr_freq_lock); static DEFINE_SPINLOCK(nr_freq_lock);
#endif #endif
@ -5048,8 +5043,6 @@ static void unaccount_event(struct perf_event *event)
schedule_delayed_work(&perf_sched_work, HZ); schedule_delayed_work(&perf_sched_work, HZ);
} }
unaccount_event_cpu(event, event->cpu);
unaccount_pmu_sb_event(event); unaccount_pmu_sb_event(event);
} }
@ -11679,15 +11672,6 @@ static void account_pmu_sb_event(struct perf_event *event)
attach_sb_event(event); attach_sb_event(event);
} }
static void account_event_cpu(struct perf_event *event, int cpu)
{
if (event->parent)
return;
if (is_cgroup_event(event))
atomic_inc(&per_cpu(perf_cgroup_events, cpu));
}
/* Freq events need the tick to stay alive (see perf_event_task_tick). */ /* Freq events need the tick to stay alive (see perf_event_task_tick). */
static void account_freq_event_nohz(void) static void account_freq_event_nohz(void)
{ {
@ -11775,8 +11759,6 @@ static void account_event(struct perf_event *event)
} }
enabled: enabled:
account_event_cpu(event, event->cpu);
account_pmu_sb_event(event); account_pmu_sb_event(event);
} }
@ -12339,12 +12321,12 @@ SYSCALL_DEFINE5(perf_event_open,
if (flags & ~PERF_FLAG_ALL) if (flags & ~PERF_FLAG_ALL)
return -EINVAL; return -EINVAL;
/* Do we allow access to perf_event_open(2) ? */ err = perf_copy_attr(attr_uptr, &attr);
err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
if (err) if (err)
return err; return err;
err = perf_copy_attr(attr_uptr, &attr); /* Do we allow access to perf_event_open(2) ? */
err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
if (err) if (err)
return err; return err;
@ -12689,7 +12671,8 @@ SYSCALL_DEFINE5(perf_event_open,
return event_fd; return event_fd;
err_context: err_context:
/* event->pmu_ctx freed by free_event() */ put_pmu_ctx(event->pmu_ctx);
event->pmu_ctx = NULL; /* _free_event() */
err_locked: err_locked:
mutex_unlock(&ctx->mutex); mutex_unlock(&ctx->mutex);
perf_unpin_context(ctx); perf_unpin_context(ctx);
@ -12802,6 +12785,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
err_pmu_ctx: err_pmu_ctx:
put_pmu_ctx(pmu_ctx); put_pmu_ctx(pmu_ctx);
event->pmu_ctx = NULL; /* _free_event() */
err_unlock: err_unlock:
mutex_unlock(&ctx->mutex); mutex_unlock(&ctx->mutex);
perf_unpin_context(ctx); perf_unpin_context(ctx);
@ -12822,13 +12806,11 @@ static void __perf_pmu_remove(struct perf_event_context *ctx,
perf_event_groups_for_cpu_pmu(event, groups, cpu, pmu) { perf_event_groups_for_cpu_pmu(event, groups, cpu, pmu) {
perf_remove_from_context(event, 0); perf_remove_from_context(event, 0);
unaccount_event_cpu(event, cpu);
put_pmu_ctx(event->pmu_ctx); put_pmu_ctx(event->pmu_ctx);
list_add(&event->migrate_entry, events); list_add(&event->migrate_entry, events);
for_each_sibling_event(sibling, event) { for_each_sibling_event(sibling, event) {
perf_remove_from_context(sibling, 0); perf_remove_from_context(sibling, 0);
unaccount_event_cpu(sibling, cpu);
put_pmu_ctx(sibling->pmu_ctx); put_pmu_ctx(sibling->pmu_ctx);
list_add(&sibling->migrate_entry, events); list_add(&sibling->migrate_entry, events);
} }
@ -12847,7 +12829,6 @@ static void __perf_pmu_install_event(struct pmu *pmu,
if (event->state >= PERF_EVENT_STATE_OFF) if (event->state >= PERF_EVENT_STATE_OFF)
event->state = PERF_EVENT_STATE_INACTIVE; event->state = PERF_EVENT_STATE_INACTIVE;
account_event_cpu(event, cpu);
perf_install_in_context(ctx, event, cpu); perf_install_in_context(ctx, event, cpu);
} }
@ -13231,7 +13212,7 @@ inherit_event(struct perf_event *parent_event,
pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event); pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event);
if (IS_ERR(pmu_ctx)) { if (IS_ERR(pmu_ctx)) {
free_event(child_event); free_event(child_event);
return NULL; return ERR_CAST(pmu_ctx);
} }
child_event->pmu_ctx = pmu_ctx; child_event->pmu_ctx = pmu_ctx;
@ -13742,8 +13723,7 @@ static int __perf_cgroup_move(void *info)
struct task_struct *task = info; struct task_struct *task = info;
preempt_disable(); preempt_disable();
if (atomic_read(this_cpu_ptr(&perf_cgroup_events))) perf_cgroup_switch(task);
perf_cgroup_switch(task);
preempt_enable(); preempt_enable();
return 0; return 0;


@ -286,19 +286,22 @@ SYSCALL_DEFINE5(futex_waitv, struct futex_waitv __user *, waiters,
} }
futexv = kcalloc(nr_futexes, sizeof(*futexv), GFP_KERNEL); futexv = kcalloc(nr_futexes, sizeof(*futexv), GFP_KERNEL);
if (!futexv) if (!futexv) {
return -ENOMEM; ret = -ENOMEM;
goto destroy_timer;
}
ret = futex_parse_waitv(futexv, waiters, nr_futexes); ret = futex_parse_waitv(futexv, waiters, nr_futexes);
if (!ret) if (!ret)
ret = futex_wait_multiple(futexv, nr_futexes, timeout ? &to : NULL); ret = futex_wait_multiple(futexv, nr_futexes, timeout ? &to : NULL);
kfree(futexv);
destroy_timer:
if (timeout) { if (timeout) {
hrtimer_cancel(&to.timer); hrtimer_cancel(&to.timer);
destroy_hrtimer_on_stack(&to.timer); destroy_hrtimer_on_stack(&to.timer);
} }
kfree(futexv);
return ret; return ret;
} }
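
The fix routes the kcalloc() failure through a new destroy_timer label so that a timer armed earlier in the function is always cancelled and destroyed, and the futexv buffer is now freed before the label rather than after it. The classic shape of that error handling, sketched with stand-in resources:

#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for the hrtimer setup/teardown done before the allocation. */
static int timer_armed;
static void arm_timer(void)     { timer_armed = 1; }
static void destroy_timer(void) { timer_armed = 0; }

static int waitv(size_t nr, int simulate_oom)
{
    int ret = 0;
    void *vec;

    arm_timer();                    /* resource acquired first */

    vec = simulate_oom ? NULL : calloc(nr, sizeof(int));
    if (!vec) {
        ret = -12;                  /* -ENOMEM */
        goto out_destroy_timer;     /* must still tear the timer down */
    }

    /* ... wait on the futexes ... */
    free(vec);                      /* released before the shared label */

out_destroy_timer:
    destroy_timer();
    return ret;
}

int main(void)
{
    printf("%d %d\n", waitv(4, 0), waitv(4, 1));
    printf("timer still armed: %d\n", timer_armed); /* 0 */
    return 0;
}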


@ -89,15 +89,31 @@ static inline int __ww_mutex_check_kill(struct rt_mutex *lock,
* set this bit before looking at the lock. * set this bit before looking at the lock.
*/ */
static __always_inline void static __always_inline struct task_struct *
rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner) rt_mutex_owner_encode(struct rt_mutex_base *lock, struct task_struct *owner)
{ {
unsigned long val = (unsigned long)owner; unsigned long val = (unsigned long)owner;
if (rt_mutex_has_waiters(lock)) if (rt_mutex_has_waiters(lock))
val |= RT_MUTEX_HAS_WAITERS; val |= RT_MUTEX_HAS_WAITERS;
WRITE_ONCE(lock->owner, (struct task_struct *)val); return (struct task_struct *)val;
}
static __always_inline void
rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
{
/*
* lock->wait_lock is held but explicit acquire semantics are needed
* for a new lock owner so WRITE_ONCE is insufficient.
*/
xchg_acquire(&lock->owner, rt_mutex_owner_encode(lock, owner));
}
static __always_inline void rt_mutex_clear_owner(struct rt_mutex_base *lock)
{
/* lock->wait_lock is held so the unlock provides release semantics. */
WRITE_ONCE(lock->owner, rt_mutex_owner_encode(lock, NULL));
} }
static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock) static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
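
rt_mutex packs the RT_MUTEX_HAS_WAITERS flag into the low bit of the owner task pointer, and the new rt_mutex_set_owner() uses an acquire-ordered exchange because, per the comment, a plain store gives the new owner no ordering against later accesses to the protected data; the unlock path keeps WRITE_ONCE() since releasing wait_lock already provides the needed ordering. A user-space sketch of the tagged-pointer encoding and the acquire exchange using C11 atomics:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define HAS_WAITERS 1ul /* low bit of the owner word, as in RT_MUTEX_HAS_WAITERS */

struct task { const char *name; };

struct lock {
    _Atomic uintptr_t owner;    /* task pointer | HAS_WAITERS */
    int has_waiters;
};

static uintptr_t owner_encode(const struct lock *l, struct task *t)
{
    uintptr_t val = (uintptr_t)t;

    if (l->has_waiters)
        val |= HAS_WAITERS;
    return val;
}

static void set_owner(struct lock *l, struct task *t)
{
    /* Acquire semantics for the new owner, like xchg_acquire(). */
    atomic_exchange_explicit(&l->owner, owner_encode(l, t),
                             memory_order_acquire);
}

static struct task *get_owner(struct lock *l)
{
    uintptr_t val = atomic_load_explicit(&l->owner, memory_order_relaxed);

    return (struct task *)(val & ~HAS_WAITERS);
}

int main(void)
{
    static struct task me = { "me" };
    struct lock l = { 0, 1 };

    set_owner(&l, &me);
    printf("owner=%s waiters=%lu\n", get_owner(&l)->name,
           (unsigned long)(atomic_load(&l.owner) & HAS_WAITERS));
    return 0;
}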
@ -106,7 +122,8 @@ static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS); ((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
} }
static __always_inline void fixup_rt_mutex_waiters(struct rt_mutex_base *lock) static __always_inline void
fixup_rt_mutex_waiters(struct rt_mutex_base *lock, bool acquire_lock)
{ {
unsigned long owner, *p = (unsigned long *) &lock->owner; unsigned long owner, *p = (unsigned long *) &lock->owner;
@ -172,8 +189,21 @@ static __always_inline void fixup_rt_mutex_waiters(struct rt_mutex_base *lock)
* still set. * still set.
*/ */
owner = READ_ONCE(*p); owner = READ_ONCE(*p);
if (owner & RT_MUTEX_HAS_WAITERS) if (owner & RT_MUTEX_HAS_WAITERS) {
WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS); /*
* See rt_mutex_set_owner() and rt_mutex_clear_owner() on
* why xchg_acquire() is used for updating owner for
* locking and WRITE_ONCE() for unlocking.
*
* WRITE_ONCE() would work for the acquire case too, but
* in case that the lock acquisition failed it might
* force other lockers into the slow path unnecessarily.
*/
if (acquire_lock)
xchg_acquire(p, owner & ~RT_MUTEX_HAS_WAITERS);
else
WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
}
} }
/* /*
@ -208,6 +238,13 @@ static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
owner = *p; owner = *p;
} while (cmpxchg_relaxed(p, owner, } while (cmpxchg_relaxed(p, owner,
owner | RT_MUTEX_HAS_WAITERS) != owner); owner | RT_MUTEX_HAS_WAITERS) != owner);
/*
* The cmpxchg loop above is relaxed to avoid back-to-back ACQUIRE
* operations in the event of contention. Ensure the successful
* cmpxchg is visible.
*/
smp_mb__after_atomic();
} }
/* /*
@ -1243,7 +1280,7 @@ static int __sched __rt_mutex_slowtrylock(struct rt_mutex_base *lock)
* try_to_take_rt_mutex() sets the lock waiters bit * try_to_take_rt_mutex() sets the lock waiters bit
* unconditionally. Clean this up. * unconditionally. Clean this up.
*/ */
fixup_rt_mutex_waiters(lock); fixup_rt_mutex_waiters(lock, true);
return ret; return ret;
} }
@ -1604,7 +1641,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
* try_to_take_rt_mutex() sets the waiter bit * try_to_take_rt_mutex() sets the waiter bit
* unconditionally. We might have to fix that up. * unconditionally. We might have to fix that up.
*/ */
fixup_rt_mutex_waiters(lock); fixup_rt_mutex_waiters(lock, true);
trace_contention_end(lock, ret); trace_contention_end(lock, ret);
@ -1719,7 +1756,7 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
* try_to_take_rt_mutex() sets the waiter bit unconditionally. * try_to_take_rt_mutex() sets the waiter bit unconditionally.
* We might have to fix that up: * We might have to fix that up:
*/ */
fixup_rt_mutex_waiters(lock); fixup_rt_mutex_waiters(lock, true);
debug_rt_mutex_free_waiter(&waiter); debug_rt_mutex_free_waiter(&waiter);
trace_contention_end(lock, 0); trace_contention_end(lock, 0);

View File

@ -267,7 +267,7 @@ void __sched rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
void __sched rt_mutex_proxy_unlock(struct rt_mutex_base *lock) void __sched rt_mutex_proxy_unlock(struct rt_mutex_base *lock)
{ {
debug_rt_mutex_proxy_unlock(lock); debug_rt_mutex_proxy_unlock(lock);
rt_mutex_set_owner(lock, NULL); rt_mutex_clear_owner(lock);
} }
/** /**
@ -382,7 +382,7 @@ int __sched rt_mutex_wait_proxy_lock(struct rt_mutex_base *lock,
* try_to_take_rt_mutex() sets the waiter bit unconditionally. We might * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
* have to fix that up. * have to fix that up.
*/ */
fixup_rt_mutex_waiters(lock); fixup_rt_mutex_waiters(lock, true);
raw_spin_unlock_irq(&lock->wait_lock); raw_spin_unlock_irq(&lock->wait_lock);
return ret; return ret;
@ -438,7 +438,7 @@ bool __sched rt_mutex_cleanup_proxy_lock(struct rt_mutex_base *lock,
* try_to_take_rt_mutex() sets the waiter bit unconditionally. We might * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
* have to fix that up. * have to fix that up.
*/ */
fixup_rt_mutex_waiters(lock); fixup_rt_mutex_waiters(lock, false);
raw_spin_unlock_irq(&lock->wait_lock); raw_spin_unlock_irq(&lock->wait_lock);


@ -23,8 +23,10 @@ static struct string_stream_fragment *alloc_string_stream_fragment(
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
frag->fragment = kunit_kmalloc(test, len, gfp); frag->fragment = kunit_kmalloc(test, len, gfp);
if (!frag->fragment) if (!frag->fragment) {
kunit_kfree(test, frag);
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
}
return frag; return frag;
} }
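
The fix frees the fragment container when the inner buffer allocation fails, instead of leaking it until the test case is torn down. The same two-step allocation pattern, sketched outside the kunit allocator:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fragment {
    char *buf;
    size_t len;
};

static struct fragment *alloc_fragment(size_t len)
{
    struct fragment *frag = malloc(sizeof(*frag));

    if (!frag)
        return NULL;

    frag->buf = malloc(len);
    if (!frag->buf) {
        free(frag); /* don't leak the container on partial failure */
        return NULL;
    }
    frag->len = len;
    return frag;
}

int main(void)
{
    struct fragment *f = alloc_fragment(32);

    if (!f)
        return 1;
    memcpy(f->buf, "hello", 6);
    puts(f->buf);
    free(f->buf);
    free(f);
    return 0;
}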


@ -55,6 +55,17 @@ ifneq ($(findstring i,$(filter-out --%,$(MAKEFLAGS))),)
modpost-args += -n modpost-args += -n
endif endif
ifneq ($(KBUILD_MODPOST_WARN)$(missing-input),)
modpost-args += -w
endif
# Read out modules.order to pass in modpost.
# Otherwise, allmodconfig would fail with "Argument list too long".
ifdef KBUILD_MODULES
modpost-args += -T $(MODORDER)
modpost-deps += $(MODORDER)
endif
ifeq ($(KBUILD_EXTMOD),) ifeq ($(KBUILD_EXTMOD),)
# Generate the list of in-tree objects in vmlinux # Generate the list of in-tree objects in vmlinux
@ -113,17 +124,6 @@ modpost-args += -e $(addprefix -i , $(KBUILD_EXTRA_SYMBOLS))
endif # ($(KBUILD_EXTMOD),) endif # ($(KBUILD_EXTMOD),)
ifneq ($(KBUILD_MODPOST_WARN)$(missing-input),)
modpost-args += -w
endif
ifdef KBUILD_MODULES
modpost-args += -T $(MODORDER)
modpost-deps += $(MODORDER)
endif
# Read out modules.order to pass in modpost.
# Otherwise, allmodconfig would fail with "Argument list too long".
quiet_cmd_modpost = MODPOST $@ quiet_cmd_modpost = MODPOST $@
cmd_modpost = \ cmd_modpost = \
$(if $(missing-input), \ $(if $(missing-input), \


@ -158,6 +158,7 @@ $(perf-tar-pkgs):
PHONY += help PHONY += help
help: help:
@echo ' rpm-pkg - Build both source and binary RPM kernel packages' @echo ' rpm-pkg - Build both source and binary RPM kernel packages'
@echo ' srcrpm-pkg - Build only the source kernel RPM package'
@echo ' binrpm-pkg - Build only the binary kernel RPM package' @echo ' binrpm-pkg - Build only the binary kernel RPM package'
@echo ' deb-pkg - Build both source and binary deb kernel packages' @echo ' deb-pkg - Build both source and binary deb kernel packages'
@echo ' bindeb-pkg - Build only the binary kernel deb package' @echo ' bindeb-pkg - Build only the binary kernel deb package'


@ -94,7 +94,6 @@
#include <unistd.h> #include <unistd.h>
#include <fcntl.h> #include <fcntl.h>
#include <string.h> #include <string.h>
#include <stdarg.h>
#include <stdlib.h> #include <stdlib.h>
#include <stdio.h> #include <stdio.h>
#include <ctype.h> #include <ctype.h>


@ -161,6 +161,12 @@ static const char mconf_readme[] =
"(especially with a larger number of unrolled categories) than the\n" "(especially with a larger number of unrolled categories) than the\n"
"default mode.\n" "default mode.\n"
"\n" "\n"
"Search\n"
"-------\n"
"Pressing the forward-slash (/) anywhere brings up a search dialog box.\n"
"\n"
"Different color themes available\n" "Different color themes available\n"
"--------------------------------\n" "--------------------------------\n"
"It is possible to select different color themes using the variable\n" "It is possible to select different color themes using the variable\n"


@ -51,7 +51,8 @@ sed -e '/^DEL/d' -e 's/^\t*//' <<EOF
URL: https://www.kernel.org URL: https://www.kernel.org
$S Source: kernel-$__KERNELRELEASE.tar.gz $S Source: kernel-$__KERNELRELEASE.tar.gz
Provides: $PROVIDES Provides: $PROVIDES
$S BuildRequires: bc binutils bison dwarves elfutils-libelf-devel flex $S BuildRequires: bc binutils bison dwarves
$S BuildRequires: (elfutils-libelf-devel or libelf-devel) flex
$S BuildRequires: gcc make openssl openssl-devel perl python3 rsync $S BuildRequires: gcc make openssl openssl-devel perl python3 rsync
# $UTS_MACHINE as a fallback of _arch in case # $UTS_MACHINE as a fallback of _arch in case


@ -167,6 +167,7 @@ struct hdmi_spec {
struct hdmi_ops ops; struct hdmi_ops ops;
bool dyn_pin_out; bool dyn_pin_out;
bool static_pcm_mapping;
/* hdmi interrupt trigger control flag for Nvidia codec */ /* hdmi interrupt trigger control flag for Nvidia codec */
bool hdmi_intr_trig_ctrl; bool hdmi_intr_trig_ctrl;
bool nv_dp_workaround; /* workaround DP audio infoframe for Nvidia */ bool nv_dp_workaround; /* workaround DP audio infoframe for Nvidia */
@ -1525,13 +1526,16 @@ static void update_eld(struct hda_codec *codec,
*/ */
pcm_jack = pin_idx_to_pcm_jack(codec, per_pin); pcm_jack = pin_idx_to_pcm_jack(codec, per_pin);
if (eld->eld_valid) { if (!spec->static_pcm_mapping) {
hdmi_attach_hda_pcm(spec, per_pin); if (eld->eld_valid) {
hdmi_pcm_setup_pin(spec, per_pin); hdmi_attach_hda_pcm(spec, per_pin);
} else { hdmi_pcm_setup_pin(spec, per_pin);
hdmi_pcm_reset_pin(spec, per_pin); } else {
hdmi_detach_hda_pcm(spec, per_pin); hdmi_pcm_reset_pin(spec, per_pin);
hdmi_detach_hda_pcm(spec, per_pin);
}
} }
/* if pcm_idx == -1, it means this is in monitor connection event /* if pcm_idx == -1, it means this is in monitor connection event
* we can get the correct pcm_idx now. * we can get the correct pcm_idx now.
*/ */
@ -2281,8 +2285,8 @@ static int generic_hdmi_build_pcms(struct hda_codec *codec)
struct hdmi_spec *spec = codec->spec; struct hdmi_spec *spec = codec->spec;
int idx, pcm_num; int idx, pcm_num;
/* limit the PCM devices to the codec converters */ /* limit the PCM devices to the codec converters or available PINs */
pcm_num = spec->num_cvts; pcm_num = min(spec->num_cvts, spec->num_pins);
codec_dbg(codec, "hdmi: pcm_num set to %d\n", pcm_num); codec_dbg(codec, "hdmi: pcm_num set to %d\n", pcm_num);
for (idx = 0; idx < pcm_num; idx++) { for (idx = 0; idx < pcm_num; idx++) {
@ -2379,6 +2383,11 @@ static int generic_hdmi_build_controls(struct hda_codec *codec)
struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx); struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
struct hdmi_eld *pin_eld = &per_pin->sink_eld; struct hdmi_eld *pin_eld = &per_pin->sink_eld;
if (spec->static_pcm_mapping) {
hdmi_attach_hda_pcm(spec, per_pin);
hdmi_pcm_setup_pin(spec, per_pin);
}
pin_eld->eld_valid = false; pin_eld->eld_valid = false;
hdmi_present_sense(per_pin, 0); hdmi_present_sense(per_pin, 0);
} }
@ -4419,6 +4428,8 @@ static int patch_atihdmi(struct hda_codec *codec)
spec = codec->spec; spec = codec->spec;
spec->static_pcm_mapping = true;
spec->ops.pin_get_eld = atihdmi_pin_get_eld; spec->ops.pin_get_eld = atihdmi_pin_get_eld;
spec->ops.pin_setup_infoframe = atihdmi_pin_setup_infoframe; spec->ops.pin_setup_infoframe = atihdmi_pin_setup_infoframe;
spec->ops.pin_hbr_setup = atihdmi_pin_hbr_setup; spec->ops.pin_hbr_setup = atihdmi_pin_hbr_setup;


@ -7175,6 +7175,7 @@ enum {
ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK, ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK,
ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN, ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN,
ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS, ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS,
ALC236_FIXUP_DELL_DUAL_CODECS,
}; };
/* A special fixup for Lenovo C940 and Yoga Duet 7; /* A special fixup for Lenovo C940 and Yoga Duet 7;
@ -9130,6 +9131,12 @@ static const struct hda_fixup alc269_fixups[] = {
.chained = true, .chained = true,
.chain_id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, .chain_id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
}, },
[ALC236_FIXUP_DELL_DUAL_CODECS] = {
.type = HDA_FIXUP_PINS,
.v.func = alc1220_fixup_gb_dual_codecs,
.chained = true,
.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
},
}; };
static const struct snd_pci_quirk alc269_fixup_tbl[] = { static const struct snd_pci_quirk alc269_fixup_tbl[] = {
@ -9232,6 +9239,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK), SND_PCI_QUIRK(0x1028, 0x0b1a, "Dell Precision 5570", ALC289_FIXUP_DUAL_SPK),
SND_PCI_QUIRK(0x1028, 0x0b37, "Dell Inspiron 16 Plus 7620 2-in-1", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS), SND_PCI_QUIRK(0x1028, 0x0b37, "Dell Inspiron 16 Plus 7620 2-in-1", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS), SND_PCI_QUIRK(0x1028, 0x0b71, "Dell Inspiron 16 Plus 7620", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x0c19, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x0c1a, "Dell Precision 3340", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x0c1b, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS),
SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),


@ -304,7 +304,8 @@ static void line6_data_received(struct urb *urb)
for (;;) { for (;;) {
done = done =
line6_midibuf_read(mb, line6->buffer_message, line6_midibuf_read(mb, line6->buffer_message,
LINE6_MIDI_MESSAGE_MAXLEN); LINE6_MIDI_MESSAGE_MAXLEN,
LINE6_MIDIBUF_READ_RX);
if (done <= 0) if (done <= 0)
break; break;


@ -44,7 +44,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
int req, done; int req, done;
for (;;) { for (;;) {
req = min(line6_midibuf_bytes_free(mb), line6->max_packet_size); req = min3(line6_midibuf_bytes_free(mb), line6->max_packet_size,
LINE6_FALLBACK_MAXPACKETSIZE);
done = snd_rawmidi_transmit_peek(substream, chunk, req); done = snd_rawmidi_transmit_peek(substream, chunk, req);
if (done == 0) if (done == 0)
@ -56,7 +57,8 @@ static void line6_midi_transmit(struct snd_rawmidi_substream *substream)
for (;;) { for (;;) {
done = line6_midibuf_read(mb, chunk, done = line6_midibuf_read(mb, chunk,
LINE6_FALLBACK_MAXPACKETSIZE); LINE6_FALLBACK_MAXPACKETSIZE,
LINE6_MIDIBUF_READ_TX);
if (done == 0) if (done == 0)
break; break;


@ -9,6 +9,7 @@
#include "midibuf.h" #include "midibuf.h"
static int midibuf_message_length(unsigned char code) static int midibuf_message_length(unsigned char code)
{ {
int message_length; int message_length;
@ -20,12 +21,7 @@ static int midibuf_message_length(unsigned char code)
message_length = length[(code >> 4) - 8]; message_length = length[(code >> 4) - 8];
} else { } else {
/* static const int length[] = { -1, 2, 2, 2, -1, -1, 1, 1, 1, -1,
Note that according to the MIDI specification 0xf2 is
the "Song Position Pointer", but this is used by Line 6
to send sysex messages to the host.
*/
static const int length[] = { -1, 2, -1, 2, -1, -1, 1, 1, 1, 1,
1, 1, 1, -1, 1, 1 1, 1, 1, -1, 1, 1
}; };
message_length = length[code & 0x0f]; message_length = length[code & 0x0f];
@ -125,7 +121,7 @@ int line6_midibuf_write(struct midi_buffer *this, unsigned char *data,
} }
int line6_midibuf_read(struct midi_buffer *this, unsigned char *data, int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
int length) int length, int read_type)
{ {
int bytes_used; int bytes_used;
int length1, length2; int length1, length2;
@ -148,9 +144,22 @@ int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
length1 = this->size - this->pos_read; length1 = this->size - this->pos_read;
/* check MIDI command length */
command = this->buf[this->pos_read]; command = this->buf[this->pos_read];
/*
PODxt always has status byte lower nibble set to 0010,
when it means to send 0000, so we correct if here so
that control/program changes come on channel 1 and
sysex message status byte is correct
*/
if (read_type == LINE6_MIDIBUF_READ_RX) {
if (command == 0xb2 || command == 0xc2 || command == 0xf2) {
unsigned char fixed = command & 0xf0;
this->buf[this->pos_read] = fixed;
command = fixed;
}
}
/* check MIDI command length */
if (command & 0x80) { if (command & 0x80) {
midi_length = midibuf_message_length(command); midi_length = midibuf_message_length(command);
this->command_prev = command; this->command_prev = command;
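
The PODxt quirk described in the comment: the device sets the low status nibble to 0x2 where it means 0x0, so control change, program change and sysex status bytes arrive as 0xb2/0xc2/0xf2; on the receive path the buffer rewrites them back to 0xb0/0xc0/0xf0 before the command length is decoded, while the transmit path is left untouched. A sketch of that normalisation:

#include <stdint.h>
#include <stdio.h>

enum read_type { READ_TX, READ_RX };

/* Fix up status bytes coming *from* the device only. */
static uint8_t fixup_status(uint8_t command, enum read_type type)
{
    if (type == READ_RX &&
        (command == 0xb2 || command == 0xc2 || command == 0xf2))
        return command & 0xf0;  /* 0xb0 / 0xc0 / 0xf0 */
    return command;
}

int main(void)
{
    printf("%#x\n", fixup_status(0xb2, READ_RX)); /* 0xb0 */
    printf("%#x\n", fixup_status(0xb2, READ_TX)); /* unchanged on transmit */
    printf("%#x\n", fixup_status(0x90, READ_RX)); /* note-on untouched */
    return 0;
}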


@ -8,6 +8,9 @@
#ifndef MIDIBUF_H #ifndef MIDIBUF_H
#define MIDIBUF_H #define MIDIBUF_H
#define LINE6_MIDIBUF_READ_TX 0
#define LINE6_MIDIBUF_READ_RX 1
struct midi_buffer { struct midi_buffer {
unsigned char *buf; unsigned char *buf;
int size; int size;
@ -23,7 +26,7 @@ extern void line6_midibuf_destroy(struct midi_buffer *mb);
extern int line6_midibuf_ignore(struct midi_buffer *mb, int length); extern int line6_midibuf_ignore(struct midi_buffer *mb, int length);
extern int line6_midibuf_init(struct midi_buffer *mb, int size, int split); extern int line6_midibuf_init(struct midi_buffer *mb, int size, int split);
extern int line6_midibuf_read(struct midi_buffer *mb, unsigned char *data, extern int line6_midibuf_read(struct midi_buffer *mb, unsigned char *data,
int length); int length, int read_type);
extern void line6_midibuf_reset(struct midi_buffer *mb); extern void line6_midibuf_reset(struct midi_buffer *mb);
extern int line6_midibuf_write(struct midi_buffer *mb, unsigned char *data, extern int line6_midibuf_write(struct midi_buffer *mb, unsigned char *data,
int length); int length);


@ -159,8 +159,9 @@ static struct line6_pcm_properties pod_pcm_properties = {
.bytes_per_channel = 3 /* SNDRV_PCM_FMTBIT_S24_3LE */ .bytes_per_channel = 3 /* SNDRV_PCM_FMTBIT_S24_3LE */
}; };
static const char pod_version_header[] = { static const char pod_version_header[] = {
0xf2, 0x7e, 0x7f, 0x06, 0x02 0xf0, 0x7e, 0x7f, 0x06, 0x02
}; };
static char *pod_alloc_sysex_buffer(struct usb_line6_pod *pod, int code, static char *pod_alloc_sysex_buffer(struct usb_line6_pod *pod, int code,

View File

@ -1,86 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only # SPDX-License-Identifier: GPL-2.0-only
/aarch64/aarch32_id_regs *
/aarch64/arch_timer !/**/
/aarch64/debug-exceptions !*.c
/aarch64/get-reg-list !*.h
/aarch64/hypercalls !*.S
/aarch64/page_fault_test !*.sh
/aarch64/psci_test
/aarch64/vcpu_width_config
/aarch64/vgic_init
/aarch64/vgic_irq
/s390x/memop
/s390x/resets
/s390x/sync_regs_test
/s390x/tprot
/x86_64/amx_test
/x86_64/cpuid_test
/x86_64/cr4_cpuid_sync_test
/x86_64/debug_regs
/x86_64/exit_on_emulation_failure_test
/x86_64/fix_hypercall_test
/x86_64/get_msr_index_features
/x86_64/kvm_clock_test
/x86_64/kvm_pv_test
/x86_64/hyperv_clock
/x86_64/hyperv_cpuid
/x86_64/hyperv_evmcs
/x86_64/hyperv_features
/x86_64/hyperv_ipi
/x86_64/hyperv_svm_test
/x86_64/hyperv_tlb_flush
/x86_64/max_vcpuid_cap_test
/x86_64/mmio_warning_test
/x86_64/monitor_mwait_test
/x86_64/nested_exceptions_test
/x86_64/nx_huge_pages_test
/x86_64/platform_info_test
/x86_64/pmu_event_filter_test
/x86_64/set_boot_cpu_id
/x86_64/set_sregs_test
/x86_64/sev_migrate_tests
/x86_64/smaller_maxphyaddr_emulation_test
/x86_64/smm_test
/x86_64/state_test
/x86_64/svm_vmcall_test
/x86_64/svm_int_ctl_test
/x86_64/svm_nested_soft_inject_test
/x86_64/svm_nested_shutdown_test
/x86_64/sync_regs_test
/x86_64/tsc_msrs_test
/x86_64/tsc_scaling_sync
/x86_64/ucna_injection_test
/x86_64/userspace_io_test
/x86_64/userspace_msr_exit_test
/x86_64/vmx_apic_access_test
/x86_64/vmx_close_while_nested_test
/x86_64/vmx_dirty_log_test
/x86_64/vmx_exception_with_invalid_guest_state
/x86_64/vmx_invalid_nested_guest_state
/x86_64/vmx_msrs_test
/x86_64/vmx_preemption_timer_test
/x86_64/vmx_set_nested_state_test
/x86_64/vmx_tsc_adjust_test
/x86_64/vmx_nested_tsc_scaling_test
/x86_64/xapic_ipi_test
/x86_64/xapic_state_test
/x86_64/xen_shinfo_test
/x86_64/xen_vmcall_test
/x86_64/xss_msr_test
/x86_64/vmx_pmu_caps_test
/x86_64/triple_fault_event_test
/access_tracking_perf_test
/demand_paging_test
/dirty_log_test
/dirty_log_perf_test
/hardware_disable_test
/kvm_create_max_vcpus
/kvm_page_table_test
/max_guest_memory_test
/memslot_modification_stress_test
/memslot_perf_test
/rseq_test
/set_memory_region_test
/steal_time
/kvm_binary_stats_test
/system_counter_offset_test


@ -7,35 +7,14 @@ top_srcdir = ../../../..
include $(top_srcdir)/scripts/subarch.include include $(top_srcdir)/scripts/subarch.include
ARCH ?= $(SUBARCH) ARCH ?= $(SUBARCH)
# For cross-builds to work, UNAME_M has to map to ARCH and arch specific ifeq ($(ARCH),x86)
# directories and targets in this Makefile. "uname -m" doesn't map to ARCH_DIR := x86_64
# arch specific sub-directory names. else ifeq ($(ARCH),arm64)
# ARCH_DIR := aarch64
# UNAME_M variable to used to run the compiles pointing to the right arch else ifeq ($(ARCH),s390)
# directories and build the right targets for these supported architectures. ARCH_DIR := s390x
# else
# TEST_GEN_PROGS and LIBKVM are set using UNAME_M variable. ARCH_DIR := $(ARCH)
# LINUX_TOOL_ARCH_INCLUDE is set using ARCH variable.
#
# x86_64 targets are named to include x86_64 as a suffix and directories
# for includes are in x86_64 sub-directory. s390x and aarch64 follow the
# same convention. "uname -m" doesn't result in the correct mapping for
# s390x and aarch64.
#
# No change necessary for x86_64
UNAME_M := $(shell uname -m)
# Set UNAME_M for arm64 compile/install to work
ifeq ($(ARCH),arm64)
UNAME_M := aarch64
endif
# Set UNAME_M s390x compile/install to work
ifeq ($(ARCH),s390)
UNAME_M := s390x
endif
# Set UNAME_M riscv compile/install to work
ifeq ($(ARCH),riscv)
UNAME_M := riscv
endif endif
LIBKVM += lib/assert.c LIBKVM += lib/assert.c
@ -196,10 +175,15 @@ TEST_GEN_PROGS_riscv += kvm_page_table_test
TEST_GEN_PROGS_riscv += set_memory_region_test TEST_GEN_PROGS_riscv += set_memory_region_test
TEST_GEN_PROGS_riscv += kvm_binary_stats_test TEST_GEN_PROGS_riscv += kvm_binary_stats_test
TEST_PROGS += $(TEST_PROGS_$(UNAME_M)) TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR))
TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M)) TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR))
TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(UNAME_M)) TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR))
LIBKVM += $(LIBKVM_$(UNAME_M)) LIBKVM += $(LIBKVM_$(ARCH_DIR))
# lib.mak defines $(OUTPUT), prepends $(OUTPUT)/ to $(TEST_GEN_PROGS), and most
# importantly defines, i.e. overwrites, $(CC) (unless `make -e` or `make CC=`,
# which causes the environment variable to override the makefile).
include ../lib.mk
INSTALL_HDR_PATH = $(top_srcdir)/usr INSTALL_HDR_PATH = $(top_srcdir)/usr
LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/ LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
@ -210,25 +194,23 @@ else
LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
endif endif
CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
-Wno-gnu-variable-sized-type-not-at-end \
-fno-builtin-memcmp -fno-builtin-memcpy -fno-builtin-memset \
-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \ -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
-I$(<D) -Iinclude/$(UNAME_M) -I ../rseq -I.. $(EXTRA_CFLAGS) \ -I$(<D) -Iinclude/$(ARCH_DIR) -I ../rseq -I.. $(EXTRA_CFLAGS) \
$(KHDR_INCLUDES) $(KHDR_INCLUDES)
no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \ no-pie-option := $(call try-run, echo 'int main(void) { return 0; }' | \
$(CC) -Werror -no-pie -x c - -o "$$TMP", -no-pie) $(CC) -Werror $(CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
# On s390, build the testcases KVM-enabled # On s390, build the testcases KVM-enabled
pgste-option = $(call try-run, echo 'int main() { return 0; }' | \ pgste-option = $(call try-run, echo 'int main(void) { return 0; }' | \
$(CC) -Werror -Wl$(comma)--s390-pgste -x c - -o "$$TMP",-Wl$(comma)--s390-pgste) $(CC) -Werror -Wl$(comma)--s390-pgste -x c - -o "$$TMP",-Wl$(comma)--s390-pgste)
LDLIBS += -ldl LDLIBS += -ldl
LDFLAGS += -pthread $(no-pie-option) $(pgste-option) LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
# After inclusion, $(OUTPUT) is defined and
# $(TEST_GEN_PROGS) starts with $(OUTPUT)/
include ../lib.mk
LIBKVM_C := $(filter %.c,$(LIBKVM)) LIBKVM_C := $(filter %.c,$(LIBKVM))
LIBKVM_S := $(filter %.S,$(LIBKVM)) LIBKVM_S := $(filter %.S,$(LIBKVM))
LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C)) LIBKVM_C_OBJ := $(patsubst %.c, $(OUTPUT)/%.o, $(LIBKVM_C))


@ -117,7 +117,7 @@ static void guest_cas(void)
GUEST_ASSERT(guest_check_lse()); GUEST_ASSERT(guest_check_lse());
asm volatile(".arch_extension lse\n" asm volatile(".arch_extension lse\n"
"casal %0, %1, [%2]\n" "casal %0, %1, [%2]\n"
:: "r" (0), "r" (TEST_DATA), "r" (guest_test_memory)); :: "r" (0ul), "r" (TEST_DATA), "r" (guest_test_memory));
val = READ_ONCE(*guest_test_memory); val = READ_ONCE(*guest_test_memory);
GUEST_ASSERT_EQ(val, TEST_DATA); GUEST_ASSERT_EQ(val, TEST_DATA);
} }


@ -14,11 +14,13 @@ static vm_vaddr_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa) void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{ {
virt_pg_map(vm, mmio_gpa, mmio_gpa); vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
vm->ucall_mmio_addr = mmio_gpa; vm->ucall_mmio_addr = mmio_gpa;
write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gpa); write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
} }
void ucall_arch_do_ucall(vm_vaddr_t uc) void ucall_arch_do_ucall(vm_vaddr_t uc)


@ -186,6 +186,15 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
_Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES, _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
"Missing new mode params?"); "Missing new mode params?");
/*
* Initializes vm->vpages_valid to match the canonical VA space of the
* architecture.
*
* The default implementation is valid for architectures which split the
* range addressed by a single page table into a low and high region
* based on the MSB of the VA. On architectures with this behavior
* the VA region spans [0, 2^(va_bits - 1)), [-(2^(va_bits - 1), -1].
*/
__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm) __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
{ {
sparsebit_set_num(vm->vpages_valid, sparsebit_set_num(vm->vpages_valid,
@ -1416,10 +1425,10 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
while (npages--) { while (npages--) {
virt_pg_map(vm, vaddr, paddr); virt_pg_map(vm, vaddr, paddr);
sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
vaddr += page_size; vaddr += page_size;
paddr += page_size; paddr += page_size;
sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
} }
} }


@ -4,6 +4,8 @@
#include "linux/bitmap.h" #include "linux/bitmap.h"
#include "linux/atomic.h" #include "linux/atomic.h"
#define GUEST_UCALL_FAILED -1
struct ucall_header { struct ucall_header {
DECLARE_BITMAP(in_use, KVM_MAX_VCPUS); DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
struct ucall ucalls[KVM_MAX_VCPUS]; struct ucall ucalls[KVM_MAX_VCPUS];
@ -41,7 +43,8 @@ static struct ucall *ucall_alloc(void)
struct ucall *uc; struct ucall *uc;
int i; int i;
GUEST_ASSERT(ucall_pool); if (!ucall_pool)
goto ucall_failed;
for (i = 0; i < KVM_MAX_VCPUS; ++i) { for (i = 0; i < KVM_MAX_VCPUS; ++i) {
if (!test_and_set_bit(i, ucall_pool->in_use)) { if (!test_and_set_bit(i, ucall_pool->in_use)) {
@ -51,7 +54,13 @@ static struct ucall *ucall_alloc(void)
} }
} }
GUEST_ASSERT(0); ucall_failed:
/*
* If the vCPU cannot grab a ucall structure, make a bare ucall with a
* magic value to signal to get_ucall() that things went sideways.
* GUEST_ASSERT() depends on ucall_alloc() and so cannot be used here.
*/
ucall_arch_do_ucall(GUEST_UCALL_FAILED);
return NULL; return NULL;
} }
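
ucall_alloc() can no longer GUEST_ASSERT() on failure because the assert machinery itself needs a ucall slot, so the guest instead issues a bare ucall carrying a magic value and the host side turns that sentinel into a hard test failure. A sketch of the sentinel handshake between a "guest" producer and a "host" consumer:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define UCALL_FAILED ((uintptr_t)-1)    /* magic value, as GUEST_UCALL_FAILED */

/* "Guest" side: report either a real ucall buffer or the failure sentinel. */
static uintptr_t guest_make_ucall(void *pool_slot)
{
    if (!pool_slot)
        return UCALL_FAILED;    /* can't assert here: no slot to assert with */
    return (uintptr_t)pool_slot;
}

/* "Host" side: convert the sentinel into a test failure. */
static void host_get_ucall(uintptr_t addr)
{
    assert(addr != UCALL_FAILED && "guest failed to allocate ucall struct");
    printf("got ucall at %#lx\n", (unsigned long)addr);
}

int main(void)
{
    static char slot[64];

    host_get_ucall(guest_make_ucall(slot));     /* ok */
    /* host_get_ucall(guest_make_ucall(NULL)); would abort the test */
    return 0;
}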
@ -93,6 +102,9 @@ uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
addr = ucall_arch_get_ucall(vcpu); addr = ucall_arch_get_ucall(vcpu);
if (addr) { if (addr) {
TEST_ASSERT(addr != (void *)GUEST_UCALL_FAILED,
"Guest failed to allocate ucall struct");
memcpy(uc, addr, sizeof(*uc)); memcpy(uc, addr, sizeof(*uc));
vcpu_run_complete_io(vcpu); vcpu_run_complete_io(vcpu);
} else { } else {


@ -1031,7 +1031,7 @@ bool is_amd_cpu(void)
void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits) void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
{ {
if (!kvm_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR)) { if (!kvm_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR)) {
*pa_bits == kvm_cpu_has(X86_FEATURE_PAE) ? 36 : 32; *pa_bits = kvm_cpu_has(X86_FEATURE_PAE) ? 36 : 32;
*va_bits = 32; *va_bits = 32;
} else { } else {
*pa_bits = kvm_cpu_property(X86_PROPERTY_MAX_PHY_ADDR); *pa_bits = kvm_cpu_property(X86_PROPERTY_MAX_PHY_ADDR);


@ -265,6 +265,9 @@ static uint64_t get_max_slots(struct vm_data *data, uint32_t host_page_size)
slots = data->nslots; slots = data->nslots;
while (--slots > 1) { while (--slots > 1) {
pages_per_slot = mempages / slots; pages_per_slot = mempages / slots;
if (!pages_per_slot)
continue;
rempages = mempages % pages_per_slot; rempages = mempages % pages_per_slot;
if (check_slot_pages(host_page_size, guest_page_size, if (check_slot_pages(host_page_size, guest_page_size,
pages_per_slot, rempages)) pages_per_slot, rempages))

Some files were not shown because too many files have changed in this diff Show More