Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

tools/testing/selftests/net/fcnal-test.sh
  d7a2fc1437 ("selftests: net: fcnal-test: check if FIPS mode is enabled")
  dd017c72dd ("selftests: fcnal: Test SO_DONTROUTE on TCP sockets.")
https://lore.kernel.org/all/5007b52c-dd16-dbf6-8d64-b9701bfa498b@tessares.net/
https://lore.kernel.org/all/20230619105427.4a0df9b3@canb.auug.org.au/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Author: Jakub Kicinski <kuba@kernel.org>
Date:   2023-06-22 18:40:38 -07:00
Commit: a7384f3918

226 changed files with 3201 additions and 1156 deletions

@@ -70,6 +70,8 @@ Baolin Wang <baolin.wang@linux.alibaba.com> <baolin.wang@unisoc.com>
 Baolin Wang <baolin.wang@linux.alibaba.com> <baolin.wang7@gmail.com>
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@sandisk.com>
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
+Ben Dooks <ben-linux@fluff.org> <ben.dooks@simtec.co.uk>
+Ben Dooks <ben-linux@fluff.org> <ben.dooks@sifive.com>
 Ben Gardner <bgardner@wabtec.com>
 Ben M Cahill <ben.m.cahill@intel.com>
 Ben Widawsky <bwidawsk@kernel.org> <ben@bwidawsk.net>

@@ -16,6 +16,24 @@ tested code over experimental code. We wish to extend these same
 principles to the RISC-V-related code that will be accepted for
 inclusion in the kernel.
 
+Patchwork
+---------
+
+RISC-V has a patchwork instance, where the status of patches can be checked:
+
+  https://patchwork.kernel.org/project/linux-riscv/list/
+
+If your patch does not appear in the default view, the RISC-V maintainers have
+likely either requested changes, or expect it to be applied to another tree.
+
+Automation runs against this patchwork instance, building/testing patches as
+they arrive. The automation applies patches against the current HEAD of the
+RISC-V `for-next` and `fixes` branches, depending on whether the patch has been
+detected as a fix. Failing those, it will use the RISC-V `master` branch.
+The exact commit to which a series has been applied will be noted on patchwork.
+Patches for which any of the checks fail are unlikely to be applied and in most
+cases will need to be resubmitted.
+
 Submit Checklist Addendum
 -------------------------
 We'll only accept patches for new modules or extensions if the

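The patchwork flow described above can also be checked from a script rather than the web view. A minimal sketch in C, assuming libcurl and patchwork's REST API (the /api/1.2/patches/ endpoint and its "project" query parameter are assumptions of this sketch, not anything this merge touches):

  /* Query patchwork for recent linux-riscv patches; JSON goes to stdout.
   * Build: cc query.c -lcurl */
  #include <curl/curl.h>

  int main(void)
  {
      CURLcode rc;
      CURL *curl = curl_easy_init();

      if (!curl)
          return 1;
      curl_easy_setopt(curl, CURLOPT_URL,
                       "https://patchwork.kernel.org/api/1.2/patches/"
                       "?project=linux-riscv&per_page=5");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      rc = curl_easy_perform(curl); /* default callback prints the body */
      curl_easy_cleanup(curl);
      return rc == CURLE_OK ? 0 : 1;
  }

The returned entries carry the per-patch state and check results that the section above says the automation updates.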
@@ -14,10 +14,6 @@ Programs can view status of the events via
 /sys/kernel/tracing/user_events_status and can both register and write
 data out via /sys/kernel/tracing/user_events_data.
 
-Programs can also use /sys/kernel/tracing/dynamic_events to register and
-delete user based events via the u: prefix. The format of the command to
-dynamic_events is the same as the ioctl with the u: prefix applied.
-
 Typically programs will register a set of events that they wish to expose to
 tools that can read trace_events (such as ftrace and perf). The registration
 process tells the kernel which address and bit to reflect if any tool has
@@ -144,6 +140,9 @@ its name. Delete will only succeed if there are no references left to the
 event (in both user and kernel space). User programs should use a separate file
 to request deletes than the one used for registration due to this.
 
+**NOTE:** By default events will auto-delete when there are no references left
+to the event. Flags in the future may change this logic.
+
 Unregistering
 -------------
 If after registering an event it is no longer wanted to be updated then it can

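To make the delete flow above concrete: a sketch of requesting deletion over a separate file descriptor, written against the user_events ABI this document describes (DIAG_IOCSDEL and <linux/user_events.h> come from the kernel's UAPI as an assumption, not from this merge):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/user_events.h>

  int main(void)
  {
      /* Use a separate fd for deletes, as recommended above. */
      int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

      if (fd < 0) {
          perror("open");
          return 1;
      }
      /*
       * Ask for the event named "test" to go away; this fails while
       * user or kernel references remain, and with the auto-delete
       * behavior noted above it is often unnecessary altogether.
       */
      if (ioctl(fd, DIAG_IOCSDEL, "test") < 0)
          perror("DIAG_IOCSDEL");
      close(fd);
      return 0;
  }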
@@ -9971,8 +9971,9 @@ M:	Miquel Raynal <miquel.raynal@bootlin.com>
 L:	linux-wpan@vger.kernel.org
 S:	Maintained
 W:	https://linux-wpan.org/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next.git
+Q:	https://patchwork.kernel.org/project/linux-wpan/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wpan/wpan.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wpan/wpan-next.git
 F:	Documentation/networking/ieee802154.rst
 F:	drivers/net/ieee802154/
 F:	include/linux/ieee802154.h
@@ -13283,10 +13284,11 @@ F:	drivers/memory/mtk-smi.c
 F:	include/soc/mediatek/smi.h
 
 MEDIATEK SWITCH DRIVER
-M:	Sean Wang <sean.wang@mediatek.com>
+M:	Arınç ÜNAL <arinc.unal@arinc9.com>
+M:	Daniel Golle <daniel@makrotopia.org>
 M:	Landen Chao <Landen.Chao@mediatek.com>
 M:	DENG Qingfang <dqfext@gmail.com>
-M:	Daniel Golle <daniel@makrotopia.org>
+M:	Sean Wang <sean.wang@mediatek.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mt7530-mdio.c
@@ -16399,7 +16401,7 @@ F:	Documentation/devicetree/bindings/pci/intel,keembay-pcie*
 F:	drivers/pci/controller/dwc/pcie-keembay.c
 
 PCIE DRIVER FOR INTEL LGM GW SOC
-M:	Rahul Tanwar <rtanwar@maxlinear.com>
+M:	Chuanhua Lei <lchuanhua@maxlinear.com>
 L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/intel-gw-pcie.yaml
@@ -17842,7 +17844,7 @@ F:	tools/testing/selftests/rtc/
 Real-time Linux Analysis (RTLA) tools
 M:	Daniel Bristot de Oliveira <bristot@kernel.org>
 M:	Steven Rostedt <rostedt@goodmis.org>
-L:	linux-trace-devel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/tools/rtla/
 F:	tools/tracing/rtla/
@@ -18412,7 +18414,7 @@ F:	drivers/infiniband/ulp/rtrs/
 RUNTIME VERIFICATION (RV)
 M:	Daniel Bristot de Oliveira <bristot@kernel.org>
 M:	Steven Rostedt <rostedt@goodmis.org>
-L:	linux-trace-devel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/trace/rv/
 F:	include/linux/rv.h

@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*

@@ -222,6 +222,11 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 	return false;
 }
 
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
+
 /* PMU Version in DFR Register */
 #define ARMV8_PMU_DFR_VER_NI        0
 #define ARMV8_PMU_DFR_VER_V3P4      0x5

@@ -67,7 +67,7 @@ static int __init hyperv_init(void)
 	if (ret)
 		return ret;
 
-	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "arm64/hyperv_init:online",
+	ret = cpuhp_setup_state(CPUHP_AP_HYPERV_ONLINE, "arm64/hyperv_init:online",
 				hv_common_cpu_init, hv_common_cpu_die);
 	if (ret < 0) {
 		hv_common_free();

@@ -699,6 +699,8 @@ struct kvm_vcpu_arch {
 #define SYSREGS_ON_CPU		__vcpu_single_flag(sflags, BIT(4))
 /* Software step state is Active-pending */
 #define DBG_SS_ACTIVE_PENDING	__vcpu_single_flag(sflags, BIT(5))
+/* PMUSERENR for the guest EL0 is on physical CPU */
+#define PMUSERENR_ON_CPU	__vcpu_single_flag(sflags, BIT(6))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
@@ -1065,9 +1067,14 @@ void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_KVM
 void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr);
 void kvm_clr_pmu_events(u32 clr);
+bool kvm_set_pmuserenr(u64 val);
 #else
 static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u32 clr) {}
+static inline bool kvm_set_pmuserenr(u64 val)
+{
+	return false;
+}
 #endif
 
 void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);

@@ -82,8 +82,14 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	 * EL1 instead of being trapped to EL2.
 	 */
 	if (kvm_arm_support_pmu_v3()) {
+		struct kvm_cpu_context *hctxt;
+
 		write_sysreg(0, pmselr_el0);
+
+		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+		ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0);
 		write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+		vcpu_set_flag(vcpu, PMUSERENR_ON_CPU);
 	}
 
 	vcpu->arch.mdcr_el2_host = read_sysreg(mdcr_el2);
@@ -106,8 +112,13 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 	write_sysreg(vcpu->arch.mdcr_el2_host, mdcr_el2);
 
 	write_sysreg(0, hstr_el2);
-	if (kvm_arm_support_pmu_v3())
-		write_sysreg(0, pmuserenr_el0);
+	if (kvm_arm_support_pmu_v3()) {
+		struct kvm_cpu_context *hctxt;
+
+		hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+		write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0);
+		vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU);
+	}
 
 	if (cpus_have_final_cap(ARM64_SME)) {
 		sysreg_clear_set_s(SYS_HFGRTR_EL2, 0,
@@ -92,14 +92,28 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 }
 NOKPROBE_SYMBOL(__deactivate_traps);
 
+/*
+ * Disable IRQs in {activate,deactivate}_traps_vhe_{load,put}() to
+ * prevent a race condition between context switching of PMUSERENR_EL0
+ * in __{activate,deactivate}_traps_common() and IPIs that attempts to
+ * update PMUSERENR_EL0. See also kvm_set_pmuserenr().
+ */
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu)
 {
+	unsigned long flags;
+
+	local_irq_save(flags);
 	__activate_traps_common(vcpu);
+	local_irq_restore(flags);
 }
 
 void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
 {
+	unsigned long flags;
+
+	local_irq_save(flags);
 	__deactivate_traps_common(vcpu);
+	local_irq_restore(flags);
 }
 
 static const exit_handler_fn hyp_exit_handlers[] = {

@@ -700,7 +700,25 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
 	mutex_lock(&arm_pmus_lock);
 
-	cpu = smp_processor_id();
+	/*
+	 * It is safe to use a stale cpu to iterate the list of PMUs so long as
+	 * the same value is used for the entirety of the loop. Given this, and
+	 * the fact that no percpu data is used for the lookup there is no need
+	 * to disable preemption.
+	 *
+	 * It is still necessary to get a valid cpu, though, to probe for the
+	 * default PMU instance as userspace is not required to specify a PMU
+	 * type. In order to uphold the preexisting behavior KVM selects the
+	 * PMU instance for the core where the first call to the
+	 * KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case
+	 * would be a user with disdain of all things big.LITTLE that affines
+	 * the VMM to a particular cluster of cores.
+	 *
+	 * In any case, userspace should just do the sane thing and use the UAPI
+	 * to select a PMU type directly. But, be wary of the baggage being
+	 * carried here.
+	 */
+	cpu = raw_smp_processor_id();
 	list_for_each_entry(entry, &arm_pmus, entry) {
 		tmp = entry->arm_pmu;

@@ -209,3 +209,30 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
 	kvm_vcpu_pmu_enable_el0(events_host);
 	kvm_vcpu_pmu_disable_el0(events_guest);
 }
+
+/*
+ * With VHE, keep track of the PMUSERENR_EL0 value for the host EL0 on the pCPU
+ * where PMUSERENR_EL0 for the guest is loaded, since PMUSERENR_EL0 is switched
+ * to the value for the guest on vcpu_load(). The value for the host EL0
+ * will be restored on vcpu_put(), before returning to userspace.
+ * This isn't necessary for nVHE, as the register is context switched for
+ * every guest enter/exit.
+ *
+ * Return true if KVM takes care of the register. Otherwise return false.
+ */
+bool kvm_set_pmuserenr(u64 val)
+{
+	struct kvm_cpu_context *hctxt;
+	struct kvm_vcpu *vcpu;
+
+	if (!kvm_arm_support_pmu_v3() || !has_vhe())
+		return false;
+
+	vcpu = kvm_get_running_vcpu();
+	if (!vcpu || !vcpu_get_flag(vcpu, PMUSERENR_ON_CPU))
+		return false;
+
+	hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+	ctxt_sys_reg(hctxt, PMUSERENR_EL0) = val;
+	return true;
+}

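The contract documented on kvm_set_pmuserenr() implies a caller on the host PMU side. A hypothetical sketch of such a caller (update_pmuserenr() and its irqs-disabled context are illustrative assumptions, not code from this merge):

  /* Offer the new PMUSERENR_EL0 value to KVM first; only write the
   * register directly when no guest value is resident on this CPU. */
  static void update_pmuserenr(u64 val)
  {
      lockdep_assert_irqs_disabled();

      if (kvm_set_pmuserenr(val))
          return; /* KVM stashed it in the host context for vcpu_put() */

      write_sysreg(val, pmuserenr_el0);
  }

This pairs with the IRQ-disabled sections added in activate/deactivate_traps_vhe_{load,put}() above, which is what keeps the flag check and the register write race-free.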
@@ -446,6 +446,7 @@ int vgic_lazy_init(struct kvm *kvm)
 int kvm_vgic_map_resources(struct kvm *kvm)
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
+	enum vgic_type type;
 	gpa_t dist_base;
 	int ret = 0;
 
@@ -460,10 +461,13 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 	if (!irqchip_in_kernel(kvm))
 		goto out;
 
-	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2)
+	if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) {
 		ret = vgic_v2_map_resources(kvm);
-	else
+		type = VGIC_V2;
+	} else {
 		ret = vgic_v3_map_resources(kvm);
+		type = VGIC_V3;
+	}
 
 	if (ret) {
 		__kvm_vgic_destroy(kvm);
@@ -473,8 +477,7 @@ int kvm_vgic_map_resources(struct kvm *kvm)
 	dist_base = dist->vgic_dist_base;
 	mutex_unlock(&kvm->arch.config_lock);
 
-	ret = vgic_register_dist_iodev(kvm, dist_base,
-				       kvm_vgic_global_state.type);
+	ret = vgic_register_dist_iodev(kvm, dist_base, type);
 	if (ret) {
 		kvm_err("Unable to register VGIC dist MMIO regions\n");
 		kvm_vgic_destroy(kvm);

@@ -90,10 +90,6 @@
 #include <asm/asmregs.h>
 #include <asm/psw.h>
 
-	sp	=	30
-	gp	=	27
-	ipsw	=	22
-
 /*
  * We provide two versions of each macro to convert from physical
  * to virtual and vice versa. The "_r1" versions take one argument

@@ -795,12 +795,20 @@ void exit_lazy_flush_tlb(struct mm_struct *mm, bool always_flush)
 		goto out;
 
 	if (current->active_mm == mm) {
+		unsigned long flags;
+
 		WARN_ON_ONCE(current->mm != NULL);
-		/* Is a kernel thread and is using mm as the lazy tlb */
+
+		/*
+		 * It is a kernel thread and is using mm as the lazy tlb, so
+		 * switch it to init_mm. This is not always called from IPI
+		 * (e.g., flush_type_needed), so must disable irqs.
+		 */
+		local_irq_save(flags);
 		mmgrab_lazy_tlb(&init_mm);
 		current->active_mm = &init_mm;
 		switch_mm_irqs_off(mm, &init_mm, current);
 		mmdrop_lazy_tlb(mm);
+		local_irq_restore(flags);
 	}
 
 	/*

@@ -416,7 +416,7 @@ void __init hyperv_init(void)
 		goto free_vp_assist_page;
 	}
 
-	cpuhp = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv_init:online",
+	cpuhp = cpuhp_setup_state(CPUHP_AP_HYPERV_ONLINE, "x86/hyperv_init:online",
 				  hv_cpu_init, hv_cpu_die);
 	if (cpuhp < 0)
 		goto free_ghcb_page;

@@ -20,6 +20,8 @@ void __init hv_vtl_init_platform(void)
 {
 	pr_info("Linux runs in Hyper-V Virtual Trust Level\n");
 
+	x86_platform.realmode_reserve = x86_init_noop;
+	x86_platform.realmode_init = x86_init_noop;
 	x86_init.irqs.pre_vector_init = x86_init_noop;
 	x86_init.timers.timer_init = x86_init_noop;

@@ -2570,7 +2570,7 @@ out_image:
 	}
 
 	if (bpf_jit_enable > 1)
-		bpf_jit_dump(prog->len, proglen, pass + 1, image);
+		bpf_jit_dump(prog->len, proglen, pass + 1, rw_image);
 
 	if (image) {
 		if (!prog->is_func || extra_pass) {

@@ -34,6 +34,8 @@
 #include "blk-ioprio.h"
 #include "blk-throttle.h"
 
+static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu);
+
 /*
  * blkcg_pol_mutex protects blkcg_policy[] and policy [de]activation.
  * blkcg_pol_register_mutex nests outside of it and synchronizes entire
@@ -56,6 +58,8 @@ static LIST_HEAD(all_blkcgs);		/* protected by blkcg_pol_mutex */
 
 bool blkcg_debug_stats = false;
 
+static DEFINE_RAW_SPINLOCK(blkg_stat_lock);
+
 #define BLKG_DESTROY_BATCH_SIZE  64
 
 /*
@@ -163,10 +167,20 @@ static void blkg_free(struct blkcg_gq *blkg)
 static void __blkg_release(struct rcu_head *rcu)
 {
 	struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
+	struct blkcg *blkcg = blkg->blkcg;
+	int cpu;
 
 #ifdef CONFIG_BLK_CGROUP_PUNT_BIO
 	WARN_ON(!bio_list_empty(&blkg->async_bios));
 #endif
+	/*
+	 * Flush all the non-empty percpu lockless lists before releasing
+	 * us, given these stat belongs to us.
+	 *
+	 * blkg_stat_lock is for serializing blkg stat update
+	 */
+	for_each_possible_cpu(cpu)
+		__blkcg_rstat_flush(blkcg, cpu);
 
 	/* release the blkcg and parent blkg refs this blkg has been holding */
 	css_put(&blkg->blkcg->css);
@@ -951,23 +965,26 @@ static void blkcg_iostat_update(struct blkcg_gq *blkg, struct blkg_iostat *cur,
 	u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
 }
 
-static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+static void __blkcg_rstat_flush(struct blkcg *blkcg, int cpu)
 {
-	struct blkcg *blkcg = css_to_blkcg(css);
 	struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
 	struct llist_node *lnode;
 	struct blkg_iostat_set *bisc, *next_bisc;
 
-	/* Root-level stats are sourced from system-wide IO stats */
-	if (!cgroup_parent(css->cgroup))
-		return;
-
 	rcu_read_lock();
 
 	lnode = llist_del_all(lhead);
 	if (!lnode)
 		goto out;
 
+	/*
+	 * For covering concurrent parent blkg update from blkg_release().
+	 *
+	 * When flushing from cgroup, cgroup_rstat_lock is always held, so
+	 * this lock won't cause contention most of time.
+	 */
+	raw_spin_lock(&blkg_stat_lock);
+
 	/*
 	 * Iterate only the iostat_cpu's queued in the lockless list.
 	 */
@@ -991,13 +1008,19 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
 		if (parent && parent->parent)
 			blkcg_iostat_update(parent, &blkg->iostat.cur,
 					    &blkg->iostat.last);
-		percpu_ref_put(&blkg->refcnt);
 	}
-
+	raw_spin_unlock(&blkg_stat_lock);
 out:
 	rcu_read_unlock();
 }
 
+static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
+{
+	/* Root-level stats are sourced from system-wide IO stats */
+	if (cgroup_parent(css->cgroup))
+		__blkcg_rstat_flush(css_to_blkcg(css), cpu);
+}
+
 /*
  * We source root cgroup stats from the system-wide stats to avoid
  * tracking the same information twice and incurring overhead when no
@@ -2075,7 +2098,6 @@ void blk_cgroup_bio_start(struct bio *bio)
 		llist_add(&bis->lnode, lhead);
 		WRITE_ONCE(bis->lqueued, true);
-		percpu_ref_get(&bis->blkg->refcnt);
 	}
 
 	u64_stats_update_end_irqrestore(&bis->sync, flags);

@@ -97,6 +97,7 @@ static int qaic_open(struct drm_device *dev, struct drm_file *file)
 
 cleanup_usr:
 	cleanup_srcu_struct(&usr->qddev_lock);
+	ida_free(&qaic_usrs, usr->handle);
 free_usr:
 	kfree(usr);
 dev_unlock:
@@ -224,6 +225,9 @@ static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id)
 	struct qaic_user *usr;
 
 	qddev = qdev->qddev;
+	qdev->qddev = NULL;
+	if (!qddev)
+		return;
 
 	/*
 	 * Existing users get unresolvable errors till they close FDs.

@@ -101,8 +101,6 @@ acpi_status
 acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
 		       acpi_event_status *event_status);
 
-acpi_status acpi_hw_disable_all_gpes(void);
-
 acpi_status acpi_hw_enable_all_runtime_gpes(void);
 
 acpi_status acpi_hw_enable_all_wakeup_gpes(void);

@@ -636,11 +636,19 @@ static int acpi_suspend_enter(suspend_state_t pm_state)
 	}
 
 	/*
-	 * Disable and clear GPE status before interrupt is enabled. Some GPEs
-	 * (like wakeup GPE) haven't handler, this can avoid such GPE misfire.
-	 * acpi_leave_sleep_state will reenable specific GPEs later
+	 * Disable all GPE and clear their status bits before interrupts are
+	 * enabled. Some GPEs (like wakeup GPEs) have no handlers and this can
+	 * prevent them from producing spurious interrups.
+	 *
+	 * acpi_leave_sleep_state() will reenable specific GPEs later.
+	 *
+	 * Because this code runs on one CPU with disabled interrupts (all of
+	 * the other CPUs are offline at this time), it need not acquire any
+	 * sleeping locks which may trigger an implicit preemption point even
+	 * if there is no contention, so avoid doing that by using a low-level
+	 * library routine here.
	 */
-	acpi_disable_all_gpes();
+	acpi_hw_disable_all_gpes();
 	/* Allow EC transactions to happen. */
 	acpi_ec_unblock_transactions();

@@ -5348,7 +5348,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
 
 	mutex_init(&ap->scsi_scan_mutex);
 	INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
-	INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
+	INIT_DELAYED_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
 	INIT_LIST_HEAD(&ap->eh_done_q);
 	init_waitqueue_head(&ap->eh_wait_q);
 	init_completion(&ap->park_req_pending);
@@ -5954,6 +5954,7 @@ static void ata_port_detach(struct ata_port *ap)
 	WARN_ON(!(ap->pflags & ATA_PFLAG_UNLOADED));
 
 	cancel_delayed_work_sync(&ap->hotplug_task);
+	cancel_delayed_work_sync(&ap->scsi_rescan_task);
 
  skip_eh:
 	/* clean up zpodd on port removal */

@@ -2984,7 +2984,7 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
 			ehc->i.flags |= ATA_EHI_SETMODE;
 
 			/* schedule the scsi_rescan_device() here */
-			schedule_work(&(ap->scsi_rescan_task));
+			schedule_delayed_work(&ap->scsi_rescan_task, 0);
 		} else if (dev->class == ATA_DEV_UNKNOWN &&
 			   ehc->tries[dev->devno] &&
 			   ata_class_enabled(ehc->classes[dev->devno])) {

@@ -4597,10 +4597,11 @@ int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel,
 void ata_scsi_dev_rescan(struct work_struct *work)
 {
 	struct ata_port *ap =
-		container_of(work, struct ata_port, scsi_rescan_task);
+		container_of(work, struct ata_port, scsi_rescan_task.work);
 	struct ata_link *link;
 	struct ata_device *dev;
 	unsigned long flags;
+	bool delay_rescan = false;
 
 	mutex_lock(&ap->scsi_scan_mutex);
 	spin_lock_irqsave(ap->lock, flags);
@@ -4614,6 +4615,21 @@ void ata_scsi_dev_rescan(struct work_struct *work)
 			if (scsi_device_get(sdev))
 				continue;
 
+			/*
+			 * If the rescan work was scheduled because of a resume
+			 * event, the port is already fully resumed, but the
+			 * SCSI device may not yet be fully resumed. In such
+			 * case, executing scsi_rescan_device() may cause a
+			 * deadlock with the PM code on device_lock(). Prevent
+			 * this by giving up and retrying rescan after a short
+			 * delay.
+			 */
+			delay_rescan = sdev->sdev_gendev.power.is_suspended;
+			if (delay_rescan) {
+				scsi_device_put(sdev);
+				break;
+			}
+
 			spin_unlock_irqrestore(ap->lock, flags);
 			scsi_rescan_device(&(sdev->sdev_gendev));
 			scsi_device_put(sdev);
@@ -4623,4 +4639,8 @@ void ata_scsi_dev_rescan(struct work_struct *work)
 
 	spin_unlock_irqrestore(ap->lock, flags);
 	mutex_unlock(&ap->scsi_scan_mutex);
+
+	if (delay_rescan)
+		schedule_delayed_work(&ap->scsi_rescan_task,
+				      msecs_to_jiffies(5));
 }

@@ -660,7 +660,7 @@ static const struct regmap_bus regmap_spi_avmm_bus = {
 	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
 	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
 	.max_raw_read = SPI_AVMM_VAL_SIZE * MAX_READ_CNT,
-	.max_raw_write = SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
+	.max_raw_write = SPI_AVMM_REG_SIZE + SPI_AVMM_VAL_SIZE * MAX_WRITE_CNT,
 	.free_context = spi_avmm_bridge_ctx_free,
 };

@@ -348,63 +348,33 @@ static inline void virtblk_request_done(struct request *req)
 	blk_mq_end_request(req, status);
 }
 
-static void virtblk_complete_batch(struct io_comp_batch *iob)
-{
-	struct request *req;
-
-	rq_list_for_each(&iob->req_list, req) {
-		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
-		virtblk_cleanup_cmd(req);
-	}
-	blk_mq_end_request_batch(iob);
-}
-
-static int virtblk_handle_req(struct virtio_blk_vq *vq,
-			      struct io_comp_batch *iob)
-{
-	struct virtblk_req *vbr;
-	int req_done = 0;
-	unsigned int len;
-
-	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
-		struct request *req = blk_mq_rq_from_pdu(vbr);
-
-		if (likely(!blk_should_fake_timeout(req->q)) &&
-		    !blk_mq_complete_request_remote(req) &&
-		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
-					 virtblk_complete_batch))
-			virtblk_request_done(req);
-		req_done++;
-	}
-
-	return req_done;
-}
-
 static void virtblk_done(struct virtqueue *vq)
 {
 	struct virtio_blk *vblk = vq->vdev->priv;
-	struct virtio_blk_vq *vblk_vq = &vblk->vqs[vq->index];
-	int req_done = 0;
+	bool req_done = false;
+	int qid = vq->index;
+	struct virtblk_req *vbr;
 	unsigned long flags;
-	DEFINE_IO_COMP_BATCH(iob);
+	unsigned int len;
 
-	spin_lock_irqsave(&vblk_vq->lock, flags);
+	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
 	do {
 		virtqueue_disable_cb(vq);
-		req_done += virtblk_handle_req(vblk_vq, &iob);
+		while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {
+			struct request *req = blk_mq_rq_from_pdu(vbr);
 
+			if (likely(!blk_should_fake_timeout(req->q)))
+				blk_mq_complete_request(req);
+			req_done = true;
+		}
 		if (unlikely(virtqueue_is_broken(vq)))
 			break;
 	} while (!virtqueue_enable_cb(vq));
 
-	if (req_done) {
-		if (!rq_list_empty(iob.req_list))
-			iob.complete(&iob);
-
-		/* In case queue is stopped waiting for more buffers. */
+	/* In case queue is stopped waiting for more buffers. */
+	if (req_done)
 		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
-	}
-	spin_unlock_irqrestore(&vblk_vq->lock, flags);
+	spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
 }
 
 static void virtio_commit_rqs(struct blk_mq_hw_ctx *hctx)
@@ -1283,15 +1253,37 @@ static void virtblk_map_queues(struct blk_mq_tag_set *set)
 	}
 }
 
+static void virtblk_complete_batch(struct io_comp_batch *iob)
+{
+	struct request *req;
+
+	rq_list_for_each(&iob->req_list, req) {
+		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
+		virtblk_cleanup_cmd(req);
+	}
+	blk_mq_end_request_batch(iob);
+}
+
 static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
 {
 	struct virtio_blk *vblk = hctx->queue->queuedata;
 	struct virtio_blk_vq *vq = get_virtio_blk_vq(hctx);
+	struct virtblk_req *vbr;
 	unsigned long flags;
+	unsigned int len;
 	int found = 0;
 
 	spin_lock_irqsave(&vq->lock, flags);
-	found = virtblk_handle_req(vq, iob);
+
+	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
+		struct request *req = blk_mq_rq_from_pdu(vbr);
+
+		found++;
+		if (!blk_mq_complete_request_remote(req) &&
+		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
+					 virtblk_complete_batch))
+			virtblk_request_done(req);
+	}
 
 	if (found)
 		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);

@@ -119,7 +119,10 @@ static int clk_composite_determine_rate(struct clk_hw *hw,
 			if (ret)
 				continue;
 
-			rate_diff = abs(req->rate - tmp_req.rate);
+			if (req->rate >= tmp_req.rate)
+				rate_diff = req->rate - tmp_req.rate;
+			else
+				rate_diff = tmp_req.rate - req->rate;
 
 			if (!rate_diff || !req->best_parent_hw
 				|| best_rate_diff > rate_diff) {

@@ -40,7 +40,7 @@ static struct clk_hw *loongson2_clk_register(struct device *dev,
 {
 	int ret;
 	struct clk_hw *hw;
-	struct clk_init_data init;
+	struct clk_init_data init = { };
 
 	hw = devm_kzalloc(dev, sizeof(*hw), GFP_KERNEL);
 	if (!hw)

@@ -23,6 +23,7 @@
 static DEFINE_SPINLOCK(mt8365_clk_lock);
 
 static const struct mtk_fixed_clk top_fixed_clks[] = {
+	FIXED_CLK(CLK_TOP_CLK_NULL, "clk_null", NULL, 0),
 	FIXED_CLK(CLK_TOP_I2S0_BCK, "i2s0_bck", NULL, 26000000),
 	FIXED_CLK(CLK_TOP_DSI0_LNTC_DSICK, "dsi0_lntc_dsick", "clk26m",
 		  75000000),
@@ -559,6 +560,14 @@ static const struct mtk_clk_divider top_adj_divs[] = {
 		  0x324, 16, 8, CLK_DIVIDER_ROUND_CLOSEST),
 	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV3, "apll12_ck_div3", "apll_i2s3_sel",
 		  0x324, 24, 8, CLK_DIVIDER_ROUND_CLOSEST),
+	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV4, "apll12_ck_div4", "apll_tdmout_sel",
+		  0x328, 0, 8, CLK_DIVIDER_ROUND_CLOSEST),
+	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV4B, "apll12_ck_div4b", "apll_tdmout_sel",
+		  0x328, 8, 8, CLK_DIVIDER_ROUND_CLOSEST),
+	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV5, "apll12_ck_div5", "apll_tdmin_sel",
+		  0x328, 16, 8, CLK_DIVIDER_ROUND_CLOSEST),
+	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV5B, "apll12_ck_div5b", "apll_tdmin_sel",
+		  0x328, 24, 8, CLK_DIVIDER_ROUND_CLOSEST),
 	DIV_ADJ_F(CLK_TOP_APLL12_CK_DIV6, "apll12_ck_div6", "apll_spdif_sel",
 		  0x32c, 0, 8, CLK_DIVIDER_ROUND_CLOSEST),
 };
@@ -583,15 +592,15 @@ static const struct mtk_gate_regs top2_cg_regs = {
 
 #define GATE_TOP0(_id, _name, _parent, _shift)			\
 	GATE_MTK(_id, _name, _parent, &top0_cg_regs,		\
-		 _shift, &mtk_clk_gate_ops_no_setclr_inv)
+		 _shift, &mtk_clk_gate_ops_no_setclr)
 
 #define GATE_TOP1(_id, _name, _parent, _shift)			\
 	GATE_MTK(_id, _name, _parent, &top1_cg_regs,		\
-		 _shift, &mtk_clk_gate_ops_no_setclr)
+		 _shift, &mtk_clk_gate_ops_no_setclr_inv)
 
 #define GATE_TOP2(_id, _name, _parent, _shift)			\
 	GATE_MTK(_id, _name, _parent, &top2_cg_regs,		\
-		 _shift, &mtk_clk_gate_ops_no_setclr)
+		 _shift, &mtk_clk_gate_ops_no_setclr_inv)
 
 static const struct mtk_gate top_clk_gates[] = {
 	GATE_TOP0(CLK_TOP_CONN_32K, "conn_32k", "clk32k", 10),
@@ -696,6 +705,7 @@ static const struct mtk_gate ifr_clks[] = {
 	GATE_IFR3(CLK_IFR_GCPU, "ifr_gcpu", "axi_sel", 8),
 	GATE_IFR3(CLK_IFR_TRNG, "ifr_trng", "axi_sel", 9),
 	GATE_IFR3(CLK_IFR_AUXADC, "ifr_auxadc", "clk26m", 10),
+	GATE_IFR3(CLK_IFR_CPUM, "ifr_cpum", "clk26m", 11),
 	GATE_IFR3(CLK_IFR_AUXADC_MD, "ifr_auxadc_md", "clk26m", 14),
 	GATE_IFR3(CLK_IFR_AP_DMA, "ifr_ap_dma", "axi_sel", 18),
 	GATE_IFR3(CLK_IFR_DEBUGSYS, "ifr_debugsys", "axi_sel", 24),
@@ -717,6 +727,8 @@ static const struct mtk_gate ifr_clks[] = {
 	GATE_IFR5(CLK_IFR_PWRAP_TMR, "ifr_pwrap_tmr", "clk26m", 12),
 	GATE_IFR5(CLK_IFR_PWRAP_SPI, "ifr_pwrap_spi", "clk26m", 13),
 	GATE_IFR5(CLK_IFR_PWRAP_SYS, "ifr_pwrap_sys", "clk26m", 14),
+	GATE_MTK_FLAGS(CLK_IFR_MCU_PM_BK, "ifr_mcu_pm_bk", NULL, &ifr5_cg_regs,
+		       17, &mtk_clk_gate_ops_setclr, CLK_IGNORE_UNUSED),
 	GATE_IFR5(CLK_IFR_IRRX_26M, "ifr_irrx_26m", "clk26m", 22),
 	GATE_IFR5(CLK_IFR_IRRX_32K, "ifr_irrx_32k", "clk32k", 23),
 	GATE_IFR5(CLK_IFR_I2C0_AXI, "ifr_i2c0_axi", "i2c_sel", 24),

@@ -164,7 +164,7 @@ void pxa3xx_clk_update_accr(u32 disable, u32 enable, u32 xclkcfg, u32 mask)
 	accr &= ~disable;
 	accr |= enable;
 
-	writel(accr, ACCR);
+	writel(accr, clk_regs + ACCR);
 	if (xclkcfg)
 		__asm__("mcr p14, 0, %0, c6, c0, 0\n" : : "r"(xclkcfg));

@@ -12,7 +12,6 @@
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/udmabuf.h>
-#include <linux/hugetlb.h>
 #include <linux/vmalloc.h>
 #include <linux/iosys-map.h>
 
@@ -207,9 +206,7 @@ static long udmabuf_create(struct miscdevice *device,
 	struct udmabuf *ubuf;
 	struct dma_buf *buf;
 	pgoff_t pgoff, pgcnt, pgidx, pgbuf = 0, pglimit;
-	struct page *page, *hpage = NULL;
-	pgoff_t subpgoff, maxsubpgs;
-	struct hstate *hpstate;
+	struct page *page;
 	int seals, ret = -EINVAL;
 	u32 i, flags;
 
@@ -245,7 +242,7 @@ static long udmabuf_create(struct miscdevice *device,
 		if (!memfd)
 			goto err;
 		mapping = memfd->f_mapping;
-		if (!shmem_mapping(mapping) && !is_file_hugepages(memfd))
+		if (!shmem_mapping(mapping))
 			goto err;
 		seals = memfd_fcntl(memfd, F_GET_SEALS, 0);
 		if (seals == -EINVAL)
@@ -256,48 +253,16 @@ static long udmabuf_create(struct miscdevice *device,
 			goto err;
 		pgoff = list[i].offset >> PAGE_SHIFT;
 		pgcnt = list[i].size >> PAGE_SHIFT;
-		if (is_file_hugepages(memfd)) {
-			hpstate = hstate_file(memfd);
-			pgoff = list[i].offset >> huge_page_shift(hpstate);
-			subpgoff = (list[i].offset &
-				    ~huge_page_mask(hpstate)) >> PAGE_SHIFT;
-			maxsubpgs = huge_page_size(hpstate) >> PAGE_SHIFT;
-		}
 		for (pgidx = 0; pgidx < pgcnt; pgidx++) {
-			if (is_file_hugepages(memfd)) {
-				if (!hpage) {
-					hpage = find_get_page_flags(mapping, pgoff,
-								    FGP_ACCESSED);
-					if (!hpage) {
-						ret = -EINVAL;
-						goto err;
-					}
-				}
-				page = hpage + subpgoff;
-				get_page(page);
-				subpgoff++;
-				if (subpgoff == maxsubpgs) {
-					put_page(hpage);
-					hpage = NULL;
-					subpgoff = 0;
-					pgoff++;
-				}
-			} else {
-				page = shmem_read_mapping_page(mapping,
-							       pgoff + pgidx);
-				if (IS_ERR(page)) {
-					ret = PTR_ERR(page);
-					goto err;
-				}
+			page = shmem_read_mapping_page(mapping, pgoff + pgidx);
+			if (IS_ERR(page)) {
+				ret = PTR_ERR(page);
+				goto err;
 			}
 			ubuf->pages[pgbuf++] = page;
 		}
 		fput(memfd);
 		memfd = NULL;
-		if (hpage) {
-			put_page(hpage);
-			hpage = NULL;
-		}
 	}
 
 	exp_info.ops = &udmabuf_ops;

@@ -2124,6 +2124,7 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
 			   file, blocks, le32_to_cpu(blk->len),
 			   type, le32_to_cpu(blk->id));
 
+		region_name = cs_dsp_mem_region_name(type);
 		mem = cs_dsp_find_region(dsp, type);
 		if (!mem) {
 			cs_dsp_err(dsp, "No base for region %x\n", type);
@@ -2147,8 +2148,8 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
 				reg = dsp->ops->region_to_reg(mem, reg);
 				reg += offset;
 			} else {
-				cs_dsp_err(dsp, "No %x for algorithm %x\n",
-					   type, le32_to_cpu(blk->id));
+				cs_dsp_err(dsp, "No %s for algorithm %x\n",
+					   region_name, le32_to_cpu(blk->id));
 			}
 			break;

@@ -361,24 +361,6 @@ static void __init efi_debugfs_init(void)
 static inline void efi_debugfs_init(void) {}
 #endif
 
-static void refresh_nv_rng_seed(struct work_struct *work)
-{
-	u8 seed[EFI_RANDOM_SEED_SIZE];
-
-	get_random_bytes(seed, sizeof(seed));
-	efi.set_variable(L"RandomSeed", &LINUX_EFI_RANDOM_SEED_TABLE_GUID,
-			 EFI_VARIABLE_NON_VOLATILE | EFI_VARIABLE_BOOTSERVICE_ACCESS |
-			 EFI_VARIABLE_RUNTIME_ACCESS, sizeof(seed), seed);
-	memzero_explicit(seed, sizeof(seed));
-}
-
-static int refresh_nv_rng_seed_notification(struct notifier_block *nb, unsigned long action, void *data)
-{
-	static DECLARE_WORK(work, refresh_nv_rng_seed);
-
-	schedule_work(&work);
-	return NOTIFY_DONE;
-}
-static struct notifier_block refresh_nv_rng_seed_nb = { .notifier_call = refresh_nv_rng_seed_notification };
-
 /*
  * We register the efi subsystem with the firmware subsystem and the
  * efivars subsystem with the efi subsystem, if the system was booted with
@@ -451,9 +433,6 @@ static int __init efisubsys_init(void)
 		platform_device_register_simple("efi_secret", 0, NULL, 0);
 #endif
 
-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_SET_VARIABLE))
-		execute_with_initialized_rng(&refresh_nv_rng_seed_nb);
-
 	return 0;
 
 err_remove_group:

@@ -1615,6 +1615,7 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
 	0x5874,
 	0x5940,
 	0x5941,
+	0x5b70,
 	0x5b72,
 	0x5b73,
 	0x5b74,

@@ -140,7 +140,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
 
 		if (flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED)
 			places[c].lpfn = visible_pfn;
-		else if (adev->gmc.real_vram_size != adev->gmc.visible_vram_size)
+		else
 			places[c].flags |= TTM_PL_FLAG_TOPDOWN;
 
 		if (flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)

@@ -3548,6 +3548,9 @@ static ssize_t amdgpu_psp_vbflash_read(struct file *filp, struct kobject *kobj,
 	void *fw_pri_cpu_addr;
 	int ret;
 
+	if (adev->psp.vbflash_image_size == 0)
+		return -EINVAL;
+
 	dev_info(adev->dev, "VBIOS flash to PSP started");
 
 	ret = amdgpu_bo_create_kernel(adev, adev->psp.vbflash_image_size,
@@ -3599,13 +3602,13 @@ static ssize_t amdgpu_psp_vbflash_status(struct device *dev,
 }
 
 static const struct bin_attribute psp_vbflash_bin_attr = {
-	.attr = {.name = "psp_vbflash", .mode = 0664},
+	.attr = {.name = "psp_vbflash", .mode = 0660},
 	.size = 0,
 	.write = amdgpu_psp_vbflash_write,
 	.read = amdgpu_psp_vbflash_read,
 };
 
-static DEVICE_ATTR(psp_vbflash_status, 0444, amdgpu_psp_vbflash_status, NULL);
+static DEVICE_ATTR(psp_vbflash_status, 0440, amdgpu_psp_vbflash_status, NULL);
 
 int amdgpu_psp_sysfs_init(struct amdgpu_device *adev)
 {

@@ -581,3 +581,21 @@ void amdgpu_ring_ib_end(struct amdgpu_ring *ring)
 	if (ring->is_sw_ring)
 		amdgpu_sw_ring_ib_end(ring);
 }
+
+void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring)
+{
+	if (ring->is_sw_ring)
+		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CONTROL);
+}
+
+void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring)
+{
+	if (ring->is_sw_ring)
+		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_CE);
+}
+
+void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring)
+{
+	if (ring->is_sw_ring)
+		amdgpu_sw_ring_ib_mark_offset(ring, AMDGPU_MUX_OFFSET_TYPE_DE);
+}

@@ -227,6 +227,9 @@ struct amdgpu_ring_funcs {
 	int (*preempt_ib)(struct amdgpu_ring *ring);
 	void (*emit_mem_sync)(struct amdgpu_ring *ring);
 	void (*emit_wave_limit)(struct amdgpu_ring *ring, bool enable);
+	void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset);
+	void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset);
+	void (*patch_de)(struct amdgpu_ring *ring, unsigned offset);
 };
 
 struct amdgpu_ring {
@@ -318,10 +321,16 @@ struct amdgpu_ring {
 #define amdgpu_ring_init_cond_exec(r) (r)->funcs->init_cond_exec((r))
 #define amdgpu_ring_patch_cond_exec(r,o) (r)->funcs->patch_cond_exec((r),(o))
 #define amdgpu_ring_preempt_ib(r) (r)->funcs->preempt_ib(r)
+#define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o)))
+#define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o)))
+#define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o)))
 
 int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
 void amdgpu_ring_ib_begin(struct amdgpu_ring *ring);
 void amdgpu_ring_ib_end(struct amdgpu_ring *ring);
+void amdgpu_ring_ib_on_emit_cntl(struct amdgpu_ring *ring);
+void amdgpu_ring_ib_on_emit_ce(struct amdgpu_ring *ring);
+void amdgpu_ring_ib_on_emit_de(struct amdgpu_ring *ring);
 
 void amdgpu_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count);
 void amdgpu_ring_generic_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib);

@@ -105,6 +105,16 @@ static void amdgpu_mux_resubmit_chunks(struct amdgpu_ring_mux *mux)
 			amdgpu_fence_update_start_timestamp(e->ring,
 							    chunk->sync_seq,
 							    ktime_get());
+			if (chunk->sync_seq ==
+				le32_to_cpu(*(e->ring->fence_drv.cpu_addr + 2))) {
+				if (chunk->cntl_offset <= e->ring->buf_mask)
+					amdgpu_ring_patch_cntl(e->ring,
+							       chunk->cntl_offset);
+				if (chunk->ce_offset <= e->ring->buf_mask)
+					amdgpu_ring_patch_ce(e->ring, chunk->ce_offset);
+				if (chunk->de_offset <= e->ring->buf_mask)
+					amdgpu_ring_patch_de(e->ring, chunk->de_offset);
+			}
 			amdgpu_ring_mux_copy_pkt_from_sw_ring(mux, e->ring,
 							      chunk->start,
 							      chunk->end);
@@ -407,6 +417,17 @@ void amdgpu_sw_ring_ib_end(struct amdgpu_ring *ring)
 	amdgpu_ring_mux_end_ib(mux, ring);
 }
 
+void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ring_mux *mux = &adev->gfx.muxer;
+	unsigned offset;
+
+	offset = ring->wptr & ring->buf_mask;
+
+	amdgpu_ring_mux_ib_mark_offset(mux, ring, offset, type);
+}
+
 void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
 {
 	struct amdgpu_mux_entry *e;
@@ -429,6 +450,10 @@ void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
 	}
 
 	chunk->start = ring->wptr;
+	/* the initialized value used to check if they are set by the ib submission*/
+	chunk->cntl_offset = ring->buf_mask + 1;
+	chunk->de_offset = ring->buf_mask + 1;
+	chunk->ce_offset = ring->buf_mask + 1;
 	list_add_tail(&chunk->entry, &e->list);
 }
 
@@ -454,6 +479,41 @@ static void scan_and_remove_signaled_chunk(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
 	}
 }
 
+void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux,
+				    struct amdgpu_ring *ring, u64 offset,
+				    enum amdgpu_ring_mux_offset_type type)
+{
+	struct amdgpu_mux_entry *e;
+	struct amdgpu_mux_chunk *chunk;
+
+	e = amdgpu_ring_mux_sw_entry(mux, ring);
+	if (!e) {
+		DRM_ERROR("cannot find entry!\n");
+		return;
+	}
+
+	chunk = list_last_entry(&e->list, struct amdgpu_mux_chunk, entry);
+	if (!chunk) {
+		DRM_ERROR("cannot find chunk!\n");
+		return;
+	}
+
+	switch (type) {
+	case AMDGPU_MUX_OFFSET_TYPE_CONTROL:
+		chunk->cntl_offset = offset;
+		break;
+	case AMDGPU_MUX_OFFSET_TYPE_DE:
+		chunk->de_offset = offset;
+		break;
+	case AMDGPU_MUX_OFFSET_TYPE_CE:
+		chunk->ce_offset = offset;
+		break;
+	default:
+		DRM_ERROR("invalid type (%d)\n", type);
+		break;
+	}
+}
+
 void amdgpu_ring_mux_end_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring)
 {
 	struct amdgpu_mux_entry *e;

@@ -50,6 +50,12 @@ struct amdgpu_mux_entry {
 	struct list_head	list;
 };
 
+enum amdgpu_ring_mux_offset_type {
+	AMDGPU_MUX_OFFSET_TYPE_CONTROL,
+	AMDGPU_MUX_OFFSET_TYPE_DE,
+	AMDGPU_MUX_OFFSET_TYPE_CE,
+};
+
 struct amdgpu_ring_mux {
 	struct amdgpu_ring	*real_ring;
@@ -72,12 +78,18 @@ struct amdgpu_ring_mux {
  * @sync_seq: the fence seqno related with the saved IB.
  * @start:- start location on the software ring.
  * @end:- end location on the software ring.
+ * @control_offset:- the PRE_RESUME bit position used for resubmission.
+ * @de_offset:- the anchor in write_data for de meta of resubmission.
+ * @ce_offset:- the anchor in write_data for ce meta of resubmission.
 */
struct amdgpu_mux_chunk {
	struct list_head	entry;
	uint32_t		sync_seq;
	u64			start;
	u64			end;
+	u64			cntl_offset;
+	u64			de_offset;
+	u64			ce_offset;
};

int amdgpu_ring_mux_init(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring,
@@ -89,6 +101,8 @@ u64 amdgpu_ring_mux_get_wptr(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
 u64 amdgpu_ring_mux_get_rptr(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
 void amdgpu_ring_mux_start_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
 void amdgpu_ring_mux_end_ib(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring);
+void amdgpu_ring_mux_ib_mark_offset(struct amdgpu_ring_mux *mux, struct amdgpu_ring *ring,
+				    u64 offset, enum amdgpu_ring_mux_offset_type type);
 bool amdgpu_mcbp_handle_trailing_fence_irq(struct amdgpu_ring_mux *mux);
 
 u64 amdgpu_sw_ring_get_rptr_gfx(struct amdgpu_ring *ring);
@@ -97,6 +111,7 @@ void amdgpu_sw_ring_set_wptr_gfx(struct amdgpu_ring *ring);
 void amdgpu_sw_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count);
 void amdgpu_sw_ring_ib_begin(struct amdgpu_ring *ring);
 void amdgpu_sw_ring_ib_end(struct amdgpu_ring *ring);
+void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mux_offset_type type);
 const char *amdgpu_sw_ring_name(int idx);
 unsigned int amdgpu_sw_ring_priority(int idx);

View File

@@ -755,7 +755,7 @@ static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev);
 static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev,
				struct amdgpu_cu_info *cu_info);
 static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev);
-static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
+static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds);
 static u64 gfx_v9_0_ring_get_rptr_compute(struct amdgpu_ring *ring);
 static void gfx_v9_0_query_ras_error_count(struct amdgpu_device *adev,
					   void *ras_error_status);
@@ -5127,7 +5127,8 @@ static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
		gfx_v9_0_ring_emit_de_meta(ring,
					   (!amdgpu_sriov_vf(ring->adev) &&
					    flags & AMDGPU_IB_PREEMPTED) ?
-					   true : false);
+					   true : false,
+					   job->gds_size > 0 && job->gds_base != 0);
	}

	amdgpu_ring_write(ring, header);
@@ -5138,9 +5139,83 @@ static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
 #endif
		lower_32_bits(ib->gpu_addr));
	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_ib_on_emit_cntl(ring);
	amdgpu_ring_write(ring, control);
 }

+static void gfx_v9_0_ring_patch_cntl(struct amdgpu_ring *ring,
+				     unsigned offset)
+{
+	u32 control = ring->ring[offset];
+
+	control |= INDIRECT_BUFFER_PRE_RESUME(1);
+	ring->ring[offset] = control;
+}
+
+static void gfx_v9_0_ring_patch_ce_meta(struct amdgpu_ring *ring,
+					unsigned offset)
+{
+	struct amdgpu_device *adev = ring->adev;
+	void *ce_payload_cpu_addr;
+	uint64_t payload_offset, payload_size;
+
+	payload_size = sizeof(struct v9_ce_ib_state);
+
+	if (ring->is_mes_queue) {
+		payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data,
+					  gfx[0].gfx_meta_data) +
+			offsetof(struct v9_gfx_meta_data, ce_payload);
+		ce_payload_cpu_addr =
+			amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset);
+	} else {
+		payload_offset = offsetof(struct v9_gfx_meta_data, ce_payload);
+		ce_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset;
+	}
+
+	if (offset + (payload_size >> 2) <= ring->buf_mask + 1) {
+		memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr, payload_size);
+	} else {
+		memcpy((void *)&ring->ring[offset], ce_payload_cpu_addr,
+		       (ring->buf_mask + 1 - offset) << 2);
+		payload_size -= (ring->buf_mask + 1 - offset) << 2;
+		memcpy((void *)&ring->ring[0],
+		       ce_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2),
+		       payload_size);
+	}
+}
+
+static void gfx_v9_0_ring_patch_de_meta(struct amdgpu_ring *ring,
+					unsigned offset)
+{
+	struct amdgpu_device *adev = ring->adev;
+	void *de_payload_cpu_addr;
+	uint64_t payload_offset, payload_size;
+
+	payload_size = sizeof(struct v9_de_ib_state);
+
+	if (ring->is_mes_queue) {
+		payload_offset = offsetof(struct amdgpu_mes_ctx_meta_data,
+					  gfx[0].gfx_meta_data) +
+			offsetof(struct v9_gfx_meta_data, de_payload);
+		de_payload_cpu_addr =
+			amdgpu_mes_ctx_get_offs_cpu_addr(ring, payload_offset);
+	} else {
+		payload_offset = offsetof(struct v9_gfx_meta_data, de_payload);
+		de_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset;
+	}
+
+	if (offset + (payload_size >> 2) <= ring->buf_mask + 1) {
+		memcpy((void *)&ring->ring[offset], de_payload_cpu_addr, payload_size);
+	} else {
+		memcpy((void *)&ring->ring[offset], de_payload_cpu_addr,
+		       (ring->buf_mask + 1 - offset) << 2);
+		payload_size -= (ring->buf_mask + 1 - offset) << 2;
+		memcpy((void *)&ring->ring[0],
+		       de_payload_cpu_addr + ((ring->buf_mask + 1 - offset) << 2),
+		       payload_size);
+	}
+}
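
Editor's note: the two patch helpers above share one non-obvious detail: the payload write may wrap past the end of the ring buffer, so the copy is split at buf_mask + 1 (the ring size in dwords). A minimal standalone sketch of the same split-copy arithmetic, with hypothetical names and a byte-addressed buffer for simplicity:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Illustrative only: copy 'len' bytes into a ring of 'ring_size' bytes at
 * 'offset', splitting the copy when it would run past the end -- the same
 * shape as the ce/de meta patching above (which works in dwords). */
static void ring_copy(uint8_t *ring, size_t ring_size, size_t offset,
		      const uint8_t *src, size_t len)
{
	if (offset + len <= ring_size) {
		memcpy(ring + offset, src, len);
	} else {
		size_t first = ring_size - offset;	/* bytes until the end */

		memcpy(ring + offset, src, first);
		memcpy(ring, src + first, len - first);	/* wrapped remainder */
	}
}

int main(void)
{
	uint8_t ring[8] = {0}, payload[4] = {1, 2, 3, 4};

	ring_copy(ring, sizeof(ring), 6, payload, sizeof(payload));
	printf("%d %d %d %d\n", ring[6], ring[7], ring[0], ring[1]);	/* 1 2 3 4 */
	return 0;
}
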
 static void gfx_v9_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
					  struct amdgpu_job *job,
					  struct amdgpu_ib *ib,
@@ -5336,6 +5411,8 @@ static void gfx_v9_0_ring_emit_ce_meta(struct amdgpu_ring *ring, bool resume)
	amdgpu_ring_write(ring, lower_32_bits(ce_payload_gpu_addr));
	amdgpu_ring_write(ring, upper_32_bits(ce_payload_gpu_addr));

+	amdgpu_ring_ib_on_emit_ce(ring);
+
	if (resume)
		amdgpu_ring_write_multiple(ring, ce_payload_cpu_addr,
					   sizeof(ce_payload) >> 2);
@@ -5369,10 +5446,6 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
	amdgpu_ring_alloc(ring, 13);
	gfx_v9_0_ring_emit_fence(ring, ring->trail_fence_gpu_addr,
				 ring->trail_seq, AMDGPU_FENCE_FLAG_EXEC | AMDGPU_FENCE_FLAG_INT);
-	/*reset the CP_VMID_PREEMPT after trailing fence*/
-	amdgpu_ring_emit_wreg(ring,
-			      SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT),
-			      0x0);

	/* assert IB preemption, emit the trailing fence */
	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
@@ -5395,6 +5468,10 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
		DRM_WARN("ring %d timeout to preempt ib\n", ring->idx);
	}

+	/*reset the CP_VMID_PREEMPT after trailing fence*/
+	amdgpu_ring_emit_wreg(ring,
+			      SOC15_REG_OFFSET(GC, 0, mmCP_VMID_PREEMPT),
+			      0x0);
	amdgpu_ring_commit(ring);

	/* deassert preemption condition */
@@ -5402,7 +5479,7 @@ static int gfx_v9_0_ring_preempt_ib(struct amdgpu_ring *ring)
	return r;
 }

-static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
+static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume, bool usegds)
 {
	struct amdgpu_device *adev = ring->adev;
	struct v9_de_ib_state de_payload = {0};
@@ -5433,8 +5510,10 @@ static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
				 PAGE_SIZE);
	}

-	de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
-	de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
+	if (usegds) {
+		de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
+		de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
+	}

	cnt = (sizeof(de_payload) >> 2) + 4 - 2;
	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
@@ -5445,6 +5524,7 @@ static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
	amdgpu_ring_write(ring, lower_32_bits(de_payload_gpu_addr));
	amdgpu_ring_write(ring, upper_32_bits(de_payload_gpu_addr));

+	amdgpu_ring_ib_on_emit_de(ring);
	if (resume)
		amdgpu_ring_write_multiple(ring, de_payload_cpu_addr,
					   sizeof(de_payload) >> 2);
@@ -6855,6 +6935,9 @@ static const struct amdgpu_ring_funcs gfx_v9_0_sw_ring_funcs_gfx = {
	.emit_reg_write_reg_wait = gfx_v9_0_ring_emit_reg_write_reg_wait,
	.soft_recovery = gfx_v9_0_ring_soft_recovery,
	.emit_mem_sync = gfx_v9_0_emit_mem_sync,
+	.patch_cntl = gfx_v9_0_ring_patch_cntl,
+	.patch_de = gfx_v9_0_ring_patch_de_meta,
+	.patch_ce = gfx_v9_0_ring_patch_ce_meta,
 };

 static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {


@@ -129,7 +129,11 @@ static int vcn_v4_0_sw_init(void *handle)
		if (adev->vcn.harvest_config & (1 << i))
			continue;

-		atomic_set(&adev->vcn.inst[i].sched_score, 0);
+		/* Init instance 0 sched_score to 1, so it's scheduled after other instances */
+		if (i == 0)
+			atomic_set(&adev->vcn.inst[i].sched_score, 1);
+		else
+			atomic_set(&adev->vcn.inst[i].sched_score, 0);

		/* VCN UNIFIED TRAP */
		r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i],
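
Editor's note: entity placement in the drm scheduler favors the ring with the lowest sched_score, so seeding instance 0 with 1 biases new jobs toward the other instances first. A hedged userspace sketch of that least-loaded pick (names are illustrative, not the scheduler's API):

#include <stdio.h>

/* Illustrative least-loaded selection: lower score wins, ties go to the
 * earliest instance -- seeding instance 0 with 1 makes it lose ties. */
static int pick_instance(const int *score, int n)
{
	int best = 0;

	for (int i = 1; i < n; i++)
		if (score[i] < score[best])
			best = i;
	return best;
}

int main(void)
{
	int score[2] = {1, 0};	/* instance 0 seeded to 1, as in the hunk */

	printf("first job goes to instance %d\n", pick_instance(score, 2));
	return 0;
}
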


@@ -7196,7 +7196,13 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
		drm_add_modes_noedid(connector, 1920, 1080);
	} else {
		amdgpu_dm_connector_ddc_get_modes(connector, edid);
-		amdgpu_dm_connector_add_common_modes(encoder, connector);
+		/* most eDP supports only timings from its edid,
+		 * usually only detailed timings are available
+		 * from eDP edid. timings which are not from edid
+		 * may damage eDP
+		 */
+		if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+			amdgpu_dm_connector_add_common_modes(encoder, connector);
		amdgpu_dm_connector_add_freesync_modes(connector, edid);
	}
	amdgpu_dm_fbc_init(connector);
@@ -8198,6 +8204,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
		if (acrtc_state->abm_level != dm_old_crtc_state->abm_level)
			bundle->stream_update.abm_level = &acrtc_state->abm_level;

+		mutex_lock(&dm->dc_lock);
+		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
+		    acrtc_state->stream->link->psr_settings.psr_allow_active)
+			amdgpu_dm_psr_disable(acrtc_state->stream);
+		mutex_unlock(&dm->dc_lock);
+
		/*
		 * If FreeSync state on the stream has changed then we need to
		 * re-adjust the min/max bounds now that DC doesn't handle this
@@ -8211,10 +8223,6 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
			spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
		}

		mutex_lock(&dm->dc_lock);
-		if ((acrtc_state->update_type > UPDATE_TYPE_FAST) &&
-		    acrtc_state->stream->link->psr_settings.psr_allow_active)
-			amdgpu_dm_psr_disable(acrtc_state->stream);
-
		update_planes_and_stream_adapter(dm->dc,
						 acrtc_state->update_type,
						 planes_count,


@@ -980,6 +980,11 @@ static bool detect_link_and_local_sink(struct dc_link *link,
			    (link->dpcd_caps.dongle_type !=
				     DISPLAY_DONGLE_DP_HDMI_CONVERTER))
				converter_disable_audio = true;
+
+			/* limited link rate to HBR3 for DPIA until we implement USB4 V2 */
+			if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA &&
+			    link->reported_link_cap.link_rate > LINK_RATE_HIGH3)
+				link->reported_link_cap.link_rate = LINK_RATE_HIGH3;
			break;
		}


@@ -1696,10 +1696,39 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
		}
	}

-	/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
-	workload_type = smu_cmn_to_asic_specific_index(smu,
+	if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE &&
+	    (((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xC8)) ||
+	     ((smu->adev->pdev->device == 0x744C) && (smu->adev->pdev->revision == 0xCC)))) {
+		ret = smu_cmn_update_table(smu,
+					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+					   WORKLOAD_PPLIB_COMPUTE_BIT,
+					   (void *)(&activity_monitor_external),
+					   false);
+		if (ret) {
+			dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+			return ret;
+		}
+
+		ret = smu_cmn_update_table(smu,
+					   SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+					   WORKLOAD_PPLIB_CUSTOM_BIT,
+					   (void *)(&activity_monitor_external),
+					   true);
+		if (ret) {
+			dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
+			return ret;
+		}
+
+		workload_type = smu_cmn_to_asic_specific_index(smu,
+						       CMN2ASIC_MAPPING_WORKLOAD,
+						       PP_SMC_POWER_PROFILE_CUSTOM);
+	} else {
+		/* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+		workload_type = smu_cmn_to_asic_specific_index(smu,
						       CMN2ASIC_MAPPING_WORKLOAD,
						       smu->power_profile_mode);
+	}
+
	if (workload_type < 0)
		return -EINVAL;


@@ -298,6 +298,10 @@ static void ti_sn_bridge_set_refclk_freq(struct ti_sn65dsi86 *pdata)
		if (refclk_lut[i] == refclk_rate)
			break;

+	/* avoid buffer overflow and "1" is the default rate in the datasheet. */
+	if (i >= refclk_lut_size)
+		i = 1;
+
	regmap_update_bits(pdata->regmap, SN_DPPLL_SRC_REG, REFCLK_FREQ_MASK,
			   REFCLK_FREQ(i));
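
Editor's note: the guard matters because the lookup loop above leaves i == refclk_lut_size when nothing matched, and that index would previously have been fed to REFCLK_FREQ() unchecked. The same find-or-fallback shape as a standalone sketch (the LUT values here are illustrative, not the driver's table):

#include <stddef.h>
#include <stdio.h>

static const unsigned int refclk_lut[] = {12000000, 19200000, 26000000,
					  27000000, 38400000};

/* Return the LUT index for 'rate', falling back to index 1 (the
 * datasheet default in the driver) when no entry matches. */
static size_t refclk_index(unsigned int rate)
{
	size_t i;

	for (i = 0; i < sizeof(refclk_lut) / sizeof(refclk_lut[0]); i++)
		if (refclk_lut[i] == rate)
			return i;
	return 1;	/* out of range: clamp to a safe default */
}

int main(void)
{
	printf("%zu %zu\n", refclk_index(26000000), refclk_index(5));	/* 2 1 */
	return 0;
}
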


@@ -220,6 +220,9 @@ static void nouveau_dsm_pci_probe(struct pci_dev *pdev, acpi_handle *dhandle_out
	int optimus_funcs;
	struct pci_dev *parent_pdev;

+	if (pdev->vendor != PCI_VENDOR_ID_NVIDIA)
+		return;
+
	*has_pr3 = false;
	parent_pdev = pci_upstream_bridge(pdev);
	if (parent_pdev) {


@@ -730,7 +730,8 @@ out:
 #endif

	nouveau_connector_set_edid(nv_connector, edid);
-	nouveau_connector_set_encoder(connector, nv_encoder);
+	if (nv_encoder)
+		nouveau_connector_set_encoder(connector, nv_encoder);
	return status;
 }

@@ -966,7 +967,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
	/* Determine display colour depth for everything except LVDS now,
	 * DP requires this before mode_valid() is called.
	 */
-	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
+	if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
		nouveau_connector_detect_depth(connector);

	/* Find the native mode if this is a digital panel, if we didn't
@@ -987,7 +988,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
	 * "native" mode as some VBIOS tables require us to use the
	 * pixel clock as part of the lookup...
	 */
-	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS)
+	if (connector->connector_type == DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
		nouveau_connector_detect_depth(connector);

	if (nv_encoder->dcb->type == DCB_OUTPUT_TV)


@@ -137,10 +137,16 @@ nouveau_name(struct drm_device *dev)
 static inline bool
 nouveau_cli_work_ready(struct dma_fence *fence)
 {
-	if (!dma_fence_is_signaled(fence))
-		return false;
-	dma_fence_put(fence);
-	return true;
+	bool ret = true;
+
+	spin_lock_irq(fence->lock);
+	if (!dma_fence_is_signaled_locked(fence))
+		ret = false;
+	spin_unlock_irq(fence->lock);
+
+	if (ret == true)
+		dma_fence_put(fence);
+	return ret;
 }

 static void
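
Editor's note: the rewrite closes a race: dma_fence_is_signaled() can run concurrently with signaling, while dma_fence_is_signaled_locked() is evaluated under fence->lock, and the reference is only dropped once the result is known. A hedged standalone sketch of the general shape (decide under a lock, act on the answer outside it; the struct is a toy stand-in, not the dma_fence API):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fence {
	pthread_mutex_t lock;
	bool signaled;
	int refcount;
};

/* Illustrative only: check under the lock, release the reference
 * outside it -- mirroring the nouveau_cli_work_ready() rewrite. */
static bool fence_ready_and_put(struct fence *f)
{
	bool ret;

	pthread_mutex_lock(&f->lock);
	ret = f->signaled;		/* the "is_signaled_locked" check */
	pthread_mutex_unlock(&f->lock);

	if (ret)
		f->refcount--;		/* the dma_fence_put() step */
	return ret;
}

int main(void)
{
	struct fence f = {PTHREAD_MUTEX_INITIALIZER, true, 1};

	printf("ready=%d refs=%d\n", fence_ready_and_put(&f), f.refcount);
	return 0;
}
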
static void static void


@@ -307,6 +307,7 @@ static void radeon_fbdev_client_unregister(struct drm_client_dev *client)

	if (fb_helper->info) {
		vga_switcheroo_client_fb_set(rdev->pdev, NULL);
+		drm_helper_force_disable_all(dev);
		drm_fb_helper_unregister_info(fb_helper);
	} else {
		drm_client_release(&fb_helper->client);


@@ -829,11 +829,22 @@ static void vmbus_wait_for_unload(void)
		if (completion_done(&vmbus_connection.unload_event))
			goto completed;

-		for_each_online_cpu(cpu) {
+		for_each_present_cpu(cpu) {
			struct hv_per_cpu_context *hv_cpu
				= per_cpu_ptr(hv_context.cpu_context, cpu);

+			/*
+			 * In a CoCo VM the synic_message_page is not allocated
+			 * in hv_synic_alloc(). Instead it is set/cleared in
+			 * hv_synic_enable_regs() and hv_synic_disable_regs()
+			 * such that it is set only when the CPU is online. If
+			 * not all present CPUs are online, the message page
+			 * might be NULL, so skip such CPUs.
+			 */
			page_addr = hv_cpu->synic_message_page;
+			if (!page_addr)
+				continue;
+
			msg = (struct hv_message *)page_addr
				+ VMBUS_MESSAGE_SINT;

@@ -867,11 +878,14 @@ completed:
	 * maybe-pending messages on all CPUs to be able to receive new
	 * messages after we reconnect.
	 */
-	for_each_online_cpu(cpu) {
+	for_each_present_cpu(cpu) {
		struct hv_per_cpu_context *hv_cpu
			= per_cpu_ptr(hv_context.cpu_context, cpu);

		page_addr = hv_cpu->synic_message_page;
+		if (!page_addr)
+			continue;
+
		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
		msg->header.message_type = HVMSG_NONE;
	}
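
Editor's note: two things changed in the hunk above: the walk covers every present CPU rather than only online ones, and a NULL synic_message_page is tolerated because CoCo VMs only map it while a CPU is online. A minimal sketch of the skip-empty-slot iteration (a plain array stands in for the per-CPU data; purely illustrative):

#include <stdio.h>

#define NR_PRESENT_CPUS 4

/* Stand-in for per-CPU message pages: offline CPUs in a CoCo VM have
 * no page mapped, so their slot is NULL and must be skipped. */
static void *pages[NR_PRESENT_CPUS] = {(void *)1, NULL, (void *)1, NULL};

int main(void)
{
	for (int cpu = 0; cpu < NR_PRESENT_CPUS; cpu++) {
		if (!pages[cpu])
			continue;	/* offline CPU: no message page */
		printf("draining messages on cpu %d\n", cpu);
	}
	return 0;
}
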


@@ -364,13 +364,20 @@ int hv_common_cpu_init(unsigned int cpu)
	flags = irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL;

	inputarg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	*inputarg = kmalloc(pgcount * HV_HYP_PAGE_SIZE, flags);
-	if (!(*inputarg))
-		return -ENOMEM;

-	if (hv_root_partition) {
-		outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
-		*outputarg = (char *)(*inputarg) + HV_HYP_PAGE_SIZE;
+	/*
+	 * hyperv_pcpu_input_arg and hyperv_pcpu_output_arg memory is already
+	 * allocated if this CPU was previously online and then taken offline
+	 */
+	if (!*inputarg) {
+		*inputarg = kmalloc(pgcount * HV_HYP_PAGE_SIZE, flags);
+		if (!(*inputarg))
+			return -ENOMEM;
+
+		if (hv_root_partition) {
+			outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
+			*outputarg = (char *)(*inputarg) + HV_HYP_PAGE_SIZE;
+		}
	}

	msr_vp_index = hv_get_register(HV_REGISTER_VP_INDEX);
@@ -385,24 +392,17 @@ int hv_common_cpu_init(unsigned int cpu)

 int hv_common_cpu_die(unsigned int cpu)
 {
-	unsigned long flags;
-	void **inputarg, **outputarg;
-	void *mem;
-
-	local_irq_save(flags);
-
-	inputarg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	mem = *inputarg;
-	*inputarg = NULL;
-
-	if (hv_root_partition) {
-		outputarg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
-		*outputarg = NULL;
-	}
-
-	local_irq_restore(flags);
-
-	kfree(mem);
+	/*
+	 * The hyperv_pcpu_input_arg and hyperv_pcpu_output_arg memory
+	 * is not freed when the CPU goes offline as the hyperv_pcpu_input_arg
+	 * may be used by the Hyper-V vPCI driver in reassigning interrupts
+	 * as part of the offlining process.  The interrupt reassignment
+	 * happens *after* the CPUHP_AP_HYPERV_ONLINE state has run and
+	 * called this function.
+	 *
+	 * If a previously offlined CPU is brought back online again, the
+	 * originally allocated memory is reused in hv_common_cpu_init().
+	 */

	return 0;
 }
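
Editor's note: hv_common_cpu_init() and hv_common_cpu_die() now form an allocate-on-first-online, never-free pair: the buffer survives offlining so the vPCI interrupt-reassignment path can still use it, and a later online reuses it. The idempotent-init shape as a standalone sketch (illustrative names, ordinary malloc standing in for the per-CPU hypercall pages):

#include <stdio.h>
#include <stdlib.h>

static void *percpu_arg[2];	/* illustrative per-CPU slots */

/* Allocate only on the first online; later onlines reuse the buffer. */
static int cpu_init(int cpu)
{
	if (!percpu_arg[cpu]) {
		percpu_arg[cpu] = malloc(4096);
		if (!percpu_arg[cpu])
			return -1;
	}
	return 0;
}

/* Deliberately keep the buffer: late interrupt teardown may still use it. */
static int cpu_die(int cpu)
{
	(void)cpu;
	return 0;
}

int main(void)
{
	cpu_init(0);
	void *first = percpu_arg[0];

	cpu_die(0);
	cpu_init(0);	/* back online: same buffer, no second malloc */
	printf("reused: %d\n", percpu_arg[0] == first);
	return 0;
}
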


@@ -1372,7 +1372,7 @@ static int vmbus_bus_init(void)
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online",
				hv_synic_init, hv_synic_cleanup);
	if (ret < 0)
-		goto err_cpuhp;
+		goto err_alloc;
	hyperv_cpuhp_online = ret;

	ret = vmbus_connect();
@@ -1392,9 +1392,8 @@ static int vmbus_bus_init(void)

 err_connect:
	cpuhp_remove_state(hyperv_cpuhp_online);
-err_cpuhp:
-	hv_synic_free();
 err_alloc:
+	hv_synic_free();
	if (vmbus_irq == -1) {
		hv_remove_vmbus_handler();
	} else {
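
Editor's note: the fix works because kernel error unwinding relies on label order: jumping to a label runs that cleanup step and everything below it. Once hv_synic_free() moved under err_alloc, the pre-cpuhp failure path could share the same ladder. A compact standalone sketch of the convention (hypothetical stage names):

#include <stdio.h>

/* Illustrative unwind ladder: each failure jumps to the label that
 * undoes only what has already succeeded, in reverse order. */
static int setup(int fail_at)
{
	int err = -1;

	if (fail_at == 1)
		goto err_alloc;		/* only the allocation needs undoing */
	if (fail_at == 2)
		goto err_connect;	/* undo connect, then fall into alloc */
	return 0;

err_connect:
	printf("undo connect\n");
err_alloc:
	printf("undo alloc\n");
	return err;
}

int main(void)
{
	setup(2);	/* prints both cleanup steps, in reverse order */
	return 0;
}
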


@@ -1828,7 +1828,7 @@ int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
	 * cmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
	 * shrinker associated with the block manager's bufio client vs cmd root_lock).
-	 * - must take shrinker_mutex without holding cmd->root_lock
+	 * - must take shrinker_rwsem without holding cmd->root_lock
	 */
	new_bm = dm_block_manager_create(cmd->bdev, DM_CACHE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
					 CACHE_MAX_CONCURRENT_LOCKS);


@@ -1891,7 +1891,7 @@ int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
-	 * - must take shrinker_mutex without holding pmd->root_lock
+	 * - must take shrinker_rwsem without holding pmd->root_lock
	 */
	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
					 THIN_MAX_CONCURRENT_LOCKS);


@@ -1403,8 +1403,8 @@ static int bcm2835_probe(struct platform_device *pdev)
	host->max_clk = clk_get_rate(clk);

	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
+	if (host->irq < 0) {
+		ret = host->irq;
		goto err;
	}
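
Editor's note: this and the following mmc hunks all make the same correction: platform_get_irq() returns a negative errno on failure (and no longer returns 0), so probe should propagate that value instead of flattening it to -EINVAL or -ENXIO, which would hide -EPROBE_DEFER from the driver core. A hedged fragment of the pattern (not a real driver, just the shape):

/* Illustrative probe fragment: propagate the errno from
 * platform_get_irq() so -EPROBE_DEFER reaches the driver core. */
static int example_probe(struct platform_device *pdev)
{
	int irq = platform_get_irq(pdev, 0);

	if (irq < 0)
		return irq;	/* may be -EPROBE_DEFER; don't rewrite it */

	/* ... request the irq, set up the host ... */
	return 0;
}
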


@@ -649,6 +649,7 @@ static struct platform_driver litex_mmc_driver = {
	.driver = {
		.name = "litex-mmc",
		.of_match_table = litex_match,
+		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
 };
 module_platform_driver(litex_mmc_driver);


@@ -991,11 +991,8 @@ static irqreturn_t meson_mmc_irq(int irq, void *dev_id)
		if (data && !cmd->error)
			data->bytes_xfered = data->blksz * data->blocks;

-		if (meson_mmc_bounce_buf_read(data) ||
-		    meson_mmc_get_next_command(cmd))
-			ret = IRQ_WAKE_THREAD;
-		else
-			ret = IRQ_HANDLED;
+		return IRQ_WAKE_THREAD;
	}

 out:
@@ -1007,9 +1004,6 @@ out:
		writel(start, host->regs + SD_EMMC_START);
	}

-	if (ret == IRQ_HANDLED)
-		meson_mmc_request_done(host->mmc, cmd->mrq);
-
	return ret;
 }

@@ -1192,8 +1186,8 @@ static int meson_mmc_probe(struct platform_device *pdev)
		return PTR_ERR(host->regs);

	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0)
-		return -EINVAL;
+	if (host->irq < 0)
+		return host->irq;

	cd_irq = platform_get_irq_optional(pdev, 1);
	mmc_gpio_set_cd_irq(mmc, cd_irq);
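
Editor's note: the meson change leans on the threaded-IRQ contract: the hard handler now always returns IRQ_WAKE_THREAD and the thread function finishes the work and completes the request, instead of the hard handler deciding per case. A hedged fragment of the split (hypothetical names; real drivers pass both handlers to request_threaded_irq()):

/* Illustrative hard/thread handler pair for request_threaded_irq().
 * The hard handler only acks the hardware and defers; the thread
 * context does the slow work and signals completion. */
static irqreturn_t demo_hard_irq(int irq, void *dev_id)
{
	/* read/ack status registers here (atomic context) */
	return IRQ_WAKE_THREAD;		/* always finish in the thread */
}

static irqreturn_t demo_thread_irq(int irq, void *dev_id)
{
	/* bounce-buffer copies, request completion, etc. may sleep here */
	return IRQ_HANDLED;
}

/* registration: request_threaded_irq(irq, demo_hard_irq, demo_thread_irq,
 *				      IRQF_ONESHOT, "demo", host); */
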


@@ -1735,7 +1735,8 @@ static void mmci_set_max_busy_timeout(struct mmc_host *mmc)
		return;

	if (host->variant->busy_timeout && mmc->actual_clock)
-		max_busy_timeout = ~0UL / (mmc->actual_clock / MSEC_PER_SEC);
+		max_busy_timeout = U32_MAX / DIV_ROUND_UP(mmc->actual_clock,
							  MSEC_PER_SEC);

	mmc->max_busy_timeout = max_busy_timeout;
 }
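
Editor's note: the old formula had two problems: ~0UL is 64-bit on most kernels while the busy-timeout counter is 32-bit, and the plain division truncates (and divides by zero for clocks under 1 kHz). U32_MAX / DIV_ROUND_UP(clock, MSEC_PER_SEC) rounds the divisor up, so the resulting millisecond limit can never overflow the 32-bit cycle counter. Worked standalone example:

#include <stdint.h>
#include <stdio.h>

#define MSEC_PER_SEC	1000u
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	uint32_t clock = 400000;	/* 400 kHz card clock */
	uint32_t cycles_per_ms = DIV_ROUND_UP(clock, MSEC_PER_SEC);
	uint32_t max_ms = UINT32_MAX / cycles_per_ms;

	/* max_ms * cycles_per_ms is now guaranteed to fit in 32 bits */
	printf("%u cycles/ms -> max busy timeout %u ms\n",
	       cycles_per_ms, max_ms);
	return 0;
}
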


@@ -2680,7 +2680,7 @@ static int msdc_drv_probe(struct platform_device *pdev)

	host->irq = platform_get_irq(pdev, 0);
	if (host->irq < 0) {
-		ret = -EINVAL;
+		ret = host->irq;
		goto host_free;
	}


@@ -704,7 +704,7 @@ static int mvsd_probe(struct platform_device *pdev)
	}
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
-		return -ENXIO;
+		return irq;

	mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev);
	if (!mmc) {


@@ -1343,7 +1343,7 @@ static int mmc_omap_probe(struct platform_device *pdev)

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
-		return -ENXIO;
+		return irq;

	host->virt_base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
	if (IS_ERR(host->virt_base))


@@ -1791,9 +1791,11 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
	}

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	irq = platform_get_irq(pdev, 0);
-	if (res == NULL || irq < 0)
+	if (!res)
		return -ENXIO;
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return irq;

	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))


@@ -637,7 +637,7 @@ static int owl_mmc_probe(struct platform_device *pdev)

	owl_host->irq = platform_get_irq(pdev, 0);
	if (owl_host->irq < 0) {
-		ret = -EINVAL;
+		ret = owl_host->irq;
		goto err_release_channel;
	}


@@ -829,7 +829,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
	host->ops	= &sdhci_acpi_ops_dflt;
	host->irq	= platform_get_irq(pdev, 0);
	if (host->irq < 0) {
-		err = -EINVAL;
+		err = host->irq;
		goto err_free;
	}


@@ -2479,6 +2479,9 @@ static inline void sdhci_msm_get_of_property(struct platform_device *pdev,
		msm_host->ddr_config = DDR_CONFIG_POR_VAL;

	of_property_read_u32(node, "qcom,dll-config", &msm_host->dll_config);
+
+	if (of_device_is_compatible(node, "qcom,msm8916-sdhci"))
+		host->quirks2 |= SDHCI_QUIRK2_BROKEN_64_BIT_DMA;
 }

 static int sdhci_msm_gcc_reset(struct device *dev, struct sdhci_host *host)


@@ -65,8 +65,8 @@ static int sdhci_probe(struct platform_device *pdev)
	host->hw_name = "sdhci";
	host->ops = &sdhci_pltfm_ops;
	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
+	if (host->irq < 0) {
+		ret = host->irq;
		goto err_host;
	}
	host->quirks = SDHCI_QUIRK_BROKEN_ADMA;


@@ -1400,7 +1400,7 @@ static int sh_mmcif_probe(struct platform_device *pdev)
	irq[0] = platform_get_irq(pdev, 0);
	irq[1] = platform_get_irq_optional(pdev, 1);
	if (irq[0] < 0)
-		return -ENXIO;
+		return irq[0];

	reg = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(reg))


@@ -1350,8 +1350,8 @@ static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host,
		return ret;

	host->irq = platform_get_irq(pdev, 0);
-	if (host->irq <= 0) {
-		ret = -EINVAL;
+	if (host->irq < 0) {
+		ret = host->irq;
		goto error_disable_mmc;
	}


@@ -1757,8 +1757,10 @@ static int usdhi6_probe(struct platform_device *pdev)
	irq_cd = platform_get_irq_byname(pdev, "card detect");
	irq_sd = platform_get_irq_byname(pdev, "data");
	irq_sdio = platform_get_irq_byname(pdev, "SDIO");
-	if (irq_sd < 0 || irq_sdio < 0)
-		return -ENODEV;
+	if (irq_sd < 0)
+		return irq_sd;
+	if (irq_sdio < 0)
+		return irq_sdio;

	mmc = mmc_alloc_host(sizeof(struct usdhi6_host), dev);
	if (!mmc)


@@ -399,6 +399,20 @@ static void mt7530_pll_setup(struct mt7530_priv *priv)
	core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
 }

+/* If port 6 is available as a CPU port, always prefer that as the default,
+ * otherwise don't care.
+ */
+static struct dsa_port *
+mt753x_preferred_default_local_cpu_port(struct dsa_switch *ds)
+{
+	struct dsa_port *cpu_dp = dsa_to_port(ds, 6);
+
+	if (dsa_port_is_cpu(cpu_dp))
+		return cpu_dp;
+
+	return NULL;
+}
+
 /* Setup port 6 interface mode and TRGMII TX circuit */
 static int
 mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
@@ -985,6 +999,18 @@ unlock_exit:
	mutex_unlock(&priv->reg_mutex);
 }

+static void
+mt753x_trap_frames(struct mt7530_priv *priv)
+{
+	/* Trap BPDUs to the CPU port(s) */
+	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
+		   MT753X_BPDU_CPU_ONLY);
+
+	/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
+	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
+		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
+}
+
 static int
 mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
 {
@@ -1007,9 +1033,16 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
		   UNU_FFP(BIT(port)));

	/* Set CPU port number */
-	if (priv->id == ID_MT7621)
+	if (priv->id == ID_MT7530 || priv->id == ID_MT7621)
		mt7530_rmw(priv, MT7530_MFC, CPU_MASK, CPU_EN | CPU_PORT(port));

+	/* Add the CPU port to the CPU port bitmap for MT7531 and the switch on
+	 * the MT7988 SoC. Trapped frames will be forwarded to the CPU port that
+	 * is affine to the inbound user port.
+	 */
+	if (priv->id == ID_MT7531 || priv->id == ID_MT7988)
+		mt7530_set(priv, MT7531_CFC, MT7531_CPU_PMAP(BIT(port)));
+
	/* CPU port gets connected to all user ports of
	 * the switch.
	 */
@@ -2255,6 +2288,8 @@ mt7530_setup(struct dsa_switch *ds)

	priv->p6_interface = PHY_INTERFACE_MODE_NA;

+	mt753x_trap_frames(priv);
+
	/* Enable and reset MIB counters */
	mt7530_mib_reset(ds);

@@ -2352,17 +2387,9 @@ static int
 mt7531_setup_common(struct dsa_switch *ds)
 {
	struct mt7530_priv *priv = ds->priv;
-	struct dsa_port *cpu_dp;
	int ret, i;

-	/* BPDU to CPU port */
-	dsa_switch_for_each_cpu_port(cpu_dp, ds) {
-		mt7530_rmw(priv, MT7531_CFC, MT7531_CPU_PMAP_MASK,
-			   BIT(cpu_dp->index));
-		break;
-	}
-	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
-		   MT753X_BPDU_CPU_ONLY);
+	mt753x_trap_frames(priv);

	/* Enable and reset MIB counters */
	mt7530_mib_reset(ds);

@@ -3085,6 +3112,7 @@ static int mt7988_setup(struct dsa_switch *ds)
 const struct dsa_switch_ops mt7530_switch_ops = {
	.get_tag_protocol	= mtk_get_tag_protocol,
	.setup			= mt753x_setup,
+	.preferred_default_local_cpu_port = mt753x_preferred_default_local_cpu_port,
	.get_strings		= mt7530_get_strings,
	.get_ethtool_stats	= mt7530_get_ethtool_stats,
	.get_sset_count		= mt7530_get_sset_count,


@@ -54,6 +54,7 @@ enum mt753x_id {
 #define  MT7531_MIRROR_PORT_GET(x)	(((x) >> 16) & MIRROR_MASK)
 #define  MT7531_MIRROR_PORT_SET(x)	(((x) & MIRROR_MASK) << 16)
 #define  MT7531_CPU_PMAP_MASK		GENMASK(7, 0)
+#define  MT7531_CPU_PMAP(x)		FIELD_PREP(MT7531_CPU_PMAP_MASK, x)

 #define MT753X_MIRROR_REG(id)		((((id) == ID_MT7531) || ((id) == ID_MT7988)) ?	\
					 MT7531_CFC : MT7530_MFC)
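
Editor's note: MT7531_CPU_PMAP() follows the usual bitfield idiom: GENMASK() names the field and FIELD_PREP() shifts a value into it, so callers never hand-code shifts. A standalone sketch of what those macros expand to (simplified 32-bit stand-ins, not the kernel's checked versions):

#include <stdio.h>

/* Simplified stand-ins for the kernel's GENMASK()/FIELD_PREP() on u32. */
#define GENMASK(h, l)		((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))

#define CPU_PMAP_MASK		GENMASK(7, 0)
#define R0E_PORT_FW_MASK	GENMASK(18, 16)

int main(void)
{
	/* place port-forward code 4 into bits 18:16 */
	printf("0x%08x\n", FIELD_PREP(R0E_PORT_FW_MASK, 4));	/* 0x00040000 */
	printf("0x%08x\n", FIELD_PREP(CPU_PMAP_MASK, 1 << 6));	/* 0x00000040 */
	return 0;
}
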
@@ -66,6 +67,11 @@ enum mt753x_id {
 #define MT753X_BPC			0x24
 #define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)

+/* Register for :03 and :0E MAC DA frame control */
+#define MT753X_RGAC2			0x2c
+#define  MT753X_R0E_PORT_FW_MASK	GENMASK(18, 16)
+#define  MT753X_R0E_PORT_FW(x)		FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
+
 enum mt753x_bpdu_port_fw {
	MT753X_BPDU_FOLLOW_MFC,
	MT753X_BPDU_CPU_EXCLUDE = 4,


@@ -1135,8 +1135,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
	eth_hdr_len = ntohs(skb->protocol) == ETH_P_8021Q ?
						VLAN_ETH_HLEN : ETH_HLEN;
	if (skb->len <= 60 &&
-	    (lancer_chip(adapter) || skb_vlan_tag_present(skb)) &&
-	    is_ipv4_pkt(skb)) {
+	    (lancer_chip(adapter) || BE3_chip(adapter) ||
+	     skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {
		ip = (struct iphdr *)ip_hdr(skb);
		pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
	}


@@ -54,6 +54,9 @@ static int phy_mode(enum dpmac_eth_if eth_if, phy_interface_t *if_mode)
	case DPMAC_ETH_IF_XFI:
		*if_mode = PHY_INTERFACE_MODE_10GBASER;
		break;
+	case DPMAC_ETH_IF_CAUI:
+		*if_mode = PHY_INTERFACE_MODE_25GBASER;
+		break;
	default:
		return -EINVAL;
	}
@@ -79,6 +82,8 @@ static enum dpmac_eth_if dpmac_eth_if_mode(phy_interface_t if_mode)
		return DPMAC_ETH_IF_XFI;
	case PHY_INTERFACE_MODE_1000BASEX:
		return DPMAC_ETH_IF_1000BASEX;
+	case PHY_INTERFACE_MODE_25GBASER:
+		return DPMAC_ETH_IF_CAUI;
	default:
		return DPMAC_ETH_IF_MII;
	}
@@ -415,7 +420,7 @@ int dpaa2_mac_connect(struct dpaa2_mac *mac)

	mac->phylink_config.mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
		MAC_10FD | MAC_100FD | MAC_1000FD | MAC_2500FD | MAC_5000FD |
-		MAC_10000FD;
+		MAC_10000FD | MAC_25000FD;

	dpaa2_mac_set_supported_interfaces(mac);


@@ -732,7 +732,8 @@ static void mlx5e_rx_compute_wqe_bulk_params(struct mlx5e_params *params,
 static int mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev,
				     struct mlx5e_params *params,
				     struct mlx5e_xsk_param *xsk,
-				     struct mlx5e_rq_frags_info *info)
+				     struct mlx5e_rq_frags_info *info,
+				     u32 *xdp_frag_size)
 {
	u32 byte_count = MLX5E_SW2HW_MTU(params, params->sw_mtu);
	int frag_size_max = DEFAULT_FRAG_SIZE;
@@ -845,6 +846,8 @@ out:

	info->log_num_frags = order_base_2(info->num_frags);

+	*xdp_frag_size = info->num_frags > 1 && params->xdp_prog ? PAGE_SIZE : 0;
+
	return 0;
 }

@@ -989,7 +992,8 @@ int mlx5e_build_rq_param(struct mlx5_core_dev *mdev,
	}
	default: /* MLX5_WQ_TYPE_CYCLIC */
		MLX5_SET(wq, wq, log_wq_sz, params->log_rq_mtu_frames);
-		err = mlx5e_build_rq_frags_info(mdev, params, xsk, &param->frags_info);
+		err = mlx5e_build_rq_frags_info(mdev, params, xsk, &param->frags_info,
+						&param->xdp_frag_size);
		if (err)
			return err;
		ndsegs = param->frags_info.num_frags;


@@ -24,6 +24,7 @@ struct mlx5e_rq_param {
	u32			rqc[MLX5_ST_SZ_DW(rqc)];
	struct mlx5_wq_param	wq;
	struct mlx5e_rq_frags_info frags_info;
+	u32			xdp_frag_size;
 };

 struct mlx5e_sq_param {


@@ -2021,6 +2021,8 @@ void
 mlx5_tc_ct_delete_flow(struct mlx5_tc_ct_priv *priv,
		       struct mlx5_flow_attr *attr)
 {
+	if (!attr->ct_attr.ft) /* no ct action, return */
+		return;
	if (!attr->ct_attr.nf_ft) /* means only ct clear action, and not ct_clear,ct() */
		return;


@@ -86,7 +86,7 @@ static int mlx5e_init_xsk_rq(struct mlx5e_channel *c,
	if (err)
		return err;

-	return xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq_xdp_ix, 0);
+	return xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq_xdp_ix, c->napi.napi_id);
 }

 static int mlx5e_open_xsk_rq(struct mlx5e_channel *c, struct mlx5e_params *params,


@@ -61,16 +61,19 @@ static void mlx5e_ipsec_handle_tx_limit(struct work_struct *_work)
	struct mlx5e_ipsec_sa_entry *sa_entry = dwork->sa_entry;
	struct xfrm_state *x = sa_entry->x;

-	spin_lock(&x->lock);
+	if (sa_entry->attrs.drop)
+		return;
+
+	spin_lock_bh(&x->lock);
	xfrm_state_check_expire(x);
	if (x->km.state == XFRM_STATE_EXPIRED) {
		sa_entry->attrs.drop = true;
-		mlx5e_accel_ipsec_fs_modify(sa_entry);
-	}
-	spin_unlock(&x->lock);
+		spin_unlock_bh(&x->lock);

-	if (sa_entry->attrs.drop)
+		mlx5e_accel_ipsec_fs_modify(sa_entry);
		return;
+	}
+	spin_unlock_bh(&x->lock);

	queue_delayed_work(sa_entry->ipsec->wq, &dwork->dwork,
			   MLX5_IPSEC_RESCHED);
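
Editor's note: the restructuring exists because mlx5e_accel_ipsec_fs_modify() must not run under x->lock; the state flip is decided inside the lock, then the lock is dropped before touching hardware. The _bh variants also matter, since the xfrm state lock is taken from softirq context elsewhere. A hedged standalone sketch of decide-locked, act-unlocked (toy types, pthread mutex standing in for the spinlock):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct sa {
	pthread_mutex_t lock;
	bool expired;
	bool drop;
};

static void reprogram_hardware(struct sa *s)	/* may block: no lock held */
{
	printf("drop rule installed: %d\n", s->drop);
}

/* Decide under the lock; perform the slow side effect after unlocking. */
static void check_expire(struct sa *s)
{
	pthread_mutex_lock(&s->lock);
	if (s->expired) {
		s->drop = true;
		pthread_mutex_unlock(&s->lock);
		reprogram_hardware(s);
		return;
	}
	pthread_mutex_unlock(&s->lock);
}

int main(void)
{
	struct sa s = {PTHREAD_MUTEX_INITIALIZER, true, false};

	check_expire(&s);
	return 0;
}
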
@@ -1040,11 +1043,17 @@ err_fs:
	return err;
 }

-static void mlx5e_xfrm_free_policy(struct xfrm_policy *x)
+static void mlx5e_xfrm_del_policy(struct xfrm_policy *x)
 {
	struct mlx5e_ipsec_pol_entry *pol_entry = to_ipsec_pol_entry(x);

	mlx5e_accel_ipsec_fs_del_pol(pol_entry);
+}
+
+static void mlx5e_xfrm_free_policy(struct xfrm_policy *x)
+{
+	struct mlx5e_ipsec_pol_entry *pol_entry = to_ipsec_pol_entry(x);
+
	kfree(pol_entry);
 }

@@ -1065,6 +1074,7 @@ static const struct xfrmdev_ops mlx5e_ipsec_packet_xfrmdev_ops = {
	.xdo_dev_state_update_curlft = mlx5e_xfrm_update_curlft,
	.xdo_dev_policy_add = mlx5e_xfrm_add_policy,
+	.xdo_dev_policy_delete = mlx5e_xfrm_del_policy,
	.xdo_dev_policy_free = mlx5e_xfrm_free_policy,
 };


@@ -305,7 +305,17 @@ static void mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry,
	}

	mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, &attrs);
+
+	/* It is safe to execute the modify below unlocked since the only flows
+	 * that could affect this HW object, are create, destroy and this work.
+	 *
+	 * Creation flow can't co-exist with this modify work, the destruction
+	 * flow would cancel this work, and this work is a single entity that
+	 * can't conflict with it self.
+	 */
+	spin_unlock_bh(&sa_entry->x->lock);
	mlx5_accel_esp_modify_xfrm(sa_entry, &attrs);
+	spin_lock_bh(&sa_entry->x->lock);

	data.data_offset_condition_operand =
		MLX5_IPSEC_ASO_REMOVE_FLOW_PKT_CNT_OFFSET;
@@ -431,7 +441,7 @@ static void mlx5e_ipsec_handle_event(struct work_struct *_work)
	aso = sa_entry->ipsec->aso;
	attrs = &sa_entry->attrs;

-	spin_lock(&sa_entry->x->lock);
+	spin_lock_bh(&sa_entry->x->lock);
	ret = mlx5e_ipsec_aso_query(sa_entry, NULL);
	if (ret)
		goto unlock;
@@ -447,7 +457,7 @@ static void mlx5e_ipsec_handle_event(struct work_struct *_work)
	mlx5e_ipsec_handle_limits(sa_entry);

 unlock:
-	spin_unlock(&sa_entry->x->lock);
+	spin_unlock_bh(&sa_entry->x->lock);
	kfree(work);
 }

@@ -596,7 +606,8 @@ int mlx5e_ipsec_aso_query(struct mlx5e_ipsec_sa_entry *sa_entry,
	do {
		ret = mlx5_aso_poll_cq(aso->aso, false);
		if (ret)
-			usleep_range(2, 10);
+			/* We are in atomic context */
+			udelay(10);
	} while (ret && time_is_after_jiffies(expires));
	spin_unlock_bh(&aso->lock);
	return ret;
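
Editor's note: the switch from usleep_range() to udelay() is forced by context: this poll can now run with a spinlock held (spin_lock_bh above), where sleeping is illegal, so the wait must busy-spin. A hedged standalone sketch of a bounded busy-wait poll (a userspace approximation using a clock deadline; hw_done() is a made-up stand-in for the CQ poll):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool hw_done(int *attempts)	/* stand-in for mlx5_aso_poll_cq() */
{
	return --(*attempts) <= 0;
}

/* Bounded busy-wait: no sleeping allowed in the caller's context, so we
 * spin until success or until the deadline passes. */
static int poll_until(int *attempts, double timeout_s)
{
	clock_t end = clock() + (clock_t)(timeout_s * CLOCKS_PER_SEC);

	while (!hw_done(attempts)) {
		if (clock() > end)
			return -1;	/* timed out */
		/* in the kernel this spot is udelay(10), never usleep_range() */
	}
	return 0;
}

int main(void)
{
	int attempts = 5;

	printf("rc=%d\n", poll_until(&attempts, 1.0));
	return 0;
}
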


@@ -641,7 +641,7 @@ static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq)
 }

 static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *params,
-			     struct mlx5e_rq *rq)
+			     u32 xdp_frag_size, struct mlx5e_rq *rq)
 {
	struct mlx5_core_dev *mdev = c->mdev;
	int err;
@@ -665,7 +665,8 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
	if (err)
		return err;

-	return xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq->ix, c->napi.napi_id);
+	return __xdp_rxq_info_reg(&rq->xdp_rxq, rq->netdev, rq->ix, c->napi.napi_id,
+				  xdp_frag_size);
 }

 static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
@@ -2240,7 +2241,7 @@ static int mlx5e_open_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
 {
	int err;

-	err = mlx5e_init_rxq_rq(c, params, &c->rq);
+	err = mlx5e_init_rxq_rq(c, params, rq_params->xdp_frag_size, &c->rq);
	if (err)
		return err;


@@ -1439,6 +1439,7 @@ static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
		mlx5e_hairpin_flow_del(priv, flow);

	free_flow_post_acts(flow);
+	mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), attr);

	kvfree(attr->parse_attr);
	kfree(flow->attr);


@@ -518,10 +518,11 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
	struct mlx5_flow_rule *dst;
	void *in_flow_context, *vlan;
	void *in_match_value;
+	int reformat_id = 0;
	unsigned int inlen;
	int dst_cnt_size;
+	u32 *in, action;
	void *in_dests;
-	u32 *in;
	int err;

	if (mlx5_set_extended_dest(dev, fte, &extended_dest))
@@ -560,22 +561,42 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
	MLX5_SET(flow_context, in_flow_context, extended_destination,
		 extended_dest);
-	if (extended_dest) {
-		u32 action;

-		action = fte->action.action &
-			~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
-		MLX5_SET(flow_context, in_flow_context, action, action);
-	} else {
-		MLX5_SET(flow_context, in_flow_context, action,
-			 fte->action.action);
-		if (fte->action.pkt_reformat)
-			MLX5_SET(flow_context, in_flow_context, packet_reformat_id,
-				 fte->action.pkt_reformat->id);
+	action = fte->action.action;
+	if (extended_dest)
+		action &= ~MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
+
+	MLX5_SET(flow_context, in_flow_context, action, action);
+
+	if (!extended_dest && fte->action.pkt_reformat) {
+		struct mlx5_pkt_reformat *pkt_reformat = fte->action.pkt_reformat;
+
+		if (pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_SW) {
+			reformat_id = mlx5_fs_dr_action_get_pkt_reformat_id(pkt_reformat);
+			if (reformat_id < 0) {
+				mlx5_core_err(dev,
+					      "Unsupported SW-owned pkt_reformat type (%d) in FW-owned table\n",
+					      pkt_reformat->reformat_type);
+				err = reformat_id;
+				goto err_out;
+			}
+		} else {
+			reformat_id = fte->action.pkt_reformat->id;
+		}
	}
-	if (fte->action.modify_hdr)
+
+	MLX5_SET(flow_context, in_flow_context, packet_reformat_id, (u32)reformat_id);
+
+	if (fte->action.modify_hdr) {
+		if (fte->action.modify_hdr->owner == MLX5_FLOW_RESOURCE_OWNER_SW) {
+			mlx5_core_err(dev, "Can't use SW-owned modify_hdr in FW-owned table\n");
+			err = -EOPNOTSUPP;
+			goto err_out;
+		}
+
		MLX5_SET(flow_context, in_flow_context, modify_header_id,
			 fte->action.modify_hdr->id);
+	}

	MLX5_SET(flow_context, in_flow_context, encrypt_decrypt_type,
		 fte->action.crypto.type);
@@ -892,6 +913,8 @@ static int mlx5_cmd_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns,

	pkt_reformat->id = MLX5_GET(alloc_packet_reformat_context_out,
				    out, packet_reformat_id);
+	pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_FW;
	kfree(in);
	return err;
 }
@@ -976,6 +999,7 @@ static int mlx5_cmd_modify_header_alloc(struct mlx5_flow_root_namespace *ns,

	err = mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
	modify_hdr->id = MLX5_GET(alloc_modify_header_context_out, out, modify_header_id);
+	modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_FW;
	kfree(in);
	return err;
 }


@@ -54,8 +54,14 @@ struct mlx5_flow_definer {
	u32 id;
 };

+enum mlx5_flow_resource_owner {
+	MLX5_FLOW_RESOURCE_OWNER_FW,
+	MLX5_FLOW_RESOURCE_OWNER_SW,
+};
+
 struct mlx5_modify_hdr {
	enum mlx5_flow_namespace_type ns_type;
+	enum mlx5_flow_resource_owner owner;
	union {
		struct mlx5_fs_dr_action action;
		u32 id;
@@ -65,6 +71,7 @@ struct mlx5_modify_hdr {
 struct mlx5_pkt_reformat {
	enum mlx5_flow_namespace_type ns_type;
	int reformat_type; /* from mlx5_ifc */
+	enum mlx5_flow_resource_owner owner;
	union {
		struct mlx5_fs_dr_action action;
		u32 id;


@@ -140,14 +140,22 @@ out:
	return ret;
 }

-static void irq_release(struct mlx5_irq *irq)
+/* mlx5_system_free_irq - Free an IRQ
+ * @irq: IRQ to free
+ *
+ * Free the IRQ and other resources such as rmap from the system.
+ * BUT doesn't free or remove reference from mlx5.
+ * This function is very important for the shutdown flow, where we need to
+ * cleanup system resoruces but keep mlx5 objects alive,
+ * see mlx5_irq_table_free_irqs().
+ */
+static void mlx5_system_free_irq(struct mlx5_irq *irq)
 {
	struct mlx5_irq_pool *pool = irq->pool;
 #ifdef CONFIG_RFS_ACCEL
	struct cpu_rmap *rmap;
 #endif

-	xa_erase(&pool->irqs, irq->pool_index);
	/* free_irq requires that affinity_hint and rmap will be cleared before
	 * calling it. To satisfy this requirement, we call
	 * irq_cpu_rmap_remove() to remove the notifier
@@ -159,10 +167,18 @@ static void irq_release(struct mlx5_irq *irq)
	irq_cpu_rmap_remove(rmap, irq->map.virq);
 #endif

-	free_cpumask_var(irq->mask);
	free_irq(irq->map.virq, &irq->nh);
	if (irq->map.index && pci_msix_can_alloc_dyn(pool->dev->pdev))
		pci_msix_free_irq(pool->dev->pdev, irq->map);
+}
+
+static void irq_release(struct mlx5_irq *irq)
+{
+	struct mlx5_irq_pool *pool = irq->pool;
+
+	xa_erase(&pool->irqs, irq->pool_index);
+	mlx5_system_free_irq(irq);
+	free_cpumask_var(irq->mask);
	kfree(irq);
 }

@@ -579,15 +595,21 @@ void mlx5_irqs_release_vectors(struct mlx5_irq **irqs, int nirqs)
 int mlx5_irqs_request_vectors(struct mlx5_core_dev *dev, u16 *cpus, int nirqs,
			      struct mlx5_irq **irqs, struct cpu_rmap **rmap)
 {
+	struct mlx5_irq_table *table = mlx5_irq_table_get(dev);
+	struct mlx5_irq_pool *pool = table->pcif_pool;
	struct irq_affinity_desc af_desc;
	struct mlx5_irq *irq;
+	int offset = 1;
	int i;

+	if (!pool->xa_num_irqs.max)
+		offset = 0;
+
	af_desc.is_managed = false;
	for (i = 0; i < nirqs; i++) {
		cpumask_clear(&af_desc.mask);
		cpumask_set_cpu(cpus[i], &af_desc.mask);
-		irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap);
+		irq = mlx5_irq_request(dev, i + offset, &af_desc, rmap);
		if (IS_ERR(irq))
			break;
		irqs[i] = irq;
@@ -713,7 +735,8 @@ static void mlx5_irq_pool_free_irqs(struct mlx5_irq_pool *pool)
	unsigned long index;

	xa_for_each(&pool->irqs, index, irq)
-		free_irq(irq->map.virq, &irq->nh);
+		mlx5_system_free_irq(irq);
 }

 static void mlx5_irq_pools_free_irqs(struct mlx5_irq_table *table)


@@ -1421,9 +1421,13 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
	}
	case DR_ACTION_TYP_TNL_L3_TO_L2:
	{
-		u8 hw_actions[DR_ACTION_CACHE_LINE_SIZE] = {};
+		u8 *hw_actions;
		int ret;

+		hw_actions = kzalloc(DR_ACTION_CACHE_LINE_SIZE, GFP_KERNEL);
+		if (!hw_actions)
+			return -ENOMEM;
+
		ret = mlx5dr_ste_set_action_decap_l3_list(dmn->ste_ctx,
							  data, data_sz,
							  hw_actions,
@@ -1431,6 +1435,7 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
							  &action->rewrite->num_of_actions);
		if (ret) {
			mlx5dr_dbg(dmn, "Failed creating decap l3 action list\n");
+			kfree(hw_actions);
			return ret;
		}

@@ -1440,6 +1445,7 @@ dr_action_create_reformat_action(struct mlx5dr_domain *dmn,
		ret = mlx5dr_ste_alloc_modify_hdr(action);
		if (ret) {
			mlx5dr_dbg(dmn, "Failed preparing reformat data\n");
+			kfree(hw_actions);
			return ret;
		}
		return 0;
@@ -2130,6 +2136,11 @@ mlx5dr_action_create_aso(struct mlx5dr_domain *dmn, u32 obj_id,
	return action;
 }

+u32 mlx5dr_action_get_pkt_reformat_id(struct mlx5dr_action *action)
+{
+	return action->reformat->id;
+}
+
 int mlx5dr_action_destroy(struct mlx5dr_action *action)
 {
	if (WARN_ON_ONCE(refcount_read(&action->refcount) > 1))


@@ -331,8 +331,16 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
	}

	if (fte->action.action & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT) {
-		bool is_decap = fte->action.pkt_reformat->reformat_type ==
-			MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2;
+		bool is_decap;
+
+		if (fte->action.pkt_reformat->owner == MLX5_FLOW_RESOURCE_OWNER_FW) {
+			err = -EINVAL;
+			mlx5dr_err(domain, "FW-owned reformat can't be used in SW rule\n");
+			goto free_actions;
+		}
+
+		is_decap = fte->action.pkt_reformat->reformat_type ==
+			   MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2;

		if (is_decap)
			actions[num_actions++] =
@@ -661,6 +669,7 @@ static int mlx5_cmd_dr_packet_reformat_alloc(struct mlx5_flow_root_namespace *ns
		return -EINVAL;
	}

+	pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
	pkt_reformat->action.dr_action = action;

	return 0;
@@ -691,6 +700,7 @@ static int mlx5_cmd_dr_modify_header_alloc(struct mlx5_flow_root_namespace *ns,
		return -EINVAL;
	}

+	modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
	modify_hdr->action.dr_action = action;

	return 0;
@@ -817,6 +827,19 @@ static u32 mlx5_cmd_dr_get_capabilities(struct mlx5_flow_root_namespace *ns,
	return steering_caps;
 }

+int mlx5_fs_dr_action_get_pkt_reformat_id(struct mlx5_pkt_reformat *pkt_reformat)
+{
+	switch (pkt_reformat->reformat_type) {
+	case MLX5_REFORMAT_TYPE_L2_TO_VXLAN:
+	case MLX5_REFORMAT_TYPE_L2_TO_NVGRE:
+	case MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL:
+	case MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL:
+	case MLX5_REFORMAT_TYPE_INSERT_HDR:
+		return mlx5dr_action_get_pkt_reformat_id(pkt_reformat->action.dr_action);
+	}
+	return -EOPNOTSUPP;
+}
+
 bool mlx5_fs_dr_is_supported(struct mlx5_core_dev *dev)
 {
	return mlx5dr_is_supported(dev);


@@ -38,6 +38,8 @@ struct mlx5_fs_dr_table {

 bool mlx5_fs_dr_is_supported(struct mlx5_core_dev *dev);

+int mlx5_fs_dr_action_get_pkt_reformat_id(struct mlx5_pkt_reformat *pkt_reformat);
+
 const struct mlx5_flow_cmds *mlx5_fs_cmd_get_dr_cmds(void);

 #else
@@ -47,6 +49,11 @@ static inline const struct mlx5_flow_cmds *mlx5_fs_cmd_get_dr_cmds(void)
	return NULL;
 }

+static inline u32 mlx5_fs_dr_action_get_pkt_reformat_id(struct mlx5_pkt_reformat *pkt_reformat)
+{
+	return 0;
+}
+
 static inline bool mlx5_fs_dr_is_supported(struct mlx5_core_dev *dev)
 {
	return false;


@@ -151,6 +151,8 @@ mlx5dr_action_create_dest_match_range(struct mlx5dr_domain *dmn,

 int mlx5dr_action_destroy(struct mlx5dr_action *action);

+u32 mlx5dr_action_get_pkt_reformat_id(struct mlx5dr_action *action);
+
 int mlx5dr_definer_get(struct mlx5dr_domain *dmn, u16 format_id,
		       u8 *dw_selectors, u8 *byte_selectors,
		       u8 *match_mask, u32 *definer_id);


@@ -582,8 +582,7 @@ qcaspi_spi_thread(void *data)
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		if ((qca->intr_req == qca->intr_svc) &&
-		    (qca->txr.skb[qca->txr.head] == NULL) &&
-		    (qca->sync == QCASPI_SYNC_READY))
+		    !qca->txr.skb[qca->txr.head])
			schedule();

		set_current_state(TASK_RUNNING);


@@ -2950,7 +2950,7 @@ static u32 efx_ef10_extract_event_ts(efx_qword_t *event)
 	return tstamp;
 }
 
-static void
+static int
 efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 {
 	struct efx_nic *efx = channel->efx;
@@ -2958,13 +2958,14 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 	unsigned int tx_ev_desc_ptr;
 	unsigned int tx_ev_q_label;
 	unsigned int tx_ev_type;
+	int work_done;
 	u64 ts_part;
 
 	if (unlikely(READ_ONCE(efx->reset_pending)))
-		return;
+		return 0;
 
 	if (unlikely(EFX_QWORD_FIELD(*event, ESF_DZ_TX_DROP_EVENT)))
-		return;
+		return 0;
 
 	/* Get the transmit queue */
 	tx_ev_q_label = EFX_QWORD_FIELD(*event, ESF_DZ_TX_QLABEL);
@@ -2973,8 +2974,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 	if (!tx_queue->timestamping) {
 		/* Transmit completion */
 		tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, ESF_DZ_TX_DESCR_INDX);
-		efx_xmit_done(tx_queue, tx_ev_desc_ptr & tx_queue->ptr_mask);
-		return;
+		return efx_xmit_done(tx_queue, tx_ev_desc_ptr & tx_queue->ptr_mask);
 	}
 
 	/* Transmit timestamps are only available for 8XXX series. They result
@@ -3000,6 +3000,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 	 * fields in the event.
 	 */
 	tx_ev_type = EFX_QWORD_FIELD(*event, ESF_EZ_TX_SOFT1);
+	work_done = 0;
 
 	switch (tx_ev_type) {
 	case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION:
@@ -3016,6 +3017,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 		tx_queue->completed_timestamp_major = ts_part;
 
 		efx_xmit_done_single(tx_queue);
+		work_done = 1;
 		break;
 
 	default:
@@ -3026,6 +3028,8 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
 			  EFX_QWORD_VAL(*event));
 		break;
 	}
+
+	return work_done;
 }
 
 static void
@@ -3081,13 +3085,16 @@ static void efx_ef10_handle_driver_generated_event(struct efx_channel *channel,
 	}
 }
 
+#define EFX_NAPI_MAX_TX 512
+
 static int efx_ef10_ev_process(struct efx_channel *channel, int quota)
 {
 	struct efx_nic *efx = channel->efx;
 	efx_qword_t event, *p_event;
 	unsigned int read_ptr;
-	int ev_code;
+	int spent_tx = 0;
 	int spent = 0;
+	int ev_code;
 
 	if (quota <= 0)
 		return spent;
@@ -3126,7 +3133,11 @@ static int efx_ef10_ev_process(struct efx_channel *channel, int quota)
 		}
 		break;
 	case ESE_DZ_EV_CODE_TX_EV:
-		efx_ef10_handle_tx_event(channel, &event);
+		spent_tx += efx_ef10_handle_tx_event(channel, &event);
+		if (spent_tx >= EFX_NAPI_MAX_TX) {
+			spent = quota;
+			goto out;
+		}
 		break;
	case ESE_DZ_EV_CODE_DRIVER_EV:
 		efx_ef10_handle_driver_event(channel, &event);
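
Taken together, these hunks make the TX event handler report how many completions it processed and cap the TX work done in one event-loop pass at EFX_NAPI_MAX_TX; once the cap is hit, the function claims its whole quota so NAPI reschedules the channel rather than letting the driver loop unbounded during TX-completion storms. Below is a standalone, compilable sketch of that budget pattern in plain C; every name in it (NAPI_MAX_TX, handle_tx_event, ev_process) is hypothetical, and the fixed 64-completion burst merely stands in for real hardware events.

	#include <stdio.h>

	#define NAPI_MAX_TX 512	/* mirrors EFX_NAPI_MAX_TX in the patch */

	/* Stand-in for efx_ef10_handle_tx_event(): pretend each TX event
	 * completes a burst of 64 packets and report that count back.
	 */
	static int handle_tx_event(void)
	{
		return 64;
	}

	static int ev_process(int quota)
	{
		int spent_tx = 0;
		int spent = 0;

		while (spent < quota) {
			spent_tx += handle_tx_event();
			if (spent_tx >= NAPI_MAX_TX) {
				spent = quota;	/* claim the full budget... */
				break;		/* ...so the caller polls again */
			}
			spent++;
		}
		return spent;
	}

	int main(void)
	{
		int quota = 64;	/* a typical NAPI budget */

		printf("spent %d of %d\n", ev_process(quota), quota);
		return 0;
	}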

View File

@@ -253,6 +253,8 @@ static void ef100_ev_read_ack(struct efx_channel *channel)
 		    efx_reg(channel->efx, ER_GZ_EVQ_INT_PRIME));
 }
 
+#define EFX_NAPI_MAX_TX 512
+
 static int ef100_ev_process(struct efx_channel *channel, int quota)
 {
 	struct efx_nic *efx = channel->efx;
@@ -260,6 +262,7 @@ static int ef100_ev_process(struct efx_channel *channel, int quota)
 	bool evq_phase, old_evq_phase;
 	unsigned int read_ptr;
 	efx_qword_t *p_event;
+	int spent_tx = 0;
 	int spent = 0;
 	bool ev_phase;
 	int ev_type;
@@ -295,7 +298,9 @@ static int ef100_ev_process(struct efx_channel *channel, int quota)
 			efx_mcdi_process_event(channel, p_event);
 			break;
 		case ESE_GZ_EF100_EV_TX_COMPLETION:
-			ef100_ev_tx(channel, p_event);
+			spent_tx += ef100_ev_tx(channel, p_event);
+			if (spent_tx >= EFX_NAPI_MAX_TX)
+				spent = quota;
 			break;
 		case ESE_GZ_EF100_EV_DRIVER:
 			netif_info(efx, drv, efx->net_dev,
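
The EF100 variant has no goto out: it only sets spent = quota and relies on the event loop's existing quota check to end the pass. Either way the count propagates to the driver's NAPI poll handler, whose contract the fragment below illustrates; example_poll and ev_process are invented names, while napi_complete_done() is the real kernel API.

	/* Illustrative only; not taken from the sfc driver. */
	static int example_poll(struct napi_struct *napi, int budget)
	{
		int spent = ev_process(budget);	/* e.g. ef100_ev_process() */

		/* Returning the full budget means "more work pending", so
		 * NAPI keeps polling; only a partial spend may leave
		 * polling mode.
		 */
		if (spent < budget)
			napi_complete_done(napi, spent);
		return spent;
	}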

View File

@@ -346,7 +346,7 @@ void ef100_tx_write(struct efx_tx_queue *tx_queue)
 	ef100_tx_push_buffers(tx_queue);
 }
 
-void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
+int ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
 {
 	unsigned int tx_done =
 		EFX_QWORD_FIELD(*p_event, ESF_GZ_EV_TXCMPL_NUM_DESC);
@@ -357,7 +357,7 @@ void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event)
 	unsigned int tx_index = (tx_queue->read_count + tx_done - 1) &
 				tx_queue->ptr_mask;
 
-	efx_xmit_done(tx_queue, tx_index);
+	return efx_xmit_done(tx_queue, tx_index);
 }
 
 /* Add a socket buffer to a TX queue

View File

@@ -20,7 +20,7 @@ void ef100_tx_init(struct efx_tx_queue *tx_queue);
 void ef100_tx_write(struct efx_tx_queue *tx_queue);
 unsigned int ef100_tx_max_skb_descs(struct efx_nic *efx);
 
-void ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event);
+int ef100_ev_tx(struct efx_channel *channel, const efx_qword_t *p_event);
 
 netdev_tx_t ef100_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb);
 int __ef100_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb,

View File

@@ -250,7 +250,7 @@ void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue)
 	}
 }
 
-void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
+int efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
 {
 	unsigned int fill_level, pkts_compl = 0, bytes_compl = 0;
 	unsigned int efv_pkts_compl = 0;
@@ -280,6 +280,8 @@ void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index)
 	}
 
 	efx_xmit_done_check_empty(tx_queue);
+
+	return pkts_compl + efv_pkts_compl;
 }
 
 /* Remove buffers put into a tx_queue for the current packet.
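
Note that the new return value sums ordinary host completions (pkts_compl) with port-representor completions (efv_pkts_compl), so both kinds count against the TX budget. The toy program below (all names hypothetical) shows the shape of such a ring walk: advance the read pointer up to the completed index and tally the buffers that end a packet.

	#include <stdio.h>

	#define RING_SIZE 8			/* power of two, like the driver's rings */
	#define PTR_MASK (RING_SIZE - 1)

	struct buffer { int is_last_in_packet; };

	static struct buffer ring[RING_SIZE];
	static unsigned int read_count;

	/* Count packets completed up to and including "index". */
	static int xmit_done(unsigned int index)
	{
		unsigned int stop = (index + 1) & PTR_MASK;
		int pkts_compl = 0;

		while ((read_count & PTR_MASK) != stop) {
			if (ring[read_count & PTR_MASK].is_last_in_packet)
				pkts_compl++;
			read_count++;
		}
		return pkts_compl;	/* the patch returns this instead of void */
	}

	int main(void)
	{
		int i;

		for (i = 0; i < RING_SIZE; i++)
			ring[i].is_last_in_packet = (i & 1);	/* every 2nd buffer ends a packet */
		printf("completed %d packets\n", xmit_done(5));	/* -> 3 */
		return 0;
	}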

View File

@@ -28,7 +28,7 @@ static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer)
 }
 
 void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue);
-void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
+int efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);
 void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,
 			unsigned int insert_count);

View File

@@ -1348,3 +1348,5 @@ module_spi_driver(adf7242_driver);
 MODULE_AUTHOR("Michael Hennerich <michael.hennerich@analog.com>");
 MODULE_DESCRIPTION("ADF7242 IEEE802.15.4 Transceiver Driver");
 MODULE_LICENSE("GPL");
+
+MODULE_FIRMWARE(FIRMWARE);
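
MODULE_FIRMWARE() only records the firmware name in the module's metadata, letting tools such as modinfo and initramfs generators discover the dependency; the driver still loads the blob at runtime. A minimal hedged illustration of the pairing follows; EXAMPLE_FW and example_load are invented, request_firmware() and release_firmware() are the real API.

	#include <linux/firmware.h>
	#include <linux/module.h>

	#define EXAMPLE_FW "example/fw.bin"	/* hypothetical blob name */

	static int example_load(struct device *dev)
	{
		const struct firmware *fw;
		int ret;

		ret = request_firmware(&fw, EXAMPLE_FW, dev);
		if (ret)
			return ret;
		/* ... program the device from fw->data, fw->size ... */
		release_firmware(fw);
		return 0;
	}

	/* Declares the dependency for modinfo & co.; matches the blob above. */
	MODULE_FIRMWARE(EXAMPLE_FW);
	MODULE_LICENSE("GPL");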

View File

@@ -685,7 +685,7 @@ static int hwsim_del_edge_nl(struct sk_buff *msg, struct genl_info *info)
 static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
 {
 	struct nlattr *edge_attrs[MAC802154_HWSIM_EDGE_ATTR_MAX + 1];
-	struct hwsim_edge_info *einfo;
+	struct hwsim_edge_info *einfo, *einfo_old;
 	struct hwsim_phy *phy_v0;
 	struct hwsim_edge *e;
 	u32 v0, v1;
@@ -723,8 +723,10 @@ static int hwsim_set_edge_lqi(struct sk_buff *msg, struct genl_info *info)
 	list_for_each_entry_rcu(e, &phy_v0->edges, list) {
 		if (e->endpoint->idx == v1) {
 			einfo->lqi = lqi;
-			rcu_assign_pointer(e->info, einfo);
+			einfo_old = rcu_replace_pointer(e->info, einfo,
+							lockdep_is_held(&hwsim_phys_lock));
 			rcu_read_unlock();
+			kfree_rcu(einfo_old, rcu);
 			mutex_unlock(&hwsim_phys_lock);
 			return 0;
 		}
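
The bug being fixed is a leak: rcu_assign_pointer() published the new einfo but dropped the only reference to the old one. rcu_replace_pointer() hands the old pointer back (with a lockdep check that the writer lock is held) and kfree_rcu() frees it only after all in-flight readers are done. A generic sketch of that pattern, assuming a mutex-protected writer; this is not the hwsim code itself.

	#include <linux/mutex.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct item {
		int val;
		struct rcu_head rcu;	/* needed for kfree_rcu() */
	};

	static DEFINE_MUTEX(writer_lock);
	static struct item __rcu *cur_item;

	static void replace_item(struct item *new_item)
	{
		struct item *old;

		mutex_lock(&writer_lock);
		old = rcu_replace_pointer(cur_item, new_item,
					  lockdep_is_held(&writer_lock));
		mutex_unlock(&writer_lock);
		kfree_rcu(old, rcu);	/* deferred: readers may still use "old" */
	}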

View File

@@ -936,7 +936,7 @@ static int dp83867_phy_reset(struct phy_device *phydev)
 {
 	int err;
 
-	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
+	err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
 	if (err < 0)
 		return err;
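
Per the DP83867 datasheet, CTRL bit 15 (SW_RESET) is a full reset that returns registers to defaults, whereas bit 14 (SW_RESTART) restarts the PHY without clearing configuration, so this one-bit change decides how much state .phy_reset wipes. A hedged sketch of the flow follows; example_hw_reset is invented, phy_write() and usleep_range() are real kernel APIs, and the register/bit values shown should be checked against dp83867.c.

	#include <linux/delay.h>
	#include <linux/phy.h>

	#define DP83867_CTRL		0x1f	/* as defined in the driver */
	#define DP83867_SW_RESET	BIT(15)	/* full reset, registers included */

	static int example_hw_reset(struct phy_device *phydev)
	{
		int err;

		err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
		if (err < 0)
			return err;

		usleep_range(10, 20);	/* let the self-clearing bit settle */

		/* the real driver then re-applies configuration lost to the reset */
		return 0;
	}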

Some files were not shown because too many files have changed in this diff.