Char/Misc driver updates for 5.13-rc1

Here is the big set of various smaller driver subsystem updates for
5.13-rc1. Major bits in here are:

 - habanalabs driver updates
 - hwtracing driver updates
 - interconnect driver updates
 - mhi driver updates
 - extcon driver updates
 - fpga driver updates
 - new binder features added
 - nvmem driver updates
 - phy driver updates
 - soundwire driver updates
 - smaller misc and char driver fixes and updates
 - bluetooth driver bugfix that the maintainer wanted to go through
   this tree

All of these have been in linux-next with no reported issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCYIa0CQ8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+ylQ/QCgwLQleU5hH/iQwxbHgNL5GawNUroAmwZtxILF
1r6zjmGi0Ak4oFBf7A0T
=Rrl6
-----END PGP SIGNATURE-----

Merge tag 'char-misc-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of various smaller driver subsystem updates for
  5.13-rc1. Major bits in here are:

   - habanalabs driver updates
   - hwtracing driver updates
   - interconnect driver updates
   - mhi driver updates
   - extcon driver updates
   - fpga driver updates
   - new binder features added
   - nvmem driver updates
   - phy driver updates
   - soundwire driver updates
   - smaller misc and char driver fixes and updates
   - bluetooth driver bugfix that the maintainer wanted to go through
     this tree

  All of these have been in linux-next with no reported issues"

* tag 'char-misc-5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (330 commits)
  bluetooth: eliminate the potential race condition when removing the HCI controller
  coresight: etm-perf: Fix define build issue when built as module
  phy: Revert "phy: ti: j721e-wiz: add missing of_node_put"
  phy: ti: j721e-wiz: Add missing include linux/slab.h
  phy: phy-twl4030-usb: Fix possible use-after-free in twl4030_usb_remove()
  stm class: Use correct UUID APIs
  intel_th: pci: Add Alder Lake-M support
  intel_th: pci: Add Rocket Lake CPU support
  intel_th: Consistency and off-by-one fix
  intel_th: Constify attribute_group structs
  intel_th: Constify all drvdata references
  stm class: Remove an unused function
  habanalabs/gaudi: Fix uninitialized return code rc when read size is zero
  greybus: es2: fix kernel-doc warnings
  mei: me: add Alder Lake P device id.
  dw-xdata-pcie: Update outdated info and improve text format
  dw-xdata-pcie: Fix documentation build warns
  fbdev: zero-fill colormap in fbcmap.c
  firmware: qcom-scm: Fix QCOM_SCM configuration
  speakup: i18n: Switch to kmemdup_nul() in spk_msg_set()
  ...
commit 8e3a324950
@@ -82,6 +82,24 @@ Description:    Allows the root user to read or write 64 bit data directly
                If the IOMMU is disabled, it also allows the root user to read
                or write from the host a device VA of a host mapped memory

What:           /sys/kernel/debug/habanalabs/hl<n>/data_dma
Date:           Apr 2021
KernelVersion:  5.13
Contact:        ogabbay@kernel.org
Description:    Allows the root user to read from the device's internal
                memory (DRAM/SRAM) through a DMA engine.
                This property is a binary blob that contains the result of the
                DMA transfer.
                This custom interface is needed (instead of using the generic
                Linux user-space PCI mapping) because the amount of internal
                memory is huge (>32GB) and reading it via the PCI bar will take
                a very long time.
                This interface doesn't support concurrency in the same device.
                In GAUDI and GOYA, this action can cause undefined behavior
                in case it is done while the device is executing user
                workloads.
                Only supported on GAUDI at this stage.

What:           /sys/kernel/debug/habanalabs/hl<n>/device
Date:           Jan 2019
KernelVersion:  5.1

@@ -90,6 +108,24 @@ Description:   Enables the root user to set the device to specific state.
                Valid values are "disable", "enable", "suspend", "resume".
                User can read this property to see the valid values.

What:           /sys/kernel/debug/habanalabs/hl<n>/dma_size
Date:           Apr 2021
KernelVersion:  5.13
Contact:        ogabbay@kernel.org
Description:    Specify the size of the DMA transaction when using DMA to read
                from the device's internal memory. The value cannot be larger
                than 128MB. Writing to this value initiates the DMA transfer.
                When the write is finished, the user can read the "data_dma"
                blob.
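The two entries above work as a pair: writing dma_size both sets the transfer
length and kicks off the DMA, after which data_dma holds the result. A minimal
sketch of the flow, assuming device hl0, a 4 KB transfer, and that the target
device address has already been selected through the driver's addr debugfs
entry (an assumption, not shown in this hunk):

                echo 0x1000 > /sys/kernel/debug/habanalabs/hl0/dma_size
                cat /sys/kernel/debug/habanalabs/hl0/data_dma > dump.bin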

What:           /sys/kernel/debug/habanalabs/hl<n>/dump_security_violations
Date:           Jan 2021
KernelVersion:  5.12
Contact:        ogabbay@kernel.org
Description:    Dumps all security violations to dmesg. This will also ack
                all security violations, meaning those violations will not be
                dumped the next time the user calls this API.

What:           /sys/kernel/debug/habanalabs/hl<n>/engines
Date:           Jul 2019
KernelVersion:  5.3

@@ -154,6 +190,16 @@ Description:   Displays the hop values and physical address for a given ASID
                e.g. to display info about VA 0x1000 for ASID 1 you need to do:
                echo "1 0x1000" > /sys/kernel/debug/habanalabs/hl0/mmu

What:           /sys/kernel/debug/habanalabs/hl<n>/mmu_error
Date:           Mar 2021
KernelVersion:  5.12
Contact:        fkassabri@habana.ai
Description:    Check and display page fault or access violation mmu errors for
                all MMUs specified in mmu_cap_mask.
                e.g. to display error info for MMU hw cap bit 9, you need to do:
                echo "0x200" > /sys/kernel/debug/habanalabs/hl0/mmu_error
                cat /sys/kernel/debug/habanalabs/hl0/mmu_error

What:           /sys/kernel/debug/habanalabs/hl<n>/set_power_state
Date:           Jan 2019
KernelVersion:  5.1

@@ -161,6 +207,13 @@ Contact:        ogabbay@kernel.org
Description:    Sets the PCI power state. Valid values are "1" for D0 and "2"
                for D3Hot.

What:           /sys/kernel/debug/habanalabs/hl<n>/stop_on_err
Date:           Mar 2020
KernelVersion:  5.6
Contact:        ogabbay@kernel.org
Description:    Sets the stop-on-error option for the device engines. Value of
                "0" is for disable, otherwise enable.

What:           /sys/kernel/debug/habanalabs/hl<n>/userptr
Date:           Jan 2019
KernelVersion:  5.1

@@ -174,19 +227,4 @@ Date:           Jan 2019
KernelVersion:  5.1
Contact:        ogabbay@kernel.org
Description:    Displays a list with information about all the active virtual
                address mappings per ASID

What:           /sys/kernel/debug/habanalabs/hl<n>/stop_on_err
Date:           Mar 2020
KernelVersion:  5.6
Contact:        ogabbay@kernel.org
Description:    Sets the stop-on-error option for the device engines. Value of
                "0" is for disable, otherwise enable.

What:           /sys/kernel/debug/habanalabs/hl<n>/dump_security_violations
Date:           Jan 2021
KernelVersion:  5.12
Contact:        ogabbay@kernel.org
Description:    Dumps all security violations to dmesg. This will also ack
                all security violations, meaning those violations will not be
                dumped the next time the user calls this API.
                address mappings per ASID and all user mappings of HW blocks

@@ -1,4 +1,5 @@
What:           /sys/devices/pci0000:00/*/QEMU0001:00/capability
What:           /sys/devices/pci0000:00/*/QEMU0001:00/capability for MMIO
                /sys/bus/pci/drivers/pvpanic-pci/0000:00:0*.0/capability for PCI
Date:           Jan 2021
Contact:        zhenwei pi <pizhenwei@bytedance.com>
Description:

@@ -12,6 +13,7 @@ Description:
                https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/specs/pvpanic.txt

What:           /sys/devices/pci0000:00/*/QEMU0001:00/events
                /sys/bus/pci/drivers/pvpanic-pci/0000:00:0*.0/events for PCI
Date:           Jan 2021
Contact:        zhenwei pi <pizhenwei@bytedance.com>
Description:
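Both pvpanic attributes are bit masks as defined in the QEMU pvpanic
specification referenced above, with bit 0 the "panicked" event and bit 1
"crashloaded". A quick check from a guest might look like this (the MMIO
path follows the entry above; the value 3, meaning both events are supported,
is illustrative):

                cat /sys/devices/pci0000:00/*/QEMU0001:00/capability
                3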
@@ -0,0 +1,49 @@
What:           /sys/class/misc/dw-xdata-pcie.<device>/write
Date:           April 2021
KernelVersion:  5.13
Contact:        Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Description:    Allows the user to enable the PCIe traffic generator which
                will create write TLP frames - from the Root Complex to the
                Endpoint direction - or to disable the PCIe traffic generator
                in all directions.

                Write y/1/on to enable, n/0/off to disable.

                Usage e.g.
                 echo 1 > /sys/class/misc/dw-xdata-pcie.<device>/write
                or
                 echo 0 > /sys/class/misc/dw-xdata-pcie.<device>/write

                The user can read the current PCIe link throughput generated
                through this generator in MB/s.

                Usage e.g.
                 cat /sys/class/misc/dw-xdata-pcie.<device>/write
                 204

                The file is read and write.

What:           /sys/class/misc/dw-xdata-pcie.<device>/read
Date:           April 2021
KernelVersion:  5.13
Contact:        Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Description:    Allows the user to enable the PCIe traffic generator which
                will create read TLP frames - from the Endpoint to the Root
                Complex direction - or to disable the PCIe traffic generator
                in all directions.

                Write y/1/on to enable, n/0/off to disable.

                Usage e.g.
                 echo 1 > /sys/class/misc/dw-xdata-pcie.<device>/read
                or
                 echo 0 > /sys/class/misc/dw-xdata-pcie.<device>/read

                The user can read the current PCIe link throughput generated
                through this generator in MB/s.

                Usage e.g.
                 cat /sys/class/misc/dw-xdata-pcie.<device>/read
                 199

                The file is read and write.
@@ -1,41 +0,0 @@
Qualcomm's PM8941 USB ID Extcon device

Some Qualcomm PMICs have a "misc" module that can be used to detect when
the USB ID pin has been pulled low or high.

PROPERTIES

- compatible:
    Usage: required
    Value type: <string>
    Definition: Should contain "qcom,pm8941-misc";

- reg:
    Usage: required
    Value type: <u32>
    Definition: Should contain the offset to the misc address space

- interrupts:
    Usage: required
    Value type: <prop-encoded-array>
    Definition: Should contain the usb id interrupt

- interrupt-names:
    Usage: required
    Value type: <stringlist>
    Definition: Should contain the string "usb_id" for the usb id interrupt

Example:

	pmic {
		usb_id: misc@900 {
			compatible = "qcom,pm8941-misc";
			reg = <0x900>;
			interrupts = <0x0 0x9 0 IRQ_TYPE_EDGE_BOTH>;
			interrupt-names = "usb_id";
		};
	}

	usb-controller {
		extcon = <&usb_id>;
	};
@@ -0,0 +1,62 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/extcon/qcom,pm8941-misc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm Technologies, Inc. PM8941 USB ID Extcon device

maintainers:
  - Guru Das Srinagesh <gurus@codeaurora.org>

description: |
  Some Qualcomm PMICs have a "misc" module that can be used to detect when
  the USB ID pin has been pulled low or high.

properties:
  compatible:
    items:
      - const: qcom,pm8941-misc

  reg:
    maxItems: 1

  interrupts:
    minItems: 1
    maxItems: 2

  interrupt-names:
    minItems: 1
    items:
      - const: usb_id
      - const: usb_vbus

required:
  - compatible
  - reg
  - interrupts
  - interrupt-names

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/irq.h>

    pmic {
        #address-cells = <1>;
        #size-cells = <0>;
        interrupt-controller;
        #interrupt-cells = <4>;

        usb_id: misc@900 {
            compatible = "qcom,pm8941-misc";
            reg = <0x900>;
            interrupts = <0x0 0x9 0 IRQ_TYPE_EDGE_BOTH>;
            interrupt-names = "usb_id";
        };
    };

    usb-controller {
        extcon = <&usb_id>;
    };
@@ -245,13 +245,10 @@ Base tree contains:

Overlay contains:

/dts-v1/ /plugin/;
/ {
	fragment@0 {
		target = <&fpga_region0>;
		#address-cells = <1>;
		#size-cells = <1>;
		__overlay__ {
/dts-v1/;
/plugin/;

&fpga_region0 {
	#address-cells = <1>;
	#size-cells = <1>;

@@ -274,8 +271,6 @@ Overlay contains:
			compatible = "altr,onchipmem-15.1";
			reg = <0x0 0x10000>;
		};
	};
		};
	};

@@ -371,13 +366,11 @@ Live Device Tree contains:
	};

DT Overlay contains:
/dts-v1/ /plugin/;
/ {
	fragment@0 {
		target = <&fpga_region0>;
		#address-cells = <1>;
		#size-cells = <1>;
		__overlay__ {

/dts-v1/;
/plugin/;

&fpga_region0 {
	#address-cells = <1>;
	#size-cells = <1>;

@@ -390,7 +383,6 @@ fragment@0 {
			#gpio-cells = <0x2>;
			xlnx,gpio-width = <0x6>;
		};
	};
};

Device Tree Example: Full Reconfiguration to add PRRs

@@ -402,13 +394,11 @@ This example programs the FPGA to have two regions that can later be partially
configured. Each region has its own bridge in the FPGA fabric.

DT Overlay contains:
/dts-v1/ /plugin/;
/ {
	fragment@0 {
		target = <&fpga_region0>;
		#address-cells = <1>;
		#size-cells = <1>;
		__overlay__ {

/dts-v1/;
/plugin/;

&fpga_region0 {
	#address-cells = <1>;
	#size-cells = <1>;

@@ -437,8 +427,6 @@ DT Overlay contains:
			ranges;
		};
	};
		};
	};
};

Device Tree Example: Partial Reconfiguration

@@ -451,13 +439,10 @@ differences are that the FPGA is partially reconfigured due to the
"partial-fpga-config" boolean and the only bridge that is controlled during
programming is the FPGA based bridge of fpga_region1.

/dts-v1/ /plugin/;
/ {
	fragment@0 {
		target = <&fpga_region1>;
		#address-cells = <1>;
		#size-cells = <1>;
		__overlay__ {
/dts-v1/;
/plugin/;

&fpga_region1 {
	#address-cells = <1>;
	#size-cells = <1>;

@@ -472,8 +457,6 @@ programming is the FPGA based bridge of fpga_region1.
			#gpio-cells = <0x2>;
			gpio-controller;
		};
	};
	};
};

Constraints
@@ -7,13 +7,24 @@ changes from passing through the bridge. The controller can also
couple / enable the bridges which allows traffic to pass through the
bridge normally.

Xilinx LogiCORE Dynamic Function eXchange (DFX) AXI shutdown manager
Softcore is compatible with the Xilinx LogiCORE pr-decoupler.

The Dynamic Function eXchange AXI shutdown manager prevents AXI traffic
from passing through the bridge. The controller safely handles AXI4MM
and AXI4-Lite interfaces on a Reconfigurable Partition when it is
undergoing dynamic reconfiguration, preventing the system deadlock
that can occur if AXI transactions are interrupted by DFX.

The driver supports only MMIO handling. A PR region can have multiple
PR Decouplers which can be handled independently or chained via decouple/
decouple_status signals.

Required properties:
- compatible		: Should contain "xlnx,pr-decoupler-1.00" followed by
			  "xlnx,pr-decoupler"
			  "xlnx,pr-decoupler" or
			  "xlnx,dfx-axi-shutdown-manager-1.00" followed by
			  "xlnx,dfx-axi-shutdown-manager"
- regs			: base address and size for decoupler module
- clocks		: input clock to IP
- clock-names		: should contain "aclk"

@@ -22,6 +33,7 @@ See Documentation/devicetree/bindings/fpga/fpga-region.txt and
Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.

Example:
Partial Reconfig Decoupler:
	fpga-bridge@100000450 {
		compatible = "xlnx,pr-decoupler-1.00",
			     "xlnx-pr-decoupler";

@@ -30,3 +42,13 @@ Example:
		clock-names = "aclk";
		bridge-enable = <0>;
	};

Dynamic Function eXchange AXI shutdown manager:
	fpga-bridge@100000450 {
		compatible = "xlnx,dfx-axi-shutdown-manager-1.00",
			     "xlnx,dfx-axi-shutdown-manager";
		regs = <0x10000045 0x10>;
		clocks = <&clkc 15>;
		clock-names = "aclk";
		bridge-enable = <0>;
	};
@@ -71,6 +71,16 @@ properties:
      - qcom,sm8250-mmss-noc
      - qcom,sm8250-npu-noc
      - qcom,sm8250-system-noc
      - qcom,sm8350-aggre1-noc
      - qcom,sm8350-aggre2-noc
      - qcom,sm8350-config-noc
      - qcom,sm8350-dc-noc
      - qcom,sm8350-gem-noc
      - qcom,sm8350-lpass-ag-noc
      - qcom,sm8350-mc-virt
      - qcom,sm8350-mmss-noc
      - qcom,sm8350-compute-noc
      - qcom,sm8350-system-noc

  '#interconnect-cells':
    enum: [ 1, 2 ]
@@ -0,0 +1,147 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sdm660.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm SDM660 Network-On-Chip interconnect

maintainers:
  - AngeloGioacchino Del Regno <kholk11@gmail.com>

description: |
  The Qualcomm SDM660 interconnect providers support adjusting the
  bandwidth requirements between the various NoC fabrics.

properties:
  reg:
    maxItems: 1

  compatible:
    enum:
      - qcom,sdm660-a2noc
      - qcom,sdm660-bimc
      - qcom,sdm660-cnoc
      - qcom,sdm660-gnoc
      - qcom,sdm660-mnoc
      - qcom,sdm660-snoc

  '#interconnect-cells':
    const: 1

  clocks:
    minItems: 1
    maxItems: 3

  clock-names:
    minItems: 1
    maxItems: 3

required:
  - compatible
  - reg
  - '#interconnect-cells'
  - clock-names
  - clocks

additionalProperties: false

allOf:
  - if:
      properties:
        compatible:
          contains:
            enum:
              - qcom,sdm660-mnoc
    then:
      properties:
        clocks:
          items:
            - description: Bus Clock.
            - description: Bus A Clock.
            - description: CPU-NoC High-performance Bus Clock.
        clock-names:
          items:
            - const: bus
            - const: bus_a
            - const: iface

  - if:
      properties:
        compatible:
          contains:
            enum:
              - qcom,sdm660-a2noc
              - qcom,sdm660-bimc
              - qcom,sdm660-cnoc
              - qcom,sdm660-gnoc
              - qcom,sdm660-snoc
    then:
      properties:
        clocks:
          items:
            - description: Bus Clock.
            - description: Bus A Clock.
        clock-names:
          items:
            - const: bus
            - const: bus_a

examples:
  - |
    #include <dt-bindings/clock/qcom,rpmcc.h>
    #include <dt-bindings/clock/qcom,mmcc-sdm660.h>

    bimc: interconnect@1008000 {
        compatible = "qcom,sdm660-bimc";
        reg = <0x01008000 0x78000>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a";
        clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
                 <&rpmcc RPM_SMD_BIMC_A_CLK>;
    };

    cnoc: interconnect@1500000 {
        compatible = "qcom,sdm660-cnoc";
        reg = <0x01500000 0x10000>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a";
        clocks = <&rpmcc RPM_SMD_CNOC_CLK>,
                 <&rpmcc RPM_SMD_CNOC_A_CLK>;
    };

    snoc: interconnect@1626000 {
        compatible = "qcom,sdm660-snoc";
        reg = <0x01626000 0x7090>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a";
        clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
                 <&rpmcc RPM_SMD_SNOC_A_CLK>;
    };

    a2noc: interconnect@1704000 {
        compatible = "qcom,sdm660-a2noc";
        reg = <0x01704000 0xc100>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a";
        clocks = <&rpmcc RPM_SMD_AGGR2_NOC_CLK>,
                 <&rpmcc RPM_SMD_AGGR2_NOC_A_CLK>;
    };

    mnoc: interconnect@1745000 {
        compatible = "qcom,sdm660-mnoc";
        reg = <0x01745000 0xa010>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a", "iface";
        clocks = <&rpmcc RPM_SMD_MMSSNOC_AXI_CLK>,
                 <&rpmcc RPM_SMD_MMSSNOC_AXI_CLK_A>,
                 <&mmcc AHB_CLK_SRC>;
    };

    gnoc: interconnect@17900000 {
        compatible = "qcom,sdm660-gnoc";
        reg = <0x17900000 0xe000>;
        #interconnect-cells = <1>;
        clock-names = "bus", "bus_a";
        clocks = <&xo_board>, <&xo_board>;
    };
@@ -0,0 +1,34 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/nvmem/brcm,nvram.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Broadcom's NVRAM

description: |
  Broadcom's NVRAM is a structure containing device specific environment
  variables. It is used for storing device configuration, booting parameters
  and calibration data.

  NVRAM can be accessed on Broadcom BCM47xx MIPS and Northstar ARM Cortex-A9
  devices using I/O mapped memory.

maintainers:
  - Rafał Miłecki <rafal@milecki.pl>

allOf:
  - $ref: "nvmem.yaml#"

properties:
  compatible:
    const: brcm,nvram

unevaluatedProperties: false

examples:
  - |
    nvram@1eff0000 {
        compatible = "brcm,nvram";
        reg = <0x1eff0000 0x10000>;
    };
@@ -8,6 +8,7 @@ Required properties:
	      "mediatek,mt7623-efuse", "mediatek,efuse": for MT7623
	      "mediatek,mt8173-efuse" or "mediatek,efuse": for MT8173
	      "mediatek,mt8516-efuse", "mediatek,efuse": for MT8516
	      "mediatek,mt8192-efuse", "mediatek,efuse": for MT8192
- reg: Should contain registers location and length

= Data cells =
@@ -24,6 +24,7 @@ properties:
          - qcom,msm8998-qfprom
          - qcom,qcs404-qfprom
          - qcom,sc7180-qfprom
          - qcom,sc7280-qfprom
          - qcom,sdm845-qfprom
      - const: qcom,qfprom

@@ -1,21 +0,0 @@
Driver for Broadcom Northstar USB 2.0 PHY

Required properties:
- compatible: brcm,ns-usb2-phy
- reg: iomem address range of DMU (Device Management Unit)
- reg-names: "dmu", the only needed & supported reg right now
- clocks: USB PHY reference clock
- clock-names: "phy-ref-clk", the only needed & supported clock right now

To initialize USB 2.0 PHY driver needs to setup PLL correctly. To do this it
requires passing phandle to the USB PHY reference clock.

Example:
	usb2-phy {
		compatible = "brcm,ns-usb2-phy";
		reg = <0x1800c000 0x1000>;
		reg-names = "dmu";
		#phy-cells = <0>;
		clocks = <&genpll BCM_NSP_GENPLL_USB_PHY_REF_CLK>;
		clock-names = "phy-ref-clk";
	};
@@ -0,0 +1,59 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/bcm-ns-usb2-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Broadcom Northstar USB 2.0 PHY

description: >
  To initialize the USB 2.0 PHY, the driver needs to set up the PLL correctly.
  To do this it requires a phandle to the USB PHY reference clock.

maintainers:
  - Rafał Miłecki <rafal@milecki.pl>

properties:
  compatible:
    const: brcm,ns-usb2-phy

  reg:
    items:
      - description: iomem address range of DMU (Device Management Unit)

  reg-names:
    items:
      - const: dmu

  clocks:
    items:
      - description: USB PHY reference clock

  clock-names:
    items:
      - const: phy-ref-clk

  "#phy-cells":
    const: 0

required:
  - compatible
  - reg
  - reg-names
  - clocks
  - clock-names
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/bcm-nsp.h>
    phy@1800c000 {
        compatible = "brcm,ns-usb2-phy";
        reg = <0x1800c000 0x1000>;
        reg-names = "dmu";
        clocks = <&genpll BCM_NSP_GENPLL_USB_PHY_REF_CLK>;
        clock-names = "phy-ref-clk";
        #phy-cells = <0>;
    };
@@ -1,34 +0,0 @@
Driver for Broadcom Northstar USB 3.0 PHY

Required properties:

- compatible: one of: "brcm,ns-ax-usb3-phy", "brcm,ns-bx-usb3-phy".
- reg: address of MDIO bus device
- usb3-dmp-syscon: phandle to syscon with DMP (Device Management Plugin)
		   registers
- #phy-cells: must be 0

Initialization of USB 3.0 PHY depends on Northstar version. There are currently
three known series: Ax, Bx and Cx.
Known A0: BCM4707 rev 0
Known B0: BCM4707 rev 4, BCM53573 rev 2
Known B1: BCM4707 rev 6
Known C0: BCM47094 rev 0

Example:
	mdio: mdio@0 {
		reg = <0x0>;
		#size-cells = <1>;
		#address-cells = <0>;

		usb3-phy@10 {
			compatible = "brcm,ns-ax-usb3-phy";
			reg = <0x10>;
			usb3-dmp-syscon = <&usb3_dmp>;
			#phy-cells = <0>;
		};
	};

	usb3_dmp: syscon@18105000 {
		reg = <0x18105000 0x1000>;
	};
@@ -0,0 +1,62 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/bcm-ns-usb3-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Broadcom Northstar USB 3.0 PHY

description: |
  Initialization of USB 3.0 PHY depends on Northstar version. There are currently
  three known series: Ax, Bx and Cx.
  Known A0: BCM4707 rev 0
  Known B0: BCM4707 rev 4, BCM53573 rev 2
  Known B1: BCM4707 rev 6
  Known C0: BCM47094 rev 0

maintainers:
  - Rafał Miłecki <rafal@milecki.pl>

properties:
  compatible:
    enum:
      - brcm,ns-ax-usb3-phy
      - brcm,ns-bx-usb3-phy

  reg:
    description: address of MDIO bus device
    maxItems: 1

  usb3-dmp-syscon:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      Phandle to the DMP (Device Management Plugin) syscon

  "#phy-cells":
    const: 0

required:
  - compatible
  - reg
  - usb3-dmp-syscon
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    mdio {
        #address-cells = <1>;
        #size-cells = <0>;

        usb3-phy@10 {
            compatible = "brcm,ns-ax-usb3-phy";
            reg = <0x10>;
            usb3-dmp-syscon = <&usb3_dmp>;
            #phy-cells = <0>;
        };
    };

    usb3_dmp: syscon@18105000 {
        reg = <0x18105000 0x1000>;
    };
@@ -42,6 +42,9 @@ properties:
          - const: usb_mdio
          - const: bdc_ec

  power-domains:
    maxItems: 1

  clocks:
    minItems: 1
    maxItems: 2
@@ -0,0 +1,57 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)

%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/marvell,armada-3700-utmi-phy.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Marvell Armada UTMI/UTMI+ PHY

maintainers:
  - Miquel Raynal <miquel.raynal@bootlin.com>

description:
  On Armada 3700, there are two USB controllers, one is compatible with
  the USB2 and USB3 specifications and supports OTG. The other one is USB2
  compliant and only supports host mode. Both of these controllers come with
  a slightly different UTMI PHY.

properties:
  compatible:
    enum:
      - marvell,a3700-utmi-host-phy
      - marvell,a3700-utmi-otg-phy
  reg:
    maxItems: 1

  "#phy-cells":
    const: 0

  marvell,usb-misc-reg:
    description:
      Phandle on the "USB miscellaneous registers" shared region
      covering registers related to both the host controller and
      the PHY.
    $ref: /schemas/types.yaml#/definitions/phandle

required:
  - compatible
  - reg
  - "#phy-cells"
  - marvell,usb-misc-reg

additionalProperties: false

examples:
  - |
    usb2_utmi_host_phy: phy@5f000 {
        compatible = "marvell,armada-3700-utmi-host-phy";
        reg = <0x5f000 0x800>;
        marvell,usb-misc-reg = <&usb2_syscon>;
        #phy-cells = <0>;
    };

    usb2_syscon: system-controller@5f800 {
        compatible = "marvell,armada-3700-usb2-host-misc", "syscon";
        reg = <0x5f800 0x800>;
    };
@@ -0,0 +1,109 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)

%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/marvell,armada-cp110-utmi-phy.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Marvell Armada CP110/CP115 UTMI PHY

maintainers:
  - Konstantin Porotchkin <kostap@marvell.com>

description:
  On Armada 7k/8k and CN913x, there are two host and one device USB controllers.
  Each of the two existing UTMI PHYs could be connected to either the USB host
  or the USB device controller.
  The USB device controller can only be connected to a single UTMI PHY port

                      0.H----- USB HOST0
  UTMI PHY0  --------/
                      0.D-----0
                               \------ USB DEVICE
                      1.D-----1
  UTMI PHY1  --------\
                      1.H----- USB HOST1

properties:
  compatible:
    const: marvell,cp110-utmi-phy

  reg:
    maxItems: 1

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

  marvell,system-controller:
    description:
      Phandle to the system controller node
    $ref: /schemas/types.yaml#/definitions/phandle

#Required child nodes:

patternProperties:
  "^usb-phy@[0|1]$":
    type: object
    description:
      Each UTMI PHY port must be represented as a sub-node.

    properties:
      reg:
        description: phy port index.
        maxItems: 1

      "#phy-cells":
        const: 0

    required:
      - reg
      - "#phy-cells"

    additionalProperties: false

required:
  - compatible
  - reg
  - "#address-cells"
  - "#size-cells"
  - marvell,system-controller

additionalProperties: false

examples:
  - |
    cp0_utmi: utmi@580000 {
        compatible = "marvell,cp110-utmi-phy";
        reg = <0x580000 0x2000>;
        marvell,system-controller = <&cp0_syscon0>;
        #address-cells = <1>;
        #size-cells = <0>;

        cp0_utmi0: usb-phy@0 {
            reg = <0>;
            #phy-cells = <0>;
        };

        cp0_utmi1: usb-phy@1 {
            reg = <1>;
            #phy-cells = <0>;
        };
    };

    cp0_usb3_0 {
        usb-phy = <&cp0_usb3_0_phy0>;
        phys = <&cp0_utmi0>;
        phy-names = "utmi";
        /* UTMI0 is connected to USB host controller (default mode) */
        dr_mode = "host";
    };

    cp0_usb3_1 {
        usb-phy = <&cp0_usb3_0_phy1>;
        phys = <&cp0_utmi1>;
        phy-names = "utmi";
        /* UTMI1 is connected to USB device controller */
        dr_mode = "peripheral";
    };
@@ -19,11 +19,14 @@ properties:
    pattern: "^dsi-phy@[0-9a-f]+$"

  compatible:
    enum:
      - mediatek,mt2701-mipi-tx
    oneOf:
      - items:
          - enum:
              - mediatek,mt7623-mipi-tx
              - mediatek,mt8173-mipi-tx
              - mediatek,mt8183-mipi-tx
          - const: mediatek,mt2701-mipi-tx
      - const: mediatek,mt2701-mipi-tx
      - const: mediatek,mt8173-mipi-tx
      - const: mediatek,mt8183-mipi-tx

  reg:
    maxItems: 1
@@ -21,10 +21,13 @@ properties:
    pattern: "^hdmi-phy@[0-9a-f]+$"

  compatible:
    enum:
      - mediatek,mt2701-hdmi-phy
    oneOf:
      - items:
          - enum:
              - mediatek,mt7623-hdmi-phy
              - mediatek,mt8173-hdmi-phy
          - const: mediatek,mt2701-hdmi-phy
      - const: mediatek,mt2701-hdmi-phy
      - const: mediatek,mt8173-hdmi-phy

  reg:
    maxItems: 1
@@ -79,6 +79,7 @@ properties:
              - mediatek,mt2712-tphy
              - mediatek,mt7629-tphy
              - mediatek,mt8183-tphy
              - mediatek,mt8195-tphy
          - const: mediatek,generic-tphy-v2
      - const: mediatek,mt2701-u3phy
        deprecated: true

@@ -117,7 +118,7 @@ properties:

# Required child node:
patternProperties:
  "^usb-phy@[0-9a-f]+$":
  "^(usb|pcie|sata)-phy@[0-9a-f]+$":
    type: object
    description:
      A sub-node is required for each port the controller provides.
@@ -22,7 +22,12 @@ properties:
    pattern: "^ufs-phy@[0-9a-f]+$"

  compatible:
    const: mediatek,mt8183-ufsphy
    oneOf:
      - items:
          - enum:
              - mediatek,mt8195-ufsphy
          - const: mediatek,mt8183-ufsphy
      - const: mediatek,mt8183-ufsphy

  reg:
    maxItems: 1
@@ -0,0 +1,100 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/microchip,sparx5-serdes.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Microchip Sparx5 Serdes controller

maintainers:
  - Steen Hegelund <steen.hegelund@microchip.com>

description: |
  The Sparx5 SERDES interfaces share the same basic functionality, but
  support different operating modes and line rates.

  The following lists the SERDES features:

  * RX Adaptive Decision Feedback Equalizer (DFE)
  * Programmable continuous time linear equalizer (CTLE)
  * Rx variable gain control
  * Rx built-in fault detector (loss-of-lock/loss-of-signal)
  * Adjustable tx de-emphasis (FFE)
  * Tx output amplitude control
  * Supports rx eye monitor
  * Multiple loopback modes
  * Prbs generator and checker
  * Polarity inversion control

  SERDES6G:

  The SERDES6G is a high-speed SERDES interface, which can operate at
  the following data rates:

  * 100 Mbps (100BASE-FX)
  * 1.25 Gbps (SGMII/1000BASE-X/1000BASE-KX)
  * 3.125 Gbps (2.5GBASE-X/2.5GBASE-KX)
  * 5.15625 Gbps (5GBASE-KR/5G-USXGMII)

  SERDES10G

  The SERDES10G is a high-speed SERDES interface, which can operate at
  the following data rates:

  * 100 Mbps (100BASE-FX)
  * 1.25 Gbps (SGMII/1000BASE-X/1000BASE-KX)
  * 3.125 Gbps (2.5GBASE-X/2.5GBASE-KX)
  * 5 Gbps (QSGMII/USGMII)
  * 5.15625 Gbps (5GBASE-KR/5G-USXGMII)
  * 10 Gbps (10G-USGMII)
  * 10.3125 Gbps (10GBASE-R/10GBASE-KR/USXGMII)

  SERDES25G

  The SERDES25G is a high-speed SERDES interface, which can operate at
  the following data rates:

  * 1.25 Gbps (SGMII/1000BASE-X/1000BASE-KX)
  * 3.125 Gbps (2.5GBASE-X/2.5GBASE-KX)
  * 5 Gbps (QSGMII/USGMII)
  * 5.15625 Gbps (5GBASE-KR/5G-USXGMII)
  * 10 Gbps (10G-USGMII)
  * 10.3125 Gbps (10GBASE-R/10GBASE-KR/USXGMII)
  * 25.78125 Gbps (25GBASE-KR/25GBASE-CR/25GBASE-SR/25GBASE-LR/25GBASE-ER)

properties:
  $nodename:
    pattern: "^serdes@[0-9a-f]+$"

  compatible:
    const: microchip,sparx5-serdes

  reg:
    minItems: 1

  '#phy-cells':
    const: 1
    description: |
      - The main serdes input port

  clocks:
    maxItems: 1

required:
  - compatible
  - reg
  - '#phy-cells'
  - clocks

additionalProperties: false

examples:
  - |
    serdes: serdes@10808000 {
        compatible = "microchip,sparx5-serdes";
        #phy-cells = <1>;
        clocks = <&sys_clk>;
        reg = <0x10808000 0x5d0000>;
    };

...
@@ -26,6 +26,9 @@ properties:
  '#size-cells':
    const: 0

  '#clock-cells':
    const: 1

  resets:
    minItems: 1
    maxItems: 2

@@ -49,12 +52,24 @@ properties:
    const: serdes

  clocks:
    maxItems: 2
    minItems: 2
    maxItems: 4

  clock-names:
    minItems: 2
    items:
      - const: cmn_refclk_dig_div
      - const: cmn_refclk1_dig_div
      - const: pll0_refclk
      - const: pll1_refclk

  assigned-clocks:
    minItems: 1
    maxItems: 2

  assigned-clock-parents:
    minItems: 1
    maxItems: 2

  cdns,autoconf:
    type: boolean
@@ -28,13 +28,27 @@ properties:
  '#size-cells':
    const: 0

  '#clock-cells':
    const: 1

  clocks:
    maxItems: 1
    minItems: 1
    maxItems: 2
    description:
      PHY reference clock. Must contain an entry in clock-names.
      PHY reference clock for 1 item. Must contain an entry in clock-names.
      Optional Parent to enable output reference clock.

  clock-names:
    const: refclk
    minItems: 1
    items:
      - const: refclk
      - const: phy_en_refclk

  assigned-clocks:
    maxItems: 3

  assigned-clock-parents:
    maxItems: 3

  reg:
    minItems: 1

@@ -170,7 +184,7 @@ examples:
    };
  - |
    #include <dt-bindings/phy/phy.h>
    #include <dt-bindings/phy/phy-cadence-torrent.h>
    #include <dt-bindings/phy/phy-cadence.h>

    bus {
        #address-cells = <2>;
@@ -1,38 +0,0 @@
MVEBU A3700 UTMI PHY
--------------------

USB2 UTMI+ PHY controllers can be found on the following Marvell MVEBU SoCs:
* Armada 3700

On Armada 3700, there are two USB controllers, one is compatible with the USB2
and USB3 specifications and supports OTG. The other one is USB2 compliant and
only supports host mode. Both of these controllers come with a slightly
different UTMI PHY.

Required Properties:

- compatible: Should be one of:
	      * "marvell,a3700-utmi-host-phy" for the PHY connected to
		the USB2 host-only controller.
	      * "marvell,a3700-utmi-otg-phy" for the PHY connected to
		the USB3 and USB2 OTG capable controller.
- reg: PHY IP register range.
- marvell,usb-misc-reg: handle on the "USB miscellaneous registers" shared
			region covering registers related to both the host
			controller and the PHY.
- #phy-cells: Standard property (Documentation: phy-bindings.txt) Should be 0.


Example:

	usb2_utmi_host_phy: phy@5f000 {
		compatible = "marvell,armada-3700-utmi-host-phy";
		reg = <0x5f000 0x800>;
		marvell,usb-misc-reg = <&usb2_syscon>;
		#phy-cells = <0>;
	};

	usb2_syscon: system-controller@5f800 {
		compatible = "marvell,armada-3700-usb2-host-misc", "syscon";
		reg = <0x5f800 0x800>;
	};
@@ -51,6 +51,10 @@ properties:
  vdda1v8-supply:
    description: regulator providing 1V8 power supply to the PLL block

  '#clock-cells':
    description: number of clock cells for ck_usbo_48m consumer
    const: 0

#Required child nodes:

patternProperties:

@@ -120,6 +124,7 @@ examples:
      vdda1v8-supply = <&reg18>;
      #address-cells = <1>;
      #size-cells = <0>;
      #clock-cells = <0>;

      usbphyc_port0: usb-phy@0 {
          reg = <0>;
@@ -25,11 +25,13 @@ properties:
      - qcom,msm8998-qmp-pcie-phy
      - qcom,msm8998-qmp-ufs-phy
      - qcom,msm8998-qmp-usb3-phy
      - qcom,sc7180-qmp-usb3-phy
      - qcom,sc8180x-qmp-ufs-phy
      - qcom,sc8180x-qmp-usb3-phy
      - qcom,sdm845-qhp-pcie-phy
      - qcom,sdm845-qmp-pcie-phy
      - qcom,sdm845-qmp-ufs-phy
      - qcom,sdm845-qmp-usb3-phy
      - qcom,sdm845-qmp-usb3-uni-phy
      - qcom,sm8150-qmp-ufs-phy
      - qcom,sm8150-qmp-usb3-phy
@@ -14,9 +14,8 @@ properties:
  compatible:
    enum:
      - qcom,sc7180-qmp-usb3-dp-phy
      - qcom,sc7180-qmp-usb3-phy
      - qcom,sdm845-qmp-usb3-dp-phy
      - qcom,sdm845-qmp-usb3-phy
      - qcom,sm8250-qmp-usb3-dp-phy
  reg:
    items:
      - description: Address and length of PHY's USB serdes block.
@@ -16,6 +16,7 @@ properties:
  compatible:
    enum:
      - qcom,usb-snps-hs-7nm-phy
      - qcom,sc7280-usb-hs-phy
      - qcom,sm8150-usb-hs-phy
      - qcom,sm8250-usb-hs-phy
      - qcom,sm8350-usb-hs-phy
@@ -15,6 +15,7 @@ properties:
    enum:
      - ti,j721e-wiz-16g
      - ti,j721e-wiz-10g
      - ti,am64-wiz-10g

  power-domains:
    maxItems: 1

@@ -42,6 +43,9 @@ properties:
  "#reset-cells":
    const: 1

  "#clock-cells":
    const: 1

  ranges: true

  assigned-clocks:
@@ -54,6 +54,8 @@ board specific bus parameters.
	Value type: <prop-encoded-array>
	Definition: should specify payload transport window offset1 of each
		    data port. Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-offset2:

@@ -61,6 +63,8 @@ board specific bus parameters.
	Value type: <prop-encoded-array>
	Definition: should specify payload transport window offset2 of each
		    data port. Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-sinterval-low:

@@ -69,12 +73,16 @@ board specific bus parameters.
	Definition: should be sample interval low of each data port.
		    Out ports followed by In ports. Used for Sample Interval
		    calculation.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-word-length:
	Usage: optional
	Value type: <prop-encoded-array>
	Definition: should be size of payload channel sample.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-block-pack-mode:

@@ -84,6 +92,8 @@ board specific bus parameters.
		    0 to indicate Blocks are per Channel
		    1 to indicate Blocks are per Port.
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-block-group-count:

@@ -92,6 +102,8 @@ board specific bus parameters.
	Definition: should be in range 1 to 4 to indicate how many sample
		    intervals are combined into a payload.
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-lane-control:

@@ -100,6 +112,8 @@ board specific bus parameters.
	Definition: should be in range 0 to 7 to identify which data lane
		    the data port uses.
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-hstart:

@@ -109,6 +123,8 @@ board specific bus parameters.
		    SoundWire Frame, i.e. left edge of the Transport sub-frame
		    for each port. Values between 0 and 15 are valid.
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,ports-hstop:

@@ -118,6 +134,8 @@ board specific bus parameters.
		    SoundWire Frame, i.e. the right edge of the Transport
		    sub-frame for each port. Values between 0 and 15 are valid.
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

- qcom,dports-type:

@@ -128,6 +146,8 @@ board specific bus parameters.
		    1 for simple ports
		    2 for full port
		    Out ports followed by In ports.
		    Value of 0xFF indicates that this option is not implemented
		    or applicable for the respective data port.
		    More info in MIPI Alliance SoundWire 1.0 Specifications.

Note:
@@ -7,6 +7,7 @@ Authors:
- Enno Luebbers <enno.luebbers@intel.com>
- Xiao Guangrong <guangrong.xiao@linux.intel.com>
- Wu Hao <hao.wu@intel.com>
- Xu Yilun <yilun.xu@intel.com>

The Device Feature List (DFL) FPGA framework (and drivers according to
this framework) hides the very details of low layer hardwares and provides

@@ -530,6 +531,31 @@ Being able to specify more than one DFL per BAR has been considered, but it
was determined the use case did not provide value. Specifying a single DFL
per BAR simplifies the implementation and allows for extra error checking.


Userspace driver support for DFL devices
========================================
The purpose of an FPGA is to be reprogrammed with newly developed hardware
components. New hardware can instantiate a new private feature in the DFL, and
then present a DFL device in the system. In some cases users may need a
userspace driver for the DFL device:

* Users may need to run some diagnostic test for their hardware.
* Users may prototype the kernel driver in user space.
* Some hardware is designed for specific purposes and does not fit into one of
  the standard kernel subsystems.

This requires direct access to MMIO space and interrupt handling from
userspace. The uio_dfl module exposes the UIO device interfaces for this
purpose.

Currently the uio_dfl driver only supports the Ether Group sub feature, which
has no irq in hardware. So the interrupt handling is not added in this driver.

UIO_DFL should be selected to enable the uio_dfl module driver. To support a
new DFL feature via UIO direct access, its feature id should be added to the
driver's id_table.

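Where the text above says direct MMIO access from userspace, the mechanics are
plain UIO: open the node, mmap() region 0, and read or write registers. A
hedged sketch in C follows; the /dev/uio0 name, the 4 KB map size, and the 0x8
register offset are illustrative assumptions, not part of the DFL framework:

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* Assumed device node; find the real one under /sys/class/uio. */
		int fd = open("/dev/uio0", O_RDWR | O_SYNC);
		if (fd < 0)
			return 1;

		/* UIO exposes map N at page offset N * page_size; map 0 here. */
		volatile uint8_t *mmio = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
					      MAP_SHARED, fd, 0);
		if (mmio == MAP_FAILED)
			return 1;

		/* Dump an assumed 64-bit register at offset 0x8. */
		printf("reg@0x8 = 0x%llx\n",
		       (unsigned long long)*(volatile uint64_t *)(mmio + 0x8));

		munmap((void *)mmio, 0x1000);
		close(fd);
		return 0;
	}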

Open discussion
===============
FME driver exports one ioctl (DFL_FPGA_FME_PORT_PR) for partial reconfiguration
@@ -0,0 +1,64 @@
.. SPDX-License-Identifier: GPL-2.0

===========================================================================
Driver for Synopsys DesignWare PCIe traffic generator (also known as xData)
===========================================================================

Supported chips:
Synopsys DesignWare PCIe prototype solution

Datasheet:
Not freely available

Author:
Gustavo Pimentel <gustavo.pimentel@synopsys.com>

Description
-----------

This driver should be used as a host-side (Root Complex) driver with a
Synopsys DesignWare prototype that includes this IP.

The dw-xdata-pcie driver can be used to enable/disable the PCIe traffic
generator in either direction (mutual exclusion) besides allowing the
PCIe link performance analysis.

The interaction with this driver is done through the module parameter and
can be changed in runtime. The driver outputs the requested command state
information to ``/var/log/kern.log`` or dmesg.

Example
-------

Write TLPs traffic generation - Root Complex to Endpoint direction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Generate traffic::

        # echo 1 > /sys/class/misc/dw-xdata-pcie.0/write

Get link throughput in MB/s::

        # cat /sys/class/misc/dw-xdata-pcie.0/write
        204

Stop traffic in any direction::

        # echo 0 > /sys/class/misc/dw-xdata-pcie.0/write

Read TLPs traffic generation - Endpoint to Root Complex direction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Generate traffic::

        # echo 1 > /sys/class/misc/dw-xdata-pcie.0/read

Get link throughput in MB/s::

        # cat /sys/class/misc/dw-xdata-pcie.0/read
        199

Stop traffic in any direction::

        # echo 0 > /sys/class/misc/dw-xdata-pcie.0/read
@@ -19,6 +19,7 @@ fit into other categories.
   bh1770glc
   eeprom
   c2port
   dw-xdata-pcie
   ibmvmc
   ics932s401
   isl29003
MAINTAINERS
@@ -1777,6 +1777,7 @@ F: Documentation/devicetree/bindings/arm/coresight.txt
F:	Documentation/trace/coresight/*
F:	drivers/hwtracing/coresight/*
F:	include/dt-bindings/arm/coresight-cti-dt.h
F:	include/linux/coresight*
F:	tools/perf/arch/arm/util/auxtrace.c
F:	tools/perf/arch/arm/util/cs-etm.c
F:	tools/perf/arch/arm/util/cs-etm.h

@@ -5119,6 +5120,13 @@ S: Maintained
F:	drivers/dma/dw-edma/
F:	include/linux/dma/edma.h

DESIGNWARE XDATA IP DRIVER
M:	Gustavo Pimentel <gustavo.pimentel@synopsys.com>
L:	linux-pci@vger.kernel.org
S:	Maintained
F:	Documentation/misc-devices/dw-xdata-pcie.rst
F:	drivers/misc/dw-xdata-pcie.c

DESIGNWARE USB2 DRD IP DRIVER
M:	Minas Harutyunyan <hminas@synopsys.com>
L:	linux-usb@vger.kernel.org

@@ -7010,6 +7018,7 @@ S: Maintained
F:	Documentation/ABI/testing/sysfs-bus-dfl*
F:	Documentation/fpga/dfl.rst
F:	drivers/fpga/dfl*
F:	drivers/uio/uio_dfl.c
F:	include/linux/dfl.h
F:	include/uapi/linux/fpga-dfl.h

@@ -7917,6 +7926,11 @@ W: https://linuxtv.org
T:	git git://linuxtv.org/media_tree.git
F:	drivers/media/usb/hdpvr/

HEWLETT PACKARD ENTERPRISE ILO CHIF DRIVER
M:	Matt Hsiao <matt.hsiao@hpe.com>
S:	Supported
F:	drivers/misc/hpilo.[ch]

HEWLETT PACKARD ENTERPRISE ILO NMI WATCHDOG DRIVER
M:	Jerry Hoemann <jerry.hoemann@hpe.com>
S:	Supported

@@ -8571,7 +8585,8 @@ S: Supported
F:	drivers/scsi/ibmvscsi/ibmvfc*

IBM Power Virtual Management Channel Driver
M:	Steven Royer <seroyer@linux.ibm.com>
M:	Brad Warrum <bwarrum@linux.ibm.com>
M:	Ritu Agarwal <rituagar@linux.ibm.com>
S:	Supported
F:	drivers/misc/ibmvmc.*

@@ -9309,6 +9324,7 @@ INTERCONNECT API
M:	Georgi Djakov <djakov@kernel.org>
L:	linux-pm@vger.kernel.org
S:	Maintained
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/djakov/icc.git
F:	Documentation/devicetree/bindings/interconnect/
F:	Documentation/driver-api/interconnect.rst
F:	drivers/interconnect/
@@ -548,12 +548,10 @@ ssize_t spk_msg_set(enum msg_index_t index, char *text, size_t length)
	if ((index < MSG_FIRST_INDEX) || (index >= MSG_LAST_INDEX))
		return -EINVAL;

	newstr = kmalloc(length + 1, GFP_KERNEL);
	newstr = kmemdup_nul(text, length, GFP_KERNEL);
	if (!newstr)
		return -ENOMEM;

	memcpy(newstr, text, length);
	newstr[length] = '\0';
	if (index >= MSG_FORMATTED_START &&
	    index <= MSG_FORMATTED_END &&
	    !fmt_validate(speakup_default_msgs[index], newstr)) {
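For readers unfamiliar with the helper: kmemdup_nul() is the kernel's one-call
replacement for exactly the kmalloc()/memcpy()/NUL-terminate sequence deleted
here. Its documented semantics are equivalent to this open-coded form:

	/* What newstr = kmemdup_nul(text, length, GFP_KERNEL) does internally: */
	char *newstr = kmalloc(length + 1, GFP_KERNEL);
	if (newstr) {
		memcpy(newstr, text, length);
		newstr[length] = '\0';
	}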
@@ -1506,6 +1506,12 @@ static void binder_free_transaction(struct binder_transaction *t)

	if (target_proc) {
		binder_inner_proc_lock(target_proc);
		target_proc->outstanding_txns--;
		if (target_proc->outstanding_txns < 0)
			pr_warn("%s: Unexpected outstanding_txns %d\n",
				__func__, target_proc->outstanding_txns);
		if (!target_proc->outstanding_txns && target_proc->is_frozen)
			wake_up_interruptible_all(&target_proc->freeze_wait);
		if (t->buffer)
			t->buffer->transaction = NULL;
		binder_inner_proc_unlock(target_proc);

@@ -2331,10 +2337,11 @@ static int binder_fixup_parent(struct binder_transaction *t,
 * If the @thread parameter is not NULL, the transaction is always queued
 * to the waitlist of that specific thread.
 *
 * Return: true if the transactions was successfully queued
 *         false if the target process or thread is dead
 * Return: 0 if the transaction was successfully queued
 *         BR_DEAD_REPLY if the target process or thread is dead
 *         BR_FROZEN_REPLY if the target process or thread is frozen
 */
static bool binder_proc_transaction(struct binder_transaction *t,
static int binder_proc_transaction(struct binder_transaction *t,
				    struct binder_proc *proc,
				    struct binder_thread *thread)
{

@@ -2353,11 +2360,16 @@ static bool binder_proc_transaction(struct binder_transaction *t,
	}

	binder_inner_proc_lock(proc);
	if (proc->is_frozen) {
		proc->sync_recv |= !oneway;
		proc->async_recv |= oneway;
	}

	if (proc->is_dead || (thread && thread->is_dead)) {
	if ((proc->is_frozen && !oneway) || proc->is_dead ||
			(thread && thread->is_dead)) {
		binder_inner_proc_unlock(proc);
		binder_node_unlock(node);
		return false;
		return proc->is_frozen ? BR_FROZEN_REPLY : BR_DEAD_REPLY;
	}

	if (!thread && !pending_async)

@@ -2373,10 +2385,11 @@ static bool binder_proc_transaction(struct binder_transaction *t,
	if (!pending_async)
		binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);

	proc->outstanding_txns++;
	binder_inner_proc_unlock(proc);
	binder_node_unlock(node);

	return true;
	return 0;
}

/**

@@ -3007,19 +3020,25 @@ static void binder_transaction(struct binder_proc *proc,
			goto err_bad_object_type;
		}
	}
	if (t->buffer->oneway_spam_suspect)
		tcomplete->type = BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT;
	else
		tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	t->work.type = BINDER_WORK_TRANSACTION;

	if (reply) {
		binder_enqueue_thread_work(thread, tcomplete);
		binder_inner_proc_lock(target_proc);
		if (target_thread->is_dead) {
		if (target_thread->is_dead || target_proc->is_frozen) {
			return_error = target_thread->is_dead ?
				BR_DEAD_REPLY : BR_FROZEN_REPLY;
			binder_inner_proc_unlock(target_proc);
			goto err_dead_proc_or_thread;
		}
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction_ilocked(target_thread, in_reply_to);
		binder_enqueue_thread_work_ilocked(target_thread, &t->work);
		target_proc->outstanding_txns++;
		binder_inner_proc_unlock(target_proc);
		wake_up_interruptible_sync(&target_thread->wait);
		binder_free_transaction(in_reply_to);

@@ -3038,7 +3057,9 @@ static void binder_transaction(struct binder_proc *proc,
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
		binder_inner_proc_unlock(proc);
		if (!binder_proc_transaction(t, target_proc, target_thread)) {
		return_error = binder_proc_transaction(t,
				target_proc, target_thread);
		if (return_error) {
			binder_inner_proc_lock(proc);
			binder_pop_transaction_ilocked(thread, t);
			binder_inner_proc_unlock(proc);

@@ -3048,7 +3069,8 @@ static void binder_transaction(struct binder_proc *proc,
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		binder_enqueue_thread_work(thread, tcomplete);
		if (!binder_proc_transaction(t, target_proc, NULL))
		return_error = binder_proc_transaction(t, target_proc, NULL);
		if (return_error)
			goto err_dead_proc_or_thread;
	}
	if (target_thread)

@@ -3065,7 +3087,6 @@ static void binder_transaction(struct binder_proc *proc,
	return;

err_dead_proc_or_thread:
	return_error = BR_DEAD_REPLY;
	return_error_line = __LINE__;
	binder_dequeue_work(proc, tcomplete);
err_translate_failed:

@@ -3696,7 +3717,7 @@ static int binder_wait_for_work(struct binder_thread *thread,
		binder_inner_proc_lock(proc);
		list_del_init(&thread->waiting_thread_node);
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			ret = -EINTR;
			break;
		}
	}

@@ -3875,9 +3896,14 @@ retry:

			binder_stat_br(proc, thread, cmd);
		} break;
		case BINDER_WORK_TRANSACTION_COMPLETE: {
			binder_inner_proc_unlock(proc);
		case BINDER_WORK_TRANSACTION_COMPLETE:
		case BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT: {
			if (proc->oneway_spam_detection_enabled &&
			    w->type == BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT)
				cmd = BR_ONEWAY_SPAM_SUSPECT;
			else
				cmd = BR_TRANSACTION_COMPLETE;
			binder_inner_proc_unlock(proc);
			kfree(w);
			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
			if (put_user(cmd, (uint32_t __user *)ptr))

@@ -4298,6 +4324,9 @@ static void binder_free_proc(struct binder_proc *proc)

	BUG_ON(!list_empty(&proc->todo));
	BUG_ON(!list_empty(&proc->delivered_death));
	if (proc->outstanding_txns)
		pr_warn("%s: Unexpected outstanding_txns %d\n",
			__func__, proc->outstanding_txns);
	device = container_of(proc->context, struct binder_device, context);
	if (refcount_dec_and_test(&device->ref)) {
		kfree(proc->context->name);

@@ -4359,6 +4388,7 @@ static int binder_thread_release(struct binder_proc *proc,
			(t->to_thread == thread) ? "in" : "out");

		if (t->to_thread == thread) {
			thread->proc->outstanding_txns--;
			t->to_proc = NULL;
			t->to_thread = NULL;
			if (t->buffer) {

@@ -4609,6 +4639,76 @@ static int binder_ioctl_get_node_debug_info(struct binder_proc *proc,
	return 0;
}

static int binder_ioctl_freeze(struct binder_freeze_info *info,
|
||||
struct binder_proc *target_proc)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
if (!info->enable) {
|
||||
binder_inner_proc_lock(target_proc);
|
||||
target_proc->sync_recv = false;
|
||||
target_proc->async_recv = false;
|
||||
target_proc->is_frozen = false;
|
||||
binder_inner_proc_unlock(target_proc);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Freezing the target. Prevent new transactions by
|
||||
* setting frozen state. If timeout specified, wait
|
||||
* for transactions to drain.
|
||||
*/
|
||||
binder_inner_proc_lock(target_proc);
|
||||
target_proc->sync_recv = false;
|
||||
target_proc->async_recv = false;
|
||||
target_proc->is_frozen = true;
|
||||
binder_inner_proc_unlock(target_proc);
|
||||
|
||||
if (info->timeout_ms > 0)
|
||||
ret = wait_event_interruptible_timeout(
|
||||
target_proc->freeze_wait,
|
||||
(!target_proc->outstanding_txns),
|
||||
msecs_to_jiffies(info->timeout_ms));
|
||||
|
||||
if (!ret && target_proc->outstanding_txns)
|
||||
ret = -EAGAIN;
|
||||
|
||||
if (ret < 0) {
|
||||
binder_inner_proc_lock(target_proc);
|
||||
target_proc->is_frozen = false;
|
||||
binder_inner_proc_unlock(target_proc);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
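
(A note for readers tracing the error handling above: wait_event_interruptible_timeout() returns 0 when the timeout elapses, a positive count of remaining jiffies when the condition becomes true, and -ERESTARTSYS when a signal arrives. A zero return with transactions still outstanding is therefore mapped to -EAGAIN, and any negative result rolls is_frozen back so userspace can simply retry the BINDER_FREEZE ioctl.)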

static int binder_ioctl_get_freezer_info(
struct binder_frozen_status_info *info)
{
struct binder_proc *target_proc;
bool found = false;

info->sync_recv = 0;
info->async_recv = 0;

mutex_lock(&binder_procs_lock);
hlist_for_each_entry(target_proc, &binder_procs, proc_node) {
if (target_proc->pid == info->pid) {
found = true;
binder_inner_proc_lock(target_proc);
info->sync_recv |= target_proc->sync_recv;
info->async_recv |= target_proc->async_recv;
binder_inner_proc_unlock(target_proc);
}
}
mutex_unlock(&binder_procs_lock);

if (!found)
return -EINVAL;

return 0;
}

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;

@@ -4727,6 +4827,96 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
break;
}
case BINDER_FREEZE: {
struct binder_freeze_info info;
struct binder_proc **target_procs = NULL, *target_proc;
int target_procs_count = 0, i = 0;

ret = 0;

if (copy_from_user(&info, ubuf, sizeof(info))) {
ret = -EFAULT;
goto err;
}

mutex_lock(&binder_procs_lock);
hlist_for_each_entry(target_proc, &binder_procs, proc_node) {
if (target_proc->pid == info.pid)
target_procs_count++;
}

if (target_procs_count == 0) {
mutex_unlock(&binder_procs_lock);
ret = -EINVAL;
goto err;
}

target_procs = kcalloc(target_procs_count,
sizeof(struct binder_proc *),
GFP_KERNEL);

if (!target_procs) {
mutex_unlock(&binder_procs_lock);
ret = -ENOMEM;
goto err;
}

hlist_for_each_entry(target_proc, &binder_procs, proc_node) {
if (target_proc->pid != info.pid)
continue;

binder_inner_proc_lock(target_proc);
target_proc->tmp_ref++;
binder_inner_proc_unlock(target_proc);

target_procs[i++] = target_proc;
}
mutex_unlock(&binder_procs_lock);

for (i = 0; i < target_procs_count; i++) {
if (ret >= 0)
ret = binder_ioctl_freeze(&info,
target_procs[i]);

binder_proc_dec_tmpref(target_procs[i]);
}

kfree(target_procs);

if (ret < 0)
goto err;
break;
}
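
A minimal userspace sketch of driving the new ioctl, assuming the freeze UAPI added by this series in <linux/android/binder.h> (binder_fd and target_pid are hypothetical placeholders; error handling elided):

	/* Freeze every binder context of target_pid; wait up to 500 ms
	 * for outstanding synchronous transactions to drain. */
	struct binder_freeze_info info = {
		.pid = target_pid,
		.enable = 1,
		.timeout_ms = 500,
	};
	if (ioctl(binder_fd, BINDER_FREEZE, &info) < 0 && errno == EAGAIN) {
		/* transactions still in flight: thaw and retry later */
		info.enable = 0;
		ioctl(binder_fd, BINDER_FREEZE, &info);
	}
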
case BINDER_GET_FROZEN_INFO: {
struct binder_frozen_status_info info;

if (copy_from_user(&info, ubuf, sizeof(info))) {
ret = -EFAULT;
goto err;
}

ret = binder_ioctl_get_freezer_info(&info);
if (ret < 0)
goto err;

if (copy_to_user(ubuf, &info, sizeof(info))) {
ret = -EFAULT;
goto err;
}
break;
}
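
The companion query, sketched under the same assumptions, reports whether a frozen pid was sent transactions it could not service (the sync_recv/async_recv bits accumulated above):

	struct binder_frozen_status_info status = { .pid = target_pid };

	if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &status) == 0 && status.sync_recv)
		; /* a caller blocked on a sync call into the frozen process */
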
case BINDER_ENABLE_ONEWAY_SPAM_DETECTION: {
uint32_t enable;

if (copy_from_user(&enable, ubuf, sizeof(enable))) {
ret = -EINVAL;
goto err;
}
binder_inner_proc_lock(proc);
proc->oneway_spam_detection_enabled = (bool)enable;
binder_inner_proc_unlock(proc);
break;
}
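
Opting the calling process into the new detection is a one-word write (same hedged assumptions as the sketches above):

	uint32_t enable = 1;

	ioctl(binder_fd, BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable);

Once enabled, the process receives BR_ONEWAY_SPAM_SUSPECT instead of BR_TRANSACTION_COMPLETE when the allocator flags one of its oneway transactions as a suspect.
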
default:
ret = -EINVAL;
goto err;

@@ -4736,7 +4926,7 @@ err:
if (thread)
thread->looper_need_return = false;
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret && ret != -ERESTARTSYS)
if (ret && ret != -EINTR)
pr_info("%d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);

@@ -4823,6 +5013,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
get_task_struct(current->group_leader);
proc->tsk = current->group_leader;
INIT_LIST_HEAD(&proc->todo);
init_waitqueue_head(&proc->freeze_wait);
proc->default_priority = task_nice(current);
/* binderfs stashes devices in i_private */
if (is_binderfs_device(nodp)) {

@@ -5035,6 +5226,9 @@ static void binder_deferred_release(struct binder_proc *proc)
proc->tmp_ref++;

proc->is_dead = true;
proc->is_frozen = false;
proc->sync_recv = false;
proc->async_recv = false;
threads = 0;
active_transactions = 0;
while ((n = rb_first(&proc->threads))) {

@@ -5385,7 +5579,9 @@ static const char * const binder_return_strings[] = {
"BR_FINISHED",
"BR_DEAD_BINDER",
"BR_CLEAR_DEATH_NOTIFICATION_DONE",
"BR_FAILED_REPLY"
"BR_FAILED_REPLY",
"BR_FROZEN_REPLY",
"BR_ONEWAY_SPAM_SUSPECT",
};

static const char * const binder_command_strings[] = {

@@ -338,7 +338,7 @@ static inline struct vm_area_struct *binder_alloc_get_vma(
return vma;
}

static void debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
{
/*
* Find the amount and size of buffers allocated by the current caller;

@@ -366,13 +366,19 @@ static void debug_low_async_space_locked(struct binder_alloc *alloc, int pid)

/*
* Warn if this pid has more than 50 transactions, or more than 50% of
* async space (which is 25% of total buffer size).
* async space (which is 25% of total buffer size). Oneway spam is only
* detected when the threshold is exceeded.
*/
if (num_buffers > 50 || total_alloc_size > alloc->buffer_size / 4) {
binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
"%d: pid %d spamming oneway? %zd buffers allocated for a total size of %zd\n",
alloc->pid, pid, num_buffers, total_alloc_size);
if (!alloc->oneway_spam_detected) {
alloc->oneway_spam_detected = true;
return true;
}
}
return false;
}
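
A quick worked example of that threshold, assuming a 1 MiB binder mmap (roughly what Android's libbinder requests): alloc->buffer_size / 4 is 256 KiB, so a single sender pid trips the check once it holds more than 50 outstanding async buffers or more than 256 KiB of them, and only the first trip per unhealthy period is reported as a suspect.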

static struct binder_buffer *binder_alloc_new_buf_locked(

@@ -525,6 +531,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
buffer->async_transaction = is_async;
buffer->extra_buffers_size = extra_buffers_size;
buffer->pid = pid;
buffer->oneway_spam_suspect = false;
if (is_async) {
alloc->free_async_space -= size + sizeof(struct binder_buffer);
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,

@@ -536,7 +543,9 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
* of async space left (which is less than 10% of total
* buffer size).
*/
debug_low_async_space_locked(alloc, pid);
buffer->oneway_spam_suspect = debug_low_async_space_locked(alloc, pid);
} else {
alloc->oneway_spam_detected = false;
}
}
return buffer;

@@ -26,6 +26,8 @@ struct binder_transaction;
* @clear_on_free: %true if buffer must be zeroed after use
* @allow_user_free: %true if user is allowed to free buffer
* @async_transaction: %true if buffer is in use for an async txn
* @oneway_spam_suspect: %true if total async allocate size just exceed
* spamming detect threshold
* @debug_id: unique ID for debugging
* @transaction: pointer to associated struct binder_transaction
* @target_node: struct binder_node associated with this buffer

@@ -45,7 +47,8 @@ struct binder_buffer {
unsigned clear_on_free:1;
unsigned allow_user_free:1;
unsigned async_transaction:1;
unsigned debug_id:28;
unsigned oneway_spam_suspect:1;
unsigned debug_id:27;

struct binder_transaction *transaction;

@@ -87,6 +90,8 @@ struct binder_lru_page {
* @buffer_size: size of address space specified via mmap
* @pid: pid for associated binder_proc (invariant after init)
* @pages_high: high watermark of offset in @pages
* @oneway_spam_detected: %true if oneway spam detection fired, clear that
* flag once the async buffer has returned to a healthy state
*
* Bookkeeping structure for per-proc address space management for binder
* buffers. It is normally initialized during binder_init() and binder_mmap()

@@ -107,6 +112,7 @@ struct binder_alloc {
uint32_t buffer_free;
int pid;
size_t pages_high;
bool oneway_spam_detected;
};

#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST

@@ -155,7 +155,7 @@ enum binder_stat_types {
};

struct binder_stats {
atomic_t br[_IOC_NR(BR_FAILED_REPLY) + 1];
atomic_t br[_IOC_NR(BR_ONEWAY_SPAM_SUSPECT) + 1];
atomic_t bc[_IOC_NR(BC_REPLY_SG) + 1];
atomic_t obj_created[BINDER_STAT_COUNT];
atomic_t obj_deleted[BINDER_STAT_COUNT];

@@ -174,6 +174,7 @@ struct binder_work {
enum binder_work_type {
BINDER_WORK_TRANSACTION = 1,
BINDER_WORK_TRANSACTION_COMPLETE,
BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT,
BINDER_WORK_RETURN_ERROR,
BINDER_WORK_NODE,
BINDER_WORK_DEAD_BINDER,

@@ -367,9 +368,22 @@ struct binder_ref {
* (protected by binder_deferred_lock)
* @deferred_work: bitmap of deferred work to perform
* (protected by binder_deferred_lock)
* @outstanding_txns: number of transactions to be transmitted before
* processes in freeze_wait are woken up
* (protected by @inner_lock)
* @is_dead: process is dead and awaiting free
* when outstanding transactions are cleaned up
* (protected by @inner_lock)
* @is_frozen: process is frozen and unable to service
* binder transactions
* (protected by @inner_lock)
* @sync_recv: process received sync transactions since last frozen
* (protected by @inner_lock)
* @async_recv: process received async transactions since last frozen
* (protected by @inner_lock)
* @freeze_wait: waitqueue of processes waiting for all outstanding
* transactions to be processed
* (protected by @inner_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @stats: per-process binder statistics

@@ -396,6 +410,8 @@ struct binder_ref {
* @outer_lock: no nesting under innor or node lock
* Lock order: 1) outer, 2) node, 3) inner
* @binderfs_entry: process-specific binderfs log file
* @oneway_spam_detection_enabled: process enabled oneway spam detection
* or not
*
* Bookkeeping structure for binder processes
*/

@@ -410,7 +426,12 @@ struct binder_proc {
struct task_struct *tsk;
struct hlist_node deferred_work_node;
int deferred_work;
int outstanding_txns;
bool is_dead;
bool is_frozen;
bool sync_recv;
bool async_recv;
wait_queue_head_t freeze_wait;

struct list_head todo;
struct binder_stats stats;

@@ -426,6 +447,7 @@ struct binder_proc {
spinlock_t inner_lock;
spinlock_t outer_lock;
struct dentry *binderfs_entry;
bool oneway_spam_detection_enabled;
};

/**

@@ -389,7 +389,6 @@ static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
{
const struct firmware *firmware = NULL;
struct image_info *image_info;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
const char *fw_name;
void *buf;

@@ -417,9 +416,9 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
}
}

/* If device is in pass through, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
goto fw_load_ee_pthru;
/* wait for ready on pass through or any other execution environment */
if (mhi_cntrl->ee != MHI_EE_EDL && mhi_cntrl->ee != MHI_EE_PBL)
goto fw_load_ready_state;

fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
mhi_cntrl->edl_image : mhi_cntrl->fw_image;

@@ -461,9 +460,10 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
goto error_fw_load;
}

if (mhi_cntrl->ee == MHI_EE_EDL) {
/* Wait for ready since EDL image was loaded */
if (fw_name == mhi_cntrl->edl_image) {
release_firmware(firmware);
return;
goto fw_load_ready_state;
}

write_lock_irq(&mhi_cntrl->pm_lock);

@@ -488,47 +488,45 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)

release_firmware(firmware);

fw_load_ee_pthru:
fw_load_ready_state:
/* Transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);

if (!mhi_cntrl->fbc_download)
return;

if (ret) {
dev_err(dev, "MHI did not enter READY state\n");
goto error_ready_state;
}

/* Wait for the SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_SBL ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));

if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "MHI did not enter SBL\n");
goto error_ready_state;
}

/* Start full firmware image download */
image_info = mhi_cntrl->fbc_image;
ret = mhi_fw_load_bhie(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
if (ret) {
dev_err(dev, "MHI did not load image over BHIe, ret: %d\n",
ret);
goto error_fw_load;
}

dev_info(dev, "Wait for device to enter SBL or Mission mode\n");
return;

error_ready_state:
if (mhi_cntrl->fbc_download) {
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
}

error_fw_load:
mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
wake_up_all(&mhi_cntrl->state_event);
}

int mhi_download_amss_image(struct mhi_controller *mhi_cntrl)
{
struct image_info *image_info = mhi_cntrl->fbc_image;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;

if (!image_info)
return -EIO;

ret = mhi_fw_load_bhie(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
if (ret) {
dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
mhi_cntrl->pm_state = MHI_PM_FW_DL_ERR;
wake_up_all(&mhi_cntrl->state_event);
}

return ret;
}

@@ -377,7 +377,7 @@ static struct dentry *mhi_debugfs_root;
void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
{
mhi_cntrl->debugfs_dentry =
debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
debugfs_create_dir(dev_name(&mhi_cntrl->mhi_dev->dev),
mhi_debugfs_root);

debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,

@@ -22,13 +22,14 @@
static DEFINE_IDA(mhi_controller_ida);

const char * const mhi_ee_str[MHI_EE_MAX] = {
[MHI_EE_PBL] = "PBL",
[MHI_EE_SBL] = "SBL",
[MHI_EE_AMSS] = "AMSS",
[MHI_EE_RDDM] = "RDDM",
[MHI_EE_WFW] = "WFW",
[MHI_EE_PTHRU] = "PASS THRU",
[MHI_EE_EDL] = "EDL",
[MHI_EE_PBL] = "PRIMARY BOOTLOADER",
[MHI_EE_SBL] = "SECONDARY BOOTLOADER",
[MHI_EE_AMSS] = "MISSION MODE",
[MHI_EE_RDDM] = "RAMDUMP DOWNLOAD MODE",
[MHI_EE_WFW] = "WLAN FIRMWARE",
[MHI_EE_PTHRU] = "PASS THROUGH",
[MHI_EE_EDL] = "EMERGENCY DOWNLOAD",
[MHI_EE_FP] = "FLASH PROGRAMMER",
[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
[MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
};

@@ -37,8 +38,9 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
[DEV_ST_TRANSITION_PBL] = "PBL",
[DEV_ST_TRANSITION_READY] = "READY",
[DEV_ST_TRANSITION_SBL] = "SBL",
[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
[DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION MODE",
[DEV_ST_TRANSITION_FP] = "FLASH PROGRAMMER",
[DEV_ST_TRANSITION_SYS_ERR] = "SYS ERROR",
[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
};

@@ -49,24 +51,30 @@ const char * const mhi_state_str[MHI_STATE_MAX] = {
[MHI_STATE_M1] = "M1",
[MHI_STATE_M2] = "M2",
[MHI_STATE_M3] = "M3",
[MHI_STATE_M3_FAST] = "M3_FAST",
[MHI_STATE_M3_FAST] = "M3 FAST",
[MHI_STATE_BHI] = "BHI",
[MHI_STATE_SYS_ERR] = "SYS_ERR",
[MHI_STATE_SYS_ERR] = "SYS ERROR",
};

const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
[MHI_CH_STATE_TYPE_RESET] = "RESET",
[MHI_CH_STATE_TYPE_STOP] = "STOP",
[MHI_CH_STATE_TYPE_START] = "START",
};

static const char * const mhi_pm_state_str[] = {
[MHI_PM_STATE_DISABLE] = "DISABLE",
[MHI_PM_STATE_POR] = "POR",
[MHI_PM_STATE_POR] = "POWER ON RESET",
[MHI_PM_STATE_M0] = "M0",
[MHI_PM_STATE_M2] = "M2",
[MHI_PM_STATE_M3_ENTER] = "M?->M3",
[MHI_PM_STATE_M3] = "M3",
[MHI_PM_STATE_M3_EXIT] = "M3->M0",
[MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
[MHI_PM_STATE_FW_DL_ERR] = "Firmware Download Error",
[MHI_PM_STATE_SYS_ERR_DETECT] = "SYS ERROR Detect",
[MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS ERROR Process",
[MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
};

const char *to_mhi_pm_state_str(enum mhi_pm_state state)

@@ -508,8 +516,6 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)

/* Setup wake db */
mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
mhi_cntrl->wake_set = false;

/* Setup channel db address for each channel in tre_ring */

@@ -552,6 +558,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_ring *buf_ring;
struct mhi_ring *tre_ring;
struct mhi_chan_ctxt *chan_ctxt;
u32 tmp;

buf_ring = &mhi_chan->buf_ring;
tre_ring = &mhi_chan->tre_ring;

@@ -565,7 +572,19 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
vfree(buf_ring->base);

buf_ring->base = tre_ring->base = NULL;
tre_ring->ctxt_wp = NULL;
chan_ctxt->rbase = 0;
chan_ctxt->rlen = 0;
chan_ctxt->rp = 0;
chan_ctxt->wp = 0;

tmp = chan_ctxt->chcfg;
tmp &= ~CHAN_CTX_CHSTATE_MASK;
tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
chan_ctxt->chcfg = tmp;

/* Update to all cores */
smp_wmb();
}

int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,

@@ -863,12 +882,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
u32 soc_info;
int ret, i;

if (!mhi_cntrl)
return -EINVAL;

if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->regs ||
!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
!mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
!mhi_cntrl->write_reg || !mhi_cntrl->nr_irqs)
!mhi_cntrl->write_reg || !mhi_cntrl->nr_irqs || !mhi_cntrl->irq)
return -EINVAL;

ret = parse_config(mhi_cntrl, config);

@@ -890,8 +907,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
init_waitqueue_head(&mhi_cntrl->state_event);

mhi_cntrl->hiprio_wq = alloc_ordered_workqueue
("mhi_hiprio_wq", WQ_MEM_RECLAIM | WQ_HIGHPRI);
mhi_cntrl->hiprio_wq = alloc_ordered_workqueue("mhi_hiprio_wq", WQ_HIGHPRI);
if (!mhi_cntrl->hiprio_wq) {
dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate workqueue\n");
ret = -ENOMEM;

@@ -1083,8 +1099,6 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
}

mhi_cntrl->pre_init = true;

mutex_unlock(&mhi_cntrl->pm_mutex);

return 0;

@@ -1115,7 +1129,6 @@ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
}

mhi_deinit_dev_ctxt(mhi_cntrl);
mhi_cntrl->pre_init = false;
}
EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);

@@ -1296,7 +1309,8 @@ static int mhi_driver_remove(struct device *dev)

mutex_lock(&mhi_chan->mutex);

if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
if ((ch_state[dir] == MHI_CH_STATE_ENABLED ||
ch_state[dir] == MHI_CH_STATE_STOP) &&
!mhi_chan->offload_ch)
mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);

@@ -369,6 +369,18 @@ enum mhi_ch_state {
MHI_CH_STATE_ERROR = 0x5,
};

enum mhi_ch_state_type {
MHI_CH_STATE_TYPE_RESET,
MHI_CH_STATE_TYPE_STOP,
MHI_CH_STATE_TYPE_START,
MHI_CH_STATE_TYPE_MAX,
};

extern const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX];
#define TO_CH_STATE_TYPE_STR(state) (((state) >= MHI_CH_STATE_TYPE_MAX) ? \
"INVALID_STATE" : \
mhi_ch_state_type_str[(state)])
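
In use the macro is just a bounds-checked lookup into the string table added to init.c above; a hedged one-liner (dev being a hypothetical struct device pointer):

	dev_dbg(dev, "requesting %s\n", TO_CH_STATE_TYPE_STR(MHI_CH_STATE_TYPE_STOP));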

#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
mode != MHI_DB_BRST_ENABLE)

@@ -379,13 +391,15 @@ extern const char * const mhi_ee_str[MHI_EE_MAX];
#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
ee == MHI_EE_EDL)

#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW || \
ee == MHI_EE_FP)

enum dev_st_transition {
DEV_ST_TRANSITION_PBL,
DEV_ST_TRANSITION_READY,
DEV_ST_TRANSITION_SBL,
DEV_ST_TRANSITION_MISSION_MODE,
DEV_ST_TRANSITION_FP,
DEV_ST_TRANSITION_SYS_ERR,
DEV_ST_TRANSITION_DISABLE,
DEV_ST_TRANSITION_MAX,

@@ -619,6 +633,7 @@ int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum mhi_cmd_type cmd);
int mhi_download_amss_image(struct mhi_controller *mhi_cntrl);
static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
{
return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&

@@ -643,6 +658,9 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 *out);
int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 val, u32 delayus);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,

@@ -4,6 +4,7 @@
*
*/

#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>

@@ -37,6 +38,28 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
return 0;
}

int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset,
u32 mask, u32 shift, u32 val, u32 delayus)
{
int ret;
u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;

while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
&out);
if (ret)
return ret;

if (out == val)
return 0;

fsleep(delayus);
}

return -ETIMEDOUT;
}
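
The call shape, taken from the mhi_ready_state_transition() rewrite later in this series, polls MHICTRL until the RESET field reads back as 0, checking every 25 ms up to the controller timeout:

	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
				 25000);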

void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val)
{

@@ -242,10 +265,17 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
smp_wmb();
}

static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
{
return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
}

int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_chan *ul_chan, *dl_chan;
struct mhi_device *mhi_dev;
struct mhi_controller *mhi_cntrl;
enum mhi_ee_type ee = MHI_EE_MAX;

if (dev->bus != &mhi_bus_type)
return 0;

@@ -257,6 +287,17 @@ int mhi_destroy_device(struct device *dev, void *data)
if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
return 0;

ul_chan = mhi_dev->ul_chan;
dl_chan = mhi_dev->dl_chan;

/*
* If execution environment is specified, remove only those devices that
* started in them based on ee_mask for the channels as we move on to a
* different execution environment
*/
if (data)
ee = *(enum mhi_ee_type *)data;

/*
* For the suspend and resume case, this function will get called
* without mhi_unregister_controller(). Hence, we need to drop the

@@ -264,11 +305,19 @@ int mhi_destroy_device(struct device *dev, void *data)
* be sure that there will be no instances of mhi_dev left after
* this.
*/
if (mhi_dev->ul_chan)
put_device(&mhi_dev->ul_chan->mhi_dev->dev);
if (ul_chan) {
if (ee != MHI_EE_MAX && !(ul_chan->ee_mask & BIT(ee)))
return 0;

if (mhi_dev->dl_chan)
put_device(&mhi_dev->dl_chan->mhi_dev->dev);
put_device(&ul_chan->mhi_dev->dev);
}

if (dl_chan) {
if (ee != MHI_EE_MAX && !(dl_chan->ee_mask & BIT(ee)))
return 0;

put_device(&dl_chan->mhi_dev->dev);
}

dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
mhi_dev->name);

@@ -383,7 +432,16 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
struct mhi_event_ctxt *er_ctxt =
&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
struct mhi_ring *ev_ring = &mhi_event->ring;
void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
dma_addr_t ptr = er_ctxt->rp;
void *dev_rp;

if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return IRQ_HANDLED;
}

dev_rp = mhi_to_virtual(ev_ring, ptr);

/* Only proceed if event ring has pending events */
if (ev_ring->rp == dev_rp)

@@ -407,9 +465,9 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
{
struct mhi_controller *mhi_cntrl = priv;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state state = MHI_STATE_MAX;
enum mhi_state state;
enum mhi_pm_state pm_state = 0;
enum mhi_ee_type ee = 0;
enum mhi_ee_type ee;

write_lock_irq(&mhi_cntrl->pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {

@@ -418,11 +476,11 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
}

state = mhi_get_mhi_state(mhi_cntrl);
ee = mhi_cntrl->ee;
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
TO_MHI_STATE_STR(state));
ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));

if (state == MHI_STATE_SYS_ERR) {
dev_dbg(dev, "System error detected\n");

@@ -431,27 +489,30 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
}
write_unlock_irq(&mhi_cntrl->pm_lock);

/* If device supports RDDM don't bother processing SYS error */
if (mhi_cntrl->rddm_image) {
/* host may be performing a device power down already */
if (!mhi_is_active(mhi_cntrl))
if (pm_state != MHI_PM_SYS_ERR_DETECT || ee == mhi_cntrl->ee)
goto exit_intvec;

if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
switch (ee) {
case MHI_EE_RDDM:
/* proceed if power down is not already in progress */
if (mhi_cntrl->rddm_image && mhi_is_active(mhi_cntrl)) {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
mhi_cntrl->ee = ee;
wake_up_all(&mhi_cntrl->state_event);
}
goto exit_intvec;
}

if (pm_state == MHI_PM_SYS_ERR_DETECT) {
wake_up_all(&mhi_cntrl->state_event);

/* For fatal errors, we let controller decide next step */
if (MHI_IN_PBL(ee))
break;
case MHI_EE_PBL:
case MHI_EE_EDL:
case MHI_EE_PTHRU:
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
else
mhi_cntrl->ee = ee;
wake_up_all(&mhi_cntrl->state_event);
mhi_pm_sys_err_handler(mhi_cntrl);
break;
default:
wake_up_all(&mhi_cntrl->state_event);
mhi_pm_sys_err_handler(mhi_cntrl);
break;
}

exit_intvec:

@@ -536,6 +597,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info;
u16 xfer_len;

if (!is_valid_ring_ptr(tre_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event element points outside of the tre ring\n");
break;
}
/* Get the TRB this event points to */
ev_tre = mhi_to_virtual(tre_ring, ptr);

@@ -570,8 +636,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
/* notify client */
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);

if (mhi_chan->dir == DMA_TO_DEVICE)
if (mhi_chan->dir == DMA_TO_DEVICE) {
atomic_dec(&mhi_cntrl->pending_pkts);
/* Release the reference got from mhi_queue() */
mhi_cntrl->runtime_put(mhi_cntrl);
}

/*
* Recycle the buffer if buffer is pre-allocated,

@@ -595,15 +664,15 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
case MHI_EV_CC_OOB:
case MHI_EV_CC_DB_MODE:
{
unsigned long flags;
unsigned long pm_lock_flags;

mhi_chan->db_cfg.db_mode = 1;
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
read_lock_irqsave(&mhi_cntrl->pm_lock, pm_lock_flags);
if (tre_ring->wp != tre_ring->rp &&
MHI_DB_ACCESS_VALID(mhi_cntrl)) {
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
}
read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
read_unlock_irqrestore(&mhi_cntrl->pm_lock, pm_lock_flags);
break;
}
case MHI_EV_CC_BAD_TRE:

@@ -695,6 +764,12 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan;
u32 chan;

if (!is_valid_ring_ptr(mhi_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event element points outside of the cmd ring\n");
return;
}

cmd_pkt = mhi_to_virtual(mhi_ring, ptr);

chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);

@@ -719,6 +794,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 chan;
int count = 0;
dma_addr_t ptr = er_ctxt->rp;

/*
* This is a quick check to avoid unnecessary event processing

@@ -728,7 +804,13 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;

dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}

dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;

while (dev_rp != local_rp) {

@@ -771,14 +853,14 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
break;
case MHI_STATE_SYS_ERR:
{
enum mhi_pm_state new_state;
enum mhi_pm_state pm_state;

dev_dbg(dev, "System error detected\n");
write_lock_irq(&mhi_cntrl->pm_lock);
new_state = mhi_tryset_pm_state(mhi_cntrl,
pm_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_SYS_ERR_DETECT);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (new_state == MHI_PM_SYS_ERR_DETECT)
if (pm_state == MHI_PM_SYS_ERR_DETECT)
mhi_pm_sys_err_handler(mhi_cntrl);
break;
}

@@ -807,6 +889,9 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
case MHI_EE_AMSS:
st = DEV_ST_TRANSITION_MISSION_MODE;
break;
case MHI_EE_FP:
st = DEV_ST_TRANSITION_FP;
break;
case MHI_EE_RDDM:
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
write_lock_irq(&mhi_cntrl->pm_lock);

@@ -834,6 +919,8 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
*/
if (chan < mhi_cntrl->max_chan) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
if (!mhi_chan->configured)
break;
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
}

@@ -845,7 +932,15 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,

mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);

ptr = er_ctxt->rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}

dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}

@@ -868,11 +963,18 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
int count = 0;
u32 chan;
struct mhi_chan *mhi_chan;
dma_addr_t ptr = er_ctxt->rp;

if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
return -EIO;

dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}

dev_rp = mhi_to_virtual(ev_ring, ptr);
local_rp = ev_ring->rp;

while (dev_rp != local_rp && event_quota > 0) {

@@ -886,7 +988,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
* Only process the event ring elements whose channel
* ID is within the maximum supported range.
*/
if (chan < mhi_cntrl->max_chan) {
if (chan < mhi_cntrl->max_chan &&
mhi_cntrl->mhi_chan[chan].configured) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];

if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {

@@ -900,7 +1003,15 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,

mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
local_rp = ev_ring->rp;
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);

ptr = er_ctxt->rp;
if (!is_valid_ring_ptr(ev_ring, ptr)) {
dev_err(&mhi_cntrl->mhi_dev->dev,
"Event ring rp points outside of the event ring\n");
return -EIO;
}

dev_rp = mhi_to_virtual(ev_ring, ptr);
count++;
}
read_lock_bh(&mhi_cntrl->pm_lock);

@@ -996,7 +1107,7 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,

ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
if (unlikely(ret)) {
ret = -ENOMEM;
ret = -EAGAIN;
goto exit_unlock;
}

@@ -1004,9 +1115,11 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (unlikely(ret))
goto exit_unlock;

/* trigger M3 exit if necessary */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
/* Packet is queued, take a usage ref to exit M3 if necessary
* for host->device buffer, balanced put is done on buffer completion
* for device->host buffer, balanced put is after ringing the DB
*/
mhi_cntrl->runtime_get(mhi_cntrl);

/* Assert dev_wake (to exit/prevent M1/M2)*/
mhi_cntrl->wake_toggle(mhi_cntrl);

@@ -1014,13 +1127,12 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_inc(&mhi_cntrl->pending_pkts);

if (unlikely(!MHI_DB_ACCESS_VALID(mhi_cntrl))) {
ret = -EIO;
goto exit_unlock;
}

if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
mhi_ring_chan_db(mhi_cntrl, mhi_chan);

if (dir == DMA_FROM_DEVICE)
mhi_cntrl->runtime_put(mhi_cntrl);

exit_unlock:
read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);

@@ -1162,6 +1274,11 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
break;
case MHI_CMD_STOP_CHAN:
cmd_tre->ptr = MHI_TRE_CMD_STOP_PTR;
cmd_tre->dword[0] = MHI_TRE_CMD_STOP_DWORD0;
cmd_tre->dword[1] = MHI_TRE_CMD_STOP_DWORD1(chan);
break;
case MHI_CMD_START_CHAN:
cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;

@@ -1183,56 +1300,125 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
return 0;
}

static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan)
static int mhi_update_channel_state(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan,
enum mhi_ch_state_type to_state)
{
struct device *dev = &mhi_chan->mhi_dev->dev;
enum mhi_cmd_type cmd = MHI_CMD_NOP;
int ret;
struct device *dev = &mhi_cntrl->mhi_dev->dev;

dev_dbg(dev, "Entered: unprepare channel:%d\n", mhi_chan->chan);
dev_dbg(dev, "%d: Updating channel state to: %s\n", mhi_chan->chan,
TO_CH_STATE_TYPE_STR(to_state));

/* no more processing events for this channel */
mutex_lock(&mhi_chan->mutex);
switch (to_state) {
case MHI_CH_STATE_TYPE_RESET:
write_lock_irq(&mhi_chan->lock);
if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
mhi_chan->ch_state != MHI_CH_STATE_SUSPENDED) {
write_unlock_irq(&mhi_chan->lock);
mutex_unlock(&mhi_chan->mutex);
return;
return -EINVAL;
}

mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
write_unlock_irq(&mhi_chan->lock);

reinit_completion(&mhi_chan->completion);
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
read_unlock_bh(&mhi_cntrl->pm_lock);
goto error_invalid_state;
cmd = MHI_CMD_RESET_CHAN;
break;
case MHI_CH_STATE_TYPE_STOP:
if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
return -EINVAL;

cmd = MHI_CMD_STOP_CHAN;
break;
case MHI_CH_STATE_TYPE_START:
if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
mhi_chan->ch_state != MHI_CH_STATE_DISABLED)
return -EINVAL;

cmd = MHI_CMD_START_CHAN;
break;
default:
dev_err(dev, "%d: Channel state update to %s not allowed\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
return -EINVAL;
}

mhi_cntrl->wake_toggle(mhi_cntrl);
read_unlock_bh(&mhi_cntrl->pm_lock);

mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
/* bring host and device out of suspended states */
ret = mhi_device_get_sync(mhi_cntrl->mhi_dev);
if (ret)
goto error_invalid_state;
return ret;
mhi_cntrl->runtime_get(mhi_cntrl);

reinit_completion(&mhi_chan->completion);
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, cmd);
if (ret) {
dev_err(dev, "%d: Failed to send %s channel command\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
goto exit_channel_update;
}

/* even if it fails we will still reset */
ret = wait_for_completion_timeout(&mhi_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
dev_err(dev,
"Failed to receive cmd completion, still resetting\n");
"%d: Failed to receive %s channel command completion\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
ret = -EIO;
goto exit_channel_update;
}

ret = 0;

if (to_state != MHI_CH_STATE_TYPE_RESET) {
write_lock_irq(&mhi_chan->lock);
mhi_chan->ch_state = (to_state == MHI_CH_STATE_TYPE_START) ?
MHI_CH_STATE_ENABLED : MHI_CH_STATE_STOP;
write_unlock_irq(&mhi_chan->lock);
}

dev_dbg(dev, "%d: Channel state change to %s successful\n",
mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));

exit_channel_update:
mhi_cntrl->runtime_put(mhi_cntrl);
mhi_device_put(mhi_cntrl->mhi_dev);

return ret;
}
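
A minimal sketch of a driver-internal caller of the new state machine (mhi_chan is a hypothetical already-prepared channel; real callers such as mhi_unprepare_channel() below hold mhi_chan->mutex around these calls):

	/* Quiesce the channel, then bring it back without a full reset */
	ret = mhi_update_channel_state(mhi_cntrl, mhi_chan, MHI_CH_STATE_TYPE_STOP);
	if (!ret)
		ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
					       MHI_CH_STATE_TYPE_START);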
|
||||
|
||||
static void mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
|
||||
struct mhi_chan *mhi_chan)
|
||||
{
|
||||
int ret;
|
||||
struct device *dev = &mhi_chan->mhi_dev->dev;
|
||||
|
||||
mutex_lock(&mhi_chan->mutex);
|
||||
|
||||
if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
|
||||
dev_dbg(dev, "Current EE: %s Required EE Mask: 0x%x\n",
|
||||
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
|
||||
goto exit_unprepare_channel;
|
||||
}
|
||||
|
||||
/* no more processing events for this channel */
|
||||
ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
|
||||
MHI_CH_STATE_TYPE_RESET);
|
||||
if (ret)
|
||||
dev_err(dev, "%d: Failed to reset channel, still resetting\n",
|
||||
mhi_chan->chan);
|
||||
|
||||
exit_unprepare_channel:
|
||||
write_lock_irq(&mhi_chan->lock);
|
||||
mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
|
||||
write_unlock_irq(&mhi_chan->lock);
|
||||
|
||||
error_invalid_state:
|
||||
if (!mhi_chan->offload_ch) {
|
||||
mhi_reset_chan(mhi_cntrl, mhi_chan);
|
||||
mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
|
||||
}
|
||||
dev_dbg(dev, "chan:%d successfully resetted\n", mhi_chan->chan);
|
||||
dev_dbg(dev, "%d: successfully reset\n", mhi_chan->chan);
|
||||
|
||||
mutex_unlock(&mhi_chan->mutex);
|
||||
}
|
||||
|
||||
|
@ -1240,28 +1426,16 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
|
|||
struct mhi_chan *mhi_chan)
|
||||
{
|
||||
int ret = 0;
|
||||
struct device *dev = &mhi_cntrl->mhi_dev->dev;
|
||||
|
||||
dev_dbg(dev, "Preparing channel: %d\n", mhi_chan->chan);
|
||||
struct device *dev = &mhi_chan->mhi_dev->dev;
|
||||
|
||||
if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
|
||||
dev_err(dev,
|
||||
"Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
|
||||
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
|
||||
mhi_chan->name);
|
||||
dev_err(dev, "Current EE: %s Required EE Mask: 0x%x\n",
|
||||
TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
|
||||
return -ENOTCONN;
|
||||
}
|
||||
|
||||
mutex_lock(&mhi_chan->mutex);
|
||||
|
||||
/* If channel is not in disable state, do not allow it to start */
|
||||
if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
|
||||
ret = -EIO;
|
||||
dev_dbg(dev, "channel: %d is not in disabled state\n",
|
||||
mhi_chan->chan);
|
||||
goto error_init_chan;
|
||||
}
|
||||
|
||||
/* Check of client manages channel context for offload channels */
|
||||
if (!mhi_chan->offload_ch) {
|
||||
ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
|
||||
|
@ -1269,34 +1443,11 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
|
|||
goto error_init_chan;
|
||||
}
|
||||
|
||||
reinit_completion(&mhi_chan->completion);
|
||||
read_lock_bh(&mhi_cntrl->pm_lock);
|
||||
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
|
||||
read_unlock_bh(&mhi_cntrl->pm_lock);
|
||||
ret = -EIO;
|
||||
goto error_pm_state;
|
||||
}
|
||||
|
||||
mhi_cntrl->wake_toggle(mhi_cntrl);
|
||||
read_unlock_bh(&mhi_cntrl->pm_lock);
|
||||
mhi_cntrl->runtime_get(mhi_cntrl);
|
||||
mhi_cntrl->runtime_put(mhi_cntrl);
|
||||
|
||||
ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
|
||||
ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
|
||||
MHI_CH_STATE_TYPE_START);
|
||||
if (ret)
|
||||
goto error_pm_state;
|
||||
|
||||
ret = wait_for_completion_timeout(&mhi_chan->completion,
|
||||
msecs_to_jiffies(mhi_cntrl->timeout_ms));
|
||||
if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
|
||||
ret = -EIO;
|
||||
goto error_pm_state;
|
||||
}
|
||||
|
||||
write_lock_irq(&mhi_chan->lock);
|
||||
mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
|
||||
write_unlock_irq(&mhi_chan->lock);
|
||||
|
||||
/* Pre-allocate buffer for xfer ring */
|
||||
if (mhi_chan->pre_alloc) {
|
||||
int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
|
||||
|
@ -1334,9 +1485,6 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
|
|||
|
||||
mutex_unlock(&mhi_chan->mutex);
|
||||
|
||||
dev_dbg(dev, "Chan: %d successfully moved to start state\n",
|
||||
mhi_chan->chan);
|
||||
|
||||
return 0;
|
||||
|
||||
error_pm_state:
|
||||
|
@ -1350,7 +1498,7 @@ error_init_chan:
|
|||
|
||||
error_pre_alloc:
|
||||
mutex_unlock(&mhi_chan->mutex);
|
||||
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -1365,6 +1513,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
|
|||
struct mhi_ring *ev_ring;
|
||||
struct device *dev = &mhi_cntrl->mhi_dev->dev;
|
||||
unsigned long flags;
|
||||
dma_addr_t ptr;
|
||||
|
||||
dev_dbg(dev, "Marking all events for chan: %d as stale\n", chan);
|
||||
|
||||
|
@ -1372,7 +1521,15 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
|
|||
|
||||
/* mark all stale events related to channel as STALE event */
|
||||
spin_lock_irqsave(&mhi_event->lock, flags);
|
||||
dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
|
||||
|
||||
ptr = er_ctxt->rp;
|
||||
if (!is_valid_ring_ptr(ev_ring, ptr)) {
|
||||
dev_err(&mhi_cntrl->mhi_dev->dev,
|
||||
"Event ring rp points outside of the event ring\n");
|
||||
dev_rp = ev_ring->rp;
|
||||
} else {
|
||||
dev_rp = mhi_to_virtual(ev_ring, ptr);
|
||||
}
|
||||
|
||||
local_rp = ev_ring->rp;
|
||||
while (dev_rp != local_rp) {
|
||||
|
@ -1403,8 +1560,11 @@ static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
|
|||
while (tre_ring->rp != tre_ring->wp) {
|
||||
struct mhi_buf_info *buf_info = buf_ring->rp;
|
||||
|
||||
if (mhi_chan->dir == DMA_TO_DEVICE)
|
||||
if (mhi_chan->dir == DMA_TO_DEVICE) {
|
||||
atomic_dec(&mhi_cntrl->pending_pkts);
|
||||
/* Release the reference got from mhi_queue() */
|
||||
mhi_cntrl->runtime_put(mhi_cntrl);
|
||||
}
|
||||
|
||||
if (!buf_info->pre_mapped)
|
||||
mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
|
||||
|
@ -1467,7 +1627,7 @@ error_open_chan:
|
|||
if (!mhi_chan)
|
||||
continue;
|
||||
|
||||
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
@ -1485,7 +1645,7 @@ void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
|
|||
if (!mhi_chan)
|
||||
continue;
|
||||
|
||||
__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
mhi_unprepare_channel(mhi_cntrl, mhi_chan);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
|
||||
|
|
|
@ -153,35 +153,33 @@ static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
|
|||
/* Handle device ready state transition */
|
||||
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
|
||||
{
|
||||
void __iomem *base = mhi_cntrl->regs;
|
||||
struct mhi_event *mhi_event;
|
||||
enum mhi_pm_state cur_state;
|
||||
struct device *dev = &mhi_cntrl->mhi_dev->dev;
|
||||
u32 reset = 1, ready = 0;
|
||||
u32 interval_us = 25000; /* poll register field every 25 milliseconds */
|
||||
int ret, i;
|
||||
|
||||
/* Wait for RESET to be cleared and READY bit to be set by the device */
|
||||
wait_event_timeout(mhi_cntrl->state_event,
|
||||
MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
|
||||
mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
|
||||
MHICTRL_RESET_MASK,
|
||||
MHICTRL_RESET_SHIFT, &reset) ||
|
||||
mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
|
||||
MHISTATUS_READY_MASK,
|
||||
MHISTATUS_READY_SHIFT, &ready) ||
|
||||
(!reset && ready),
|
||||
msecs_to_jiffies(mhi_cntrl->timeout_ms));
|
||||
|
||||
/* Check if device entered error state */
|
||||
if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
|
||||
dev_err(dev, "Device link is not accessible\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
/* Timeout if device did not transition to ready state */
|
||||
if (reset || !ready) {
|
||||
dev_err(dev, "Device Ready timeout\n");
|
||||
return -ETIMEDOUT;
|
||||
/* Wait for RESET to be cleared and READY bit to be set by the device */
|
||||
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
|
||||
MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
|
||||
interval_us);
|
||||
if (ret) {
|
||||
dev_err(dev, "Device failed to clear MHI Reset\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
|
||||
MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
|
||||
interval_us);
|
||||
if (ret) {
|
||||
dev_err(dev, "Device failed to enter MHI Ready\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
dev_dbg(dev, "Device in READY State\n");
|
||||
|
@@ -377,24 +375,28 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
{
        struct mhi_event *mhi_event;
        struct device *dev = &mhi_cntrl->mhi_dev->dev;
        enum mhi_ee_type ee = MHI_EE_MAX, current_ee = mhi_cntrl->ee;
        int i, ret;

        dev_dbg(dev, "Processing Mission Mode transition\n");

        write_lock_irq(&mhi_cntrl->pm_lock);
        if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
                mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
                ee = mhi_get_exec_env(mhi_cntrl);

        if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
        if (!MHI_IN_MISSION_MODE(ee)) {
                mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
                write_unlock_irq(&mhi_cntrl->pm_lock);
                wake_up_all(&mhi_cntrl->state_event);
                return -EIO;
        }
        mhi_cntrl->ee = ee;
        write_unlock_irq(&mhi_cntrl->pm_lock);

        wake_up_all(&mhi_cntrl->state_event);

        device_for_each_child(&mhi_cntrl->mhi_dev->dev, &current_ee,
                              mhi_destroy_device);
        mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);

        /* Force MHI to be in M0 state before continuing */

@@ -560,6 +562,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
{
        enum mhi_pm_state cur_state, prev_state;
        enum dev_st_transition next_state;
        struct mhi_event *mhi_event;
        struct mhi_cmd_ctxt *cmd_ctxt;
        struct mhi_cmd *mhi_cmd;

@@ -673,7 +676,23 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
                er_ctxt->wp = er_ctxt->rbase;
        }

        mhi_ready_state_transition(mhi_cntrl);
        /* Transition to next state */
        if (MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
                write_lock_irq(&mhi_cntrl->pm_lock);
                cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
                write_unlock_irq(&mhi_cntrl->pm_lock);
                if (cur_state != MHI_PM_POR) {
                        dev_err(dev, "Error moving to state %s from %s\n",
                                to_mhi_pm_state_str(MHI_PM_POR),
                                to_mhi_pm_state_str(cur_state));
                        goto exit_sys_error_transition;
                }
                next_state = DEV_ST_TRANSITION_PBL;
        } else {
                next_state = DEV_ST_TRANSITION_READY;
        }

        mhi_queue_state_transition(mhi_cntrl, next_state);

exit_sys_error_transition:
        dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",

@@ -742,7 +761,6 @@ void mhi_pm_st_worker(struct work_struct *work)
                if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
                        mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
                write_unlock_irq(&mhi_cntrl->pm_lock);
                if (MHI_IN_PBL(mhi_cntrl->ee))
                        mhi_fw_load_handler(mhi_cntrl);
                break;
        case DEV_ST_TRANSITION_SBL:

@@ -755,10 +773,18 @@ void mhi_pm_st_worker(struct work_struct *work)
                 * either SBL or AMSS states
                 */
                mhi_create_devices(mhi_cntrl);
                if (mhi_cntrl->fbc_download)
                        mhi_download_amss_image(mhi_cntrl);
                break;
        case DEV_ST_TRANSITION_MISSION_MODE:
                mhi_pm_mission_mode_transition(mhi_cntrl);
                break;
        case DEV_ST_TRANSITION_FP:
                write_lock_irq(&mhi_cntrl->pm_lock);
                mhi_cntrl->ee = MHI_EE_FP;
                write_unlock_irq(&mhi_cntrl->pm_lock);
                mhi_create_devices(mhi_cntrl);
                break;
        case DEV_ST_TRANSITION_READY:
                mhi_ready_state_transition(mhi_cntrl);
                break;

@@ -822,7 +848,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
                return -EBUSY;
        }

        dev_info(dev, "Allowing M3 transition\n");
        dev_dbg(dev, "Allowing M3 transition\n");
        new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
        if (new_state != MHI_PM_M3_ENTER) {
                write_unlock_irq(&mhi_cntrl->pm_lock);

@@ -836,7 +862,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
        /* Set MHI to M3 and wait for completion */
        mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
        write_unlock_irq(&mhi_cntrl->pm_lock);
        dev_info(dev, "Wait for M3 completion\n");
        dev_dbg(dev, "Waiting for M3 completion\n");

        ret = wait_event_timeout(mhi_cntrl->state_event,
                                 mhi_cntrl->dev_state == MHI_STATE_M3 ||

@@ -870,7 +896,7 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
        enum mhi_pm_state cur_state;
        int ret;

        dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
        dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
                to_mhi_pm_state_str(mhi_cntrl->pm_state),
                TO_MHI_STATE_STR(mhi_cntrl->dev_state));

@@ -880,6 +906,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
        if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
                return -EIO;

        if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3)
                return -EINVAL;

        /* Notify clients about exiting LPM */
        list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
                mutex_lock(&itr->mutex);

@@ -1033,13 +1062,6 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
        mutex_lock(&mhi_cntrl->pm_mutex);
        mhi_cntrl->pm_state = MHI_PM_DISABLE;

        if (!mhi_cntrl->pre_init) {
                /* Setup device context */
                ret = mhi_init_dev_ctxt(mhi_cntrl);
                if (ret)
                        goto error_dev_ctxt;
        }

        ret = mhi_init_irq_setup(mhi_cntrl);
        if (ret)
                goto error_setup_irq;

@@ -1092,7 +1114,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
                                 &val) ||
                                 !val,
                                 msecs_to_jiffies(mhi_cntrl->timeout_ms));
        if (ret) {
        if (!ret) {
                ret = -EIO;
                dev_info(dev, "Failed to reset MHI due to syserr state\n");
                goto error_bhi_offset;

@@ -1121,10 +1143,7 @@ error_bhi_offset:
        mhi_deinit_free_irq(mhi_cntrl);

error_setup_irq:
        if (!mhi_cntrl->pre_init)
                mhi_deinit_dev_ctxt(mhi_cntrl);

error_dev_ctxt:
        mhi_cntrl->pm_state = MHI_PM_DISABLE;
        mutex_unlock(&mhi_cntrl->pm_mutex);

        return ret;

@@ -1136,12 +1155,19 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
        enum mhi_pm_state cur_state, transition_state;
        struct device *dev = &mhi_cntrl->mhi_dev->dev;

        mutex_lock(&mhi_cntrl->pm_mutex);
        write_lock_irq(&mhi_cntrl->pm_lock);
        cur_state = mhi_cntrl->pm_state;
        if (cur_state == MHI_PM_DISABLE) {
                write_unlock_irq(&mhi_cntrl->pm_lock);
                mutex_unlock(&mhi_cntrl->pm_mutex);
                return; /* Already powered down */
        }

        /* If it's not a graceful shutdown, force MHI to linkdown state */
        transition_state = (graceful) ? MHI_PM_SHUTDOWN_PROCESS :
                                        MHI_PM_LD_ERR_FATAL_DETECT;

        mutex_lock(&mhi_cntrl->pm_mutex);
        write_lock_irq(&mhi_cntrl->pm_lock);
        cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
        if (cur_state != transition_state) {
                dev_err(dev, "Failed to move to state: %s from: %s\n",

@@ -1166,15 +1192,6 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
        flush_work(&mhi_cntrl->st_worker);

        free_irq(mhi_cntrl->irq[0], mhi_cntrl);

        if (!mhi_cntrl->pre_init) {
                /* Free all allocated resources */
                if (mhi_cntrl->fbc_image) {
                        mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
                        mhi_cntrl->fbc_image = NULL;
                }
                mhi_deinit_dev_ctxt(mhi_cntrl);
        }
}
EXPORT_SYMBOL_GPL(mhi_power_down);

@@ -14,6 +14,7 @@
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

@@ -71,9 +72,9 @@ struct mhi_pci_dev_info {
                .doorbell_mode_switch = false, \
        }

#define MHI_EVENT_CONFIG_CTRL(ev_ring) \
#define MHI_EVENT_CONFIG_CTRL(ev_ring, el_count) \
        { \
                .num_elements = 64, \
                .num_elements = el_count, \
                .irq_moderation_ms = 0, \
                .irq = (ev_ring) + 1, \
                .priority = 1, \

@@ -114,9 +115,69 @@ struct mhi_pci_dev_info {
                .doorbell_mode_switch = true, \
        }

#define MHI_EVENT_CONFIG_DATA(ev_ring) \
#define MHI_CHANNEL_CONFIG_UL_SBL(ch_num, ch_name, el_count, ev_ring) \
        { \
                .num_elements = 128, \
                .num = ch_num, \
                .name = ch_name, \
                .num_elements = el_count, \
                .event_ring = ev_ring, \
                .dir = DMA_TO_DEVICE, \
                .ee_mask = BIT(MHI_EE_SBL), \
                .pollcfg = 0, \
                .doorbell = MHI_DB_BRST_DISABLE, \
                .lpm_notify = false, \
                .offload_channel = false, \
                .doorbell_mode_switch = false, \
        } \

#define MHI_CHANNEL_CONFIG_DL_SBL(ch_num, ch_name, el_count, ev_ring) \
        { \
                .num = ch_num, \
                .name = ch_name, \
                .num_elements = el_count, \
                .event_ring = ev_ring, \
                .dir = DMA_FROM_DEVICE, \
                .ee_mask = BIT(MHI_EE_SBL), \
                .pollcfg = 0, \
                .doorbell = MHI_DB_BRST_DISABLE, \
                .lpm_notify = false, \
                .offload_channel = false, \
                .doorbell_mode_switch = false, \
        }

#define MHI_CHANNEL_CONFIG_UL_FP(ch_num, ch_name, el_count, ev_ring) \
        { \
                .num = ch_num, \
                .name = ch_name, \
                .num_elements = el_count, \
                .event_ring = ev_ring, \
                .dir = DMA_TO_DEVICE, \
                .ee_mask = BIT(MHI_EE_FP), \
                .pollcfg = 0, \
                .doorbell = MHI_DB_BRST_DISABLE, \
                .lpm_notify = false, \
                .offload_channel = false, \
                .doorbell_mode_switch = false, \
        } \

#define MHI_CHANNEL_CONFIG_DL_FP(ch_num, ch_name, el_count, ev_ring) \
        { \
                .num = ch_num, \
                .name = ch_name, \
                .num_elements = el_count, \
                .event_ring = ev_ring, \
                .dir = DMA_FROM_DEVICE, \
                .ee_mask = BIT(MHI_EE_FP), \
                .pollcfg = 0, \
                .doorbell = MHI_DB_BRST_DISABLE, \
                .lpm_notify = false, \
                .offload_channel = false, \
                .doorbell_mode_switch = false, \
        }

#define MHI_EVENT_CONFIG_DATA(ev_ring, el_count) \
        { \
                .num_elements = el_count, \
                .irq_moderation_ms = 5, \
                .irq = (ev_ring) + 1, \
                .priority = 1, \

@@ -127,9 +188,9 @@ struct mhi_pci_dev_info {
                .offload_channel = false, \
        }

#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, ch_num) \
#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, el_count, ch_num) \
        { \
                .num_elements = 2048, \
                .num_elements = el_count, \
                .irq_moderation_ms = 1, \
                .irq = (ev_ring) + 1, \
                .priority = 1, \

@@ -150,21 +211,23 @@ static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
        MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
        MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
        MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
        MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
        MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
        MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
        MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 3),
};

static struct mhi_event_config modem_qcom_v1_mhi_events[] = {
        /* first ring is control+data ring */
        MHI_EVENT_CONFIG_CTRL(0),
        MHI_EVENT_CONFIG_CTRL(0, 64),
        /* DIAG dedicated event ring */
        MHI_EVENT_CONFIG_DATA(1),
        MHI_EVENT_CONFIG_DATA(1, 128),
        /* Hardware channels request dedicated hardware event rings */
        MHI_EVENT_CONFIG_HW_DATA(2, 100),
        MHI_EVENT_CONFIG_HW_DATA(3, 101)
        MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
        MHI_EVENT_CONFIG_HW_DATA(3, 2048, 101)
};

static struct mhi_controller_config modem_qcom_v1_mhiv_config = {
static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
        .max_channels = 128,
        .timeout_ms = 8000,
        .num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),

@@ -173,6 +236,15 @@ static struct mhi_controller_config modem_qcom_v1_mhiv_config = {
        .event_cfg = modem_qcom_v1_mhi_events,
};
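For reference, these parameterized macros are what keep the per-modem tables below compact; expanding a single entry shows the full channel description. Per the macro definition in this hunk, MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0) from the table above expands to:

        {
                .num = 35,
                .name = "FIREHOSE",
                .num_elements = 32,
                .event_ring = 0,
                .dir = DMA_FROM_DEVICE,
                .ee_mask = BIT(MHI_EE_FP),
                .pollcfg = 0,
                .doorbell = MHI_DB_BRST_DISABLE,
                .lpm_notify = false,
                .offload_channel = false,
                .doorbell_mode_switch = false,
        }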
static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
        .name = "qcom-sdx65m",
        .fw = "qcom/sdx65m/xbl.elf",
        .edl = "qcom/sdx65m/edl.mbn",
        .config = &modem_qcom_v1_mhiv_config,
        .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
        .dma_data_width = 32
};

static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
        .name = "qcom-sdx55m",
        .fw = "qcom/sdx55m/sbl1.mbn",

@@ -182,15 +254,121 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
        .dma_data_width = 32
};

static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
        .name = "qcom-sdx24",
        .edl = "qcom/prog_firehose_sdx24.mbn",
        .config = &modem_qcom_v1_mhiv_config,
        .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
        .dma_data_width = 32
};

static const struct mhi_channel_config mhi_quectel_em1xx_channels[] = {
        MHI_CHANNEL_CONFIG_UL(0, "NMEA", 32, 0),
        MHI_CHANNEL_CONFIG_DL(1, "NMEA", 32, 0),
        MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
        MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
        MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
        MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
        MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
        MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
        MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
        MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
        /* The EDL firmware is a flash-programmer exposing firehose protocol */
        MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
        MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
        MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
        MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};

static struct mhi_event_config mhi_quectel_em1xx_events[] = {
        MHI_EVENT_CONFIG_CTRL(0, 128),
        MHI_EVENT_CONFIG_DATA(1, 128),
        MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
        MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
};

static const struct mhi_controller_config modem_quectel_em1xx_config = {
        .max_channels = 128,
        .timeout_ms = 20000,
        .num_channels = ARRAY_SIZE(mhi_quectel_em1xx_channels),
        .ch_cfg = mhi_quectel_em1xx_channels,
        .num_events = ARRAY_SIZE(mhi_quectel_em1xx_events),
        .event_cfg = mhi_quectel_em1xx_events,
};

static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
        .name = "quectel-em1xx",
        .edl = "qcom/prog_firehose_sdx24.mbn",
        .config = &modem_quectel_em1xx_config,
        .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
        .dma_data_width = 32
};

static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
        MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 32, 0),
        MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 32, 0),
        MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
        MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
        MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
        MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
        MHI_CHANNEL_CONFIG_UL(32, "AT", 32, 0),
        MHI_CHANNEL_CONFIG_DL(33, "AT", 32, 0),
        MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
        MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};

static struct mhi_event_config mhi_foxconn_sdx55_events[] = {
        MHI_EVENT_CONFIG_CTRL(0, 128),
        MHI_EVENT_CONFIG_DATA(1, 128),
        MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
        MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
};

static const struct mhi_controller_config modem_foxconn_sdx55_config = {
        .max_channels = 128,
        .timeout_ms = 20000,
        .num_channels = ARRAY_SIZE(mhi_foxconn_sdx55_channels),
        .ch_cfg = mhi_foxconn_sdx55_channels,
        .num_events = ARRAY_SIZE(mhi_foxconn_sdx55_events),
        .event_cfg = mhi_foxconn_sdx55_events,
};

static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
        .name = "foxconn-sdx55",
        .fw = "qcom/sdx55m/sbl1.mbn",
        .edl = "qcom/sdx55m/edl.mbn",
        .config = &modem_foxconn_sdx55_config,
        .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
        .dma_data_width = 32
};

static const struct pci_device_id mhi_pci_id_table[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
                .driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
        { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
                .driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
        { PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */
                .driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
        { PCI_DEVICE(0x1eac, 0x1002), /* EM160R-GL (sdx24) */
                .driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
        { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
                .driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
        /* T99W175 (sdx55), Both for eSIM and Non-eSIM */
        { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0ab),
                .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
        /* DW5930e (sdx55), With eSIM, It's also T99W175 */
        { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b0),
                .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
        /* DW5930e (sdx55), Non-eSIM, It's also T99W175 */
        { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b1),
                .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
        { }
};
MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);
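Each entry in the ID table above stashes a pointer to its mhi_pci_dev_info in driver_data, which probe recovers with a cast. A minimal sketch of that retrieval, assuming this hypothetical probe body (the driver's real probe appears only partially in later hunks, where it uses info->bar_num and info->dma_data_width):

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        const struct mhi_pci_dev_info *info =
                (const struct mhi_pci_dev_info *)id->driver_data;

        /* info->bar_num and info->dma_data_width pick the BAR and DMA mask */
        return 0;
}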
enum mhi_pci_device_status {
        MHI_PCI_DEV_STARTED,
        MHI_PCI_DEV_SUSPENDED,
};

struct mhi_pci_device {

@@ -224,12 +402,31 @@ static void mhi_pci_status_cb(struct mhi_controller *mhi_cntrl,
        case MHI_CB_FATAL_ERROR:
        case MHI_CB_SYS_ERROR:
                dev_warn(&pdev->dev, "firmware crashed (%u)\n", cb);
                pm_runtime_forbid(&pdev->dev);
                break;
        case MHI_CB_EE_MISSION_MODE:
                pm_runtime_allow(&pdev->dev);
                break;
        default:
                break;
        }
}

static void mhi_pci_wake_get_nop(struct mhi_controller *mhi_cntrl, bool force)
{
        /* no-op */
}

static void mhi_pci_wake_put_nop(struct mhi_controller *mhi_cntrl, bool override)
{
        /* no-op */
}

static void mhi_pci_wake_toggle_nop(struct mhi_controller *mhi_cntrl)
{
        /* no-op */
}

static bool mhi_pci_is_alive(struct mhi_controller *mhi_cntrl)
{
        struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);

@@ -330,13 +527,19 @@ static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,

static int mhi_pci_runtime_get(struct mhi_controller *mhi_cntrl)
{
        /* no PM for now */
        return 0;
        /* The runtime_get() MHI callback means:
         * Do whatever is requested to leave M3.
         */
        return pm_runtime_get(mhi_cntrl->cntrl_dev);
}

static void mhi_pci_runtime_put(struct mhi_controller *mhi_cntrl)
{
        /* no PM for now */
        /* The runtime_put() MHI callback means:
         * Device can be moved in M3 state.
         */
        pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
        pm_runtime_put(mhi_cntrl->cntrl_dev);
}

static void mhi_pci_recovery_work(struct work_struct *work)
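The two callbacks above map MHI's wake requirements onto runtime-PM reference counting: runtime_get() takes a reference (asynchronously resuming the device out of M3 if needed), runtime_put() drops it and refreshes the last-busy timestamp so autosuspend restarts its countdown. A hedged sketch of the same pairing around a burst of work (device_do_io() is a hypothetical placeholder):

static int do_transfer(struct device *dev)
{
        int ret;

        ret = pm_runtime_get_sync(dev); /* synchronous resume variant */
        if (ret < 0) {
                pm_runtime_put_noidle(dev);
                return ret;
        }

        ret = device_do_io(dev);        /* hypothetical I/O work */

        pm_runtime_mark_last_busy(dev);  /* restart autosuspend timer */
        pm_runtime_put_autosuspend(dev); /* drop ref, may autosuspend */
        return ret;
}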
@@ -350,6 +553,7 @@ static void mhi_pci_recovery_work(struct work_struct *work)
        dev_warn(&pdev->dev, "device recovery started\n");

        del_timer(&mhi_pdev->health_check_timer);
        pm_runtime_forbid(&pdev->dev);

        /* Clean up MHI state */
        if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {

@@ -357,7 +561,6 @@ static void mhi_pci_recovery_work(struct work_struct *work)
                mhi_unprepare_after_power_down(mhi_cntrl);
        }

        /* Check if we can recover without full reset */
        pci_set_power_state(pdev, PCI_D0);
        pci_load_saved_state(pdev, mhi_pdev->pci_state);
        pci_restore_state(pdev);

@@ -391,6 +594,10 @@ static void health_check(struct timer_list *t)
        struct mhi_pci_device *mhi_pdev = from_timer(mhi_pdev, t, health_check_timer);
        struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;

        if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
            test_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
                return;

        if (!mhi_pci_is_alive(mhi_cntrl)) {
                dev_err(mhi_cntrl->cntrl_dev, "Device died\n");
                queue_work(system_long_wq, &mhi_pdev->recovery_work);
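health_check() above is a timer callback that recovers its containing structure with from_timer(). A minimal sketch of the idiom, assuming a struct with an embedded timer_list (all names here are illustrative):

struct my_dev {
        struct timer_list health_check_timer;
        /* ... */
};

static void my_health_check(struct timer_list *t)
{
        struct my_dev *mdev = from_timer(mdev, t, health_check_timer);

        /* inspect mdev, then re-arm if desired */
        mod_timer(&mdev->health_check_timer, jiffies + HZ);
}

/* at init time: timer_setup(&mdev->health_check_timer, my_health_check, 0); */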
@@ -433,6 +640,9 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        mhi_cntrl->status_cb = mhi_pci_status_cb;
        mhi_cntrl->runtime_get = mhi_pci_runtime_get;
        mhi_cntrl->runtime_put = mhi_pci_runtime_put;
        mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
        mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
        mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;

        err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
        if (err)

@@ -444,9 +654,12 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)

        pci_set_drvdata(pdev, mhi_pdev);

        /* Have stored pci confspace at hand for restore in sudden PCI error */
        /* Have stored pci confspace at hand for restore in sudden PCI error.
         * cache the state locally and discard the PCI core one.
         */
        pci_save_state(pdev);
        mhi_pdev->pci_state = pci_store_saved_state(pdev);
        pci_load_saved_state(pdev, NULL);

        pci_enable_pcie_error_reporting(pdev);

@@ -472,6 +685,14 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        /* start health check */
        mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);

        /* Only allow runtime-suspend if PME capable (for wakeup) */
        if (pci_pme_capable(pdev, PCI_D3hot)) {
                pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
                pm_runtime_use_autosuspend(&pdev->dev);
                pm_runtime_mark_last_busy(&pdev->dev);
                pm_runtime_put_noidle(&pdev->dev);
        }

        return 0;

err_unprepare:

@@ -495,9 +716,19 @@ static void mhi_pci_remove(struct pci_dev *pdev)
                mhi_unprepare_after_power_down(mhi_cntrl);
        }

        /* balancing probe put_noidle */
        if (pci_pme_capable(pdev, PCI_D3hot))
                pm_runtime_get_noresume(&pdev->dev);

        mhi_unregister_controller(mhi_cntrl);
}

static void mhi_pci_shutdown(struct pci_dev *pdev)
{
        mhi_pci_remove(pdev);
        pci_set_power_state(pdev, PCI_D3hot);
}

static void mhi_pci_reset_prepare(struct pci_dev *pdev)
{
        struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);

@@ -605,41 +836,59 @@ static const struct pci_error_handlers mhi_pci_err_handler = {
        .reset_done = mhi_pci_reset_done,
};

static int __maybe_unused mhi_pci_suspend(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);
        struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
        struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;

        del_timer(&mhi_pdev->health_check_timer);
        cancel_work_sync(&mhi_pdev->recovery_work);

        /* Transition to M3 state */
        mhi_pm_suspend(mhi_cntrl);

        pci_save_state(pdev);
        pci_disable_device(pdev);
        pci_wake_from_d3(pdev, true);
        pci_set_power_state(pdev, PCI_D3hot);

        return 0;
}

static int __maybe_unused mhi_pci_resume(struct device *dev)
static int __maybe_unused mhi_pci_runtime_suspend(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);
        struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
        struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
        int err;

        pci_set_power_state(pdev, PCI_D0);
        pci_restore_state(pdev);
        pci_set_master(pdev);
        if (test_and_set_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
                return 0;

        del_timer(&mhi_pdev->health_check_timer);
        cancel_work_sync(&mhi_pdev->recovery_work);

        if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
            mhi_cntrl->ee != MHI_EE_AMSS)
                goto pci_suspend; /* Nothing to do at MHI level */

        /* Transition to M3 state */
        err = mhi_pm_suspend(mhi_cntrl);
        if (err) {
                dev_err(&pdev->dev, "failed to suspend device: %d\n", err);
                clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status);
                return -EBUSY;
        }

pci_suspend:
        pci_disable_device(pdev);
        pci_wake_from_d3(pdev, true);

        return 0;
}

static int __maybe_unused mhi_pci_runtime_resume(struct device *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev);
        struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
        struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
        int err;

        if (!test_and_clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
                return 0;

        err = pci_enable_device(pdev);
        if (err)
                goto err_recovery;

        pci_set_master(pdev);
        pci_wake_from_d3(pdev, false);

        if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
            mhi_cntrl->ee != MHI_EE_AMSS)
                return 0; /* Nothing to do at MHI level */

        /* Exit M3, transition to M0 state */
        err = mhi_pm_resume(mhi_cntrl);
        if (err) {

@@ -650,16 +899,44 @@ static int __maybe_unused mhi_pci_resume(struct device *dev)
        /* Resume health check */
        mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);

        /* It can be a remote wakeup (no mhi runtime_get), update access time */
        pm_runtime_mark_last_busy(dev);

        return 0;

err_recovery:
        /* The device may have loose power or crashed, try recovering it */
        /* Do not fail to not mess up our PCI device state, the device likely
         * lost power (d3cold) and we simply need to reset it from the recovery
         * procedure, trigger the recovery asynchronously to prevent system
         * suspend exit delaying.
         */
        queue_work(system_long_wq, &mhi_pdev->recovery_work);
        pm_runtime_mark_last_busy(dev);

        return err;
        return 0;
}

static int __maybe_unused mhi_pci_suspend(struct device *dev)
{
        pm_runtime_disable(dev);
        return mhi_pci_runtime_suspend(dev);
}

static int __maybe_unused mhi_pci_resume(struct device *dev)
{
        int ret;

        /* Depending the platform, device may have lost power (d3cold), we need
         * to resume it now to check its state and recover when necessary.
         */
        ret = mhi_pci_runtime_resume(dev);
        pm_runtime_enable(dev);

        return ret;
}

static const struct dev_pm_ops mhi_pci_pm_ops = {
        SET_RUNTIME_PM_OPS(mhi_pci_runtime_suspend, mhi_pci_runtime_resume, NULL)
        SET_SYSTEM_SLEEP_PM_OPS(mhi_pci_suspend, mhi_pci_resume)
};
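Note how the system-sleep hooks above deliberately reuse the runtime-PM path: suspend disables runtime PM first so the runtime callbacks cannot race with the system transition, and resume re-enables it only after the device is back. A generic sketch of that delegation pattern (function names are placeholders):

static int generic_suspend(struct device *dev)
{
        pm_runtime_disable(dev);          /* fence off runtime callbacks */
        return generic_runtime_suspend(dev);
}

static int generic_resume(struct device *dev)
{
        int ret = generic_runtime_resume(dev);

        pm_runtime_enable(dev);           /* re-arm runtime PM */
        return ret;
}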
@@ -668,6 +945,7 @@ static struct pci_driver mhi_pci_driver = {
        .id_table = mhi_pci_id_table,
        .probe = mhi_pci_probe,
        .remove = mhi_pci_remove,
        .shutdown = mhi_pci_shutdown,
        .err_handler = &mhi_pci_err_handler,
        .driver.pm = &mhi_pci_pm_ops
};

@@ -836,7 +836,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
        Dummy = readb(apbs[IndexCard].RamIO + VERS);
        kfree(adgl);
        mutex_unlock(&ac_mutex);
        return 0;
        return ret;

err:
        if (warncount) {

@@ -546,7 +546,7 @@ static int lp_open(struct inode *inode, struct file *file)
        }
        /* Determine if the peripheral supports ECP mode */
        lp_claim_parport_or_block(&lp_table[minor]);
        if ( (lp_table[minor].dev->port->modes & PARPORT_MODE_ECP) &&
        if ((lp_table[minor].dev->port->modes & PARPORT_MODE_ECP) &&
            !parport_negotiate(lp_table[minor].dev->port,
                               IEEE1284_MODE_ECP)) {
                printk(KERN_INFO "lp%d: ECP mode\n", minor);

@@ -590,7 +590,7 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd,
                return -ENODEV;
        if ((LP_F(minor) & LP_EXIST) == 0)
                return -ENODEV;
        switch ( cmd ) {
        switch (cmd) {
        case LPTIME:
                if (arg > UINT_MAX / HZ)
                        return -EINVAL;

@@ -177,14 +177,10 @@ int tp3780I_InitializeBoardData(THINKPAD_BD_DATA * pBDData)
        return retval;
}

int tp3780I_Cleanup(THINKPAD_BD_DATA * pBDData)
void tp3780I_Cleanup(THINKPAD_BD_DATA *pBDData)
{
        int retval = 0;

        PRINTK_2(TRACE_TP3780I,
                "tp3780i::tp3780I_Cleanup entry and exit pBDData %p\n", pBDData);

        return retval;
}

int tp3780I_CalcResources(THINKPAD_BD_DATA * pBDData)

@@ -91,7 +91,7 @@ int tp3780I_DisableDSP(THINKPAD_BD_DATA * pBDData);
int tp3780I_ResetDSP(THINKPAD_BD_DATA * pBDData);
int tp3780I_StartDSP(THINKPAD_BD_DATA * pBDData);
int tp3780I_QueryAbilities(THINKPAD_BD_DATA * pBDData, MW_ABILITIES * pAbilities);
int tp3780I_Cleanup(THINKPAD_BD_DATA * pBDData);
void tp3780I_Cleanup(THINKPAD_BD_DATA *pBDData);
int tp3780I_ReadWriteDspDStore(THINKPAD_BD_DATA * pBDData, unsigned int uOpcode,
                               void __user *pvBuffer, unsigned int uCount,
                               unsigned long ulDSPAddr);

@@ -1456,7 +1456,6 @@ static int add_port(struct ports_device *portdev, u32 id)
         */
        send_control_msg(port, VIRTIO_CONSOLE_PORT_READY, 1);

        if (pdrvdata.debugfs_dir) {
        /*
         * Finally, create the debugfs file that we can use to
         * inspect a port's state at any time

@@ -1465,9 +1464,7 @@ static int add_port(struct ports_device *portdev, u32 id)
                 port->portdev->vdev->index, id);
        port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
                                                 pdrvdata.debugfs_dir,
                                                 port,
                                                 &port_debugfs_fops);
        }
                                                 port, &port_debugfs_fops);
        return 0;

free_inbufs:

@@ -2244,8 +2241,6 @@ static int __init init(void)
        }

        pdrvdata.debugfs_dir = debugfs_create_dir("virtio-ports", NULL);
        if (!pdrvdata.debugfs_dir)
                pr_warn("Error creating debugfs dir for virtio-ports\n");
        INIT_LIST_HEAD(&pdrvdata.consoles);
        INIT_LIST_HEAD(&pdrvdata.portdevs);

@@ -44,6 +44,8 @@ static struct max8997_muic_irq muic_irqs[] = {
        { MAX8997_MUICIRQ_ChgDetRun, "muic-CHGDETRUN" },
        { MAX8997_MUICIRQ_ChgTyp, "muic-CHGTYP" },
        { MAX8997_MUICIRQ_OVP, "muic-OVP" },
        { MAX8997_PMICIRQ_CHGINS, "pmic-CHGINS" },
        { MAX8997_PMICIRQ_CHGRM, "pmic-CHGRM" },
};

/* Define supported cable type */

@@ -538,6 +540,8 @@ static void max8997_muic_irq_work(struct work_struct *work)
        case MAX8997_MUICIRQ_DCDTmr:
        case MAX8997_MUICIRQ_ChgDetRun:
        case MAX8997_MUICIRQ_ChgTyp:
        case MAX8997_PMICIRQ_CHGINS:
        case MAX8997_PMICIRQ_CHGRM:
                /* Handle charger cable */
                ret = max8997_muic_chg_handler(info);
                break;

@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/**
 * extcon-qcom-spmi-misc.c - Qualcomm USB extcon driver to support USB ID
 * detection based on extcon-usb-gpio.c.
 * and VBUS detection based on extcon-usb-gpio.c.
 *
 * Copyright (C) 2016 Linaro, Ltd.
 * Stephen Boyd <stephen.boyd@linaro.org>

@@ -21,30 +21,56 @@

struct qcom_usb_extcon_info {
        struct extcon_dev *edev;
        int irq;
        int id_irq;
        int vbus_irq;
        struct delayed_work wq_detcable;
        unsigned long debounce_jiffies;
};

static const unsigned int qcom_usb_extcon_cable[] = {
        EXTCON_USB,
        EXTCON_USB_HOST,
        EXTCON_NONE,
};

static void qcom_usb_extcon_detect_cable(struct work_struct *work)
{
        bool id;
        bool state = false;
        int ret;
        union extcon_property_value val;
        struct qcom_usb_extcon_info *info = container_of(to_delayed_work(work),
                                                         struct qcom_usb_extcon_info,
                                                         wq_detcable);

        if (info->id_irq > 0) {
                /* check ID and update cable state */
                ret = irq_get_irqchip_state(info->irq, IRQCHIP_STATE_LINE_LEVEL, &id);
                ret = irq_get_irqchip_state(info->id_irq,
                                            IRQCHIP_STATE_LINE_LEVEL, &state);
                if (ret)
                        return;

                extcon_set_state_sync(info->edev, EXTCON_USB_HOST, !id);
                if (!state) {
                        val.intval = true;
                        extcon_set_property(info->edev, EXTCON_USB_HOST,
                                            EXTCON_PROP_USB_SS, val);
                }
                extcon_set_state_sync(info->edev, EXTCON_USB_HOST, !state);
        }

        if (info->vbus_irq > 0) {
                /* check VBUS and update cable state */
                ret = irq_get_irqchip_state(info->vbus_irq,
                                            IRQCHIP_STATE_LINE_LEVEL, &state);
                if (ret)
                        return;

                if (state) {
                        val.intval = true;
                        extcon_set_property(info->edev, EXTCON_USB,
                                            EXTCON_PROP_USB_SS, val);
                }
                extcon_set_state_sync(info->edev, EXTCON_USB, state);
        }
}

static irqreturn_t qcom_usb_irq_handler(int irq, void *dev_id)
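The detect function above samples the current line level of the ID and VBUS interrupt lines via irq_get_irqchip_state() instead of wiring up separate GPIO reads. A hedged sketch of that query in isolation:

static bool read_line_level(int irq)
{
        bool level = false;

        /* Ask the irqchip for the line state; not every chip supports it */
        if (irq_get_irqchip_state(irq, IRQCHIP_STATE_LINE_LEVEL, &level))
                return false;   /* this sketch treats errors as "low" */

        return level;
}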
@@ -79,14 +105,22 @@ static int qcom_usb_extcon_probe(struct platform_device *pdev)
                return ret;
        }

        ret = extcon_set_property_capability(info->edev,
                                             EXTCON_USB, EXTCON_PROP_USB_SS);
        ret |= extcon_set_property_capability(info->edev,
                                              EXTCON_USB_HOST, EXTCON_PROP_USB_SS);
        if (ret) {
                dev_err(dev, "failed to register extcon props rc=%d\n",
                        ret);
                return ret;
        }

        info->debounce_jiffies = msecs_to_jiffies(USB_ID_DEBOUNCE_MS);
        INIT_DELAYED_WORK(&info->wq_detcable, qcom_usb_extcon_detect_cable);

        info->irq = platform_get_irq_byname(pdev, "usb_id");
        if (info->irq < 0)
                return info->irq;

        ret = devm_request_threaded_irq(dev, info->irq, NULL,
        info->id_irq = platform_get_irq_byname(pdev, "usb_id");
        if (info->id_irq > 0) {
                ret = devm_request_threaded_irq(dev, info->id_irq, NULL,
                                                qcom_usb_irq_handler,
                                                IRQF_TRIGGER_RISING |
                                                IRQF_TRIGGER_FALLING | IRQF_ONESHOT,

@@ -95,6 +129,25 @@ static int qcom_usb_extcon_probe(struct platform_device *pdev)
                        dev_err(dev, "failed to request handler for ID IRQ\n");
                        return ret;
                }
        }

        info->vbus_irq = platform_get_irq_byname(pdev, "usb_vbus");
        if (info->vbus_irq > 0) {
                ret = devm_request_threaded_irq(dev, info->vbus_irq, NULL,
                                                qcom_usb_irq_handler,
                                                IRQF_TRIGGER_RISING |
                                                IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
                                                pdev->name, info);
                if (ret < 0) {
                        dev_err(dev, "failed to request handler for VBUS IRQ\n");
                        return ret;
                }
        }

        if (info->id_irq < 0 && info->vbus_irq < 0) {
                dev_err(dev, "ID and VBUS IRQ not found\n");
                return -EINVAL;
        }

        platform_set_drvdata(pdev, info);
        device_init_wakeup(dev, 1);

@@ -120,8 +173,12 @@ static int qcom_usb_extcon_suspend(struct device *dev)
        struct qcom_usb_extcon_info *info = dev_get_drvdata(dev);
        int ret = 0;

        if (device_may_wakeup(dev))
                ret = enable_irq_wake(info->irq);
        if (device_may_wakeup(dev)) {
                if (info->id_irq > 0)
                        ret = enable_irq_wake(info->id_irq);
                if (info->vbus_irq > 0)
                        ret = enable_irq_wake(info->vbus_irq);
        }

        return ret;
}

@@ -131,8 +188,12 @@ static int qcom_usb_extcon_resume(struct device *dev)
        struct qcom_usb_extcon_info *info = dev_get_drvdata(dev);
        int ret = 0;

        if (device_may_wakeup(dev))
                ret = disable_irq_wake(info->irq);
        if (device_may_wakeup(dev)) {
                if (info->id_irq > 0)
                        ret = disable_irq_wake(info->id_irq);
                if (info->vbus_irq > 0)
                        ret = disable_irq_wake(info->vbus_irq);
        }

        return ret;
}

@@ -144,6 +144,7 @@ enum sm5502_muic_acc_type {
        SM5502_MUIC_ADC_AUDIO_TYPE1_FULL_REMOTE = 0x3e, /* | 001|11110| */
        SM5502_MUIC_ADC_AUDIO_TYPE1_SEND_END = 0x5e,    /* | 010|11110| */
                                                        /* |Dev Type1|--ADC| */
        SM5502_MUIC_ADC_GROUND_USB_OTG = 0x80,          /* | 100|00000| */
        SM5502_MUIC_ADC_OPEN_USB = 0x5f,                /* | 010|11111| */
        SM5502_MUIC_ADC_OPEN_TA = 0xdf,                 /* | 110|11111| */
        SM5502_MUIC_ADC_OPEN_USB_OTG = 0xff,            /* | 111|11111| */

@@ -291,11 +292,27 @@ static unsigned int sm5502_muic_get_cable_type(struct sm5502_muic_info *info)
         * connected with to MUIC device.
         */
        cable_type = adc & SM5502_REG_ADC_MASK;
        if (cable_type == SM5502_MUIC_ADC_GROUND)
                return SM5502_MUIC_ADC_GROUND;

        switch (cable_type) {
        case SM5502_MUIC_ADC_GROUND:
                ret = regmap_read(info->regmap, SM5502_REG_DEV_TYPE1,
                                  &dev_type1);
                if (ret) {
                        dev_err(info->dev, "failed to read DEV_TYPE1 reg\n");
                        return ret;
                }

                switch (dev_type1) {
                case SM5502_REG_DEV_TYPE1_USB_OTG_MASK:
                        cable_type = SM5502_MUIC_ADC_GROUND_USB_OTG;
                        break;
                default:
                        dev_dbg(info->dev,
                                "cannot identify the cable type: adc(0x%x), dev_type1(0x%x)\n",
                                adc, dev_type1);
                        return -EINVAL;
                }
                break;
        case SM5502_MUIC_ADC_SEND_END_BUTTON:
        case SM5502_MUIC_ADC_REMOTE_S1_BUTTON:
        case SM5502_MUIC_ADC_REMOTE_S2_BUTTON:

@@ -396,6 +413,7 @@ static int sm5502_muic_cable_handler(struct sm5502_muic_info *info,
                con_sw = DM_DP_SWITCH_OPEN;
                vbus_sw = VBUSIN_SWITCH_VBUSOUT;
                break;
        case SM5502_MUIC_ADC_GROUND_USB_OTG:
        case SM5502_MUIC_ADC_OPEN_USB_OTG:
                id = EXTCON_USB_HOST;
                con_sw = DM_DP_SWITCH_USB;

@@ -237,6 +237,7 @@ config INTEL_STRATIX10_RSU
config QCOM_SCM
        bool
        depends on ARM || ARM64
        depends on HAVE_ARM_SMCCC
        select RESET_CONTROLLER

config QCOM_SCM_DOWNLOAD_MODE_DEFAULT

@@ -136,12 +136,16 @@ MODULE_PARM_DESC(spincount,
        "The number of loop iterations to use when using the spin handshake.");

/*
 * Platforms might not support S0ix logging in their GSMI handlers. In order to
 * avoid any side-effects of generating an SMI for S0ix logging, use the S0ix
 * related GSMI commands only for those platforms that explicitly enable this
 * option.
 * Some older platforms with Apollo Lake chipsets do not support S0ix logging
 * in their GSMI handlers, and behaved poorly when resuming via power button
 * press if the logging was attempted. Updated firmware with proper behavior
 * has long since shipped, removing the need for this opt-in parameter. It
 * now exists as an opt-out parameter for folks defiantly running old
 * firmware, or unforeseen circumstances. After the change from opt-in to
 * opt-out has baked sufficiently, this parameter should probably be removed
 * entirely.
 */
static bool s0ix_logging_enable;
static bool s0ix_logging_enable = true;
module_param(s0ix_logging_enable, bool, 0600);

static struct gsmi_buf *gsmi_buf_alloc(void)

@@ -118,10 +118,17 @@ config XILINX_PR_DECOUPLER
        depends on FPGA_BRIDGE
        depends on HAS_IOMEM
        help
          Say Y to enable drivers for Xilinx LogiCORE PR Decoupler.
          Say Y to enable drivers for Xilinx LogiCORE PR Decoupler
          or Xilinx Dynamic Function eXchange AXI Shutdown Manager.
          The PR Decoupler exists in the FPGA fabric to isolate one
          region of the FPGA from the busses while that region is
          being reprogrammed during partial reconfig.
          The Dynamic Function eXchange AXI shutdown manager prevents
          AXI traffic from passing through the bridge. The controller
          safely handles AXI4MM and AXI4-Lite interfaces on a
          Reconfigurable Partition when it is undergoing dynamic
          reconfiguration, preventing the system deadlock that can
          occur if AXI transactions are interrupted by DFX.

config FPGA_REGION
        tristate "FPGA Region"

@@ -52,7 +52,7 @@ static int afu_port_err_clear(struct device *dev, u64 err)
        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
        struct platform_device *pdev = to_platform_device(dev);
        void __iomem *base_err, *base_hdr;
        int ret = -EBUSY;
        int enable_ret = 0, ret = -EBUSY;
        u64 v;

        base_err = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);

@@ -96,18 +96,20 @@ static int afu_port_err_clear(struct device *dev, u64 err)
                v = readq(base_err + PORT_FIRST_ERROR);
                writeq(v, base_err + PORT_FIRST_ERROR);
        } else {
                dev_warn(dev, "%s: received 0x%llx, expected 0x%llx\n",
                         __func__, v, err);
                ret = -EINVAL;
        }

        /* Clear mask */
        __afu_port_err_mask(dev, false);

        /* Enable the Port by clear the reset */
        __afu_port_enable(pdev);
        /* Enable the Port by clearing the reset */
        enable_ret = __afu_port_enable(pdev);

done:
        mutex_unlock(&pdata->lock);
        return ret;
        return enable_ret ? enable_ret : ret;
}

static ssize_t errors_show(struct device *dev, struct device_attribute *attr,

@@ -21,6 +21,9 @@

#include "dfl-afu.h"

#define RST_POLL_INVL 10 /* us */
#define RST_POLL_TIMEOUT 1000 /* us */

/**
 * __afu_port_enable - enable a port by clear reset
 * @pdev: port platform device.

@@ -32,7 +35,7 @@
 *
 * The caller needs to hold lock for protection.
 */
void __afu_port_enable(struct platform_device *pdev)
int __afu_port_enable(struct platform_device *pdev)
{
        struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
        void __iomem *base;

@@ -41,7 +44,7 @@ void __afu_port_enable(struct platform_device *pdev)
        WARN_ON(!pdata->disable_count);

        if (--pdata->disable_count != 0)
                return;
                return 0;

        base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER);

@@ -49,10 +52,20 @@ void __afu_port_enable(struct platform_device *pdev)
        v = readq(base + PORT_HDR_CTRL);
        v &= ~PORT_CTRL_SFTRST;
        writeq(v, base + PORT_HDR_CTRL);
}

#define RST_POLL_INVL 10 /* us */
#define RST_POLL_TIMEOUT 1000 /* us */
        /*
         * HW clears the ack bit to indicate that the port is fully out
         * of reset.
         */
        if (readq_poll_timeout(base + PORT_HDR_CTRL, v,
                               !(v & PORT_CTRL_SFTRST_ACK),
                               RST_POLL_INVL, RST_POLL_TIMEOUT)) {
                dev_err(&pdev->dev, "timeout, failure to enable device\n");
                return -ETIMEDOUT;
        }

        return 0;
}

/**
 * __afu_port_disable - disable a port by hold reset
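readq_poll_timeout(addr, val, cond, delay_us, timeout_us) from linux/iopoll.h polls a 64-bit MMIO register until cond evaluates true for the value read into val, returning -ETIMEDOUT otherwise; the hunk above uses it to wait for hardware to drop the reset-ack bit. An illustrative standalone use (STATUS_READY and the function name are placeholders, not from this driver):

#include <linux/iopoll.h>

/* Wait up to 1 ms, checking every 10 us, for a ready bit to assert */
static int wait_device_ready(void __iomem *status_reg)
{
        u64 v;

        return readq_poll_timeout(status_reg, v, v & STATUS_READY, 10, 1000);
}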
@@ -86,7 +99,7 @@ int __afu_port_disable(struct platform_device *pdev)
        if (readq_poll_timeout(base + PORT_HDR_CTRL, v,
                               v & PORT_CTRL_SFTRST_ACK,
                               RST_POLL_INVL, RST_POLL_TIMEOUT)) {
                dev_err(&pdev->dev, "timeout, fail to reset device\n");
                dev_err(&pdev->dev, "timeout, failure to disable device\n");
                return -ETIMEDOUT;
        }

@@ -110,10 +123,10 @@ static int __port_reset(struct platform_device *pdev)
        int ret;

        ret = __afu_port_disable(pdev);
        if (!ret)
                __afu_port_enable(pdev);

        if (ret)
                return ret;

        return __afu_port_enable(pdev);
}

static int port_reset(struct platform_device *pdev)

@@ -872,11 +885,11 @@ static int afu_dev_destroy(struct platform_device *pdev)
static int port_enable_set(struct platform_device *pdev, bool enable)
{
        struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
        int ret = 0;
        int ret;

        mutex_lock(&pdata->lock);
        if (enable)
                __afu_port_enable(pdev);
                ret = __afu_port_enable(pdev);
        else
                ret = __afu_port_disable(pdev);
        mutex_unlock(&pdata->lock);

@@ -80,7 +80,7 @@ struct dfl_afu {
};

/* hold pdata->lock when call __afu_port_enable/disable */
void __afu_port_enable(struct platform_device *pdev);
int __afu_port_enable(struct platform_device *pdev);
int __afu_port_disable(struct platform_device *pdev);

void afu_mmio_region_init(struct dfl_feature_platform_data *pdata);

@@ -73,10 +73,12 @@ static void cci_pci_free_irq(struct pci_dev *pcidev)
#define PCIE_DEVICE_ID_PF_INT_6_X               0xBCC0
#define PCIE_DEVICE_ID_PF_DSC_1_X               0x09C4
#define PCIE_DEVICE_ID_INTEL_PAC_N3000          0x0B30
#define PCIE_DEVICE_ID_INTEL_PAC_D5005          0x0B2B
/* VF Device */
#define PCIE_DEVICE_ID_VF_INT_5_X               0xBCBF
#define PCIE_DEVICE_ID_VF_INT_6_X               0xBCC1
#define PCIE_DEVICE_ID_VF_DSC_1_X               0x09C5
#define PCIE_DEVICE_ID_INTEL_PAC_D5005_VF       0x0B2C

static struct pci_device_id cci_pcie_id_tbl[] = {
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X),},

@@ -86,6 +88,8 @@ static struct pci_device_id cci_pcie_id_tbl[] = {
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X),},
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X),},
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_N3000),},
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005),},
        {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_INTEL_PAC_D5005_VF),},
        {0,}
};
MODULE_DEVICE_TABLE(pci, cci_pcie_id_tbl);

@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) 2017, National Instruments Corp.
 * Copyright (c) 2017, Xilix Inc
 * Copyright (c) 2017, Xilinx Inc
 *
 * FPGA Bridge Driver for the Xilinx LogiCORE Partial Reconfiguration
 * Decoupler IP Core.

@@ -18,7 +18,12 @@
#define CTRL_CMD_COUPLE 0
#define CTRL_OFFSET 0

struct xlnx_config_data {
        const char *name;
};

struct xlnx_pr_decoupler_data {
        const struct xlnx_config_data *ipconfig;
        void __iomem *io_base;
        struct clk *clk;
};

@@ -76,15 +81,28 @@ static const struct fpga_bridge_ops xlnx_pr_decoupler_br_ops = {
        .enable_show = xlnx_pr_decoupler_enable_show,
};

static const struct xlnx_config_data decoupler_config = {
        .name = "Xilinx PR Decoupler",
};

static const struct xlnx_config_data shutdown_config = {
        .name = "Xilinx DFX AXI Shutdown Manager",
};

static const struct of_device_id xlnx_pr_decoupler_of_match[] = {
        { .compatible = "xlnx,pr-decoupler-1.00", },
        { .compatible = "xlnx,pr-decoupler", },
        { .compatible = "xlnx,pr-decoupler-1.00", .data = &decoupler_config },
        { .compatible = "xlnx,pr-decoupler", .data = &decoupler_config },
        { .compatible = "xlnx,dfx-axi-shutdown-manager-1.00",
                .data = &shutdown_config },
        { .compatible = "xlnx,dfx-axi-shutdown-manager",
                .data = &shutdown_config },
        {},
};
MODULE_DEVICE_TABLE(of, xlnx_pr_decoupler_of_match);

static int xlnx_pr_decoupler_probe(struct platform_device *pdev)
{
        struct device_node *np = pdev->dev.of_node;
        struct xlnx_pr_decoupler_data *priv;
        struct fpga_bridge *br;
        int err;

@@ -94,17 +112,23 @@ static int xlnx_pr_decoupler_probe(struct platform_device *pdev)
        if (!priv)
                return -ENOMEM;

        if (np) {
                const struct of_device_id *match;

                match = of_match_node(xlnx_pr_decoupler_of_match, np);
                if (match && match->data)
                        priv->ipconfig = match->data;
        }

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        priv->io_base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(priv->io_base))
                return PTR_ERR(priv->io_base);

        priv->clk = devm_clk_get(&pdev->dev, "aclk");
        if (IS_ERR(priv->clk)) {
                if (PTR_ERR(priv->clk) != -EPROBE_DEFER)
                        dev_err(&pdev->dev, "input clock not found\n");
                return PTR_ERR(priv->clk);
        }
        if (IS_ERR(priv->clk))
                return dev_err_probe(&pdev->dev, PTR_ERR(priv->clk),
                                     "input clock not found\n");

        err = clk_prepare_enable(priv->clk);
        if (err) {

@@ -114,7 +138,7 @@ static int xlnx_pr_decoupler_probe(struct platform_device *pdev)

        clk_disable(priv->clk);

        br = devm_fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler",
        br = devm_fpga_bridge_create(&pdev->dev, priv->ipconfig->name,
                                     &xlnx_pr_decoupler_br_ops, priv);
        if (!br) {
                err = -ENOMEM;

@@ -125,7 +149,8 @@ static int xlnx_pr_decoupler_probe(struct platform_device *pdev)

        err = fpga_bridge_register(br);
        if (err) {
                dev_err(&pdev->dev, "unable to register Xilinx PR Decoupler");
                dev_err(&pdev->dev, "unable to register %s",
                        priv->ipconfig->name);
                goto err_clk;
        }

@@ -233,25 +233,19 @@ static int xilinx_spi_probe(struct spi_device *spi)

        /* PROGRAM_B is active low */
        conf->prog_b = devm_gpiod_get(&spi->dev, "prog_b", GPIOD_OUT_LOW);
        if (IS_ERR(conf->prog_b)) {
                dev_err(&spi->dev, "Failed to get PROGRAM_B gpio: %ld\n",
                        PTR_ERR(conf->prog_b));
                return PTR_ERR(conf->prog_b);
        }
        if (IS_ERR(conf->prog_b))
                return dev_err_probe(&spi->dev, PTR_ERR(conf->prog_b),
                                     "Failed to get PROGRAM_B gpio\n");

        conf->init_b = devm_gpiod_get_optional(&spi->dev, "init-b", GPIOD_IN);
        if (IS_ERR(conf->init_b)) {
                dev_err(&spi->dev, "Failed to get INIT_B gpio: %ld\n",
                        PTR_ERR(conf->init_b));
                return PTR_ERR(conf->init_b);
        }
        if (IS_ERR(conf->init_b))
                return dev_err_probe(&spi->dev, PTR_ERR(conf->init_b),
                                     "Failed to get INIT_B gpio\n");

        conf->done = devm_gpiod_get(&spi->dev, "done", GPIOD_IN);
        if (IS_ERR(conf->done)) {
                dev_err(&spi->dev, "Failed to get DONE gpio: %ld\n",
                        PTR_ERR(conf->done));
                return PTR_ERR(conf->done);
        }
        if (IS_ERR(conf->done))
                return dev_err_probe(&spi->dev, PTR_ERR(conf->done),
                                     "Failed to get DONE gpio\n");

        mgr = devm_fpga_mgr_create(&spi->dev,
                                   "Xilinx Slave Serial FPGA Manager",

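dev_err_probe(dev, err, fmt, ...) used above collapses the log-then-return idiom: it logs at error level (or quietly records the deferral reason for -EPROBE_DEFER) and returns err, so each GPIO lookup shrinks to a single statement. A hedged before/after sketch in a generic probe (the "reset" resource name is a placeholder):

/* before */
gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(gpio)) {
        if (PTR_ERR(gpio) != -EPROBE_DEFER)
                dev_err(dev, "Failed to get reset gpio: %ld\n", PTR_ERR(gpio));
        return PTR_ERR(gpio);
}

/* after */
gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(gpio))
        return dev_err_probe(dev, PTR_ERR(gpio), "Failed to get reset gpio\n");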
@ -72,11 +72,11 @@ struct es2_cport_in {
|
|||
};
|
||||
|
||||
/**
|
||||
* es2_ap_dev - ES2 USB Bridge to AP structure
|
||||
* struct es2_ap_dev - ES2 USB Bridge to AP structure
|
||||
* @usb_dev: pointer to the USB device we are.
|
||||
* @usb_intf: pointer to the USB interface we are bound to.
|
||||
* @hd: pointer to our gb_host_device structure
|
||||
|
||||
*
|
||||
* @cport_in: endpoint, urbs and buffer for cport in messages
|
||||
* @cport_out_endpoint: endpoint for for cport out messages
|
||||
* @cport_out_urb: array of urbs for the CPort out messages
|
||||
|
@ -85,7 +85,7 @@ struct es2_cport_in {
|
|||
* @cport_out_urb_cancelled: array of flags indicating whether the
|
||||
* corresponding @cport_out_urb is being cancelled
|
||||
* @cport_out_urb_lock: locks the @cport_out_urb_busy "list"
|
||||
*
|
||||
* @cdsi1_in_use: true if cport CDSI1 is in use
|
||||
* @apb_log_task: task pointer for logging thread
|
||||
* @apb_log_dentry: file system entry for the log file interface
|
||||
* @apb_log_enable_dentry: file system entry for enabling logging
|
||||
|
@ -1171,7 +1171,7 @@ static ssize_t apb_log_enable_read(struct file *f, char __user *buf,
|
|||
char tmp_buf[3];
|
||||
|
||||
sprintf(tmp_buf, "%d\n", enable);
|
||||
return simple_read_from_buffer(buf, count, ppos, tmp_buf, 3);
|
||||
return simple_read_from_buffer(buf, count, ppos, tmp_buf, 2);
|
||||
}
|
||||
|
||||
static ssize_t apb_log_enable_write(struct file *f, const char __user *buf,
|
||||
|
|
|
@@ -86,7 +86,7 @@ static int coresight_id_match(struct device *dev, void *data)
            i_csdev->type != CORESIGHT_DEV_TYPE_SOURCE)
                return 0;

        /* Get the source ID for both compoment */
        /* Get the source ID for both components */
        trace_id = source_ops(csdev)->trace_id(csdev);
        i_trace_id = source_ops(i_csdev)->trace_id(i_csdev);

@@ -52,13 +52,13 @@ static ssize_t format_attr_contextid_show(struct device *dev,
{
        int pid_fmt = ETM_OPT_CTXTID;

#if defined(CONFIG_CORESIGHT_SOURCE_ETM4X)
#if IS_ENABLED(CONFIG_CORESIGHT_SOURCE_ETM4X)
        pid_fmt = is_kernel_in_hyp_mode() ? ETM_OPT_CTXTID2 : ETM_OPT_CTXTID;
#endif
        return sprintf(page, "config:%d\n", pid_fmt);
}

struct device_attribute format_attr_contextid =
static struct device_attribute format_attr_contextid =
        __ATTR(contextid, 0444, format_attr_contextid_show, NULL);

static struct attribute *etm_config_formats_attr[] = {

@@ -1951,6 +1951,7 @@ static const struct amba_id etm4_ids[] = {
        CS_AMBA_UCI_ID(0x000bbd05, uci_id_etm4),/* Cortex-A55 */
        CS_AMBA_UCI_ID(0x000bbd0a, uci_id_etm4),/* Cortex-A75 */
        CS_AMBA_UCI_ID(0x000bbd0c, uci_id_etm4),/* Neoverse N1 */
        CS_AMBA_UCI_ID(0x000bbd41, uci_id_etm4),/* Cortex-A78 */
        CS_AMBA_UCI_ID(0x000f0205, uci_id_etm4),/* Qualcomm Kryo */
        CS_AMBA_UCI_ID(0x000f0211, uci_id_etm4),/* Qualcomm Kryo */
        CS_AMBA_UCI_ID(0x000bb802, uci_id_etm4),/* Qualcomm Kryo 385 Cortex-A55 */

@@ -844,7 +844,7 @@ static irqreturn_t intel_th_irq(int irq, void *data)
 * @irq: irq number
 */
struct intel_th *
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
intel_th_alloc(struct device *dev, const struct intel_th_drvdata *drvdata,
               struct resource *devres, unsigned int ndevres)
{
        int err, r, nr_mmios = 0;

@@ -543,7 +543,7 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
        output->active = false;

        for_each_set_bit(master, gth->output[output->port].master,
                         TH_CONFIGURABLE_MASTERS) {
                         TH_CONFIGURABLE_MASTERS + 1) {
                gth_master_set(gth, master, -1);
        }
        spin_unlock(&gth->gth_lock);

@@ -697,7 +697,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
        othdev->output.port = -1;
        othdev->output.active = false;
        gth->output[port].output = NULL;
        for (master = 0; master <= TH_CONFIGURABLE_MASTERS; master++)
        for (master = 0; master < TH_CONFIGURABLE_MASTERS + 1; master++)
                if (gth->master[master] == port)
                        gth->master[master] = -1;
        spin_unlock(&gth->gth_lock);
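Both hunks above correct an off-by-one: the master arrays cover masters 0 through TH_CONFIGURABLE_MASTERS inclusive, and for_each_set_bit()'s third argument is an exclusive bit count, not a last index. In miniature (handle_master() is a placeholder):

DECLARE_BITMAP(masters, TH_CONFIGURABLE_MASTERS + 1);
unsigned int bit;

/* size argument is the number of bits to scan, one past the last index */
for_each_set_bit(bit, masters, TH_CONFIGURABLE_MASTERS + 1)
        handle_master(bit);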
@ -74,7 +74,7 @@ struct intel_th_drvdata {
|
|||
*/
|
||||
struct intel_th_device {
|
||||
struct device dev;
|
||||
struct intel_th_drvdata *drvdata;
|
||||
const struct intel_th_drvdata *drvdata;
|
||||
struct resource *resource;
|
||||
unsigned int num_resources;
|
||||
unsigned int type;
|
||||
|
@ -178,7 +178,7 @@ struct intel_th_driver {
|
|||
/* file_operations for those who want a device node */
|
||||
const struct file_operations *fops;
|
||||
/* optional attributes */
|
||||
struct attribute_group *attr_group;
|
||||
const struct attribute_group *attr_group;
|
||||
|
||||
/* source ops */
|
||||
int (*set_output)(struct intel_th_device *thdev,
|
||||
|
@ -224,7 +224,7 @@ static inline struct intel_th *to_intel_th(struct intel_th_device *thdev)
|
|||
}
|
||||
|
||||
struct intel_th *
|
||||
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
|
||||
intel_th_alloc(struct device *dev, const struct intel_th_drvdata *drvdata,
|
||||
struct resource *devres, unsigned int ndevres);
|
||||
void intel_th_free(struct intel_th *th);
|
||||
|
||||
|
@@ -272,7 +272,7 @@ struct intel_th {
 
 	struct intel_th_device	*thdev[TH_SUBDEVICE_MAX];
 	struct intel_th_device	*hub;
-	struct intel_th_drvdata	*drvdata;
+	const struct intel_th_drvdata	*drvdata;
 
 	struct resource		resource[TH_MMIO_END];
 	int			(*activate)(struct intel_th *);
@@ -2095,7 +2095,7 @@ static struct attribute *msc_output_attrs[] = {
 	NULL,
 };
 
-static struct attribute_group msc_output_group = {
+static const struct attribute_group msc_output_group = {
 	.attrs	= msc_output_attrs,
 };
 
@@ -71,7 +71,7 @@ static void intel_th_pci_deactivate(struct intel_th *th)
 static int intel_th_pci_probe(struct pci_dev *pdev,
 			      const struct pci_device_id *id)
 {
-	struct intel_th_drvdata *drvdata = (void *)id->driver_data;
+	const struct intel_th_drvdata *drvdata = (void *)id->driver_data;
 	struct resource resource[TH_MMIO_END + TH_NVEC_MAX] = {
 		[TH_MMIO_CONFIG]	= pdev->resource[TH_PCI_CONFIG_BAR],
 		[TH_MMIO_SW]		= pdev->resource[TH_PCI_STH_SW_BAR],
@@ -273,11 +273,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6),
 		.driver_data = (kernel_ulong_t)&intel_th_2x,
 	},
+	{
+		/* Alder Lake-M */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
+	},
 	{
 		/* Alder Lake CPU */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
 		.driver_data = (kernel_ulong_t)&intel_th_2x,
 	},
+	{
+		/* Rocket Lake CPU */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c19),
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
+	},
 	{ 0 },
 };
 
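The new ID-table entries follow the file's existing pattern of carrying a pointer through the integer driver_data field of pci_device_id; the probe hunk above casts it back. A stripped-down sketch of that round trip (abbreviated from the surrounding hunks):

	/* store: pointer -> kernel_ulong_t in the match table */
	{
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x54a6),
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},

	/* retrieve: kernel_ulong_t -> const pointer, in the probe path */
	const struct intel_th_drvdata *drvdata = (void *)id->driver_data;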
@@ -142,7 +142,7 @@ static struct attribute *pti_output_attrs[] = {
 	NULL,
 };
 
-static struct attribute_group pti_output_group = {
+static const struct attribute_group pti_output_group = {
 	.attrs	= pti_output_attrs,
 };
 
@@ -295,7 +295,7 @@ static struct attribute *lpp_output_attrs[] = {
 	NULL,
 };
 
-static struct attribute_group lpp_output_group = {
+static const struct attribute_group lpp_output_group = {
 	.attrs	= lpp_output_attrs,
 };
 
@@ -92,7 +92,7 @@ static void sys_t_policy_node_init(void *priv)
 {
 	struct sys_t_policy_node *pn = priv;
 
-	generate_random_uuid(pn->uuid.b);
+	uuid_gen(&pn->uuid);
 }
 
 static int sys_t_output_open(void *priv, struct stm_output *output)
@@ -292,6 +292,7 @@ static ssize_t sys_t_write(struct stm_data *data, struct stm_output *output,
 	unsigned int m = output->master;
 	const unsigned char nil = 0;
 	u32 header = DATA_HEADER;
+	u8 uuid[UUID_SIZE];
 	ssize_t sz;
 
 	/* We require an existing policy node to proceed */
@@ -322,7 +323,8 @@ static ssize_t sys_t_write(struct stm_data *data, struct stm_output *output,
 		return sz;
 
 	/* GUID */
-	sz = stm_data_write(data, m, c, false, op->node.uuid.b, UUID_SIZE);
+	export_uuid(uuid, &op->node.uuid);
+	sz = stm_data_write(data, m, c, false, uuid, sizeof(op->node.uuid));
 	if (sz <= 0)
 		return sz;
 
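The two sys-t hunks above replace open-coded access to the uuid_t internals (op->node.uuid.b) with the generic UUID helpers: uuid_gen() fills a uuid_t with a random UUID, and export_uuid() copies its 16 raw bytes into a caller-owned buffer that can then be handed to stm_data_write(). A minimal sketch of the API as used here:

	#include <linux/uuid.h>

	uuid_t id;
	u8 buf[UUID_SIZE];	/* UUID_SIZE is 16 */

	uuid_gen(&id);		/* generate a random UUID      */
	export_uuid(buf, &id);	/* serialize it into raw bytes */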
@@ -57,11 +57,6 @@ void stp_policy_node_get_ranges(struct stp_policy_node *policy_node,
 	*cend = policy_node->last_channel;
 }
 
-static inline char *stp_policy_node_name(struct stp_policy_node *policy_node)
-{
-	return policy_node->group.cg_item.ci_name ? : "<none>";
-}
-
 static inline struct stp_policy *to_stp_policy(struct config_item *item)
 {
 	return item ?
@@ -74,6 +74,15 @@ config INTERCONNECT_QCOM_SC7180
 	  This is a driver for the Qualcomm Network-on-Chip on sc7180-based
 	  platforms.
 
+config INTERCONNECT_QCOM_SDM660
+	tristate "Qualcomm SDM660 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	select INTERCONNECT_QCOM_SMD_RPM
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on sdm660-based
+	  platforms.
+
 config INTERCONNECT_QCOM_SDM845
 	tristate "Qualcomm SDM845 interconnect driver"
 	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
@@ -110,5 +119,14 @@ config INTERCONNECT_QCOM_SM8250
 	  This is a driver for the Qualcomm Network-on-Chip on sm8250-based
 	  platforms.
 
+config INTERCONNECT_QCOM_SM8350
+	tristate "Qualcomm SM8350 interconnect driver"
+	depends on INTERCONNECT_QCOM_RPMH_POSSIBLE
+	select INTERCONNECT_QCOM_RPMH
+	select INTERCONNECT_QCOM_BCM_VOTER
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on SM8350-based
+	  platforms.
+
 config INTERCONNECT_QCOM_SMD_RPM
 	tristate
@@ -8,10 +8,12 @@ icc-osm-l3-objs := osm-l3.o
 qnoc-qcs404-objs			:= qcs404.o
 icc-rpmh-obj				:= icc-rpmh.o
 qnoc-sc7180-objs			:= sc7180.o
+qnoc-sdm660-objs			:= sdm660.o
 qnoc-sdm845-objs			:= sdm845.o
 qnoc-sdx55-objs				:= sdx55.o
 qnoc-sm8150-objs			:= sm8150.o
 qnoc-sm8250-objs			:= sm8250.o
+qnoc-sm8350-objs			:= sm8350.o
 icc-smd-rpm-objs			:= smd-rpm.o icc-rpm.o
 
 obj-$(CONFIG_INTERCONNECT_QCOM_BCM_VOTER) += icc-bcm-voter.o
@@ -22,8 +24,10 @@ obj-$(CONFIG_INTERCONNECT_QCOM_OSM_L3) += icc-osm-l3.o
 obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
 obj-$(CONFIG_INTERCONNECT_QCOM_RPMH) += icc-rpmh.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SC7180) += qnoc-sc7180.o
+obj-$(CONFIG_INTERCONNECT_QCOM_SDM660) += qnoc-sdm660.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SDX55) += qnoc-sdx55.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SM8150) += qnoc-sm8150.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SM8250) += qnoc-sm8250.o
+obj-$(CONFIG_INTERCONNECT_QCOM_SM8350) += qnoc-sm8350.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += icc-smd-rpm.o
@@ -59,8 +59,8 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 						    qn->slv_rpm_id,
 						    sum_bw);
 		if (ret) {
-			pr_err("qcom_icc_rpm_smd_send slv error %d\n",
-			       ret);
+			pr_err("qcom_icc_rpm_smd_send slv %d error %d\n",
+			       qn->slv_rpm_id, ret);
 			return ret;
 		}
 	}
@@ -0,0 +1,923 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Qualcomm SDM630/SDM636/SDM660 Network-on-Chip (NoC) QoS driver
+ * Copyright (C) 2020, AngeloGioacchino Del Regno <kholk11@gmail.com>
+ */
+
+#include <dt-bindings/interconnect/qcom,sdm660.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+
+#include "smd-rpm.h"
+
+#define RPM_BUS_MASTER_REQ	0x73616d62
+#define RPM_BUS_SLAVE_REQ	0x766c7362
+
+/* BIMC QoS */
+#define M_BKE_REG_BASE(n)		(0x300 + (0x4000 * n))
+#define M_BKE_EN_ADDR(n)		(M_BKE_REG_BASE(n))
+#define M_BKE_HEALTH_CFG_ADDR(i, n)	(M_BKE_REG_BASE(n) + 0x40 + (0x4 * i))
+
+#define M_BKE_HEALTH_CFG_LIMITCMDS_MASK	0x80000000
+#define M_BKE_HEALTH_CFG_AREQPRIO_MASK	0x300
+#define M_BKE_HEALTH_CFG_PRIOLVL_MASK	0x3
+#define M_BKE_HEALTH_CFG_AREQPRIO_SHIFT	0x8
+#define M_BKE_HEALTH_CFG_LIMITCMDS_SHIFT 0x1f
+
+#define M_BKE_EN_EN_BMASK		0x1
+
+/* Valid for both NoC and BIMC */
+#define NOC_QOS_MODE_FIXED		0x0
+#define NOC_QOS_MODE_LIMITER		0x1
+#define NOC_QOS_MODE_BYPASS		0x2
+
+/* NoC QoS */
+#define NOC_PERM_MODE_FIXED		1
+#define NOC_PERM_MODE_BYPASS		(1 << NOC_QOS_MODE_BYPASS)
+
+#define NOC_QOS_PRIORITYn_ADDR(n)	(0x8 + (n * 0x1000))
+#define NOC_QOS_PRIORITY_MASK		0xf
+#define NOC_QOS_PRIORITY_P1_SHIFT	0x2
+#define NOC_QOS_PRIORITY_P0_SHIFT	0x3
+
+#define NOC_QOS_MODEn_ADDR(n)		(0xc + (n * 0x1000))
+#define NOC_QOS_MODEn_MASK		0x3
+
+enum {
+	SDM660_MASTER_IPA = 1,
+	SDM660_MASTER_CNOC_A2NOC,
+	SDM660_MASTER_SDCC_1,
+	SDM660_MASTER_SDCC_2,
+	SDM660_MASTER_BLSP_1,
+	SDM660_MASTER_BLSP_2,
+	SDM660_MASTER_UFS,
+	SDM660_MASTER_USB_HS,
+	SDM660_MASTER_USB3,
+	SDM660_MASTER_CRYPTO_C0,
+	SDM660_MASTER_GNOC_BIMC,
+	SDM660_MASTER_OXILI,
+	SDM660_MASTER_MNOC_BIMC,
+	SDM660_MASTER_SNOC_BIMC,
+	SDM660_MASTER_PIMEM,
+	SDM660_MASTER_SNOC_CNOC,
+	SDM660_MASTER_QDSS_DAP,
+	SDM660_MASTER_APPS_PROC,
+	SDM660_MASTER_CNOC_MNOC_MMSS_CFG,
+	SDM660_MASTER_CNOC_MNOC_CFG,
+	SDM660_MASTER_CPP,
+	SDM660_MASTER_JPEG,
+	SDM660_MASTER_MDP_P0,
+	SDM660_MASTER_MDP_P1,
+	SDM660_MASTER_VENUS,
+	SDM660_MASTER_VFE,
+	SDM660_MASTER_QDSS_ETR,
+	SDM660_MASTER_QDSS_BAM,
+	SDM660_MASTER_SNOC_CFG,
+	SDM660_MASTER_BIMC_SNOC,
+	SDM660_MASTER_A2NOC_SNOC,
+	SDM660_MASTER_GNOC_SNOC,
+
+	SDM660_SLAVE_A2NOC_SNOC,
+	SDM660_SLAVE_EBI,
+	SDM660_SLAVE_HMSS_L3,
+	SDM660_SLAVE_BIMC_SNOC,
+	SDM660_SLAVE_CNOC_A2NOC,
+	SDM660_SLAVE_MPM,
+	SDM660_SLAVE_PMIC_ARB,
+	SDM660_SLAVE_TLMM_NORTH,
+	SDM660_SLAVE_TCSR,
+	SDM660_SLAVE_PIMEM_CFG,
+	SDM660_SLAVE_IMEM_CFG,
+	SDM660_SLAVE_MESSAGE_RAM,
+	SDM660_SLAVE_GLM,
+	SDM660_SLAVE_BIMC_CFG,
+	SDM660_SLAVE_PRNG,
+	SDM660_SLAVE_SPDM,
+	SDM660_SLAVE_QDSS_CFG,
+	SDM660_SLAVE_CNOC_MNOC_CFG,
+	SDM660_SLAVE_SNOC_CFG,
+	SDM660_SLAVE_QM_CFG,
+	SDM660_SLAVE_CLK_CTL,
+	SDM660_SLAVE_MSS_CFG,
+	SDM660_SLAVE_TLMM_SOUTH,
+	SDM660_SLAVE_UFS_CFG,
+	SDM660_SLAVE_A2NOC_CFG,
+	SDM660_SLAVE_A2NOC_SMMU_CFG,
+	SDM660_SLAVE_GPUSS_CFG,
+	SDM660_SLAVE_AHB2PHY,
+	SDM660_SLAVE_BLSP_1,
+	SDM660_SLAVE_SDCC_1,
+	SDM660_SLAVE_SDCC_2,
+	SDM660_SLAVE_TLMM_CENTER,
+	SDM660_SLAVE_BLSP_2,
+	SDM660_SLAVE_PDM,
+	SDM660_SLAVE_CNOC_MNOC_MMSS_CFG,
+	SDM660_SLAVE_USB_HS,
+	SDM660_SLAVE_USB3_0,
+	SDM660_SLAVE_SRVC_CNOC,
+	SDM660_SLAVE_GNOC_BIMC,
+	SDM660_SLAVE_GNOC_SNOC,
+	SDM660_SLAVE_CAMERA_CFG,
+	SDM660_SLAVE_CAMERA_THROTTLE_CFG,
+	SDM660_SLAVE_MISC_CFG,
+	SDM660_SLAVE_VENUS_THROTTLE_CFG,
+	SDM660_SLAVE_VENUS_CFG,
+	SDM660_SLAVE_MMSS_CLK_XPU_CFG,
+	SDM660_SLAVE_MMSS_CLK_CFG,
+	SDM660_SLAVE_MNOC_MPU_CFG,
+	SDM660_SLAVE_DISPLAY_CFG,
+	SDM660_SLAVE_CSI_PHY_CFG,
+	SDM660_SLAVE_DISPLAY_THROTTLE_CFG,
+	SDM660_SLAVE_SMMU_CFG,
+	SDM660_SLAVE_MNOC_BIMC,
+	SDM660_SLAVE_SRVC_MNOC,
+	SDM660_SLAVE_HMSS,
+	SDM660_SLAVE_LPASS,
+	SDM660_SLAVE_WLAN,
+	SDM660_SLAVE_CDSP,
+	SDM660_SLAVE_IPA,
+	SDM660_SLAVE_SNOC_BIMC,
+	SDM660_SLAVE_SNOC_CNOC,
+	SDM660_SLAVE_IMEM,
+	SDM660_SLAVE_PIMEM,
+	SDM660_SLAVE_QDSS_STM,
+	SDM660_SLAVE_SRVC_SNOC,
+
+	SDM660_A2NOC,
+	SDM660_BIMC,
+	SDM660_CNOC,
+	SDM660_GNOC,
+	SDM660_MNOC,
+	SDM660_SNOC,
+};
+
+#define to_qcom_provider(_provider) \
+	container_of(_provider, struct qcom_icc_provider, provider)
+
+static const struct clk_bulk_data bus_clocks[] = {
+	{ .id = "bus" },
+	{ .id = "bus_a" },
+};
+
+static const struct clk_bulk_data bus_mm_clocks[] = {
+	{ .id = "bus" },
+	{ .id = "bus_a" },
+	{ .id = "iface" },
+};
+
+/**
+ * struct qcom_icc_provider - Qualcomm specific interconnect provider
+ * @provider: generic interconnect provider
+ * @bus_clks: the clk_bulk_data table of bus clocks
+ * @num_clks: the total number of clk_bulk_data entries
+ * @is_bimc_node: indicates whether to use bimc specific setting
+ * @regmap: regmap for QoS registers read/write access
+ * @mmio: NoC base iospace
+ */
+struct qcom_icc_provider {
+	struct icc_provider provider;
+	struct clk_bulk_data *bus_clks;
+	int num_clks;
+	bool is_bimc_node;
+	struct regmap *regmap;
+	void __iomem *mmio;
+};
+
+#define SDM660_MAX_LINKS	34
+
+/**
+ * struct qcom_icc_qos - Qualcomm specific interconnect QoS parameters
+ * @areq_prio: node requests priority
+ * @prio_level: priority level for bus communication
+ * @limit_commands: activate/deactivate limiter mode during runtime
+ * @ap_owned: indicates if the node is owned by the AP or by the RPM
+ * @qos_mode: default qos mode for this node
+ * @qos_port: qos port number for finding qos registers of this node
+ */
+struct qcom_icc_qos {
+	u32 areq_prio;
+	u32 prio_level;
+	bool limit_commands;
+	bool ap_owned;
+	int qos_mode;
+	int qos_port;
+};
+
+/**
+ * struct qcom_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @id: a unique node identifier
+ * @links: an array of nodes where we can go next while traversing
+ * @num_links: the total number of @links
+ * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @mas_rpm_id: RPM id for devices that are bus masters
+ * @slv_rpm_id: RPM id for devices that are bus slaves
+ * @qos: NoC QoS setting parameters
+ * @rate: current bus clock rate in Hz
+ */
+struct qcom_icc_node {
+	unsigned char *name;
+	u16 id;
+	u16 links[SDM660_MAX_LINKS];
+	u16 num_links;
+	u16 buswidth;
+	int mas_rpm_id;
+	int slv_rpm_id;
+	struct qcom_icc_qos qos;
+	u64 rate;
+};
+
+struct qcom_icc_desc {
+	struct qcom_icc_node **nodes;
+	size_t num_nodes;
+	const struct regmap_config *regmap_cfg;
+};
+
+#define DEFINE_QNODE(_name, _id, _buswidth, _mas_rpm_id, _slv_rpm_id,	\
+		     _ap_owned, _qos_mode, _qos_prio, _qos_port, ...)	\
+	static struct qcom_icc_node _name = {				\
+		.name = #_name,						\
+		.id = _id,						\
+		.buswidth = _buswidth,					\
+		.mas_rpm_id = _mas_rpm_id,				\
+		.slv_rpm_id = _slv_rpm_id,				\
+		.qos.ap_owned = _ap_owned,				\
+		.qos.qos_mode = _qos_mode,				\
+		.qos.areq_prio = _qos_prio,				\
+		.qos.prio_level = _qos_prio,				\
+		.qos.qos_port = _qos_port,				\
+		.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })),	\
+		.links = { __VA_ARGS__ },				\
+	}
+
+DEFINE_QNODE(mas_ipa, SDM660_MASTER_IPA, 8, 59, -1, true, NOC_QOS_MODE_FIXED, 1, 3, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_cnoc_a2noc, SDM660_MASTER_CNOC_A2NOC, 8, 146, -1, true, -1, 0, -1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_sdcc_1, SDM660_MASTER_SDCC_1, 8, 33, -1, false, -1, 0, -1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_sdcc_2, SDM660_MASTER_SDCC_2, 8, 35, -1, false, -1, 0, -1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_blsp_1, SDM660_MASTER_BLSP_1, 4, 41, -1, false, -1, 0, -1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_blsp_2, SDM660_MASTER_BLSP_2, 4, 39, -1, false, -1, 0, -1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_ufs, SDM660_MASTER_UFS, 8, 68, -1, true, NOC_QOS_MODE_FIXED, 1, 4, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_usb_hs, SDM660_MASTER_USB_HS, 8, 42, -1, true, NOC_QOS_MODE_FIXED, 1, 1, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_usb3, SDM660_MASTER_USB3, 8, 32, -1, true, NOC_QOS_MODE_FIXED, 1, 2, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_crypto, SDM660_MASTER_CRYPTO_C0, 8, 23, -1, true, NOC_QOS_MODE_FIXED, 1, 11, SDM660_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(mas_gnoc_bimc, SDM660_MASTER_GNOC_BIMC, 4, 144, -1, true, NOC_QOS_MODE_FIXED, 0, 0, SDM660_SLAVE_EBI);
+DEFINE_QNODE(mas_oxili, SDM660_MASTER_OXILI, 4, 6, -1, true, NOC_QOS_MODE_BYPASS, 0, 1, SDM660_SLAVE_HMSS_L3, SDM660_SLAVE_EBI, SDM660_SLAVE_BIMC_SNOC);
+DEFINE_QNODE(mas_mnoc_bimc, SDM660_MASTER_MNOC_BIMC, 4, 2, -1, true, NOC_QOS_MODE_BYPASS, 0, 2, SDM660_SLAVE_HMSS_L3, SDM660_SLAVE_EBI, SDM660_SLAVE_BIMC_SNOC);
+DEFINE_QNODE(mas_snoc_bimc, SDM660_MASTER_SNOC_BIMC, 4, 3, -1, false, -1, 0, -1, SDM660_SLAVE_HMSS_L3, SDM660_SLAVE_EBI);
+DEFINE_QNODE(mas_pimem, SDM660_MASTER_PIMEM, 4, 113, -1, true, NOC_QOS_MODE_FIXED, 1, 4, SDM660_SLAVE_HMSS_L3, SDM660_SLAVE_EBI);
+DEFINE_QNODE(mas_snoc_cnoc, SDM660_MASTER_SNOC_CNOC, 8, 52, -1, true, -1, 0, -1, SDM660_SLAVE_CLK_CTL, SDM660_SLAVE_QDSS_CFG, SDM660_SLAVE_QM_CFG, SDM660_SLAVE_SRVC_CNOC, SDM660_SLAVE_UFS_CFG, SDM660_SLAVE_TCSR, SDM660_SLAVE_A2NOC_SMMU_CFG, SDM660_SLAVE_SNOC_CFG, SDM660_SLAVE_TLMM_SOUTH, SDM660_SLAVE_MPM, SDM660_SLAVE_CNOC_MNOC_MMSS_CFG, SDM660_SLAVE_SDCC_2, SDM660_SLAVE_SDCC_1, SDM660_SLAVE_SPDM, SDM660_SLAVE_PMIC_ARB, SDM660_SLAVE_PRNG, SDM660_SLAVE_MSS_CFG, SDM660_SLAVE_GPUSS_CFG, SDM660_SLAVE_IMEM_CFG, SDM660_SLAVE_USB3_0, SDM660_SLAVE_A2NOC_CFG, SDM660_SLAVE_TLMM_NORTH, SDM660_SLAVE_USB_HS, SDM660_SLAVE_PDM, SDM660_SLAVE_TLMM_CENTER, SDM660_SLAVE_AHB2PHY, SDM660_SLAVE_BLSP_2, SDM660_SLAVE_BLSP_1, SDM660_SLAVE_PIMEM_CFG, SDM660_SLAVE_GLM, SDM660_SLAVE_MESSAGE_RAM, SDM660_SLAVE_BIMC_CFG, SDM660_SLAVE_CNOC_MNOC_CFG);
+DEFINE_QNODE(mas_qdss_dap, SDM660_MASTER_QDSS_DAP, 8, 49, -1, true, -1, 0, -1, SDM660_SLAVE_CLK_CTL, SDM660_SLAVE_QDSS_CFG, SDM660_SLAVE_QM_CFG, SDM660_SLAVE_SRVC_CNOC, SDM660_SLAVE_UFS_CFG, SDM660_SLAVE_TCSR, SDM660_SLAVE_A2NOC_SMMU_CFG, SDM660_SLAVE_SNOC_CFG, SDM660_SLAVE_TLMM_SOUTH, SDM660_SLAVE_MPM, SDM660_SLAVE_CNOC_MNOC_MMSS_CFG, SDM660_SLAVE_SDCC_2, SDM660_SLAVE_SDCC_1, SDM660_SLAVE_SPDM, SDM660_SLAVE_PMIC_ARB, SDM660_SLAVE_PRNG, SDM660_SLAVE_MSS_CFG, SDM660_SLAVE_GPUSS_CFG, SDM660_SLAVE_IMEM_CFG, SDM660_SLAVE_USB3_0, SDM660_SLAVE_A2NOC_CFG, SDM660_SLAVE_TLMM_NORTH, SDM660_SLAVE_USB_HS, SDM660_SLAVE_PDM, SDM660_SLAVE_TLMM_CENTER, SDM660_SLAVE_AHB2PHY, SDM660_SLAVE_BLSP_2, SDM660_SLAVE_BLSP_1, SDM660_SLAVE_PIMEM_CFG, SDM660_SLAVE_GLM, SDM660_SLAVE_MESSAGE_RAM, SDM660_SLAVE_CNOC_A2NOC, SDM660_SLAVE_BIMC_CFG, SDM660_SLAVE_CNOC_MNOC_CFG);
+DEFINE_QNODE(mas_apss_proc, SDM660_MASTER_APPS_PROC, 16, 0, -1, true, -1, 0, -1, SDM660_SLAVE_GNOC_SNOC, SDM660_SLAVE_GNOC_BIMC);
+DEFINE_QNODE(mas_cnoc_mnoc_mmss_cfg, SDM660_MASTER_CNOC_MNOC_MMSS_CFG, 8, 4, -1, true, -1, 0, -1, SDM660_SLAVE_VENUS_THROTTLE_CFG, SDM660_SLAVE_VENUS_CFG, SDM660_SLAVE_CAMERA_THROTTLE_CFG, SDM660_SLAVE_SMMU_CFG, SDM660_SLAVE_CAMERA_CFG, SDM660_SLAVE_CSI_PHY_CFG, SDM660_SLAVE_DISPLAY_THROTTLE_CFG, SDM660_SLAVE_DISPLAY_CFG, SDM660_SLAVE_MMSS_CLK_CFG, SDM660_SLAVE_MNOC_MPU_CFG, SDM660_SLAVE_MISC_CFG, SDM660_SLAVE_MMSS_CLK_XPU_CFG);
+DEFINE_QNODE(mas_cnoc_mnoc_cfg, SDM660_MASTER_CNOC_MNOC_CFG, 4, 5, -1, true, -1, 0, -1, SDM660_SLAVE_SRVC_MNOC);
+DEFINE_QNODE(mas_cpp, SDM660_MASTER_CPP, 16, 115, -1, true, NOC_QOS_MODE_BYPASS, 0, 4, SDM660_SLAVE_MNOC_BIMC);
+DEFINE_QNODE(mas_jpeg, SDM660_MASTER_JPEG, 16, 7, -1, true, NOC_QOS_MODE_BYPASS, 0, 6, SDM660_SLAVE_MNOC_BIMC);
+DEFINE_QNODE(mas_mdp_p0, SDM660_MASTER_MDP_P0, 16, 8, -1, true, NOC_QOS_MODE_BYPASS, 0, 0, SDM660_SLAVE_MNOC_BIMC); /* vrail-comp???? */
+DEFINE_QNODE(mas_mdp_p1, SDM660_MASTER_MDP_P1, 16, 61, -1, true, NOC_QOS_MODE_BYPASS, 0, 1, SDM660_SLAVE_MNOC_BIMC); /* vrail-comp??? */
+DEFINE_QNODE(mas_venus, SDM660_MASTER_VENUS, 16, 9, -1, true, NOC_QOS_MODE_BYPASS, 0, 1, SDM660_SLAVE_MNOC_BIMC);
+DEFINE_QNODE(mas_vfe, SDM660_MASTER_VFE, 16, 11, -1, true, NOC_QOS_MODE_BYPASS, 0, 5, SDM660_SLAVE_MNOC_BIMC);
+DEFINE_QNODE(mas_qdss_etr, SDM660_MASTER_QDSS_ETR, 8, 31, -1, true, NOC_QOS_MODE_FIXED, 1, 1, SDM660_SLAVE_PIMEM, SDM660_SLAVE_IMEM, SDM660_SLAVE_SNOC_CNOC, SDM660_SLAVE_SNOC_BIMC);
+DEFINE_QNODE(mas_qdss_bam, SDM660_MASTER_QDSS_BAM, 4, 19, -1, true, NOC_QOS_MODE_FIXED, 1, 0, SDM660_SLAVE_PIMEM, SDM660_SLAVE_IMEM, SDM660_SLAVE_SNOC_CNOC, SDM660_SLAVE_SNOC_BIMC);
+DEFINE_QNODE(mas_snoc_cfg, SDM660_MASTER_SNOC_CFG, 4, 20, -1, false, -1, 0, -1, SDM660_SLAVE_SRVC_SNOC);
+DEFINE_QNODE(mas_bimc_snoc, SDM660_MASTER_BIMC_SNOC, 8, 21, -1, false, -1, 0, -1, SDM660_SLAVE_PIMEM, SDM660_SLAVE_IPA, SDM660_SLAVE_QDSS_STM, SDM660_SLAVE_LPASS, SDM660_SLAVE_HMSS, SDM660_SLAVE_CDSP, SDM660_SLAVE_SNOC_CNOC, SDM660_SLAVE_WLAN, SDM660_SLAVE_IMEM);
+DEFINE_QNODE(mas_gnoc_snoc, SDM660_MASTER_GNOC_SNOC, 8, 150, -1, false, -1, 0, -1, SDM660_SLAVE_PIMEM, SDM660_SLAVE_IPA, SDM660_SLAVE_QDSS_STM, SDM660_SLAVE_LPASS, SDM660_SLAVE_HMSS, SDM660_SLAVE_CDSP, SDM660_SLAVE_SNOC_CNOC, SDM660_SLAVE_WLAN, SDM660_SLAVE_IMEM);
+DEFINE_QNODE(mas_a2noc_snoc, SDM660_MASTER_A2NOC_SNOC, 16, 112, -1, false, -1, 0, -1, SDM660_SLAVE_PIMEM, SDM660_SLAVE_IPA, SDM660_SLAVE_QDSS_STM, SDM660_SLAVE_LPASS, SDM660_SLAVE_HMSS, SDM660_SLAVE_SNOC_BIMC, SDM660_SLAVE_CDSP, SDM660_SLAVE_SNOC_CNOC, SDM660_SLAVE_WLAN, SDM660_SLAVE_IMEM);
+DEFINE_QNODE(slv_a2noc_snoc, SDM660_SLAVE_A2NOC_SNOC, 16, -1, 143, false, -1, 0, -1, SDM660_MASTER_A2NOC_SNOC);
+DEFINE_QNODE(slv_ebi, SDM660_SLAVE_EBI, 4, -1, 0, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_hmss_l3, SDM660_SLAVE_HMSS_L3, 4, -1, 160, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_bimc_snoc, SDM660_SLAVE_BIMC_SNOC, 4, -1, 2, false, -1, 0, -1, SDM660_MASTER_BIMC_SNOC);
+DEFINE_QNODE(slv_cnoc_a2noc, SDM660_SLAVE_CNOC_A2NOC, 8, -1, 208, true, -1, 0, -1, SDM660_MASTER_CNOC_A2NOC);
+DEFINE_QNODE(slv_mpm, SDM660_SLAVE_MPM, 4, -1, 62, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_pmic_arb, SDM660_SLAVE_PMIC_ARB, 4, -1, 59, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_tlmm_north, SDM660_SLAVE_TLMM_NORTH, 8, -1, 214, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_tcsr, SDM660_SLAVE_TCSR, 4, -1, 50, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_pimem_cfg, SDM660_SLAVE_PIMEM_CFG, 4, -1, 167, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_imem_cfg, SDM660_SLAVE_IMEM_CFG, 4, -1, 54, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_message_ram, SDM660_SLAVE_MESSAGE_RAM, 4, -1, 55, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_glm, SDM660_SLAVE_GLM, 4, -1, 209, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_bimc_cfg, SDM660_SLAVE_BIMC_CFG, 4, -1, 56, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_prng, SDM660_SLAVE_PRNG, 4, -1, 44, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_spdm, SDM660_SLAVE_SPDM, 4, -1, 60, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_qdss_cfg, SDM660_SLAVE_QDSS_CFG, 4, -1, 63, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_cnoc_mnoc_cfg, SDM660_SLAVE_BLSP_1, 4, -1, 66, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_CFG);
+DEFINE_QNODE(slv_snoc_cfg, SDM660_SLAVE_SNOC_CFG, 4, -1, 70, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_qm_cfg, SDM660_SLAVE_QM_CFG, 4, -1, 212, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_clk_ctl, SDM660_SLAVE_CLK_CTL, 4, -1, 47, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_mss_cfg, SDM660_SLAVE_MSS_CFG, 4, -1, 48, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_tlmm_south, SDM660_SLAVE_TLMM_SOUTH, 4, -1, 217, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_ufs_cfg, SDM660_SLAVE_UFS_CFG, 4, -1, 92, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_a2noc_cfg, SDM660_SLAVE_A2NOC_CFG, 4, -1, 150, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_a2noc_smmu_cfg, SDM660_SLAVE_A2NOC_SMMU_CFG, 8, -1, 152, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_gpuss_cfg, SDM660_SLAVE_GPUSS_CFG, 8, -1, 11, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_ahb2phy, SDM660_SLAVE_AHB2PHY, 4, -1, 163, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_blsp_1, SDM660_SLAVE_BLSP_1, 4, -1, 39, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_sdcc_1, SDM660_SLAVE_SDCC_1, 4, -1, 31, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_sdcc_2, SDM660_SLAVE_SDCC_2, 4, -1, 33, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_tlmm_center, SDM660_SLAVE_TLMM_CENTER, 4, -1, 218, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_blsp_2, SDM660_SLAVE_BLSP_2, 4, -1, 37, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_pdm, SDM660_SLAVE_PDM, 4, -1, 41, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_cnoc_mnoc_mmss_cfg, SDM660_SLAVE_CNOC_MNOC_MMSS_CFG, 8, -1, 58, true, -1, 0, -1, SDM660_MASTER_CNOC_MNOC_MMSS_CFG);
+DEFINE_QNODE(slv_usb_hs, SDM660_SLAVE_USB_HS, 4, -1, 40, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_usb3_0, SDM660_SLAVE_USB3_0, 4, -1, 22, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_srvc_cnoc, SDM660_SLAVE_SRVC_CNOC, 4, -1, 76, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_gnoc_bimc, SDM660_SLAVE_GNOC_BIMC, 16, -1, 210, true, -1, 0, -1, SDM660_MASTER_GNOC_BIMC);
+DEFINE_QNODE(slv_gnoc_snoc, SDM660_SLAVE_GNOC_SNOC, 8, -1, 211, true, -1, 0, -1, SDM660_MASTER_GNOC_SNOC);
+DEFINE_QNODE(slv_camera_cfg, SDM660_SLAVE_CAMERA_CFG, 4, -1, 3, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_camera_throttle_cfg, SDM660_SLAVE_CAMERA_THROTTLE_CFG, 4, -1, 154, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_misc_cfg, SDM660_SLAVE_MISC_CFG, 4, -1, 8, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_venus_throttle_cfg, SDM660_SLAVE_VENUS_THROTTLE_CFG, 4, -1, 178, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_venus_cfg, SDM660_SLAVE_VENUS_CFG, 4, -1, 10, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_mmss_clk_xpu_cfg, SDM660_SLAVE_MMSS_CLK_XPU_CFG, 4, -1, 13, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_mmss_clk_cfg, SDM660_SLAVE_MMSS_CLK_CFG, 4, -1, 12, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_mnoc_mpu_cfg, SDM660_SLAVE_MNOC_MPU_CFG, 4, -1, 14, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_display_cfg, SDM660_SLAVE_DISPLAY_CFG, 4, -1, 4, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_csi_phy_cfg, SDM660_SLAVE_CSI_PHY_CFG, 4, -1, 224, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_display_throttle_cfg, SDM660_SLAVE_DISPLAY_THROTTLE_CFG, 4, -1, 156, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_smmu_cfg, SDM660_SLAVE_SMMU_CFG, 8, -1, 205, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_mnoc_bimc, SDM660_SLAVE_MNOC_BIMC, 16, -1, 16, true, -1, 0, -1, SDM660_MASTER_MNOC_BIMC);
+DEFINE_QNODE(slv_srvc_mnoc, SDM660_SLAVE_SRVC_MNOC, 8, -1, 17, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_hmss, SDM660_SLAVE_HMSS, 8, -1, 20, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_lpass, SDM660_SLAVE_LPASS, 4, -1, 21, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_wlan, SDM660_SLAVE_WLAN, 4, -1, 206, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_cdsp, SDM660_SLAVE_CDSP, 4, -1, 221, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_ipa, SDM660_SLAVE_IPA, 4, -1, 183, true, -1, 0, -1, 0);
+DEFINE_QNODE(slv_snoc_bimc, SDM660_SLAVE_SNOC_BIMC, 16, -1, 24, false, -1, 0, -1, SDM660_MASTER_SNOC_BIMC);
+DEFINE_QNODE(slv_snoc_cnoc, SDM660_SLAVE_SNOC_CNOC, 8, -1, 25, false, -1, 0, -1, SDM660_MASTER_SNOC_CNOC);
+DEFINE_QNODE(slv_imem, SDM660_SLAVE_IMEM, 8, -1, 26, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_pimem, SDM660_SLAVE_PIMEM, 8, -1, 166, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_qdss_stm, SDM660_SLAVE_QDSS_STM, 4, -1, 30, false, -1, 0, -1, 0);
+DEFINE_QNODE(slv_srvc_snoc, SDM660_SLAVE_SRVC_SNOC, 16, -1, 29, false, -1, 0, -1, 0);
+
+static struct qcom_icc_node *sdm660_a2noc_nodes[] = {
+	[MASTER_IPA] = &mas_ipa,
+	[MASTER_CNOC_A2NOC] = &mas_cnoc_a2noc,
+	[MASTER_SDCC_1] = &mas_sdcc_1,
+	[MASTER_SDCC_2] = &mas_sdcc_2,
+	[MASTER_BLSP_1] = &mas_blsp_1,
+	[MASTER_BLSP_2] = &mas_blsp_2,
+	[MASTER_UFS] = &mas_ufs,
+	[MASTER_USB_HS] = &mas_usb_hs,
+	[MASTER_USB3] = &mas_usb3,
+	[MASTER_CRYPTO_C0] = &mas_crypto,
+	[SLAVE_A2NOC_SNOC] = &slv_a2noc_snoc,
+};
+
+static const struct regmap_config sdm660_a2noc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0x20000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_a2noc = {
+	.nodes = sdm660_a2noc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_a2noc_nodes),
+	.regmap_cfg = &sdm660_a2noc_regmap_config,
+};
+
+static struct qcom_icc_node *sdm660_bimc_nodes[] = {
+	[MASTER_GNOC_BIMC] = &mas_gnoc_bimc,
+	[MASTER_OXILI] = &mas_oxili,
+	[MASTER_MNOC_BIMC] = &mas_mnoc_bimc,
+	[MASTER_SNOC_BIMC] = &mas_snoc_bimc,
+	[MASTER_PIMEM] = &mas_pimem,
+	[SLAVE_EBI] = &slv_ebi,
+	[SLAVE_HMSS_L3] = &slv_hmss_l3,
+	[SLAVE_BIMC_SNOC] = &slv_bimc_snoc,
+};
+
+static const struct regmap_config sdm660_bimc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0x80000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_bimc = {
+	.nodes = sdm660_bimc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_bimc_nodes),
+	.regmap_cfg = &sdm660_bimc_regmap_config,
+};
+
+static struct qcom_icc_node *sdm660_cnoc_nodes[] = {
+	[MASTER_SNOC_CNOC] = &mas_snoc_cnoc,
+	[MASTER_QDSS_DAP] = &mas_qdss_dap,
+	[SLAVE_CNOC_A2NOC] = &slv_cnoc_a2noc,
+	[SLAVE_MPM] = &slv_mpm,
+	[SLAVE_PMIC_ARB] = &slv_pmic_arb,
+	[SLAVE_TLMM_NORTH] = &slv_tlmm_north,
+	[SLAVE_TCSR] = &slv_tcsr,
+	[SLAVE_PIMEM_CFG] = &slv_pimem_cfg,
+	[SLAVE_IMEM_CFG] = &slv_imem_cfg,
+	[SLAVE_MESSAGE_RAM] = &slv_message_ram,
+	[SLAVE_GLM] = &slv_glm,
+	[SLAVE_BIMC_CFG] = &slv_bimc_cfg,
+	[SLAVE_PRNG] = &slv_prng,
+	[SLAVE_SPDM] = &slv_spdm,
+	[SLAVE_QDSS_CFG] = &slv_qdss_cfg,
+	[SLAVE_CNOC_MNOC_CFG] = &slv_cnoc_mnoc_cfg,
+	[SLAVE_SNOC_CFG] = &slv_snoc_cfg,
+	[SLAVE_QM_CFG] = &slv_qm_cfg,
+	[SLAVE_CLK_CTL] = &slv_clk_ctl,
+	[SLAVE_MSS_CFG] = &slv_mss_cfg,
+	[SLAVE_TLMM_SOUTH] = &slv_tlmm_south,
+	[SLAVE_UFS_CFG] = &slv_ufs_cfg,
+	[SLAVE_A2NOC_CFG] = &slv_a2noc_cfg,
+	[SLAVE_A2NOC_SMMU_CFG] = &slv_a2noc_smmu_cfg,
+	[SLAVE_GPUSS_CFG] = &slv_gpuss_cfg,
+	[SLAVE_AHB2PHY] = &slv_ahb2phy,
+	[SLAVE_BLSP_1] = &slv_blsp_1,
+	[SLAVE_SDCC_1] = &slv_sdcc_1,
+	[SLAVE_SDCC_2] = &slv_sdcc_2,
+	[SLAVE_TLMM_CENTER] = &slv_tlmm_center,
+	[SLAVE_BLSP_2] = &slv_blsp_2,
+	[SLAVE_PDM] = &slv_pdm,
+	[SLAVE_CNOC_MNOC_MMSS_CFG] = &slv_cnoc_mnoc_mmss_cfg,
+	[SLAVE_USB_HS] = &slv_usb_hs,
+	[SLAVE_USB3_0] = &slv_usb3_0,
+	[SLAVE_SRVC_CNOC] = &slv_srvc_cnoc,
+};
+
+static const struct regmap_config sdm660_cnoc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0x10000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_cnoc = {
+	.nodes = sdm660_cnoc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_cnoc_nodes),
+	.regmap_cfg = &sdm660_cnoc_regmap_config,
+};
+
+static struct qcom_icc_node *sdm660_gnoc_nodes[] = {
+	[MASTER_APSS_PROC] = &mas_apss_proc,
+	[SLAVE_GNOC_BIMC] = &slv_gnoc_bimc,
+	[SLAVE_GNOC_SNOC] = &slv_gnoc_snoc,
+};
+
+static const struct regmap_config sdm660_gnoc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0xe000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_gnoc = {
+	.nodes = sdm660_gnoc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_gnoc_nodes),
+	.regmap_cfg = &sdm660_gnoc_regmap_config,
+};
+
+static struct qcom_icc_node *sdm660_mnoc_nodes[] = {
+	[MASTER_CPP] = &mas_cpp,
+	[MASTER_JPEG] = &mas_jpeg,
+	[MASTER_MDP_P0] = &mas_mdp_p0,
+	[MASTER_MDP_P1] = &mas_mdp_p1,
+	[MASTER_VENUS] = &mas_venus,
+	[MASTER_VFE] = &mas_vfe,
+	[MASTER_CNOC_MNOC_MMSS_CFG] = &mas_cnoc_mnoc_mmss_cfg,
+	[MASTER_CNOC_MNOC_CFG] = &mas_cnoc_mnoc_cfg,
+	[SLAVE_CAMERA_CFG] = &slv_camera_cfg,
+	[SLAVE_CAMERA_THROTTLE_CFG] = &slv_camera_throttle_cfg,
+	[SLAVE_MISC_CFG] = &slv_misc_cfg,
+	[SLAVE_VENUS_THROTTLE_CFG] = &slv_venus_throttle_cfg,
+	[SLAVE_VENUS_CFG] = &slv_venus_cfg,
+	[SLAVE_MMSS_CLK_XPU_CFG] = &slv_mmss_clk_xpu_cfg,
+	[SLAVE_MMSS_CLK_CFG] = &slv_mmss_clk_cfg,
+	[SLAVE_MNOC_MPU_CFG] = &slv_mnoc_mpu_cfg,
+	[SLAVE_DISPLAY_CFG] = &slv_display_cfg,
+	[SLAVE_CSI_PHY_CFG] = &slv_csi_phy_cfg,
+	[SLAVE_DISPLAY_THROTTLE_CFG] = &slv_display_throttle_cfg,
+	[SLAVE_SMMU_CFG] = &slv_smmu_cfg,
+	[SLAVE_SRVC_MNOC] = &slv_srvc_mnoc,
+	[SLAVE_MNOC_BIMC] = &slv_mnoc_bimc,
+};
+
+static const struct regmap_config sdm660_mnoc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0x10000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_mnoc = {
+	.nodes = sdm660_mnoc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_mnoc_nodes),
+	.regmap_cfg = &sdm660_mnoc_regmap_config,
+};
+
+static struct qcom_icc_node *sdm660_snoc_nodes[] = {
+	[MASTER_QDSS_ETR] = &mas_qdss_etr,
+	[MASTER_QDSS_BAM] = &mas_qdss_bam,
+	[MASTER_SNOC_CFG] = &mas_snoc_cfg,
+	[MASTER_BIMC_SNOC] = &mas_bimc_snoc,
+	[MASTER_A2NOC_SNOC] = &mas_a2noc_snoc,
+	[MASTER_GNOC_SNOC] = &mas_gnoc_snoc,
+	[SLAVE_HMSS] = &slv_hmss,
+	[SLAVE_LPASS] = &slv_lpass,
+	[SLAVE_WLAN] = &slv_wlan,
+	[SLAVE_CDSP] = &slv_cdsp,
+	[SLAVE_IPA] = &slv_ipa,
+	[SLAVE_SNOC_BIMC] = &slv_snoc_bimc,
+	[SLAVE_SNOC_CNOC] = &slv_snoc_cnoc,
+	[SLAVE_IMEM] = &slv_imem,
+	[SLAVE_PIMEM] = &slv_pimem,
+	[SLAVE_QDSS_STM] = &slv_qdss_stm,
+	[SLAVE_SRVC_SNOC] = &slv_srvc_snoc,
+};
+
+static const struct regmap_config sdm660_snoc_regmap_config = {
+	.reg_bits = 32,
+	.reg_stride = 4,
+	.val_bits = 32,
+	.max_register = 0x20000,
+	.fast_io = true,
+};
+
+static struct qcom_icc_desc sdm660_snoc = {
+	.nodes = sdm660_snoc_nodes,
+	.num_nodes = ARRAY_SIZE(sdm660_snoc_nodes),
+	.regmap_cfg = &sdm660_snoc_regmap_config,
+};
+
+static int qcom_icc_bimc_set_qos_health(struct regmap *rmap,
+					struct qcom_icc_qos *qos,
+					int regnum)
+{
+	u32 val;
+	u32 mask;
+
+	val = qos->prio_level;
+	mask = M_BKE_HEALTH_CFG_PRIOLVL_MASK;
+
+	val |= qos->areq_prio << M_BKE_HEALTH_CFG_AREQPRIO_SHIFT;
+	mask |= M_BKE_HEALTH_CFG_AREQPRIO_MASK;
+
+	/* LIMITCMDS is not present on M_BKE_HEALTH_3 */
+	if (regnum != 3) {
+		val |= qos->limit_commands << M_BKE_HEALTH_CFG_LIMITCMDS_SHIFT;
+		mask |= M_BKE_HEALTH_CFG_LIMITCMDS_MASK;
+	}
+
+	return regmap_update_bits(rmap,
+				  M_BKE_HEALTH_CFG_ADDR(regnum, qos->qos_port),
+				  mask, val);
+}
+
+static int qcom_icc_set_bimc_qos(struct icc_node *src, u64 max_bw,
+				 bool bypass_mode)
+{
+	struct qcom_icc_provider *qp;
+	struct qcom_icc_node *qn;
+	struct icc_provider *provider;
+	u32 mode = NOC_QOS_MODE_BYPASS;
+	u32 val = 0;
+	int i, rc = 0;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_qcom_provider(provider);
+
+	if (qn->qos.qos_mode != -1)
+		mode = qn->qos.qos_mode;
+
+	/* QoS Priority: The QoS Health parameters are getting considered
+	 * only if we are NOT in Bypass Mode.
+	 */
+	if (mode != NOC_QOS_MODE_BYPASS) {
+		for (i = 3; i >= 0; i--) {
+			rc = qcom_icc_bimc_set_qos_health(qp->regmap,
+							  &qn->qos, i);
+			if (rc)
+				return rc;
+		}
+
+		/* Set BKE_EN to 1 when Fixed, Regulator or Limiter Mode */
+		val = 1;
+	}
+
+	return regmap_update_bits(qp->regmap, M_BKE_EN_ADDR(qn->qos.qos_port),
+				  M_BKE_EN_EN_BMASK, val);
+}
+
+static int qcom_icc_noc_set_qos_priority(struct regmap *rmap,
+					 struct qcom_icc_qos *qos)
+{
+	u32 val;
+	int rc;
+
+	/* Must be updated one at a time, P1 first, P0 last */
+	val = qos->areq_prio << NOC_QOS_PRIORITY_P1_SHIFT;
+	rc = regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
+				NOC_QOS_PRIORITY_MASK, val);
+	if (rc)
+		return rc;
+
+	val = qos->prio_level << NOC_QOS_PRIORITY_P0_SHIFT;
+	return regmap_update_bits(rmap, NOC_QOS_PRIORITYn_ADDR(qos->qos_port),
+				  NOC_QOS_PRIORITY_MASK, val);
+}
+
+static int qcom_icc_set_noc_qos(struct icc_node *src, u64 max_bw)
+{
+	struct qcom_icc_provider *qp;
+	struct qcom_icc_node *qn;
+	struct icc_provider *provider;
+	u32 mode = NOC_QOS_MODE_BYPASS;
+	int rc = 0;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_qcom_provider(provider);
+
+	if (qn->qos.qos_port < 0) {
+		dev_dbg(src->provider->dev,
+			"NoC QoS: Skipping %s: vote aggregated on parent.\n",
+			qn->name);
+		return 0;
+	}
+
+	if (qn->qos.qos_mode != -1)
+		mode = qn->qos.qos_mode;
+
+	if (mode == NOC_QOS_MODE_FIXED) {
+		dev_dbg(src->provider->dev, "NoC QoS: %s: Set Fixed mode\n",
+			qn->name);
+		rc = qcom_icc_noc_set_qos_priority(qp->regmap, &qn->qos);
+		if (rc)
+			return rc;
+	} else if (mode == NOC_QOS_MODE_BYPASS) {
+		dev_dbg(src->provider->dev, "NoC QoS: %s: Set Bypass mode\n",
+			qn->name);
+	}
+
+	return regmap_update_bits(qp->regmap,
+				  NOC_QOS_MODEn_ADDR(qn->qos.qos_port),
+				  NOC_QOS_MODEn_MASK, mode);
+}
+
+static int qcom_icc_qos_set(struct icc_node *node, u64 sum_bw)
+{
+	struct qcom_icc_provider *qp = to_qcom_provider(node->provider);
+	struct qcom_icc_node *qn = node->data;
+
+	dev_dbg(node->provider->dev, "Setting QoS for %s\n", qn->name);
+
+	if (qp->is_bimc_node)
+		return qcom_icc_set_bimc_qos(node, sum_bw,
+				(qn->qos.qos_mode == NOC_QOS_MODE_BYPASS));
+
+	return qcom_icc_set_noc_qos(node, sum_bw);
+}
+
+static int qcom_icc_rpm_set(int mas_rpm_id, int slv_rpm_id, u64 sum_bw)
+{
+	int ret = 0;
+
+	if (mas_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_MASTER_REQ,
+					    mas_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send mas %d error %d\n",
+			       mas_rpm_id, ret);
+			return ret;
+		}
+	}
+
+	if (slv_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_SLAVE_REQ,
+					    slv_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send slv %d error %d\n",
+			       slv_rpm_id, ret);
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
+static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct qcom_icc_provider *qp;
+	struct qcom_icc_node *qn;
+	struct icc_provider *provider;
+	struct icc_node *n;
+	u64 sum_bw;
+	u64 max_peak_bw;
+	u64 rate;
+	u32 agg_avg = 0;
+	u32 agg_peak = 0;
+	int ret, i;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_qcom_provider(provider);
+
+	list_for_each_entry(n, &provider->nodes, node_list)
+		provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
+				    &agg_avg, &agg_peak);
+
+	sum_bw = icc_units_to_bps(agg_avg);
+	max_peak_bw = icc_units_to_bps(agg_peak);
+
+	if (!qn->qos.ap_owned) {
+		/* send bandwidth request message to the RPM processor */
+		ret = qcom_icc_rpm_set(qn->mas_rpm_id, qn->slv_rpm_id, sum_bw);
+		if (ret)
+			return ret;
+	} else if (qn->qos.qos_mode != -1) {
+		/* set bandwidth directly from the AP */
+		ret = qcom_icc_qos_set(src, sum_bw);
+		if (ret)
+			return ret;
+	}
+
+	rate = max(sum_bw, max_peak_bw);
+
+	do_div(rate, qn->buswidth);
+
+	if (qn->rate == rate)
+		return 0;
+
+	for (i = 0; i < qp->num_clks; i++) {
+		ret = clk_set_rate(qp->bus_clks[i].clk, rate);
+		if (ret) {
+			pr_err("%s clk_set_rate error: %d\n",
+			       qp->bus_clks[i].id, ret);
+			return ret;
+		}
+	}
+
+	qn->rate = rate;
+
+	return 0;
+}
+
+static int qnoc_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct qcom_icc_desc *desc;
+	struct icc_onecell_data *data;
+	struct icc_provider *provider;
+	struct qcom_icc_node **qnodes;
+	struct qcom_icc_provider *qp;
+	struct icc_node *node;
+	struct resource *res;
+	size_t num_nodes, i;
+	int ret;
+
+	/* wait for the RPM proxy */
+	if (!qcom_icc_rpm_smd_available())
+		return -EPROBE_DEFER;
+
+	desc = of_device_get_match_data(dev);
+	if (!desc)
+		return -EINVAL;
+
+	qnodes = desc->nodes;
+	num_nodes = desc->num_nodes;
+
+	qp = devm_kzalloc(dev, sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return -ENOMEM;
+
+	data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
+			    GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	if (of_device_is_compatible(dev->of_node, "qcom,sdm660-mnoc")) {
+		qp->bus_clks = devm_kmemdup(dev, bus_mm_clocks,
+					    sizeof(bus_mm_clocks), GFP_KERNEL);
+		qp->num_clks = ARRAY_SIZE(bus_mm_clocks);
+	} else {
+		if (of_device_is_compatible(dev->of_node, "qcom,sdm660-bimc"))
+			qp->is_bimc_node = true;
+
+		qp->bus_clks = devm_kmemdup(dev, bus_clocks, sizeof(bus_clocks),
+					    GFP_KERNEL);
+		qp->num_clks = ARRAY_SIZE(bus_clocks);
+	}
+	if (!qp->bus_clks)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
+	qp->mmio = devm_ioremap_resource(dev, res);
+	if (IS_ERR(qp->mmio)) {
+		dev_err(dev, "Cannot ioremap interconnect bus resource\n");
+		return PTR_ERR(qp->mmio);
+	}
+
+	qp->regmap = devm_regmap_init_mmio(dev, qp->mmio, desc->regmap_cfg);
+	if (IS_ERR(qp->regmap)) {
+		dev_err(dev, "Cannot regmap interconnect bus resource\n");
+		return PTR_ERR(qp->regmap);
+	}
+
+	ret = devm_clk_bulk_get(dev, qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	provider = &qp->provider;
+	INIT_LIST_HEAD(&provider->nodes);
+	provider->dev = dev;
+	provider->set = qcom_icc_set;
+	provider->aggregate = icc_std_aggregate;
+	provider->xlate = of_icc_xlate_onecell;
+	provider->data = data;
+
+	ret = icc_provider_add(provider);
+	if (ret) {
+		dev_err(dev, "error adding interconnect provider: %d\n", ret);
+		clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+		return ret;
+	}
+
+	for (i = 0; i < num_nodes; i++) {
+		size_t j;
+
+		node = icc_node_create(qnodes[i]->id);
+		if (IS_ERR(node)) {
+			ret = PTR_ERR(node);
+			goto err;
+		}
+
+		node->name = qnodes[i]->name;
+		node->data = qnodes[i];
+		icc_node_add(node, provider);
+
+		for (j = 0; j < qnodes[i]->num_links; j++)
+			icc_link_create(node, qnodes[i]->links[j]);
+
+		data->nodes[i] = node;
+	}
+	data->num_nodes = num_nodes;
+	platform_set_drvdata(pdev, qp);
+
+	return 0;
+err:
+	icc_nodes_remove(provider);
+	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+	icc_provider_del(provider);
+
+	return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+	struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
+
+	icc_nodes_remove(&qp->provider);
+	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+	return icc_provider_del(&qp->provider);
+}
+
+static const struct of_device_id sdm660_noc_of_match[] = {
+	{ .compatible = "qcom,sdm660-a2noc", .data = &sdm660_a2noc },
+	{ .compatible = "qcom,sdm660-bimc", .data = &sdm660_bimc },
+	{ .compatible = "qcom,sdm660-cnoc", .data = &sdm660_cnoc },
+	{ .compatible = "qcom,sdm660-gnoc", .data = &sdm660_gnoc },
+	{ .compatible = "qcom,sdm660-mnoc", .data = &sdm660_mnoc },
+	{ .compatible = "qcom,sdm660-snoc", .data = &sdm660_snoc },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, sdm660_noc_of_match);
+
+static struct platform_driver sdm660_noc_driver = {
+	.probe = qnoc_probe,
+	.remove = qnoc_remove,
+	.driver = {
+		.name = "qnoc-sdm660",
+		.of_match_table = sdm660_noc_of_match,
+	},
+};
+module_platform_driver(sdm660_noc_driver);
+MODULE_DESCRIPTION("Qualcomm sdm660 NoC driver");
+MODULE_LICENSE("GPL v2");
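For reference, the DEFINE_QNODE() macro in the new sdm660.c above packs a whole interconnect node description into one line. A sketch of what a single invocation expands to (abbreviated to the fields the macro actually sets):

	/* DEFINE_QNODE(mas_ipa, SDM660_MASTER_IPA, 8, 59, -1, true,
	 *		NOC_QOS_MODE_FIXED, 1, 3, SDM660_SLAVE_A2NOC_SNOC);
	 * expands to roughly:
	 */
	static struct qcom_icc_node mas_ipa = {
		.name = "mas_ipa",
		.id = SDM660_MASTER_IPA,
		.buswidth = 8,
		.mas_rpm_id = 59,
		.slv_rpm_id = -1,
		.qos.ap_owned = true,
		.qos.qos_mode = NOC_QOS_MODE_FIXED,
		.qos.areq_prio = 1,
		.qos.prio_level = 1,
		.qos.qos_port = 3,
		.num_links = 1,
		.links = { SDM660_SLAVE_A2NOC_SNOC },
	};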
@@ -0,0 +1,633 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2019-2020, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021, Linaro Limited
+ *
+ */
+
+#include <linux/interconnect-provider.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <dt-bindings/interconnect/qcom,sm8350.h>
+
+#include "bcm-voter.h"
+#include "icc-rpmh.h"
+#include "sm8350.h"
+
+DEFINE_QNODE(qhm_qspi, SM8350_MASTER_QSPI_0, 1, 4, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(qhm_qup0, SM8350_MASTER_QUP_0, 1, 4, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qhm_qup1, SM8350_MASTER_QUP_1, 1, 4, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(qhm_qup2, SM8350_MASTER_QUP_2, 1, 4, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qnm_a1noc_cfg, SM8350_MASTER_A1NOC_CFG, 1, 4, SM8350_SLAVE_SERVICE_A1NOC);
+DEFINE_QNODE(xm_sdc4, SM8350_MASTER_SDCC_4, 1, 8, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_ufs_mem, SM8350_MASTER_UFS_MEM, 1, 8, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_usb3_0, SM8350_MASTER_USB3_0, 1, 8, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(xm_usb3_1, SM8350_MASTER_USB3_1, 1, 8, SM8350_SLAVE_A1NOC_SNOC);
+DEFINE_QNODE(qhm_qdss_bam, SM8350_MASTER_QDSS_BAM, 1, 4, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qnm_a2noc_cfg, SM8350_MASTER_A2NOC_CFG, 1, 4, SM8350_SLAVE_SERVICE_A2NOC);
+DEFINE_QNODE(qxm_crypto, SM8350_MASTER_CRYPTO, 1, 8, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qxm_ipa, SM8350_MASTER_IPA, 1, 8, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_pcie3_0, SM8350_MASTER_PCIE_0, 1, 8, SM8350_SLAVE_ANOC_PCIE_GEM_NOC);
+DEFINE_QNODE(xm_pcie3_1, SM8350_MASTER_PCIE_1, 1, 8, SM8350_SLAVE_ANOC_PCIE_GEM_NOC);
+DEFINE_QNODE(xm_qdss_etr, SM8350_MASTER_QDSS_ETR, 1, 8, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_sdc2, SM8350_MASTER_SDCC_2, 1, 8, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(xm_ufs_card, SM8350_MASTER_UFS_CARD, 1, 8, SM8350_SLAVE_A2NOC_SNOC);
+DEFINE_QNODE(qnm_gemnoc_cnoc, SM8350_MASTER_GEM_NOC_CNOC, 1, 16, SM8350_SLAVE_AHB2PHY_SOUTH, SM8350_SLAVE_AHB2PHY_NORTH, SM8350_SLAVE_AOSS, SM8350_SLAVE_APPSS, SM8350_SLAVE_CAMERA_CFG, SM8350_SLAVE_CLK_CTL, SM8350_SLAVE_CDSP_CFG, SM8350_SLAVE_RBCPR_CX_CFG, SM8350_SLAVE_RBCPR_MMCX_CFG, SM8350_SLAVE_RBCPR_MX_CFG, SM8350_SLAVE_CRYPTO_0_CFG, SM8350_SLAVE_CX_RDPM, SM8350_SLAVE_DCC_CFG, SM8350_SLAVE_DISPLAY_CFG, SM8350_SLAVE_GFX3D_CFG, SM8350_SLAVE_HWKM, SM8350_SLAVE_IMEM_CFG, SM8350_SLAVE_IPA_CFG, SM8350_SLAVE_IPC_ROUTER_CFG, SM8350_SLAVE_LPASS, SM8350_SLAVE_CNOC_MSS, SM8350_SLAVE_MX_RDPM, SM8350_SLAVE_PCIE_0_CFG, SM8350_SLAVE_PCIE_1_CFG, SM8350_SLAVE_PDM, SM8350_SLAVE_PIMEM_CFG, SM8350_SLAVE_PKA_WRAPPER_CFG, SM8350_SLAVE_PMU_WRAPPER_CFG, SM8350_SLAVE_QDSS_CFG, SM8350_SLAVE_QSPI_0, SM8350_SLAVE_QUP_0, SM8350_SLAVE_QUP_1, SM8350_SLAVE_QUP_2, SM8350_SLAVE_SDCC_2, SM8350_SLAVE_SDCC_4, SM8350_SLAVE_SECURITY, SM8350_SLAVE_SPSS_CFG, SM8350_SLAVE_TCSR, SM8350_SLAVE_TLMM, SM8350_SLAVE_UFS_CARD_CFG, SM8350_SLAVE_UFS_MEM_CFG, SM8350_SLAVE_USB3_0, SM8350_SLAVE_USB3_1, SM8350_SLAVE_VENUS_CFG, SM8350_SLAVE_VSENSE_CTRL_CFG, SM8350_SLAVE_A1NOC_CFG, SM8350_SLAVE_A2NOC_CFG, SM8350_SLAVE_DDRSS_CFG, SM8350_SLAVE_CNOC_MNOC_CFG, SM8350_SLAVE_SNOC_CFG, SM8350_SLAVE_BOOT_IMEM, SM8350_SLAVE_IMEM, SM8350_SLAVE_PIMEM, SM8350_SLAVE_SERVICE_CNOC, SM8350_SLAVE_QDSS_STM, SM8350_SLAVE_TCU);
+DEFINE_QNODE(qnm_gemnoc_pcie, SM8350_MASTER_GEM_NOC_PCIE_SNOC, 1, 8, SM8350_SLAVE_PCIE_0, SM8350_SLAVE_PCIE_1);
+DEFINE_QNODE(xm_qdss_dap, SM8350_MASTER_QDSS_DAP, 1, 8, SM8350_SLAVE_AHB2PHY_SOUTH, SM8350_SLAVE_AHB2PHY_NORTH, SM8350_SLAVE_AOSS, SM8350_SLAVE_APPSS, SM8350_SLAVE_CAMERA_CFG, SM8350_SLAVE_CLK_CTL, SM8350_SLAVE_CDSP_CFG, SM8350_SLAVE_RBCPR_CX_CFG, SM8350_SLAVE_RBCPR_MMCX_CFG, SM8350_SLAVE_RBCPR_MX_CFG, SM8350_SLAVE_CRYPTO_0_CFG, SM8350_SLAVE_CX_RDPM, SM8350_SLAVE_DCC_CFG, SM8350_SLAVE_DISPLAY_CFG, SM8350_SLAVE_GFX3D_CFG, SM8350_SLAVE_HWKM, SM8350_SLAVE_IMEM_CFG, SM8350_SLAVE_IPA_CFG, SM8350_SLAVE_IPC_ROUTER_CFG, SM8350_SLAVE_LPASS, SM8350_SLAVE_CNOC_MSS, SM8350_SLAVE_MX_RDPM, SM8350_SLAVE_PCIE_0_CFG, SM8350_SLAVE_PCIE_1_CFG, SM8350_SLAVE_PDM, SM8350_SLAVE_PIMEM_CFG, SM8350_SLAVE_PKA_WRAPPER_CFG, SM8350_SLAVE_PMU_WRAPPER_CFG, SM8350_SLAVE_QDSS_CFG, SM8350_SLAVE_QSPI_0, SM8350_SLAVE_QUP_0, SM8350_SLAVE_QUP_1, SM8350_SLAVE_QUP_2, SM8350_SLAVE_SDCC_2, SM8350_SLAVE_SDCC_4, SM8350_SLAVE_SECURITY, SM8350_SLAVE_SPSS_CFG, SM8350_SLAVE_TCSR, SM8350_SLAVE_TLMM, SM8350_SLAVE_UFS_CARD_CFG, SM8350_SLAVE_UFS_MEM_CFG, SM8350_SLAVE_USB3_0, SM8350_SLAVE_USB3_1, SM8350_SLAVE_VENUS_CFG, SM8350_SLAVE_VSENSE_CTRL_CFG, SM8350_SLAVE_A1NOC_CFG, SM8350_SLAVE_A2NOC_CFG, SM8350_SLAVE_DDRSS_CFG, SM8350_SLAVE_CNOC_MNOC_CFG, SM8350_SLAVE_SNOC_CFG, SM8350_SLAVE_BOOT_IMEM, SM8350_SLAVE_IMEM, SM8350_SLAVE_PIMEM, SM8350_SLAVE_SERVICE_CNOC, SM8350_SLAVE_QDSS_STM, SM8350_SLAVE_TCU);
+DEFINE_QNODE(qnm_cnoc_dc_noc, SM8350_MASTER_CNOC_DC_NOC, 1, 4, SM8350_SLAVE_LLCC_CFG, SM8350_SLAVE_GEM_NOC_CFG);
+DEFINE_QNODE(alm_gpu_tcu, SM8350_MASTER_GPU_TCU, 1, 8, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(alm_sys_tcu, SM8350_MASTER_SYS_TCU, 1, 8, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(chm_apps, SM8350_MASTER_APPSS_PROC, 2, 32, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC, SM8350_SLAVE_MEM_NOC_PCIE_SNOC);
+DEFINE_QNODE(qnm_cmpnoc, SM8350_MASTER_COMPUTE_NOC, 2, 32, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_gemnoc_cfg, SM8350_MASTER_GEM_NOC_CFG, 1, 4, SM8350_SLAVE_MSS_PROC_MS_MPU_CFG, SM8350_SLAVE_MCDMA_MS_MPU_CFG, SM8350_SLAVE_SERVICE_GEM_NOC_1, SM8350_SLAVE_SERVICE_GEM_NOC_2, SM8350_SLAVE_SERVICE_GEM_NOC);
+DEFINE_QNODE(qnm_gpu, SM8350_MASTER_GFX3D, 2, 32, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_mnoc_hf, SM8350_MASTER_MNOC_HF_MEM_NOC, 2, 32, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_mnoc_sf, SM8350_MASTER_MNOC_SF_MEM_NOC, 2, 32, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_pcie, SM8350_MASTER_ANOC_PCIE_GEM_NOC, 1, 16, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_snoc_gc, SM8350_MASTER_SNOC_GC_MEM_NOC, 1, 8, SM8350_SLAVE_LLCC);
+DEFINE_QNODE(qnm_snoc_sf, SM8350_MASTER_SNOC_SF_MEM_NOC, 1, 16, SM8350_SLAVE_GEM_NOC_CNOC, SM8350_SLAVE_LLCC, SM8350_SLAVE_MEM_NOC_PCIE_SNOC);
+DEFINE_QNODE(qhm_config_noc, SM8350_MASTER_CNOC_LPASS_AG_NOC, 1, 4, SM8350_SLAVE_LPASS_CORE_CFG, SM8350_SLAVE_LPASS_LPI_CFG, SM8350_SLAVE_LPASS_MPU_CFG, SM8350_SLAVE_LPASS_TOP_CFG, SM8350_SLAVE_SERVICES_LPASS_AML_NOC, SM8350_SLAVE_SERVICE_LPASS_AG_NOC);
+DEFINE_QNODE(llcc_mc, SM8350_MASTER_LLCC, 4, 4, SM8350_SLAVE_EBI1);
+DEFINE_QNODE(qnm_camnoc_hf, SM8350_MASTER_CAMNOC_HF, 2, 32, SM8350_SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qnm_camnoc_icp, SM8350_MASTER_CAMNOC_ICP, 1, 8, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qnm_camnoc_sf, SM8350_MASTER_CAMNOC_SF, 2, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qnm_mnoc_cfg, SM8350_MASTER_CNOC_MNOC_CFG, 1, 4, SM8350_SLAVE_SERVICE_MNOC);
+DEFINE_QNODE(qnm_video0, SM8350_MASTER_VIDEO_P0, 1, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qnm_video1, SM8350_MASTER_VIDEO_P1, 1, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qnm_video_cvp, SM8350_MASTER_VIDEO_PROC, 1, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qxm_mdp0, SM8350_MASTER_MDP0, 1, 32, SM8350_SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_mdp1, SM8350_MASTER_MDP1, 1, 32, SM8350_SLAVE_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qxm_rot, SM8350_MASTER_ROTATOR, 1, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(qhm_nsp_noc_config, SM8350_MASTER_CDSP_NOC_CFG, 1, 4, SM8350_SLAVE_SERVICE_NSP_NOC);
+DEFINE_QNODE(qxm_nsp, SM8350_MASTER_CDSP_PROC, 2, 32, SM8350_SLAVE_CDSP_MEM_NOC);
+DEFINE_QNODE(qnm_aggre1_noc, SM8350_MASTER_A1NOC_SNOC, 1, 16, SM8350_SLAVE_SNOC_GEM_NOC_SF);
+DEFINE_QNODE(qnm_aggre2_noc, SM8350_MASTER_A2NOC_SNOC, 1, 16, SM8350_SLAVE_SNOC_GEM_NOC_SF);
+DEFINE_QNODE(qnm_snoc_cfg, SM8350_MASTER_SNOC_CFG, 1, 4, SM8350_SLAVE_SERVICE_SNOC);
+DEFINE_QNODE(qxm_pimem, SM8350_MASTER_PIMEM, 1, 8, SM8350_SLAVE_SNOC_GEM_NOC_GC);
+DEFINE_QNODE(xm_gic, SM8350_MASTER_GIC, 1, 8, SM8350_SLAVE_SNOC_GEM_NOC_GC);
+DEFINE_QNODE(qnm_mnoc_hf_disp, SM8350_MASTER_MNOC_HF_MEM_NOC_DISP, 2, 32, SM8350_SLAVE_LLCC_DISP);
+DEFINE_QNODE(qnm_mnoc_sf_disp, SM8350_MASTER_MNOC_SF_MEM_NOC_DISP, 2, 32, SM8350_SLAVE_LLCC_DISP);
+DEFINE_QNODE(llcc_mc_disp, SM8350_MASTER_LLCC_DISP, 4, 4, SM8350_SLAVE_EBI1_DISP);
+DEFINE_QNODE(qxm_mdp0_disp, SM8350_MASTER_MDP0_DISP, 1, 32, SM8350_SLAVE_MNOC_HF_MEM_NOC_DISP);
+DEFINE_QNODE(qxm_mdp1_disp, SM8350_MASTER_MDP1_DISP, 1, 32, SM8350_SLAVE_MNOC_HF_MEM_NOC_DISP);
+DEFINE_QNODE(qxm_rot_disp, SM8350_MASTER_ROTATOR_DISP, 1, 32, SM8350_SLAVE_MNOC_SF_MEM_NOC_DISP);
+DEFINE_QNODE(qns_a1noc_snoc, SM8350_SLAVE_A1NOC_SNOC, 1, 16, SM8350_MASTER_A1NOC_SNOC);
+DEFINE_QNODE(srvc_aggre1_noc, SM8350_SLAVE_SERVICE_A1NOC, 1, 4);
+DEFINE_QNODE(qns_a2noc_snoc, SM8350_SLAVE_A2NOC_SNOC, 1, 16, SM8350_MASTER_A2NOC_SNOC);
+DEFINE_QNODE(qns_pcie_mem_noc, SM8350_SLAVE_ANOC_PCIE_GEM_NOC, 1, 16, SM8350_MASTER_ANOC_PCIE_GEM_NOC);
+DEFINE_QNODE(srvc_aggre2_noc, SM8350_SLAVE_SERVICE_A2NOC, 1, 4);
+DEFINE_QNODE(qhs_ahb2phy0, SM8350_SLAVE_AHB2PHY_SOUTH, 1, 4);
+DEFINE_QNODE(qhs_ahb2phy1, SM8350_SLAVE_AHB2PHY_NORTH, 1, 4);
+DEFINE_QNODE(qhs_aoss, SM8350_SLAVE_AOSS, 1, 4);
+DEFINE_QNODE(qhs_apss, SM8350_SLAVE_APPSS, 1, 8);
+DEFINE_QNODE(qhs_camera_cfg, SM8350_SLAVE_CAMERA_CFG, 1, 4);
+DEFINE_QNODE(qhs_clk_ctl, SM8350_SLAVE_CLK_CTL, 1, 4);
+DEFINE_QNODE(qhs_compute_cfg, SM8350_SLAVE_CDSP_CFG, 1, 4);
+DEFINE_QNODE(qhs_cpr_cx, SM8350_SLAVE_RBCPR_CX_CFG, 1, 4);
+DEFINE_QNODE(qhs_cpr_mmcx, SM8350_SLAVE_RBCPR_MMCX_CFG, 1, 4);
+DEFINE_QNODE(qhs_cpr_mx, SM8350_SLAVE_RBCPR_MX_CFG, 1, 4);
+DEFINE_QNODE(qhs_crypto0_cfg, SM8350_SLAVE_CRYPTO_0_CFG, 1, 4);
+DEFINE_QNODE(qhs_cx_rdpm, SM8350_SLAVE_CX_RDPM, 1, 4);
+DEFINE_QNODE(qhs_dcc_cfg, SM8350_SLAVE_DCC_CFG, 1, 4);
+DEFINE_QNODE(qhs_display_cfg, SM8350_SLAVE_DISPLAY_CFG, 1, 4);
+DEFINE_QNODE(qhs_gpuss_cfg, SM8350_SLAVE_GFX3D_CFG, 1, 8);
+DEFINE_QNODE(qhs_hwkm, SM8350_SLAVE_HWKM, 1, 4);
+DEFINE_QNODE(qhs_imem_cfg, SM8350_SLAVE_IMEM_CFG, 1, 4);
+DEFINE_QNODE(qhs_ipa, SM8350_SLAVE_IPA_CFG, 1, 4);
+DEFINE_QNODE(qhs_ipc_router, SM8350_SLAVE_IPC_ROUTER_CFG, 1, 4);
+DEFINE_QNODE(qhs_lpass_cfg, SM8350_SLAVE_LPASS, 1, 4, SM8350_MASTER_CNOC_LPASS_AG_NOC);
+DEFINE_QNODE(qhs_mss_cfg, SM8350_SLAVE_CNOC_MSS, 1, 4);
+DEFINE_QNODE(qhs_mx_rdpm, SM8350_SLAVE_MX_RDPM, 1, 4);
+DEFINE_QNODE(qhs_pcie0_cfg, SM8350_SLAVE_PCIE_0_CFG, 1, 4);
+DEFINE_QNODE(qhs_pcie1_cfg, SM8350_SLAVE_PCIE_1_CFG, 1, 4);
+DEFINE_QNODE(qhs_pdm, SM8350_SLAVE_PDM, 1, 4);
+DEFINE_QNODE(qhs_pimem_cfg, SM8350_SLAVE_PIMEM_CFG, 1, 4);
+DEFINE_QNODE(qhs_pka_wrapper_cfg, SM8350_SLAVE_PKA_WRAPPER_CFG, 1, 4);
+DEFINE_QNODE(qhs_pmu_wrapper_cfg, SM8350_SLAVE_PMU_WRAPPER_CFG, 1, 4);
+DEFINE_QNODE(qhs_qdss_cfg, SM8350_SLAVE_QDSS_CFG, 1, 4);
+DEFINE_QNODE(qhs_qspi, SM8350_SLAVE_QSPI_0, 1, 4);
+DEFINE_QNODE(qhs_qup0, SM8350_SLAVE_QUP_0, 1, 4);
+DEFINE_QNODE(qhs_qup1, SM8350_SLAVE_QUP_1, 1, 4);
+DEFINE_QNODE(qhs_qup2, SM8350_SLAVE_QUP_2, 1, 4);
+DEFINE_QNODE(qhs_sdc2, SM8350_SLAVE_SDCC_2, 1, 4);
+DEFINE_QNODE(qhs_sdc4, SM8350_SLAVE_SDCC_4, 1, 4);
+DEFINE_QNODE(qhs_security, SM8350_SLAVE_SECURITY, 1, 4);
+DEFINE_QNODE(qhs_spss_cfg, SM8350_SLAVE_SPSS_CFG, 1, 4);
+DEFINE_QNODE(qhs_tcsr, SM8350_SLAVE_TCSR, 1, 4);
+DEFINE_QNODE(qhs_tlmm, SM8350_SLAVE_TLMM, 1, 4);
+DEFINE_QNODE(qhs_ufs_card_cfg, SM8350_SLAVE_UFS_CARD_CFG, 1, 4);
+DEFINE_QNODE(qhs_ufs_mem_cfg, SM8350_SLAVE_UFS_MEM_CFG, 1, 4);
+DEFINE_QNODE(qhs_usb3_0, SM8350_SLAVE_USB3_0, 1, 4);
+DEFINE_QNODE(qhs_usb3_1, SM8350_SLAVE_USB3_1, 1, 4);
+DEFINE_QNODE(qhs_venus_cfg, SM8350_SLAVE_VENUS_CFG, 1, 4);
+DEFINE_QNODE(qhs_vsense_ctrl_cfg, SM8350_SLAVE_VSENSE_CTRL_CFG, 1, 4);
+DEFINE_QNODE(qns_a1_noc_cfg, SM8350_SLAVE_A1NOC_CFG, 1, 4);
+DEFINE_QNODE(qns_a2_noc_cfg, SM8350_SLAVE_A2NOC_CFG, 1, 4);
+DEFINE_QNODE(qns_ddrss_cfg, SM8350_SLAVE_DDRSS_CFG, 1, 4);
+DEFINE_QNODE(qns_mnoc_cfg, SM8350_SLAVE_CNOC_MNOC_CFG, 1, 4);
+DEFINE_QNODE(qns_snoc_cfg, SM8350_SLAVE_SNOC_CFG, 1, 4);
+DEFINE_QNODE(qxs_boot_imem, SM8350_SLAVE_BOOT_IMEM, 1, 8);
+DEFINE_QNODE(qxs_imem, SM8350_SLAVE_IMEM, 1, 8);
+DEFINE_QNODE(qxs_pimem, SM8350_SLAVE_PIMEM, 1, 8);
+DEFINE_QNODE(srvc_cnoc, SM8350_SLAVE_SERVICE_CNOC, 1, 4);
+DEFINE_QNODE(xs_pcie_0, SM8350_SLAVE_PCIE_0, 1, 8);
+DEFINE_QNODE(xs_pcie_1, SM8350_SLAVE_PCIE_1, 1, 8);
+DEFINE_QNODE(xs_qdss_stm, SM8350_SLAVE_QDSS_STM, 1, 4);
+DEFINE_QNODE(xs_sys_tcu_cfg, SM8350_SLAVE_TCU, 1, 8);
+DEFINE_QNODE(qhs_llcc, SM8350_SLAVE_LLCC_CFG, 1, 4);
+DEFINE_QNODE(qns_gemnoc, SM8350_SLAVE_GEM_NOC_CFG, 1, 4);
+DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SM8350_SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4);
+DEFINE_QNODE(qhs_modem_ms_mpu_cfg, SM8350_SLAVE_MCDMA_MS_MPU_CFG, 1, 4);
+DEFINE_QNODE(qns_gem_noc_cnoc, SM8350_SLAVE_GEM_NOC_CNOC, 1, 16, SM8350_MASTER_GEM_NOC_CNOC);
+DEFINE_QNODE(qns_llcc, SM8350_SLAVE_LLCC, 4, 16, SM8350_MASTER_LLCC);
+DEFINE_QNODE(qns_pcie, SM8350_SLAVE_MEM_NOC_PCIE_SNOC, 1, 8);
+DEFINE_QNODE(srvc_even_gemnoc, SM8350_SLAVE_SERVICE_GEM_NOC_1, 1, 4);
+DEFINE_QNODE(srvc_odd_gemnoc, SM8350_SLAVE_SERVICE_GEM_NOC_2, 1, 4);
+DEFINE_QNODE(srvc_sys_gemnoc, SM8350_SLAVE_SERVICE_GEM_NOC, 1, 4);
+DEFINE_QNODE(qhs_lpass_core, SM8350_SLAVE_LPASS_CORE_CFG, 1, 4);
+DEFINE_QNODE(qhs_lpass_lpi, SM8350_SLAVE_LPASS_LPI_CFG, 1, 4);
+DEFINE_QNODE(qhs_lpass_mpu, SM8350_SLAVE_LPASS_MPU_CFG, 1, 4);
+DEFINE_QNODE(qhs_lpass_top, SM8350_SLAVE_LPASS_TOP_CFG, 1, 4);
+DEFINE_QNODE(srvc_niu_aml_noc, SM8350_SLAVE_SERVICES_LPASS_AML_NOC, 1, 4);
+DEFINE_QNODE(srvc_niu_lpass_agnoc, SM8350_SLAVE_SERVICE_LPASS_AG_NOC, 1, 4);
+DEFINE_QNODE(ebi, SM8350_SLAVE_EBI1, 4, 4);
+DEFINE_QNODE(qns_mem_noc_hf, SM8350_SLAVE_MNOC_HF_MEM_NOC, 2, 32, SM8350_MASTER_MNOC_HF_MEM_NOC);
+DEFINE_QNODE(qns_mem_noc_sf, SM8350_SLAVE_MNOC_SF_MEM_NOC, 2, 32, SM8350_MASTER_MNOC_SF_MEM_NOC);
+DEFINE_QNODE(srvc_mnoc, SM8350_SLAVE_SERVICE_MNOC, 1, 4);
+DEFINE_QNODE(qns_nsp_gemnoc, SM8350_SLAVE_CDSP_MEM_NOC, 2, 32, SM8350_MASTER_COMPUTE_NOC);
+DEFINE_QNODE(service_nsp_noc, SM8350_SLAVE_SERVICE_NSP_NOC, 1, 4);
+DEFINE_QNODE(qns_gemnoc_gc, SM8350_SLAVE_SNOC_GEM_NOC_GC, 1, 8, SM8350_MASTER_SNOC_GC_MEM_NOC);
+DEFINE_QNODE(qns_gemnoc_sf, SM8350_SLAVE_SNOC_GEM_NOC_SF, 1, 16, SM8350_MASTER_SNOC_SF_MEM_NOC);
+DEFINE_QNODE(srvc_snoc, SM8350_SLAVE_SERVICE_SNOC, 1, 4);
+DEFINE_QNODE(qns_llcc_disp, SM8350_SLAVE_LLCC_DISP, 4, 16, SM8350_MASTER_LLCC_DISP);
|
||||
DEFINE_QNODE(ebi_disp, SM8350_SLAVE_EBI1_DISP, 4, 4);
|
||||
DEFINE_QNODE(qns_mem_noc_hf_disp, SM8350_SLAVE_MNOC_HF_MEM_NOC_DISP, 2, 32, SM8350_MASTER_MNOC_HF_MEM_NOC_DISP);
|
||||
DEFINE_QNODE(qns_mem_noc_sf_disp, SM8350_SLAVE_MNOC_SF_MEM_NOC_DISP, 2, 32, SM8350_MASTER_MNOC_SF_MEM_NOC_DISP);
|
||||
|
||||
DEFINE_QBCM(bcm_acv, "ACV", false, &ebi);
DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_gemnoc_cnoc, &qnm_gemnoc_pcie);
DEFINE_QBCM(bcm_cn1, "CN1", false, &xm_qdss_dap, &qhs_ahb2phy0, &qhs_ahb2phy1, &qhs_aoss, &qhs_apss, &qhs_camera_cfg, &qhs_clk_ctl, &qhs_compute_cfg, &qhs_cpr_cx, &qhs_cpr_mmcx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_cx_rdpm, &qhs_dcc_cfg, &qhs_display_cfg, &qhs_gpuss_cfg, &qhs_hwkm, &qhs_imem_cfg, &qhs_ipa, &qhs_ipc_router, &qhs_mss_cfg, &qhs_mx_rdpm, &qhs_pcie0_cfg, &qhs_pcie1_cfg, &qhs_pimem_cfg, &qhs_pka_wrapper_cfg, &qhs_pmu_wrapper_cfg, &qhs_qdss_cfg, &qhs_qup0, &qhs_qup1, &qhs_qup2, &qhs_security, &qhs_spss_cfg, &qhs_tcsr, &qhs_tlmm, &qhs_ufs_card_cfg, &qhs_ufs_mem_cfg, &qhs_usb3_0, &qhs_usb3_1, &qhs_venus_cfg, &qhs_vsense_ctrl_cfg, &qns_a1_noc_cfg, &qns_a2_noc_cfg, &qns_ddrss_cfg, &qns_mnoc_cfg, &qns_snoc_cfg, &srvc_cnoc);
DEFINE_QBCM(bcm_cn2, "CN2", false, &qhs_lpass_cfg, &qhs_pdm, &qhs_qspi, &qhs_sdc2, &qhs_sdc4);
DEFINE_QBCM(bcm_co0, "CO0", false, &qns_nsp_gemnoc);
DEFINE_QBCM(bcm_co3, "CO3", false, &qxm_nsp);
DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
DEFINE_QBCM(bcm_mm0, "MM0", true, &qns_mem_noc_hf);
DEFINE_QBCM(bcm_mm1, "MM1", false, &qnm_camnoc_hf, &qxm_mdp0, &qxm_mdp1);
DEFINE_QBCM(bcm_mm4, "MM4", false, &qns_mem_noc_sf);
DEFINE_QBCM(bcm_mm5, "MM5", false, &qnm_camnoc_icp, &qnm_camnoc_sf, &qnm_video0, &qnm_video1, &qnm_video_cvp, &qxm_rot);
DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
DEFINE_QBCM(bcm_sh2, "SH2", false, &alm_gpu_tcu, &alm_sys_tcu);
DEFINE_QBCM(bcm_sh3, "SH3", false, &qnm_cmpnoc);
DEFINE_QBCM(bcm_sh4, "SH4", false, &chm_apps);
DEFINE_QBCM(bcm_sn0, "SN0", true, &qns_gemnoc_sf);
DEFINE_QBCM(bcm_sn2, "SN2", false, &qns_gemnoc_gc);
DEFINE_QBCM(bcm_sn3, "SN3", false, &qxs_pimem);
DEFINE_QBCM(bcm_sn4, "SN4", false, &xs_qdss_stm);
DEFINE_QBCM(bcm_sn5, "SN5", false, &xm_pcie3_0);
DEFINE_QBCM(bcm_sn6, "SN6", false, &xm_pcie3_1);
DEFINE_QBCM(bcm_sn7, "SN7", false, &qnm_aggre1_noc);
DEFINE_QBCM(bcm_sn8, "SN8", false, &qnm_aggre2_noc);
DEFINE_QBCM(bcm_sn14, "SN14", false, &qns_pcie_mem_noc);
DEFINE_QBCM(bcm_acv_disp, "ACV", false, &ebi_disp);
DEFINE_QBCM(bcm_mc0_disp, "MC0", false, &ebi_disp);
DEFINE_QBCM(bcm_mm0_disp, "MM0", false, &qns_mem_noc_hf_disp);
DEFINE_QBCM(bcm_mm1_disp, "MM1", false, &qxm_mdp0_disp, &qxm_mdp1_disp);
DEFINE_QBCM(bcm_mm4_disp, "MM4", false, &qns_mem_noc_sf_disp);
DEFINE_QBCM(bcm_mm5_disp, "MM5", false, &qxm_rot_disp);
DEFINE_QBCM(bcm_sh0_disp, "SH0", false, &qns_llcc_disp);

static struct qcom_icc_bcm *aggre1_noc_bcms[] = {
};

static struct qcom_icc_node *aggre1_noc_nodes[] = {
	[MASTER_QSPI_0] = &qhm_qspi,
	[MASTER_QUP_1] = &qhm_qup1,
	[MASTER_A1NOC_CFG] = &qnm_a1noc_cfg,
	[MASTER_SDCC_4] = &xm_sdc4,
	[MASTER_UFS_MEM] = &xm_ufs_mem,
	[MASTER_USB3_0] = &xm_usb3_0,
	[MASTER_USB3_1] = &xm_usb3_1,
	[SLAVE_A1NOC_SNOC] = &qns_a1noc_snoc,
	[SLAVE_SERVICE_A1NOC] = &srvc_aggre1_noc,
};

static struct qcom_icc_desc sm8350_aggre1_noc = {
	.nodes = aggre1_noc_nodes,
	.num_nodes = ARRAY_SIZE(aggre1_noc_nodes),
	.bcms = aggre1_noc_bcms,
	.num_bcms = ARRAY_SIZE(aggre1_noc_bcms),
};

static struct qcom_icc_bcm *aggre2_noc_bcms[] = {
	&bcm_ce0,
	&bcm_sn5,
	&bcm_sn6,
	&bcm_sn14,
};

static struct qcom_icc_node *aggre2_noc_nodes[] = {
	[MASTER_QDSS_BAM] = &qhm_qdss_bam,
	[MASTER_QUP_0] = &qhm_qup0,
	[MASTER_QUP_2] = &qhm_qup2,
	[MASTER_A2NOC_CFG] = &qnm_a2noc_cfg,
	[MASTER_CRYPTO] = &qxm_crypto,
	[MASTER_IPA] = &qxm_ipa,
	[MASTER_PCIE_0] = &xm_pcie3_0,
	[MASTER_PCIE_1] = &xm_pcie3_1,
	[MASTER_QDSS_ETR] = &xm_qdss_etr,
	[MASTER_SDCC_2] = &xm_sdc2,
	[MASTER_UFS_CARD] = &xm_ufs_card,
	[SLAVE_A2NOC_SNOC] = &qns_a2noc_snoc,
	[SLAVE_ANOC_PCIE_GEM_NOC] = &qns_pcie_mem_noc,
	[SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
};

static struct qcom_icc_desc sm8350_aggre2_noc = {
	.nodes = aggre2_noc_nodes,
	.num_nodes = ARRAY_SIZE(aggre2_noc_nodes),
	.bcms = aggre2_noc_bcms,
	.num_bcms = ARRAY_SIZE(aggre2_noc_bcms),
};

static struct qcom_icc_bcm *config_noc_bcms[] = {
	&bcm_cn0,
	&bcm_cn1,
	&bcm_cn2,
	&bcm_sn3,
	&bcm_sn4,
};

static struct qcom_icc_node *config_noc_nodes[] = {
	[MASTER_GEM_NOC_CNOC] = &qnm_gemnoc_cnoc,
	[MASTER_GEM_NOC_PCIE_SNOC] = &qnm_gemnoc_pcie,
	[MASTER_QDSS_DAP] = &xm_qdss_dap,
	[SLAVE_AHB2PHY_SOUTH] = &qhs_ahb2phy0,
	[SLAVE_AHB2PHY_NORTH] = &qhs_ahb2phy1,
	[SLAVE_AOSS] = &qhs_aoss,
	[SLAVE_APPSS] = &qhs_apss,
	[SLAVE_CAMERA_CFG] = &qhs_camera_cfg,
	[SLAVE_CLK_CTL] = &qhs_clk_ctl,
	[SLAVE_CDSP_CFG] = &qhs_compute_cfg,
	[SLAVE_RBCPR_CX_CFG] = &qhs_cpr_cx,
	[SLAVE_RBCPR_MMCX_CFG] = &qhs_cpr_mmcx,
	[SLAVE_RBCPR_MX_CFG] = &qhs_cpr_mx,
	[SLAVE_CRYPTO_0_CFG] = &qhs_crypto0_cfg,
	[SLAVE_CX_RDPM] = &qhs_cx_rdpm,
	[SLAVE_DCC_CFG] = &qhs_dcc_cfg,
	[SLAVE_DISPLAY_CFG] = &qhs_display_cfg,
	[SLAVE_GFX3D_CFG] = &qhs_gpuss_cfg,
	[SLAVE_HWKM] = &qhs_hwkm,
	[SLAVE_IMEM_CFG] = &qhs_imem_cfg,
	[SLAVE_IPA_CFG] = &qhs_ipa,
	[SLAVE_IPC_ROUTER_CFG] = &qhs_ipc_router,
	[SLAVE_LPASS] = &qhs_lpass_cfg,
	[SLAVE_CNOC_MSS] = &qhs_mss_cfg,
	[SLAVE_MX_RDPM] = &qhs_mx_rdpm,
	[SLAVE_PCIE_0_CFG] = &qhs_pcie0_cfg,
	[SLAVE_PCIE_1_CFG] = &qhs_pcie1_cfg,
	[SLAVE_PDM] = &qhs_pdm,
	[SLAVE_PIMEM_CFG] = &qhs_pimem_cfg,
	[SLAVE_PKA_WRAPPER_CFG] = &qhs_pka_wrapper_cfg,
	[SLAVE_PMU_WRAPPER_CFG] = &qhs_pmu_wrapper_cfg,
	[SLAVE_QDSS_CFG] = &qhs_qdss_cfg,
	[SLAVE_QSPI_0] = &qhs_qspi,
	[SLAVE_QUP_0] = &qhs_qup0,
	[SLAVE_QUP_1] = &qhs_qup1,
	[SLAVE_QUP_2] = &qhs_qup2,
	[SLAVE_SDCC_2] = &qhs_sdc2,
	[SLAVE_SDCC_4] = &qhs_sdc4,
	[SLAVE_SECURITY] = &qhs_security,
	[SLAVE_SPSS_CFG] = &qhs_spss_cfg,
	[SLAVE_TCSR] = &qhs_tcsr,
	[SLAVE_TLMM] = &qhs_tlmm,
	[SLAVE_UFS_CARD_CFG] = &qhs_ufs_card_cfg,
	[SLAVE_UFS_MEM_CFG] = &qhs_ufs_mem_cfg,
	[SLAVE_USB3_0] = &qhs_usb3_0,
	[SLAVE_USB3_1] = &qhs_usb3_1,
	[SLAVE_VENUS_CFG] = &qhs_venus_cfg,
	[SLAVE_VSENSE_CTRL_CFG] = &qhs_vsense_ctrl_cfg,
	[SLAVE_A1NOC_CFG] = &qns_a1_noc_cfg,
	[SLAVE_A2NOC_CFG] = &qns_a2_noc_cfg,
	[SLAVE_DDRSS_CFG] = &qns_ddrss_cfg,
	[SLAVE_CNOC_MNOC_CFG] = &qns_mnoc_cfg,
	[SLAVE_SNOC_CFG] = &qns_snoc_cfg,
	[SLAVE_BOOT_IMEM] = &qxs_boot_imem,
	[SLAVE_IMEM] = &qxs_imem,
	[SLAVE_PIMEM] = &qxs_pimem,
	[SLAVE_SERVICE_CNOC] = &srvc_cnoc,
	[SLAVE_PCIE_0] = &xs_pcie_0,
	[SLAVE_PCIE_1] = &xs_pcie_1,
	[SLAVE_QDSS_STM] = &xs_qdss_stm,
	[SLAVE_TCU] = &xs_sys_tcu_cfg,
};

static struct qcom_icc_desc sm8350_config_noc = {
	.nodes = config_noc_nodes,
	.num_nodes = ARRAY_SIZE(config_noc_nodes),
	.bcms = config_noc_bcms,
	.num_bcms = ARRAY_SIZE(config_noc_bcms),
};

static struct qcom_icc_bcm *dc_noc_bcms[] = {
};

static struct qcom_icc_node *dc_noc_nodes[] = {
	[MASTER_CNOC_DC_NOC] = &qnm_cnoc_dc_noc,
	[SLAVE_LLCC_CFG] = &qhs_llcc,
	[SLAVE_GEM_NOC_CFG] = &qns_gemnoc,
};

static struct qcom_icc_desc sm8350_dc_noc = {
	.nodes = dc_noc_nodes,
	.num_nodes = ARRAY_SIZE(dc_noc_nodes),
	.bcms = dc_noc_bcms,
	.num_bcms = ARRAY_SIZE(dc_noc_bcms),
};

static struct qcom_icc_bcm *gem_noc_bcms[] = {
	&bcm_sh0,
	&bcm_sh2,
	&bcm_sh3,
	&bcm_sh4,
	&bcm_sh0_disp,
};

static struct qcom_icc_node *gem_noc_nodes[] = {
	[MASTER_GPU_TCU] = &alm_gpu_tcu,
	[MASTER_SYS_TCU] = &alm_sys_tcu,
	[MASTER_APPSS_PROC] = &chm_apps,
	[MASTER_COMPUTE_NOC] = &qnm_cmpnoc,
	[MASTER_GEM_NOC_CFG] = &qnm_gemnoc_cfg,
	[MASTER_GFX3D] = &qnm_gpu,
	[MASTER_MNOC_HF_MEM_NOC] = &qnm_mnoc_hf,
	[MASTER_MNOC_SF_MEM_NOC] = &qnm_mnoc_sf,
	[MASTER_ANOC_PCIE_GEM_NOC] = &qnm_pcie,
	[MASTER_SNOC_GC_MEM_NOC] = &qnm_snoc_gc,
	[MASTER_SNOC_SF_MEM_NOC] = &qnm_snoc_sf,
	[SLAVE_MSS_PROC_MS_MPU_CFG] = &qhs_mdsp_ms_mpu_cfg,
	[SLAVE_MCDMA_MS_MPU_CFG] = &qhs_modem_ms_mpu_cfg,
	[SLAVE_GEM_NOC_CNOC] = &qns_gem_noc_cnoc,
	[SLAVE_LLCC] = &qns_llcc,
	[SLAVE_MEM_NOC_PCIE_SNOC] = &qns_pcie,
	[SLAVE_SERVICE_GEM_NOC_1] = &srvc_even_gemnoc,
	[SLAVE_SERVICE_GEM_NOC_2] = &srvc_odd_gemnoc,
	[SLAVE_SERVICE_GEM_NOC] = &srvc_sys_gemnoc,
	[MASTER_MNOC_HF_MEM_NOC_DISP] = &qnm_mnoc_hf_disp,
	[MASTER_MNOC_SF_MEM_NOC_DISP] = &qnm_mnoc_sf_disp,
	[SLAVE_LLCC_DISP] = &qns_llcc_disp,
};

static struct qcom_icc_desc sm8350_gem_noc = {
	.nodes = gem_noc_nodes,
	.num_nodes = ARRAY_SIZE(gem_noc_nodes),
	.bcms = gem_noc_bcms,
	.num_bcms = ARRAY_SIZE(gem_noc_bcms),
};

static struct qcom_icc_bcm *lpass_ag_noc_bcms[] = {
};

static struct qcom_icc_node *lpass_ag_noc_nodes[] = {
	[MASTER_CNOC_LPASS_AG_NOC] = &qhm_config_noc,
	[SLAVE_LPASS_CORE_CFG] = &qhs_lpass_core,
	[SLAVE_LPASS_LPI_CFG] = &qhs_lpass_lpi,
	[SLAVE_LPASS_MPU_CFG] = &qhs_lpass_mpu,
	[SLAVE_LPASS_TOP_CFG] = &qhs_lpass_top,
	[SLAVE_SERVICES_LPASS_AML_NOC] = &srvc_niu_aml_noc,
	[SLAVE_SERVICE_LPASS_AG_NOC] = &srvc_niu_lpass_agnoc,
};

static struct qcom_icc_desc sm8350_lpass_ag_noc = {
	.nodes = lpass_ag_noc_nodes,
	.num_nodes = ARRAY_SIZE(lpass_ag_noc_nodes),
	.bcms = lpass_ag_noc_bcms,
	.num_bcms = ARRAY_SIZE(lpass_ag_noc_bcms),
};

static struct qcom_icc_bcm *mc_virt_bcms[] = {
	&bcm_acv,
	&bcm_mc0,
	&bcm_acv_disp,
	&bcm_mc0_disp,
};

static struct qcom_icc_node *mc_virt_nodes[] = {
	[MASTER_LLCC] = &llcc_mc,
	[SLAVE_EBI1] = &ebi,
	[MASTER_LLCC_DISP] = &llcc_mc_disp,
	[SLAVE_EBI1_DISP] = &ebi_disp,
};

static struct qcom_icc_desc sm8350_mc_virt = {
	.nodes = mc_virt_nodes,
	.num_nodes = ARRAY_SIZE(mc_virt_nodes),
	.bcms = mc_virt_bcms,
	.num_bcms = ARRAY_SIZE(mc_virt_bcms),
};

static struct qcom_icc_bcm *mmss_noc_bcms[] = {
	&bcm_mm0,
	&bcm_mm1,
	&bcm_mm4,
	&bcm_mm5,
	&bcm_mm0_disp,
	&bcm_mm1_disp,
	&bcm_mm4_disp,
	&bcm_mm5_disp,
};

static struct qcom_icc_node *mmss_noc_nodes[] = {
	[MASTER_CAMNOC_HF] = &qnm_camnoc_hf,
	[MASTER_CAMNOC_ICP] = &qnm_camnoc_icp,
	[MASTER_CAMNOC_SF] = &qnm_camnoc_sf,
	[MASTER_CNOC_MNOC_CFG] = &qnm_mnoc_cfg,
	[MASTER_VIDEO_P0] = &qnm_video0,
	[MASTER_VIDEO_P1] = &qnm_video1,
	[MASTER_VIDEO_PROC] = &qnm_video_cvp,
	[MASTER_MDP0] = &qxm_mdp0,
	[MASTER_MDP1] = &qxm_mdp1,
	[MASTER_ROTATOR] = &qxm_rot,
	[SLAVE_MNOC_HF_MEM_NOC] = &qns_mem_noc_hf,
	[SLAVE_MNOC_SF_MEM_NOC] = &qns_mem_noc_sf,
	[SLAVE_SERVICE_MNOC] = &srvc_mnoc,
	[MASTER_MDP0_DISP] = &qxm_mdp0_disp,
	[MASTER_MDP1_DISP] = &qxm_mdp1_disp,
	[MASTER_ROTATOR_DISP] = &qxm_rot_disp,
	[SLAVE_MNOC_HF_MEM_NOC_DISP] = &qns_mem_noc_hf_disp,
	[SLAVE_MNOC_SF_MEM_NOC_DISP] = &qns_mem_noc_sf_disp,
};

static struct qcom_icc_desc sm8350_mmss_noc = {
	.nodes = mmss_noc_nodes,
	.num_nodes = ARRAY_SIZE(mmss_noc_nodes),
	.bcms = mmss_noc_bcms,
	.num_bcms = ARRAY_SIZE(mmss_noc_bcms),
};

static struct qcom_icc_bcm *nsp_noc_bcms[] = {
	&bcm_co0,
	&bcm_co3,
};

static struct qcom_icc_node *nsp_noc_nodes[] = {
	[MASTER_CDSP_NOC_CFG] = &qhm_nsp_noc_config,
	[MASTER_CDSP_PROC] = &qxm_nsp,
	[SLAVE_CDSP_MEM_NOC] = &qns_nsp_gemnoc,
	[SLAVE_SERVICE_NSP_NOC] = &service_nsp_noc,
};

static struct qcom_icc_desc sm8350_compute_noc = {
	.nodes = nsp_noc_nodes,
	.num_nodes = ARRAY_SIZE(nsp_noc_nodes),
	.bcms = nsp_noc_bcms,
	.num_bcms = ARRAY_SIZE(nsp_noc_bcms),
};

static struct qcom_icc_bcm *system_noc_bcms[] = {
	&bcm_sn0,
	&bcm_sn2,
	&bcm_sn7,
	&bcm_sn8,
};

static struct qcom_icc_node *system_noc_nodes[] = {
	[MASTER_A1NOC_SNOC] = &qnm_aggre1_noc,
	[MASTER_A2NOC_SNOC] = &qnm_aggre2_noc,
	[MASTER_SNOC_CFG] = &qnm_snoc_cfg,
	[MASTER_PIMEM] = &qxm_pimem,
	[MASTER_GIC] = &xm_gic,
	[SLAVE_SNOC_GEM_NOC_GC] = &qns_gemnoc_gc,
	[SLAVE_SNOC_GEM_NOC_SF] = &qns_gemnoc_sf,
	[SLAVE_SERVICE_SNOC] = &srvc_snoc,
};

static struct qcom_icc_desc sm8350_system_noc = {
	.nodes = system_noc_nodes,
	.num_nodes = ARRAY_SIZE(system_noc_nodes),
	.bcms = system_noc_bcms,
	.num_bcms = ARRAY_SIZE(system_noc_bcms),
};

static int qnoc_probe(struct platform_device *pdev)
{
	const struct qcom_icc_desc *desc;
	struct icc_onecell_data *data;
	struct icc_provider *provider;
	struct qcom_icc_node **qnodes;
	struct qcom_icc_provider *qp;
	struct icc_node *node;
	size_t num_nodes, i;
	int ret;

	desc = of_device_get_match_data(&pdev->dev);
	if (!desc)
		return -EINVAL;

	qnodes = desc->nodes;
	num_nodes = desc->num_nodes;

	qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
	if (!qp)
		return -ENOMEM;

	data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	provider = &qp->provider;
	provider->dev = &pdev->dev;
	provider->set = qcom_icc_set;
	provider->pre_aggregate = qcom_icc_pre_aggregate;
	provider->aggregate = qcom_icc_aggregate;
	provider->xlate = of_icc_xlate_onecell;
	INIT_LIST_HEAD(&provider->nodes);
	provider->data = data;

	qp->dev = &pdev->dev;
	qp->bcms = desc->bcms;
	qp->num_bcms = desc->num_bcms;

	qp->voter = of_bcm_voter_get(qp->dev, NULL);
	if (IS_ERR(qp->voter))
		return PTR_ERR(qp->voter);

	ret = icc_provider_add(provider);
	if (ret) {
		dev_err(&pdev->dev, "error adding interconnect provider\n");
		return ret;
	}

	for (i = 0; i < qp->num_bcms; i++)
		qcom_icc_bcm_init(qp->bcms[i], &pdev->dev);

	for (i = 0; i < num_nodes; i++) {
		size_t j;

		if (!qnodes[i])
			continue;

		node = icc_node_create(qnodes[i]->id);
		if (IS_ERR(node)) {
			ret = PTR_ERR(node);
			goto err;
		}

		node->name = qnodes[i]->name;
		node->data = qnodes[i];
		icc_node_add(node, provider);

		for (j = 0; j < qnodes[i]->num_links; j++)
			icc_link_create(node, qnodes[i]->links[j]);

		data->nodes[i] = node;
	}
	data->num_nodes = num_nodes;

	platform_set_drvdata(pdev, qp);

	return ret;

err:
	icc_nodes_remove(provider);
	icc_provider_del(provider);
	return ret;
}

static int qnoc_remove(struct platform_device *pdev)
{
	struct qcom_icc_provider *qp = platform_get_drvdata(pdev);

	icc_nodes_remove(&qp->provider);
	return icc_provider_del(&qp->provider);
}

static const struct of_device_id qnoc_of_match[] = {
	{ .compatible = "qcom,sm8350-aggre1-noc", .data = &sm8350_aggre1_noc},
	{ .compatible = "qcom,sm8350-aggre2-noc", .data = &sm8350_aggre2_noc},
	{ .compatible = "qcom,sm8350-config-noc", .data = &sm8350_config_noc},
	{ .compatible = "qcom,sm8350-dc-noc", .data = &sm8350_dc_noc},
	{ .compatible = "qcom,sm8350-gem-noc", .data = &sm8350_gem_noc},
	{ .compatible = "qcom,sm8350-lpass-ag-noc", .data = &sm8350_lpass_ag_noc},
	{ .compatible = "qcom,sm8350-mc-virt", .data = &sm8350_mc_virt},
	{ .compatible = "qcom,sm8350-mmss-noc", .data = &sm8350_mmss_noc},
	{ .compatible = "qcom,sm8350-compute-noc", .data = &sm8350_compute_noc},
	{ .compatible = "qcom,sm8350-system-noc", .data = &sm8350_system_noc},
	{ }
};
MODULE_DEVICE_TABLE(of, qnoc_of_match);

static struct platform_driver qnoc_driver = {
	.probe = qnoc_probe,
	.remove = qnoc_remove,
	.driver = {
		.name = "qnoc-sm8350",
		.of_match_table = qnoc_of_match,
		.sync_state = icc_sync_state,
	},
};
module_platform_driver(qnoc_driver);

MODULE_DESCRIPTION("SM8350 NoC driver");
MODULE_LICENSE("GPL v2");
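
For orientation, a provider registered through qnoc_probe() above is consumed via the generic interconnect framework. The sketch below is illustrative only, not part of this patch: the "sdhc-mem" path name and the bandwidth values are made-up placeholders, while of_icc_get(), icc_set_bw() and icc_put() are the framework API the provider's xlate/set hooks serve.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* minimal sketch: vote bandwidth on a DT-described path, then drop the vote */
static int example_icc_vote(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* "sdhc-mem" is a hypothetical interconnect-names entry in DT */
	path = of_icc_get(dev, "sdhc-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* average/peak bandwidth in kBps; the numbers are arbitrary */
	ret = icc_set_bw(path, 100000, 200000);

	icc_put(path);
	return ret;
}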
@@ -0,0 +1,168 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Qualcomm SM8350 interconnect IDs
 *
 * Copyright (c) 2021, Linaro Limited
 */

#ifndef __DRIVERS_INTERCONNECT_QCOM_SM8350_H
#define __DRIVERS_INTERCONNECT_QCOM_SM8350_H

#define SM8350_MASTER_GPU_TCU			0
#define SM8350_MASTER_SYS_TCU			1
#define SM8350_MASTER_APPSS_PROC		2
#define SM8350_MASTER_LLCC			3
#define SM8350_MASTER_CNOC_LPASS_AG_NOC		4
#define SM8350_MASTER_CDSP_NOC_CFG		5
#define SM8350_MASTER_QDSS_BAM			6
#define SM8350_MASTER_QSPI_0			7
#define SM8350_MASTER_QUP_0			8
#define SM8350_MASTER_QUP_1			9
#define SM8350_MASTER_QUP_2			10
#define SM8350_MASTER_A1NOC_CFG			11
#define SM8350_MASTER_A2NOC_CFG			12
#define SM8350_MASTER_A1NOC_SNOC		13
#define SM8350_MASTER_A2NOC_SNOC		14
#define SM8350_MASTER_CAMNOC_HF			15
#define SM8350_MASTER_CAMNOC_ICP		16
#define SM8350_MASTER_CAMNOC_SF			17
#define SM8350_MASTER_COMPUTE_NOC		18
#define SM8350_MASTER_CNOC_DC_NOC		19
#define SM8350_MASTER_GEM_NOC_CFG		20
#define SM8350_MASTER_GEM_NOC_CNOC		21
#define SM8350_MASTER_GEM_NOC_PCIE_SNOC		22
#define SM8350_MASTER_GFX3D			23
#define SM8350_MASTER_CNOC_MNOC_CFG		24
#define SM8350_MASTER_MNOC_HF_MEM_NOC		25
#define SM8350_MASTER_MNOC_SF_MEM_NOC		26
#define SM8350_MASTER_ANOC_PCIE_GEM_NOC		27
#define SM8350_MASTER_SNOC_CFG			28
#define SM8350_MASTER_SNOC_GC_MEM_NOC		29
#define SM8350_MASTER_SNOC_SF_MEM_NOC		30
#define SM8350_MASTER_VIDEO_P0			31
#define SM8350_MASTER_VIDEO_P1			32
#define SM8350_MASTER_VIDEO_PROC		33
#define SM8350_MASTER_QUP_CORE_0		34
#define SM8350_MASTER_QUP_CORE_1		35
#define SM8350_MASTER_QUP_CORE_2		36
#define SM8350_MASTER_CRYPTO			37
#define SM8350_MASTER_IPA			38
#define SM8350_MASTER_MDP0			39
#define SM8350_MASTER_MDP1			40
#define SM8350_MASTER_CDSP_PROC			41
#define SM8350_MASTER_PIMEM			42
#define SM8350_MASTER_ROTATOR			43
#define SM8350_MASTER_GIC			44
#define SM8350_MASTER_PCIE_0			45
#define SM8350_MASTER_PCIE_1			46
#define SM8350_MASTER_QDSS_DAP			47
#define SM8350_MASTER_QDSS_ETR			48
#define SM8350_MASTER_SDCC_2			49
#define SM8350_MASTER_SDCC_4			50
#define SM8350_MASTER_UFS_CARD			51
#define SM8350_MASTER_UFS_MEM			52
#define SM8350_MASTER_USB3_0			53
#define SM8350_MASTER_USB3_1			54
#define SM8350_SLAVE_EBI1			55
#define SM8350_SLAVE_AHB2PHY_SOUTH		56
#define SM8350_SLAVE_AHB2PHY_NORTH		57
#define SM8350_SLAVE_AOSS			58
#define SM8350_SLAVE_APPSS			59
#define SM8350_SLAVE_CAMERA_CFG			60
#define SM8350_SLAVE_CLK_CTL			61
#define SM8350_SLAVE_CDSP_CFG			62
#define SM8350_SLAVE_RBCPR_CX_CFG		63
#define SM8350_SLAVE_RBCPR_MMCX_CFG		64
#define SM8350_SLAVE_RBCPR_MX_CFG		65
#define SM8350_SLAVE_CRYPTO_0_CFG		66
#define SM8350_SLAVE_CX_RDPM			67
#define SM8350_SLAVE_DCC_CFG			68
#define SM8350_SLAVE_DISPLAY_CFG		69
#define SM8350_SLAVE_GFX3D_CFG			70
#define SM8350_SLAVE_HWKM			71
#define SM8350_SLAVE_IMEM_CFG			72
#define SM8350_SLAVE_IPA_CFG			73
#define SM8350_SLAVE_IPC_ROUTER_CFG		74
#define SM8350_SLAVE_LLCC_CFG			75
#define SM8350_SLAVE_LPASS			76
#define SM8350_SLAVE_LPASS_CORE_CFG		77
#define SM8350_SLAVE_LPASS_LPI_CFG		78
#define SM8350_SLAVE_LPASS_MPU_CFG		79
#define SM8350_SLAVE_LPASS_TOP_CFG		80
#define SM8350_SLAVE_MSS_PROC_MS_MPU_CFG	81
#define SM8350_SLAVE_MCDMA_MS_MPU_CFG		82
#define SM8350_SLAVE_CNOC_MSS			83
#define SM8350_SLAVE_MX_RDPM			84
#define SM8350_SLAVE_PCIE_0_CFG			85
#define SM8350_SLAVE_PCIE_1_CFG			86
#define SM8350_SLAVE_PDM			87
#define SM8350_SLAVE_PIMEM_CFG			88
#define SM8350_SLAVE_PKA_WRAPPER_CFG		89
#define SM8350_SLAVE_PMU_WRAPPER_CFG		90
#define SM8350_SLAVE_QDSS_CFG			91
#define SM8350_SLAVE_QSPI_0			92
#define SM8350_SLAVE_QUP_0			93
#define SM8350_SLAVE_QUP_1			94
#define SM8350_SLAVE_QUP_2			95
#define SM8350_SLAVE_SDCC_2			96
#define SM8350_SLAVE_SDCC_4			97
#define SM8350_SLAVE_SECURITY			98
#define SM8350_SLAVE_SPSS_CFG			99
#define SM8350_SLAVE_TCSR			100
#define SM8350_SLAVE_TLMM			101
#define SM8350_SLAVE_UFS_CARD_CFG		102
#define SM8350_SLAVE_UFS_MEM_CFG		103
#define SM8350_SLAVE_USB3_0			104
#define SM8350_SLAVE_USB3_1			105
#define SM8350_SLAVE_VENUS_CFG			106
#define SM8350_SLAVE_VSENSE_CTRL_CFG		107
#define SM8350_SLAVE_A1NOC_CFG			108
#define SM8350_SLAVE_A1NOC_SNOC			109
#define SM8350_SLAVE_A2NOC_CFG			110
#define SM8350_SLAVE_A2NOC_SNOC			111
#define SM8350_SLAVE_DDRSS_CFG			112
#define SM8350_SLAVE_GEM_NOC_CNOC		113
#define SM8350_SLAVE_GEM_NOC_CFG		114
#define SM8350_SLAVE_SNOC_GEM_NOC_GC		115
#define SM8350_SLAVE_SNOC_GEM_NOC_SF		116
#define SM8350_SLAVE_LLCC			117
#define SM8350_SLAVE_MNOC_HF_MEM_NOC		118
#define SM8350_SLAVE_MNOC_SF_MEM_NOC		119
#define SM8350_SLAVE_CNOC_MNOC_CFG		120
#define SM8350_SLAVE_CDSP_MEM_NOC		121
#define SM8350_SLAVE_MEM_NOC_PCIE_SNOC		122
#define SM8350_SLAVE_ANOC_PCIE_GEM_NOC		123
#define SM8350_SLAVE_SNOC_CFG			124
#define SM8350_SLAVE_QUP_CORE_0			125
#define SM8350_SLAVE_QUP_CORE_1			126
#define SM8350_SLAVE_QUP_CORE_2			127
#define SM8350_SLAVE_BOOT_IMEM			128
#define SM8350_SLAVE_IMEM			129
#define SM8350_SLAVE_PIMEM			130
#define SM8350_SLAVE_SERVICE_NSP_NOC		131
#define SM8350_SLAVE_SERVICE_A1NOC		132
#define SM8350_SLAVE_SERVICE_A2NOC		133
#define SM8350_SLAVE_SERVICE_CNOC		134
#define SM8350_SLAVE_SERVICE_GEM_NOC_1		135
#define SM8350_SLAVE_SERVICE_MNOC		136
#define SM8350_SLAVE_SERVICES_LPASS_AML_NOC	137
#define SM8350_SLAVE_SERVICE_LPASS_AG_NOC	138
#define SM8350_SLAVE_SERVICE_GEM_NOC_2		139
#define SM8350_SLAVE_SERVICE_SNOC		140
#define SM8350_SLAVE_SERVICE_GEM_NOC		141
#define SM8350_SLAVE_PCIE_0			142
#define SM8350_SLAVE_PCIE_1			143
#define SM8350_SLAVE_QDSS_STM			144
#define SM8350_SLAVE_TCU			145
#define SM8350_MASTER_LLCC_DISP			146
#define SM8350_MASTER_MNOC_HF_MEM_NOC_DISP	147
#define SM8350_MASTER_MNOC_SF_MEM_NOC_DISP	148
#define SM8350_MASTER_MDP0_DISP			149
#define SM8350_MASTER_MDP1_DISP			150
#define SM8350_MASTER_ROTATOR_DISP		151
#define SM8350_SLAVE_EBI1_DISP			152
#define SM8350_SLAVE_LLCC_DISP			153
#define SM8350_SLAVE_MNOC_HF_MEM_NOC_DISP	154
#define SM8350_SLAVE_MNOC_SF_MEM_NOC_DISP	155

#endif
@@ -402,6 +402,16 @@ config SRAM
 config SRAM_EXEC
 	bool
 
+config DW_XDATA_PCIE
+	depends on PCI
+	tristate "Synopsys DesignWare xData PCIe driver"
+	help
+	  This driver allows controlling Synopsys DesignWare PCIe traffic
+	  generator IP also known as xData, present in Synopsys DesignWare
+	  PCIe Endpoint prototype.
+
+	  If unsure, say N.
+
 config PCI_ENDPOINT_TEST
 	depends on PCI
 	select CRC32
@@ -427,14 +437,6 @@ config MISC_RTSX
 	tristate
 	default MISC_RTSX_PCI || MISC_RTSX_USB
 
-config PVPANIC
-	tristate "pvpanic device support"
-	depends on HAS_IOMEM && (ACPI || OF)
-	help
-	  This driver provides support for the pvpanic device. pvpanic is
-	  a paravirtualized device provided by QEMU; it lets a virtual machine
-	  (guest) communicate panic events to the host.
-
 config HISI_HIKEY_USB
 	tristate "USB GPIO Hub on HiSilicon Hikey 960/970 Platform"
 	depends on (OF && GPIOLIB) || COMPILE_TEST
@@ -461,4 +463,5 @@ source "drivers/misc/bcm-vk/Kconfig"
 source "drivers/misc/cardreader/Kconfig"
 source "drivers/misc/habanalabs/Kconfig"
 source "drivers/misc/uacce/Kconfig"
+source "drivers/misc/pvpanic/Kconfig"
 endmenu
@@ -47,11 +47,12 @@ obj-$(CONFIG_SRAM_EXEC)		+= sram-exec.o
 obj-$(CONFIG_GENWQE)		+= genwqe/
 obj-$(CONFIG_ECHO)		+= echo/
 obj-$(CONFIG_CXL_BASE)		+= cxl/
+obj-$(CONFIG_DW_XDATA_PCIE)	+= dw-xdata-pcie.o
 obj-$(CONFIG_PCI_ENDPOINT_TEST)	+= pci_endpoint_test.o
 obj-$(CONFIG_OCXL)		+= ocxl/
 obj-$(CONFIG_BCM_VK)		+= bcm-vk/
 obj-y				+= cardreader/
-obj-$(CONFIG_PVPANIC)		+= pvpanic.o
+obj-$(CONFIG_PVPANIC)		+= pvpanic/
 obj-$(CONFIG_HABANA_AI)		+= habanalabs/
 obj-$(CONFIG_UACCE)		+= uacce/
 obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
@@ -139,6 +139,9 @@ static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg)
 			value = dpot_read_r8d8(dpot,
 				DPOT_AD5291_READ_RDAC << 2);
 
+			if (value < 0)
+				return value;
+
 			if (dpot->uid == DPOT_UID(AD5291_ID))
 				value = value >> 2;
 
@@ -52,7 +52,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
 	 * can always access it when dereferenced from IDR. For the same
 	 * reason, the segment table is only destroyed after the context is
 	 * removed from the IDR. Access to this in the IOCTL is protected by
-	 * Linux filesytem symantics (can't IOCTL until open is complete).
+	 * Linux filesystem semantics (can't IOCTL until open is complete).
 	 */
 	i = cxl_alloc_sst(ctx);
 	if (i)
@@ -200,7 +200,7 @@ static struct mm_struct *get_mem_context(struct cxl_context *ctx)
 	if (ctx->mm == NULL)
 		return NULL;
 
-	if (!atomic_inc_not_zero(&ctx->mm->mm_users))
+	if (!mmget_not_zero(ctx->mm))
 		return NULL;
 
 	return ctx->mm;
@@ -0,0 +1,420 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2020 Synopsys, Inc. and/or its affiliates.
 * Synopsys DesignWare xData driver
 *
 * Author: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
 */

#include <linux/miscdevice.h>
#include <linux/bitfield.h>
#include <linux/pci-epf.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/bitops.h>
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/pci.h>

#define DW_XDATA_DRIVER_NAME		"dw-xdata-pcie"

#define DW_XDATA_EP_MEM_OFFSET		0x8000000

static DEFINE_IDA(xdata_ida);

#define STATUS_DONE			BIT(0)

#define CONTROL_DOORBELL		BIT(0)
#define CONTROL_IS_WRITE		BIT(1)
#define CONTROL_LENGTH(a)		FIELD_PREP(GENMASK(13, 2), a)
#define CONTROL_PATTERN_INC		BIT(16)
#define CONTROL_NO_ADDR_INC		BIT(18)

#define XPERF_CONTROL_ENABLE		BIT(5)

#define BURST_REPEAT			BIT(31)
#define BURST_VALUE			0x1001

#define PATTERN_VALUE			0x0

struct dw_xdata_regs {
	u32 addr_lsb;			/* 0x000 */
	u32 addr_msb;			/* 0x004 */
	u32 burst_cnt;			/* 0x008 */
	u32 control;			/* 0x00c */
	u32 pattern;			/* 0x010 */
	u32 status;			/* 0x014 */
	u32 RAM_addr;			/* 0x018 */
	u32 RAM_port;			/* 0x01c */
	u32 _reserved0[14];		/* 0x020..0x054 */
	u32 perf_control;		/* 0x058 */
	u32 _reserved1[41];		/* 0x05c..0x0fc */
	u32 wr_cnt_lsb;			/* 0x100 */
	u32 wr_cnt_msb;			/* 0x104 */
	u32 rd_cnt_lsb;			/* 0x108 */
	u32 rd_cnt_msb;			/* 0x10c */
} __packed;
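
As an aside, the offset annotations in dw_xdata_regs can be cross-checked at build time. The snippet below is a host-side C11 sketch, not part of the patch: it mirrors the layout with plain uint32_t fields (a hypothetical struct name) and asserts two of the annotated offsets.

#include <stddef.h>
#include <stdint.h>

/* hypothetical host-side mirror of struct dw_xdata_regs above */
struct xdata_regs_check {
	uint32_t addr_lsb, addr_msb, burst_cnt, control, pattern, status;
	uint32_t ram_addr, ram_port;		/* 0x018, 0x01c */
	uint32_t reserved0[14];			/* 0x020..0x054 */
	uint32_t perf_control;			/* 0x058 */
	uint32_t reserved1[41];			/* 0x05c..0x0fc */
	uint32_t wr_cnt_lsb, wr_cnt_msb;	/* 0x100, 0x104 */
	uint32_t rd_cnt_lsb, rd_cnt_msb;	/* 0x108, 0x10c */
};

_Static_assert(offsetof(struct xdata_regs_check, perf_control) == 0x58,
	       "perf_control must sit at 0x58");
_Static_assert(offsetof(struct xdata_regs_check, wr_cnt_lsb) == 0x100,
	       "wr_cnt_lsb must sit at 0x100");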

struct dw_xdata_region {
	phys_addr_t paddr;		/* physical address */
	void __iomem *vaddr;		/* virtual address */
};

struct dw_xdata {
	struct dw_xdata_region rg_region;	/* registers */
	size_t max_wr_len;			/* max wr xfer len */
	size_t max_rd_len;			/* max rd xfer len */
	struct mutex mutex;
	struct pci_dev *pdev;
	struct miscdevice misc_dev;
};

static inline struct dw_xdata_regs __iomem *__dw_regs(struct dw_xdata *dw)
{
	return dw->rg_region.vaddr;
}

static void dw_xdata_stop(struct dw_xdata *dw)
{
	u32 burst;

	mutex_lock(&dw->mutex);

	burst = readl(&(__dw_regs(dw)->burst_cnt));

	if (burst & BURST_REPEAT) {
		burst &= ~(u32)BURST_REPEAT;
		writel(burst, &(__dw_regs(dw)->burst_cnt));
	}

	mutex_unlock(&dw->mutex);
}

static void dw_xdata_start(struct dw_xdata *dw, bool write)
{
	struct device *dev = &dw->pdev->dev;
	u32 control, status;

	/* Stop first if xfer in progress */
	dw_xdata_stop(dw);

	mutex_lock(&dw->mutex);

	/* Clear status register */
	writel(0x0, &(__dw_regs(dw)->status));

	/* Burst count register set for continuous until stopped */
	writel(BURST_REPEAT | BURST_VALUE, &(__dw_regs(dw)->burst_cnt));

	/* Pattern register */
	writel(PATTERN_VALUE, &(__dw_regs(dw)->pattern));

	/* Control register */
	control = CONTROL_DOORBELL | CONTROL_PATTERN_INC | CONTROL_NO_ADDR_INC;
	if (write) {
		control |= CONTROL_IS_WRITE;
		control |= CONTROL_LENGTH(dw->max_wr_len);
	} else {
		control |= CONTROL_LENGTH(dw->max_rd_len);
	}
	writel(control, &(__dw_regs(dw)->control));

	/*
	 * The xData HW block needs about 100 ms to initiate the traffic
	 * generation according to this HW block's datasheet.
	 */
	usleep_range(100, 150);

	status = readl(&(__dw_regs(dw)->status));

	mutex_unlock(&dw->mutex);

	if (!(status & STATUS_DONE))
		dev_dbg(dev, "xData: started %s direction\n",
			write ? "write" : "read");
}

static void dw_xdata_perf_meas(struct dw_xdata *dw, u64 *data, bool write)
{
	if (write) {
		*data = readl(&(__dw_regs(dw)->wr_cnt_msb));
		*data <<= 32;
		*data |= readl(&(__dw_regs(dw)->wr_cnt_lsb));
	} else {
		*data = readl(&(__dw_regs(dw)->rd_cnt_msb));
		*data <<= 32;
		*data |= readl(&(__dw_regs(dw)->rd_cnt_lsb));
	}
}

static u64 dw_xdata_perf_diff(u64 *m1, u64 *m2, u64 time)
{
	u64 rate = (*m1 - *m2);

	rate *= (1000 * 1000 * 1000);
	rate >>= 20;
	rate = DIV_ROUND_CLOSEST_ULL(rate, time);

	return rate;
}
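
dw_xdata_perf_diff() turns a raw counter delta over a nanosecond window into a rate: scale to per-second, shift right by 20 bits (divide by 2^20), then divide by the elapsed time with rounding. The standalone sketch below is illustrative only; it assumes the hardware counters count bytes (so the result reads as MiB/s, matching the "MB/s" debug print in dw_xdata_perf() further down), and it open-codes DIV_ROUND_CLOSEST_ULL.

#include <stdint.h>
#include <stdio.h>

/* host-side mirror of dw_xdata_perf_diff(), with the units spelled out */
static uint64_t perf_diff(uint64_t m1, uint64_t m2, uint64_t time_ns)
{
	uint64_t rate = m1 - m2;

	rate *= 1000ULL * 1000 * 1000;		/* per-nanosecond -> per-second */
	rate >>= 20;				/* bytes -> MiB (assumption) */
	return (rate + time_ns / 2) / time_ns;	/* DIV_ROUND_CLOSEST_ULL */
}

int main(void)
{
	/* e.g. 200 MiB moved during the driver's ~100 ms sampling window */
	printf("%llu MiB/s\n",
	       (unsigned long long)perf_diff(200ULL << 20, 0, 100000000ULL));
	return 0;	/* prints 2000, i.e. 200 MiB per 0.1 s */
}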

static void dw_xdata_perf(struct dw_xdata *dw, u64 *rate, bool write)
{
	struct device *dev = &dw->pdev->dev;
	u64 data[2], time[2], diff;

	mutex_lock(&dw->mutex);

	/* First acquisition of current count frames */
	writel(0x0, &(__dw_regs(dw)->perf_control));
	dw_xdata_perf_meas(dw, &data[0], write);
	time[0] = jiffies;
	writel((u32)XPERF_CONTROL_ENABLE, &(__dw_regs(dw)->perf_control));

	/*
	 * Wait 100ms between the 1st count frame acquisition and the 2nd
	 * count frame acquisition, in order to calculate the speed later
	 */
	mdelay(100);

	/* Second acquisition of current count frames */
	writel(0x0, &(__dw_regs(dw)->perf_control));
	dw_xdata_perf_meas(dw, &data[1], write);
	time[1] = jiffies;
	writel((u32)XPERF_CONTROL_ENABLE, &(__dw_regs(dw)->perf_control));

	/*
	 * Speed calculation
	 *
	 * rate = (2nd count frames - 1st count frames) / (time elapsed)
	 */
	diff = jiffies_to_nsecs(time[1] - time[0]);
	*rate = dw_xdata_perf_diff(&data[1], &data[0], diff);

	mutex_unlock(&dw->mutex);

	dev_dbg(dev, "xData: time=%llu us, %s=%llu MB/s\n",
		diff, write ? "write" : "read", *rate);
}

static struct dw_xdata *misc_dev_to_dw(struct miscdevice *misc_dev)
{
	return container_of(misc_dev, struct dw_xdata, misc_dev);
}

static ssize_t write_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct miscdevice *misc_dev = dev_get_drvdata(dev);
	struct dw_xdata *dw = misc_dev_to_dw(misc_dev);
	u64 rate;

	dw_xdata_perf(dw, &rate, true);

	return sysfs_emit(buf, "%llu\n", rate);
}

static ssize_t write_store(struct device *dev, struct device_attribute *attr,
			   const char *buf, size_t size)
{
	struct miscdevice *misc_dev = dev_get_drvdata(dev);
	struct dw_xdata *dw = misc_dev_to_dw(misc_dev);
	bool enabled;
	int ret;

	ret = kstrtobool(buf, &enabled);
	if (ret < 0)
		return ret;

	if (enabled) {
		dev_dbg(dev, "xData: requested write transfer\n");
		dw_xdata_start(dw, true);
	} else {
		dev_dbg(dev, "xData: requested stop transfer\n");
		dw_xdata_stop(dw);
	}

	return size;
}

static DEVICE_ATTR_RW(write);

static ssize_t read_show(struct device *dev, struct device_attribute *attr,
			 char *buf)
{
	struct miscdevice *misc_dev = dev_get_drvdata(dev);
	struct dw_xdata *dw = misc_dev_to_dw(misc_dev);
	u64 rate;

	dw_xdata_perf(dw, &rate, false);

	return sysfs_emit(buf, "%llu\n", rate);
}

static ssize_t read_store(struct device *dev, struct device_attribute *attr,
			  const char *buf, size_t size)
{
	struct miscdevice *misc_dev = dev_get_drvdata(dev);
	struct dw_xdata *dw = misc_dev_to_dw(misc_dev);
	bool enabled;
	int ret;

	ret = kstrtobool(buf, &enabled);
	if (ret < 0)
		return ret;

	if (enabled) {
		dev_dbg(dev, "xData: requested read transfer\n");
		dw_xdata_start(dw, false);
	} else {
		dev_dbg(dev, "xData: requested stop transfer\n");
		dw_xdata_stop(dw);
	}

	return size;
}

static DEVICE_ATTR_RW(read);

static struct attribute *xdata_attrs[] = {
	&dev_attr_write.attr,
	&dev_attr_read.attr,
	NULL,
};

ATTRIBUTE_GROUPS(xdata);

static int dw_xdata_pcie_probe(struct pci_dev *pdev,
			       const struct pci_device_id *pid)
{
	struct device *dev = &pdev->dev;
	struct dw_xdata *dw;
	char name[24];
	u64 addr;
	int err;
	int id;

	/* Enable PCI device */
	err = pcim_enable_device(pdev);
	if (err) {
		dev_err(dev, "enabling device failed\n");
		return err;
	}

	/* Mapping PCI BAR regions */
	err = pcim_iomap_regions(pdev, BIT(BAR_0), pci_name(pdev));
	if (err) {
		dev_err(dev, "xData BAR I/O remapping failed\n");
		return err;
	}

	pci_set_master(pdev);

	/* Allocate memory */
	dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
	if (!dw)
		return -ENOMEM;

	/* Data structure initialization */
	mutex_init(&dw->mutex);

	dw->rg_region.vaddr = pcim_iomap_table(pdev)[BAR_0];
	if (!dw->rg_region.vaddr)
		return -ENOMEM;

	dw->rg_region.paddr = pdev->resource[BAR_0].start;

	dw->max_wr_len = pcie_get_mps(pdev);
	dw->max_wr_len >>= 2;

	dw->max_rd_len = pcie_get_readrq(pdev);
	dw->max_rd_len >>= 2;

	dw->pdev = pdev;

	id = ida_simple_get(&xdata_ida, 0, 0, GFP_KERNEL);
	if (id < 0) {
		dev_err(dev, "xData: unable to get id\n");
		return id;
	}

	snprintf(name, sizeof(name), DW_XDATA_DRIVER_NAME ".%d", id);
	dw->misc_dev.name = kstrdup(name, GFP_KERNEL);
	if (!dw->misc_dev.name) {
		err = -ENOMEM;
		goto err_ida_remove;
	}

	dw->misc_dev.minor = MISC_DYNAMIC_MINOR;
	dw->misc_dev.parent = dev;
	dw->misc_dev.groups = xdata_groups;

	writel(0x0, &(__dw_regs(dw)->RAM_addr));
	writel(0x0, &(__dw_regs(dw)->RAM_port));

	addr = dw->rg_region.paddr + DW_XDATA_EP_MEM_OFFSET;
	writel(lower_32_bits(addr), &(__dw_regs(dw)->addr_lsb));
	writel(upper_32_bits(addr), &(__dw_regs(dw)->addr_msb));
	dev_dbg(dev, "xData: target address = 0x%.16llx\n", addr);

	dev_dbg(dev, "xData: wr_len = %zu, rd_len = %zu\n",
		dw->max_wr_len * 4, dw->max_rd_len * 4);

	/* Saving data structure reference */
	pci_set_drvdata(pdev, dw);

	/* Register misc device */
	err = misc_register(&dw->misc_dev);
	if (err) {
		dev_err(dev, "xData: failed to register device\n");
		goto err_kfree_name;
	}

	return 0;

err_kfree_name:
	kfree(dw->misc_dev.name);

err_ida_remove:
	ida_simple_remove(&xdata_ida, id);

	return err;
}

static void dw_xdata_pcie_remove(struct pci_dev *pdev)
{
	struct dw_xdata *dw = pci_get_drvdata(pdev);
	int id;

	if (sscanf(dw->misc_dev.name, DW_XDATA_DRIVER_NAME ".%d", &id) != 1)
		return;

	if (id < 0)
		return;

	dw_xdata_stop(dw);
	misc_deregister(&dw->misc_dev);
	kfree(dw->misc_dev.name);
	ida_simple_remove(&xdata_ida, id);
}

static const struct pci_device_id dw_xdata_pcie_id_table[] = {
	{ PCI_DEVICE_DATA(SYNOPSYS, EDDA, NULL) },
	{ }
};
MODULE_DEVICE_TABLE(pci, dw_xdata_pcie_id_table);

static struct pci_driver dw_xdata_pcie_driver = {
	.name		= DW_XDATA_DRIVER_NAME,
	.id_table	= dw_xdata_pcie_id_table,
	.probe		= dw_xdata_pcie_probe,
	.remove		= dw_xdata_pcie_remove,
};

module_pci_driver(dw_xdata_pcie_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Synopsys DesignWare xData PCIe driver");
MODULE_AUTHOR("Gustavo Pimentel <gustavo.pimentel@synopsys.com>");
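
The write/read sysfs attributes above are the driver's entire user interface: storing 1/0 starts/stops traffic generation, and a read triggers a ~100 ms measurement and reports the rate. The userspace sketch below is illustrative only; the attribute path is an assumption (the misc device registers as dw-xdata-pcie.<id>, so its attribute group should surface under /sys/class/misc/).

#include <stdio.h>
#include <unistd.h>

/* assumed attribute path for instance 0; the id comes from the probe-time IDA */
#define XDATA_WRITE_ATTR "/sys/class/misc/dw-xdata-pcie.0/write"

static int put_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	char rate[32];
	FILE *f;

	if (put_str(XDATA_WRITE_ATTR, "1"))	/* start write traffic */
		return 1;

	sleep(1);				/* let the generator run */

	f = fopen(XDATA_WRITE_ATTR, "r");	/* a read triggers a measurement */
	if (f && fgets(rate, sizeof(rate), f))
		printf("write rate (MB/s): %s", rate);
	if (f)
		fclose(f);

	return put_str(XDATA_WRITE_ATTR, "0");	/* stop the generator */
}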
@@ -316,7 +316,7 @@ static int enqueue_ddcb(struct genwqe_dev *cd, struct ddcb_queue *queue,
 
 /**
  * copy_ddcb_results() - Copy output state from real DDCB to request
- * @req:        pointer to requsted DDCB parameters
+ * @req:        pointer to requested DDCB parameters
  * @ddcb_no:    pointer to ddcb number being tapped
  *
  * Copy DDCB ASV to request struct. There is no endian
@@ -356,7 +356,7 @@ static void copy_ddcb_results(struct ddcb_requ *req, int ddcb_no)
 }
 
 /**
- * genwqe_check_ddcb_queue() - Checks DDCB queue for completed work equests.
+ * genwqe_check_ddcb_queue() - Checks DDCB queue for completed work requests.
  * @cd:         pointer to genwqe device descriptor
  * @queue:      queue to be checked
  *
@@ -498,7 +498,7 @@ int __genwqe_wait_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req)
 
 	/*
 	 * We need to distinguish 3 cases here:
-	 *   1. rc == 0              timeout occured
+	 *   1. rc == 0              timeout occurred
 	 *   2. rc == -ERESTARTSYS   signal received
 	 *   3. rc > 0               remaining jiffies condition is true
 	 */
@@ -982,7 +982,7 @@ static int genwqe_next_ddcb_ready(struct genwqe_dev *cd)
 
 	spin_lock_irqsave(&queue->ddcb_lock, flags);
 
-	if (queue_empty(queue)) { /* emtpy queue */
+	if (queue_empty(queue)) { /* empty queue */
 		spin_unlock_irqrestore(&queue->ddcb_lock, flags);
 		return 0;
 	}
@@ -1002,7 +1002,7 @@ static int genwqe_next_ddcb_ready(struct genwqe_dev *cd)
  * @cd: pointer to genwqe device descriptor
  *
  * Keep track on the number of DDCBs which ware currently in the
- * queue. This is needed for statistics as well as conditon if we want
+ * queue. This is needed for statistics as well as condition if we want
 * to wait or better do polling in case of no interrupts available.
 */
 int genwqe_ddcbs_in_flight(struct genwqe_dev *cd)
@@ -181,7 +181,7 @@ static void cb_release(struct kref *ref)
 static struct hl_cb *hl_cb_alloc(struct hl_device *hdev, u32 cb_size,
 					int ctx_id, bool internal_cb)
 {
-	struct hl_cb *cb;
+	struct hl_cb *cb = NULL;
 	u32 cb_offset;
 	void *p;
 
@@ -193,9 +193,10 @@ static struct hl_cb *hl_cb_alloc(struct hl_device *hdev, u32 cb_size,
 	 * the kernel's copy. Hence, we must never sleep in this code section
 	 * and must use GFP_ATOMIC for all memory allocations.
 	 */
-	if (ctx_id == HL_KERNEL_ASID_ID)
+	if (ctx_id == HL_KERNEL_ASID_ID && !hdev->disabled)
 		cb = kzalloc(sizeof(*cb), GFP_ATOMIC);
-	else
+
+	if (!cb)
 		cb = kzalloc(sizeof(*cb), GFP_KERNEL);
 
 	if (!cb)
@@ -214,6 +215,9 @@ static struct hl_cb *hl_cb_alloc(struct hl_device *hdev, u32 cb_size,
 	} else if (ctx_id == HL_KERNEL_ASID_ID) {
 		p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size,
 						&cb->bus_address, GFP_ATOMIC);
+		if (!p)
+			p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev,
+					cb_size, &cb->bus_address, GFP_KERNEL);
 	} else {
 		p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev, cb_size,
 						&cb->bus_address,
@@ -310,6 +314,8 @@ int hl_cb_create(struct hl_device *hdev, struct hl_cb_mgr *mgr,
 
 	spin_lock(&mgr->cb_lock);
 	rc = idr_alloc(&mgr->cb_handles, cb, 1, 0, GFP_ATOMIC);
+	if (rc < 0)
+		rc = idr_alloc(&mgr->cb_handles, cb, 1, 0, GFP_KERNEL);
 	spin_unlock(&mgr->cb_lock);
 
 	if (rc < 0) {
@@ -84,6 +84,38 @@ int hl_gen_sob_mask(u16 sob_base, u8 sob_mask, u8 *mask)
 	return 0;
 }
 
+static void sob_reset_work(struct work_struct *work)
+{
+	struct hl_cs_compl *hl_cs_cmpl =
+		container_of(work, struct hl_cs_compl, sob_reset_work);
+	struct hl_device *hdev = hl_cs_cmpl->hdev;
+
+	/*
+	 * A signal CS can get completion while the corresponding wait
+	 * for signal CS is on its way to the PQ. The wait for signal CS
+	 * will get stuck if the signal CS incremented the SOB to its
+	 * max value and there are no pending (submitted) waits on this
+	 * SOB.
+	 * We do the following to avoid this situation:
+	 * 1. The wait for signal CS must get a ref for the signal CS as
+	 *    soon as possible in cs_ioctl_signal_wait() and put it
+	 *    before being submitted to the PQ but after it incremented
+	 *    the SOB refcnt in init_signal_wait_cs().
+	 * 2. Signal/Wait for signal CS will decrement the SOB refcnt
+	 *    here.
+	 * These two measures guarantee that the wait for signal CS will
+	 * reset the SOB upon completion rather than the signal CS and
+	 * hence the above scenario is avoided.
+	 */
+	kref_put(&hl_cs_cmpl->hw_sob->kref, hl_sob_reset);
+
+	if (hl_cs_cmpl->type == CS_TYPE_COLLECTIVE_WAIT)
+		hdev->asic_funcs->reset_sob_group(hdev,
+				hl_cs_cmpl->sob_group);
+
+	kfree(hl_cs_cmpl);
+}
+
 static void hl_fence_release(struct kref *kref)
 {
 	struct hl_fence *fence =
@@ -109,28 +141,9 @@ static void hl_fence_release(struct kref *kref)
 			hl_cs_cmpl->hw_sob->sob_id,
 			hl_cs_cmpl->sob_val);
 
-		/*
-		 * A signal CS can get completion while the corresponding wait
-		 * for signal CS is on its way to the PQ. The wait for signal CS
-		 * will get stuck if the signal CS incremented the SOB to its
-		 * max value and there are no pending (submitted) waits on this
-		 * SOB.
-		 * We do the following to void this situation:
-		 * 1. The wait for signal CS must get a ref for the signal CS as
-		 *    soon as possible in cs_ioctl_signal_wait() and put it
-		 *    before being submitted to the PQ but after it incremented
-		 *    the SOB refcnt in init_signal_wait_cs().
-		 * 2. Signal/Wait for signal CS will decrement the SOB refcnt
-		 *    here.
-		 * These two measures guarantee that the wait for signal CS will
-		 * reset the SOB upon completion rather than the signal CS and
-		 * hence the above scenario is avoided.
-		 */
-		kref_put(&hl_cs_cmpl->hw_sob->kref, hl_sob_reset);
+		queue_work(hdev->sob_reset_wq, &hl_cs_cmpl->sob_reset_work);
 
-		if (hl_cs_cmpl->type == CS_TYPE_COLLECTIVE_WAIT)
-			hdev->asic_funcs->reset_sob_group(hdev,
-					hl_cs_cmpl->sob_group);
 		return;
 	}
 
 free:
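
The point of the two hunks above: hl_fence_release() can run in atomic context, while tearing down a SOB may sleep, so the final kref_put() and group reset are moved out of the release path into the driver's sob_reset_wq workqueue (which hl_cs_rollback_all() flushes below). A generic, stripped-down sketch of this deferred-release pattern follows — the names are illustrative, not habanalabs API:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* illustrative object whose final teardown may sleep */
struct demo_obj {
	struct kref refcount;
	struct work_struct release_work;	/* INIT_WORK() at allocation */
	struct workqueue_struct *wq;		/* flushed before teardown */
};

static void demo_release_work(struct work_struct *work)
{
	struct demo_obj *obj = container_of(work, struct demo_obj,
						release_work);

	/* process context: taking mutexes or touching queues is fine here */
	kfree(obj);
}

static void demo_release(struct kref *kref)
{
	struct demo_obj *obj = container_of(kref, struct demo_obj, refcount);

	/* may be called from atomic context, so only queue the real work */
	queue_work(obj->wq, &obj->release_work);
}

static void demo_put(struct demo_obj *obj)
{
	kref_put(&obj->refcount, demo_release);
}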
@ -454,8 +467,7 @@ static void cs_handle_tdr(struct hl_device *hdev, struct hl_cs *cs)
|
|||
|
||||
if (next_entry_found && !next->tdr_active) {
|
||||
next->tdr_active = true;
|
||||
schedule_delayed_work(&next->work_tdr,
|
||||
hdev->timeout_jiffies);
|
||||
schedule_delayed_work(&next->work_tdr, next->timeout_jiffies);
|
||||
}
|
||||
|
||||
spin_unlock(&hdev->cs_mirror_lock);
|
||||
|
@ -492,24 +504,6 @@ static void cs_do_release(struct kref *ref)
|
|||
goto out;
|
||||
}
|
||||
|
||||
hdev->asic_funcs->hw_queues_lock(hdev);
|
||||
|
||||
hdev->cs_active_cnt--;
|
||||
if (!hdev->cs_active_cnt) {
|
||||
struct hl_device_idle_busy_ts *ts;
|
||||
|
||||
ts = &hdev->idle_busy_ts_arr[hdev->idle_busy_ts_idx++];
|
||||
ts->busy_to_idle_ts = ktime_get();
|
||||
|
||||
if (hdev->idle_busy_ts_idx == HL_IDLE_BUSY_TS_ARR_SIZE)
|
||||
hdev->idle_busy_ts_idx = 0;
|
||||
} else if (hdev->cs_active_cnt < 0) {
|
||||
dev_crit(hdev->dev, "CS active cnt %d is negative\n",
|
||||
hdev->cs_active_cnt);
|
||||
}
|
||||
|
||||
hdev->asic_funcs->hw_queues_unlock(hdev);
|
||||
|
||||
/* Need to update CI for all queue jobs that does not get completion */
|
||||
hl_hw_queue_update_ci(cs);
|
||||
|
||||
|
@ -620,14 +614,14 @@ static void cs_timedout(struct work_struct *work)
|
|||
cs_put(cs);
|
||||
|
||||
if (hdev->reset_on_lockup)
|
||||
hl_device_reset(hdev, false, false);
|
||||
hl_device_reset(hdev, 0);
|
||||
else
|
||||
hdev->needs_reset = true;
|
||||
}
|
||||
|
||||
static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
||||
enum hl_cs_type cs_type, u64 user_sequence,
|
||||
struct hl_cs **cs_new)
|
||||
struct hl_cs **cs_new, u32 flags, u32 timeout)
|
||||
{
|
||||
struct hl_cs_counters_atomic *cntr;
|
||||
struct hl_fence *other = NULL;
|
||||
|
@ -638,6 +632,9 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
|||
cntr = &hdev->aggregated_cs_counters;
|
||||
|
||||
cs = kzalloc(sizeof(*cs), GFP_ATOMIC);
|
||||
if (!cs)
|
||||
cs = kzalloc(sizeof(*cs), GFP_KERNEL);
|
||||
|
||||
if (!cs) {
|
||||
atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
|
||||
atomic64_inc(&cntr->out_of_mem_drop_cnt);
|
||||
|
@ -651,12 +648,17 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
|||
cs->submitted = false;
|
||||
cs->completed = false;
|
||||
cs->type = cs_type;
|
||||
cs->timestamp = !!(flags & HL_CS_FLAGS_TIMESTAMP);
|
||||
cs->timeout_jiffies = timeout;
|
||||
INIT_LIST_HEAD(&cs->job_list);
|
||||
INIT_DELAYED_WORK(&cs->work_tdr, cs_timedout);
|
||||
kref_init(&cs->refcount);
|
||||
spin_lock_init(&cs->job_lock);
|
||||
|
||||
cs_cmpl = kmalloc(sizeof(*cs_cmpl), GFP_ATOMIC);
|
||||
if (!cs_cmpl)
|
||||
cs_cmpl = kmalloc(sizeof(*cs_cmpl), GFP_KERNEL);
|
||||
|
||||
if (!cs_cmpl) {
|
||||
atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
|
||||
atomic64_inc(&cntr->out_of_mem_drop_cnt);
|
||||
|
@ -664,9 +666,23 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
|||
goto free_cs;
|
||||
}
|
||||
|
||||
cs->jobs_in_queue_cnt = kcalloc(hdev->asic_prop.max_queues,
|
||||
sizeof(*cs->jobs_in_queue_cnt), GFP_ATOMIC);
|
||||
if (!cs->jobs_in_queue_cnt)
|
||||
cs->jobs_in_queue_cnt = kcalloc(hdev->asic_prop.max_queues,
|
||||
sizeof(*cs->jobs_in_queue_cnt), GFP_KERNEL);
|
||||
|
||||
if (!cs->jobs_in_queue_cnt) {
|
||||
atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
|
||||
atomic64_inc(&cntr->out_of_mem_drop_cnt);
|
||||
rc = -ENOMEM;
|
||||
goto free_cs_cmpl;
|
||||
}
|
||||
|
||||
cs_cmpl->hdev = hdev;
|
||||
cs_cmpl->type = cs->type;
|
||||
spin_lock_init(&cs_cmpl->lock);
|
||||
INIT_WORK(&cs_cmpl->sob_reset_work, sob_reset_work);
|
||||
cs->fence = &cs_cmpl->base_fence;
|
||||
|
||||
spin_lock(&ctx->cs_lock);
|
||||
|
@ -696,15 +712,6 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
|||
goto free_fence;
|
||||
}
|
||||
|
||||
cs->jobs_in_queue_cnt = kcalloc(hdev->asic_prop.max_queues,
|
||||
sizeof(*cs->jobs_in_queue_cnt), GFP_ATOMIC);
|
||||
if (!cs->jobs_in_queue_cnt) {
|
||||
atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
|
||||
atomic64_inc(&cntr->out_of_mem_drop_cnt);
|
||||
rc = -ENOMEM;
|
||||
goto free_fence;
|
||||
}
|
||||
|
||||
/* init hl_fence */
|
||||
hl_fence_init(&cs_cmpl->base_fence, cs_cmpl->cs_seq);
|
||||
|
||||
|
@ -727,6 +734,8 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
|
|||
|
||||
free_fence:
|
||||
spin_unlock(&ctx->cs_lock);
|
||||
kfree(cs->jobs_in_queue_cnt);
|
||||
free_cs_cmpl:
|
||||
kfree(cs_cmpl);
|
||||
free_cs:
|
||||
kfree(cs);
|
||||
|
@ -749,6 +758,8 @@ void hl_cs_rollback_all(struct hl_device *hdev)
|
|||
int i;
|
||||
struct hl_cs *cs, *tmp;
|
||||
|
||||
flush_workqueue(hdev->sob_reset_wq);
|
||||
|
||||
/* flush all completions before iterating over the CS mirror list in
|
||||
* order to avoid a race with the release functions
|
||||
*/
|
||||
|
@ -778,6 +789,44 @@ void hl_pending_cb_list_flush(struct hl_ctx *ctx)
|
|||
}
|
||||
}
|
||||
|
||||
static void
|
||||
wake_pending_user_interrupt_threads(struct hl_user_interrupt *interrupt)
|
||||
{
|
||||
struct hl_user_pending_interrupt *pend;
|
||||
|
||||
spin_lock(&interrupt->wait_list_lock);
|
||||
list_for_each_entry(pend, &interrupt->wait_list_head, wait_list_node) {
|
||||
pend->fence.error = -EIO;
|
||||
complete_all(&pend->fence.completion);
|
||||
}
|
||||
spin_unlock(&interrupt->wait_list_lock);
|
||||
}
|
||||
|
||||
void hl_release_pending_user_interrupts(struct hl_device *hdev)
|
||||
{
|
||||
struct asic_fixed_properties *prop = &hdev->asic_prop;
|
||||
struct hl_user_interrupt *interrupt;
|
||||
int i;
|
||||
|
||||
if (!prop->user_interrupt_count)
|
||||
return;
|
||||
|
||||
/* We iterate through the user interrupt requests and waking up all
|
||||
* user threads waiting for interrupt completion. We iterate the
|
||||
* list under a lock, this is why all user threads, once awake,
|
||||
* will wait on the same lock and will release the waiting object upon
|
||||
* unlock.
|
||||
*/
|

	for (i = 0 ; i < prop->user_interrupt_count ; i++) {
		interrupt = &hdev->user_interrupt[i];
		wake_pending_user_interrupt_threads(interrupt);
	}

	interrupt = &hdev->common_user_interrupt;
	wake_pending_user_interrupt_threads(interrupt);
}

static void job_wq_completion(struct work_struct *work)
{
	struct hl_cs_job *job = container_of(work, struct hl_cs_job,

@@ -889,6 +938,9 @@ struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev,
	struct hl_cs_job *job;

	job = kzalloc(sizeof(*job), GFP_ATOMIC);
	if (!job)
		job = kzalloc(sizeof(*job), GFP_KERNEL);

	if (!job)
		return NULL;

@@ -991,6 +1043,9 @@ static int hl_cs_copy_chunk_array(struct hl_device *hdev,

	*cs_chunk_array = kmalloc_array(num_chunks, sizeof(**cs_chunk_array),
					GFP_ATOMIC);
	if (!*cs_chunk_array)
		*cs_chunk_array = kmalloc_array(num_chunks,
					sizeof(**cs_chunk_array), GFP_KERNEL);
	if (!*cs_chunk_array) {
		atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
		atomic64_inc(&hdev->aggregated_cs_counters.out_of_mem_drop_cnt);
@@ -1038,7 +1093,8 @@ static int cs_staged_submission(struct hl_device *hdev, struct hl_cs *cs,
}

static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks,
				u32 num_chunks, u64 *cs_seq, u32 flags)
				u32 num_chunks, u64 *cs_seq, u32 flags,
				u32 timeout)
{
	bool staged_mid, int_queues_only = true;
	struct hl_device *hdev = hpriv->hdev;
@@ -1067,11 +1123,11 @@ static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks,
		staged_mid = false;

	rc = allocate_cs(hdev, hpriv->ctx, CS_TYPE_DEFAULT,
			staged_mid ? user_sequence : ULLONG_MAX, &cs);
			staged_mid ? user_sequence : ULLONG_MAX, &cs, flags,
			timeout);
	if (rc)
		goto free_cs_chunk_array;

	cs->timestamp = !!(flags & HL_CS_FLAGS_TIMESTAMP);
	*cs_seq = cs->sequence;

	hl_debugfs_add_cs(cs);
@@ -1269,7 +1325,8 @@ static int hl_submit_pending_cb(struct hl_fpriv *hpriv)
	list_move_tail(&pending_cb->cb_node, &local_cb_list);
	spin_unlock(&ctx->pending_cb_lock);

	rc = allocate_cs(hdev, ctx, CS_TYPE_DEFAULT, ULLONG_MAX, &cs);
	rc = allocate_cs(hdev, ctx, CS_TYPE_DEFAULT, ULLONG_MAX, &cs, 0,
			hdev->timeout_jiffies);
	if (rc)
		goto add_list_elements;

@@ -1370,7 +1427,7 @@ static int hl_cs_ctx_switch(struct hl_fpriv *hpriv, union hl_cs_args *args,
		rc = 0;
	} else {
		rc = cs_ioctl_default(hpriv, chunks, num_chunks,
						cs_seq, 0);
						cs_seq, 0, hdev->timeout_jiffies);
	}

	mutex_unlock(&hpriv->restore_phase_mutex);
@@ -1419,7 +1476,7 @@ wait_again:

out:
	if ((rc == -ETIMEDOUT || rc == -EBUSY) && (need_soft_reset))
		hl_device_reset(hdev, false, false);
		hl_device_reset(hdev, 0);

	return rc;
}

@@ -1445,6 +1502,10 @@ static int cs_ioctl_extract_signal_seq(struct hl_device *hdev,
	signal_seq_arr = kmalloc_array(signal_seq_arr_len,
					sizeof(*signal_seq_arr),
					GFP_ATOMIC);
	if (!signal_seq_arr)
		signal_seq_arr = kmalloc_array(signal_seq_arr_len,
						sizeof(*signal_seq_arr),
						GFP_KERNEL);
	if (!signal_seq_arr) {
		atomic64_inc(&ctx->cs_counters.out_of_mem_drop_cnt);
		atomic64_inc(&hdev->aggregated_cs_counters.out_of_mem_drop_cnt);
@@ -1536,7 +1597,7 @@ static int cs_ioctl_signal_wait_create_jobs(struct hl_device *hdev,

static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type,
				void __user *chunks, u32 num_chunks,
				u64 *cs_seq, bool timestamp)
				u64 *cs_seq, u32 flags, u32 timeout)
{
	struct hl_cs_chunk *cs_chunk_array, *chunk;
	struct hw_queue_properties *hw_queue_prop;
@@ -1642,7 +1703,7 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type,
		}
	}

	rc = allocate_cs(hdev, ctx, cs_type, ULLONG_MAX, &cs);
	rc = allocate_cs(hdev, ctx, cs_type, ULLONG_MAX, &cs, flags, timeout);
	if (rc) {
		if (cs_type == CS_TYPE_WAIT ||
				cs_type == CS_TYPE_COLLECTIVE_WAIT)
@@ -1650,8 +1711,6 @@ static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type,
		goto free_cs_chunk_array;
	}

	cs->timestamp = !!timestamp;

	/*
	 * Save the signal CS fence for later initialization right before
	 * hanging the wait CS on the queue.
@@ -1709,7 +1768,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
	enum hl_cs_type cs_type;
	u64 cs_seq = ULONG_MAX;
	void __user *chunks;
	u32 num_chunks, flags;
	u32 num_chunks, flags, timeout;
	int rc;

	rc = hl_cs_sanity_checks(hpriv, args);
@@ -1735,16 +1794,20 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
			!(flags & HL_CS_FLAGS_STAGED_SUBMISSION_FIRST))
		cs_seq = args->in.seq;

	timeout = flags & HL_CS_FLAGS_CUSTOM_TIMEOUT
			? msecs_to_jiffies(args->in.timeout * 1000)
			: hpriv->hdev->timeout_jiffies;
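	/* The * 1000 implies the user-supplied custom timeout is given in
	 * seconds; it is scaled to milliseconds and converted to jiffies
	 * before being handed to allocate_cs().
	 */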

	switch (cs_type) {
	case CS_TYPE_SIGNAL:
	case CS_TYPE_WAIT:
	case CS_TYPE_COLLECTIVE_WAIT:
		rc = cs_ioctl_signal_wait(hpriv, cs_type, chunks, num_chunks,
			&cs_seq, args->in.cs_flags & HL_CS_FLAGS_TIMESTAMP);
			&cs_seq, args->in.cs_flags, timeout);
		break;
	default:
		rc = cs_ioctl_default(hpriv, chunks, num_chunks, &cs_seq,
						args->in.cs_flags);
						args->in.cs_flags, timeout);
		break;
	}

@@ -1818,7 +1881,7 @@ static int _hl_cs_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
	return rc;
}

int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data)
static int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data)
{
	struct hl_device *hdev = hpriv->hdev;
	union hl_wait_cs_args *args = data;
@@ -1873,3 +1936,176 @@ int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data)

	return 0;
}

static int _hl_interrupt_wait_ioctl(struct hl_device *hdev, struct hl_ctx *ctx,
				u32 timeout_us, u64 user_address,
				u32 target_value, u16 interrupt_offset,
				enum hl_cs_wait_status *status)
{
	struct hl_user_pending_interrupt *pend;
	struct hl_user_interrupt *interrupt;
	unsigned long timeout;
	long completion_rc;
	u32 completion_value;
	int rc = 0;

	if (timeout_us == U32_MAX)
		timeout = timeout_us;
	else
		timeout = usecs_to_jiffies(timeout_us);
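	/* A timeout_us of U32_MAX is deliberately not converted and is used
	 * directly as a jiffies value, making the wait practically
	 * unbounded.
	 */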

	hl_ctx_get(hdev, ctx);

	pend = kmalloc(sizeof(*pend), GFP_KERNEL);
	if (!pend) {
		hl_ctx_put(ctx);
		return -ENOMEM;
	}

	hl_fence_init(&pend->fence, ULONG_MAX);

	if (interrupt_offset == HL_COMMON_USER_INTERRUPT_ID)
		interrupt = &hdev->common_user_interrupt;
	else
		interrupt = &hdev->user_interrupt[interrupt_offset];

	spin_lock(&interrupt->wait_list_lock);
	if (!hl_device_operational(hdev, NULL)) {
		rc = -EPERM;
		goto unlock_and_free_fence;
	}

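	/* The wait condition lives in user memory: the 32-bit value at
	 * user_address is sampled here and again after every interrupt, and
	 * the wait is satisfied once it reaches target_value.
	 */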
	if (copy_from_user(&completion_value, u64_to_user_ptr(user_address), 4)) {
		dev_err(hdev->dev,
			"Failed to copy completion value from user\n");
		rc = -EFAULT;
		goto unlock_and_free_fence;
	}

	if (completion_value >= target_value)
		*status = CS_WAIT_STATUS_COMPLETED;
	else
		*status = CS_WAIT_STATUS_BUSY;

	if (!timeout_us || (*status == CS_WAIT_STATUS_COMPLETED))
		goto unlock_and_free_fence;

	/* Add pending user interrupt to relevant list for the interrupt
	 * handler to monitor
	 */
	list_add_tail(&pend->wait_list_node, &interrupt->wait_list_head);
	spin_unlock(&interrupt->wait_list_lock);

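	/* From here the flow is: sleep on the fence completion, and on each
	 * wakeup re-read the user's value; if the target has still not been
	 * reached, deduct the time already spent and go back to sleep.
	 */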
wait_again:
	/* Wait for interrupt handler to signal completion */
	completion_rc =
		wait_for_completion_interruptible_timeout(
			&pend->fence.completion, timeout);

	/* If timeout did not expire we need to perform the comparison.
	 * If comparison fails, keep waiting until timeout expires
	 */
	if (completion_rc > 0) {
		if (copy_from_user(&completion_value,
				u64_to_user_ptr(user_address), 4)) {
			dev_err(hdev->dev,
				"Failed to copy completion value from user\n");
			rc = -EFAULT;
			goto remove_pending_user_interrupt;
		}

		if (completion_value >= target_value) {
			*status = CS_WAIT_STATUS_COMPLETED;
		} else {
			timeout -= jiffies_to_usecs(completion_rc);
			goto wait_again;
		}
	} else {
		*status = CS_WAIT_STATUS_BUSY;
	}

remove_pending_user_interrupt:
	spin_lock(&interrupt->wait_list_lock);
	list_del(&pend->wait_list_node);

unlock_and_free_fence:
	spin_unlock(&interrupt->wait_list_lock);
	kfree(pend);
	hl_ctx_put(ctx);

	return rc;
}

static int hl_interrupt_wait_ioctl(struct hl_fpriv *hpriv, void *data)
{
	u16 interrupt_id, interrupt_offset, first_interrupt, last_interrupt;
	struct hl_device *hdev = hpriv->hdev;
	struct asic_fixed_properties *prop;
	union hl_wait_cs_args *args = data;
	enum hl_cs_wait_status status;
	int rc;

	prop = &hdev->asic_prop;

	if (!prop->user_interrupt_count) {
		dev_err(hdev->dev, "no user interrupts allowed");
		return -EPERM;
	}

	interrupt_id =
		FIELD_GET(HL_WAIT_CS_FLAGS_INTERRUPT_MASK, args->in.flags);

	first_interrupt = prop->first_available_user_msix_interrupt;
	last_interrupt = prop->first_available_user_msix_interrupt +
			prop->user_interrupt_count - 1;

	if ((interrupt_id < first_interrupt || interrupt_id > last_interrupt) &&
			interrupt_id != HL_COMMON_USER_INTERRUPT_ID) {
		dev_err(hdev->dev, "invalid user interrupt %u", interrupt_id);
		return -EINVAL;
	}

	if (interrupt_id == HL_COMMON_USER_INTERRUPT_ID)
		interrupt_offset = HL_COMMON_USER_INTERRUPT_ID;
	else
		interrupt_offset = interrupt_id - first_interrupt;

	rc = _hl_interrupt_wait_ioctl(hdev, hpriv->ctx,
			args->in.interrupt_timeout_us, args->in.addr,
			args->in.target, interrupt_offset, &status);

	memset(args, 0, sizeof(*args));

	if (rc) {
		dev_err_ratelimited(hdev->dev,
			"interrupt_wait_ioctl failed (%d)\n", rc);

		return rc;
	}

	switch (status) {
	case CS_WAIT_STATUS_COMPLETED:
		args->out.status = HL_WAIT_CS_STATUS_COMPLETED;
		break;
	case CS_WAIT_STATUS_BUSY:
	default:
		args->out.status = HL_WAIT_CS_STATUS_BUSY;
		break;
	}

	return 0;
}

int hl_wait_ioctl(struct hl_fpriv *hpriv, void *data)
{
	union hl_wait_cs_args *args = data;
	u32 flags = args->in.flags;
	int rc;

	if (flags & HL_WAIT_CS_FLAGS_INTERRUPT)
		rc = hl_interrupt_wait_ioctl(hpriv, data);
	else
		rc = hl_cs_wait_ioctl(hpriv, data);

	return rc;
}

@@ -20,6 +20,11 @@ static void hl_ctx_fini(struct hl_ctx *ctx)
	 */
	hl_pending_cb_list_flush(ctx);

	/* Release all allocated HW block mapped list entries and destroy
	 * the mutex.
	 */
	hl_hw_block_mem_fini(ctx);

	/*
	 * If we arrived here, there are no jobs waiting for this context
	 * on its queues so we can safely remove it.

@@ -160,13 +165,15 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx)
	if (!ctx->cs_pending)
		return -ENOMEM;

	hl_hw_block_mem_init(ctx);

	if (is_kernel_ctx) {
		ctx->asid = HL_KERNEL_ASID_ID; /* Kernel driver gets ASID 0 */
		rc = hl_vm_ctx_init(ctx);
		if (rc) {
			dev_err(hdev->dev, "Failed to init mem ctx module\n");
			rc = -ENOMEM;
			goto err_free_cs_pending;
			goto err_hw_block_mem_fini;
		}

		rc = hdev->asic_funcs->ctx_init(ctx);

@@ -179,7 +186,7 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx)
	if (!ctx->asid) {
		dev_err(hdev->dev, "No free ASID, failed to create context\n");
		rc = -ENOMEM;
		goto err_free_cs_pending;
		goto err_hw_block_mem_fini;
	}

	rc = hl_vm_ctx_init(ctx);

@@ -214,7 +221,8 @@ err_vm_ctx_fini:
err_asid_free:
	if (ctx->asid != HL_KERNEL_ASID_ID)
		hl_asid_free(hdev, ctx->asid);
err_free_cs_pending:
err_hw_block_mem_fini:
	hl_hw_block_mem_fini(ctx);
	kfree(ctx->cs_pending);

	return rc;

@@ -9,8 +9,8 @@

#include "../include/hw_ip/mmu/mmu_general.h"

#include <linux/pci.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/vmalloc.h>

#define MMU_ADDR_BUF_SIZE	40
#define MMU_ASID_BUF_SIZE	10
@@ -229,6 +229,7 @@ static int vm_show(struct seq_file *s, void *data)
{
	struct hl_debugfs_entry *entry = s->private;
	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
	struct hl_vm_hw_block_list_node *lnode;
	struct hl_ctx *ctx;
	struct hl_vm *vm;
	struct hl_vm_hash_node *hnode;

@@ -272,6 +273,21 @@ static int vm_show(struct seq_file *s, void *data)
	}
	mutex_unlock(&ctx->mem_hash_lock);

	if (ctx->asid != HL_KERNEL_ASID_ID &&
			!list_empty(&ctx->hw_block_mem_list)) {
		seq_puts(s, "\nhw_block mappings:\n\n");
		seq_puts(s, "    virtual address    size    HW block id\n");
		seq_puts(s, "-------------------------------------------\n");
		mutex_lock(&ctx->hw_block_list_lock);
		list_for_each_entry(lnode, &ctx->hw_block_mem_list,
				node) {
			seq_printf(s,
				"    0x%-14lx   %-6u      %-9u\n",
				lnode->vaddr, lnode->size, lnode->id);
		}
		mutex_unlock(&ctx->hw_block_list_lock);
	}

	vm = &ctx->hdev->vm;
	spin_lock(&vm->idr_lock);

@@ -441,20 +457,85 @@ out:
	return false;
}

static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
static bool hl_is_device_internal_memory_va(struct hl_device *hdev, u64 addr,
						u32 size)
{
	struct asic_fixed_properties *prop = &hdev->asic_prop;
	u64 dram_start_addr, dram_end_addr;

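	/* When the DRAM sits behind the device MMU, the valid range is the
	 * DMMU virtual range; otherwise the physical DRAM range is used.
	 * SRAM addresses are validated against the physical SRAM range
	 * either way.
	 */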
	if (!hdev->mmu_enable)
		return false;

	if (prop->dram_supports_virtual_memory) {
		dram_start_addr = prop->dmmu.start_addr;
		dram_end_addr = prop->dmmu.end_addr;
	} else {
		dram_start_addr = prop->dram_base_address;
		dram_end_addr = prop->dram_end_address;
	}

	if (hl_mem_area_inside_range(addr, size, dram_start_addr,
					dram_end_addr))
		return true;

	if (hl_mem_area_inside_range(addr, size, prop->sram_base_address,
					prop->sram_end_address))
		return true;

	return false;
}

static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr, u32 size,
			u64 *phys_addr)
{
	struct hl_vm_phys_pg_pack *phys_pg_pack;
	struct hl_ctx *ctx = hdev->compute_ctx;
	int rc = 0;
	struct hl_vm_hash_node *hnode;
	struct hl_userptr *userptr;
	enum vm_type_t *vm_type;
	bool valid = false;
	u64 end_address;
	u32 range_size;
	int i, rc = 0;

	if (!ctx) {
		dev_err(hdev->dev, "no ctx available\n");
		return -EINVAL;
	}

	/* Verify address is mapped */
	mutex_lock(&ctx->mem_hash_lock);
	hash_for_each(ctx->mem_hash, i, hnode, node) {
		vm_type = hnode->ptr;

		if (*vm_type == VM_TYPE_USERPTR) {
			userptr = hnode->ptr;
			range_size = userptr->size;
		} else {
			phys_pg_pack = hnode->ptr;
			range_size = phys_pg_pack->total_size;
		}

		end_address = virt_addr + size;
		if ((virt_addr >= hnode->vaddr) &&
				(end_address <= hnode->vaddr + range_size)) {
			valid = true;
			break;
		}
	}
	mutex_unlock(&ctx->mem_hash_lock);

	if (!valid) {
		dev_err(hdev->dev,
			"virt addr 0x%llx is not mapped\n",
			virt_addr);
		return -EINVAL;
	}

	rc = hl_mmu_va_to_pa(ctx, virt_addr, phys_addr);
	if (rc) {
		dev_err(hdev->dev, "virt addr 0x%llx is not mapped to phys addr\n",
		dev_err(hdev->dev,
			"virt addr 0x%llx is not mapped to phys addr\n",
			virt_addr);
		rc = -EINVAL;
	}

@@ -467,10 +548,11 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
{
	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
	struct hl_device *hdev = entry->hdev;
	char tmp_buf[32];
	u64 addr = entry->addr;
	u32 val;
	bool user_address;
	char tmp_buf[32];
	ssize_t rc;
	u32 val;

	if (atomic_read(&hdev->in_reset)) {
		dev_warn_ratelimited(hdev->dev, "Can't read during reset\n");

@@ -480,13 +562,14 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
	if (*ppos)
		return 0;

	if (hl_is_device_va(hdev, addr)) {
		rc = device_va_to_pa(hdev, addr, &addr);
	user_address = hl_is_device_va(hdev, addr);
	if (user_address) {
		rc = device_va_to_pa(hdev, addr, sizeof(val), &addr);
		if (rc)
			return rc;
	}

	rc = hdev->asic_funcs->debugfs_read32(hdev, addr, &val);
	rc = hdev->asic_funcs->debugfs_read32(hdev, addr, user_address, &val);
	if (rc) {
		dev_err(hdev->dev, "Failed to read from 0x%010llx\n", addr);
		return rc;
@@ -503,6 +586,7 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
	struct hl_device *hdev = entry->hdev;
	u64 addr = entry->addr;
	bool user_address;
	u32 value;
	ssize_t rc;

@@ -515,13 +599,14 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
	if (rc)
		return rc;

	if (hl_is_device_va(hdev, addr)) {
		rc = device_va_to_pa(hdev, addr, &addr);
	user_address = hl_is_device_va(hdev, addr);
	if (user_address) {
		rc = device_va_to_pa(hdev, addr, sizeof(value), &addr);
		if (rc)
			return rc;
	}

	rc = hdev->asic_funcs->debugfs_write32(hdev, addr, value);
	rc = hdev->asic_funcs->debugfs_write32(hdev, addr, user_address, value);
	if (rc) {
		dev_err(hdev->dev, "Failed to write 0x%08x to 0x%010llx\n",
			value, addr);
@@ -536,21 +621,28 @@ static ssize_t hl_data_read64(struct file *f, char __user *buf,
{
	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
	struct hl_device *hdev = entry->hdev;
	char tmp_buf[32];
	u64 addr = entry->addr;
	u64 val;
	bool user_address;
	char tmp_buf[32];
	ssize_t rc;
	u64 val;

	if (atomic_read(&hdev->in_reset)) {
		dev_warn_ratelimited(hdev->dev, "Can't read during reset\n");
		return 0;
	}

	if (*ppos)
		return 0;

	if (hl_is_device_va(hdev, addr)) {
		rc = device_va_to_pa(hdev, addr, &addr);
	user_address = hl_is_device_va(hdev, addr);
	if (user_address) {
		rc = device_va_to_pa(hdev, addr, sizeof(val), &addr);
		if (rc)
			return rc;
	}

	rc = hdev->asic_funcs->debugfs_read64(hdev, addr, &val);
	rc = hdev->asic_funcs->debugfs_read64(hdev, addr, user_address, &val);
	if (rc) {
		dev_err(hdev->dev, "Failed to read from 0x%010llx\n", addr);
		return rc;
@@ -567,20 +659,27 @@ static ssize_t hl_data_write64(struct file *f, const char __user *buf,
	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
	struct hl_device *hdev = entry->hdev;
	u64 addr = entry->addr;
	bool user_address;
	u64 value;
	ssize_t rc;

	if (atomic_read(&hdev->in_reset)) {
		dev_warn_ratelimited(hdev->dev, "Can't write during reset\n");
		return 0;
	}

	rc = kstrtoull_from_user(buf, count, 16, &value);
	if (rc)
		return rc;

	if (hl_is_device_va(hdev, addr)) {
		rc = device_va_to_pa(hdev, addr, &addr);
	user_address = hl_is_device_va(hdev, addr);
	if (user_address) {
		rc = device_va_to_pa(hdev, addr, sizeof(value), &addr);
		if (rc)
			return rc;
	}

	rc = hdev->asic_funcs->debugfs_write64(hdev, addr, value);
	rc = hdev->asic_funcs->debugfs_write64(hdev, addr, user_address, value);
	if (rc) {
		dev_err(hdev->dev, "Failed to write 0x%016llx to 0x%010llx\n",
			value, addr);
@@ -590,6 +689,63 @@ static ssize_t hl_data_write64(struct file *f, const char __user *buf,
	return count;
}

static ssize_t hl_dma_size_write(struct file *f, const char __user *buf,
					size_t count, loff_t *ppos)
{
	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
	struct hl_device *hdev = entry->hdev;
	u64 addr = entry->addr;
	ssize_t rc;
	u32 size;

	if (atomic_read(&hdev->in_reset)) {
		dev_warn_ratelimited(hdev->dev, "Can't DMA during reset\n");
		return 0;
	}
	rc = kstrtouint_from_user(buf, count, 16, &size);
	if (rc)
		return rc;

	if (!size) {
		dev_err(hdev->dev, "DMA read failed. size can't be 0\n");
		return -EINVAL;
	}

	if (size > SZ_128M) {
		dev_err(hdev->dev,
			"DMA read failed. size can't be larger than 128MB\n");
		return -EINVAL;
	}

	if (!hl_is_device_internal_memory_va(hdev, addr, size)) {
		dev_err(hdev->dev,
			"DMA read failed. Invalid 0x%010llx + 0x%08x\n",
			addr, size);
		return -EINVAL;
	}

	/* Free the previous allocation, if there was any */
	entry->blob_desc.size = 0;
	vfree(entry->blob_desc.data);

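	/* The buffer allocated below backs the read-only data_dma debugfs
	 * blob; vmalloc is used because the user may request up to 128MB,
	 * far too large for a kmalloc-based allocation.
	 */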
	entry->blob_desc.data = vmalloc(size);
	if (!entry->blob_desc.data)
		return -ENOMEM;

	rc = hdev->asic_funcs->debugfs_read_dma(hdev, addr, size,
						entry->blob_desc.data);
	if (rc) {
		dev_err(hdev->dev, "Failed to DMA from 0x%010llx\n", addr);
		vfree(entry->blob_desc.data);
		entry->blob_desc.data = NULL;
		return -EIO;
	}

	entry->blob_desc.size = size;

	return count;
}

static ssize_t hl_get_power_state(struct file *f, char __user *buf,
		size_t count, loff_t *ppos)
{

@@ -871,7 +1027,7 @@ static ssize_t hl_stop_on_err_write(struct file *f, const char __user *buf,

	hdev->stop_on_err = value ? 1 : 0;

	hl_device_reset(hdev, false, false);
	hl_device_reset(hdev, 0);

	return count;
}

@@ -899,6 +1055,11 @@ static const struct file_operations hl_data64b_fops = {
	.write = hl_data_write64
};

static const struct file_operations hl_dma_size_fops = {
	.owner = THIS_MODULE,
	.write = hl_dma_size_write
};

static const struct file_operations hl_i2c_data_fops = {
	.owner = THIS_MODULE,
	.read = hl_i2c_data_read,

@@ -1001,6 +1162,9 @@ void hl_debugfs_add_device(struct hl_device *hdev)
	if (!dev_entry->entry_arr)
		return;

	dev_entry->blob_desc.size = 0;
	dev_entry->blob_desc.data = NULL;

	INIT_LIST_HEAD(&dev_entry->file_list);
	INIT_LIST_HEAD(&dev_entry->cb_list);
	INIT_LIST_HEAD(&dev_entry->cs_list);

@@ -1103,6 +1267,17 @@ void hl_debugfs_add_device(struct hl_device *hdev)
					dev_entry,
					&hl_security_violations_fops);

	debugfs_create_file("dma_size",
				0200,
				dev_entry->root,
				dev_entry,
				&hl_dma_size_fops);

	debugfs_create_blob("data_dma",
				0400,
				dev_entry->root,
				&dev_entry->blob_desc);

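	/* Expected usage of the pair of nodes created above (a hypothetical
	 * shell session; the address and size values are only examples):
	 *
	 *   echo <device va> > addr
	 *   echo 0x2000 > dma_size
	 *   cat data_dma > dump.bin
	 */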
	for (i = 0, entry = dev_entry->entry_arr ; i < count ; i++, entry++) {
		debugfs_create_file(hl_debugfs_list[i].name,
					0444,

@@ -1121,6 +1296,9 @@ void hl_debugfs_remove_device(struct hl_device *hdev)
	debugfs_remove_recursive(entry->root);

	mutex_destroy(&entry->file_mutex);

	vfree(entry->blob_desc.data);

	kfree(entry->entry_arr);
}

@@ -70,6 +70,9 @@ static void hpriv_release(struct kref *ref)
	mutex_unlock(&hdev->fpriv_list_lock);

	kfree(hpriv);

	if (hdev->reset_upon_device_release)
		hl_device_reset(hdev, 0);
}

void hl_hpriv_get(struct hl_fpriv *hpriv)

@@ -77,9 +80,9 @@ void hl_hpriv_get(struct hl_fpriv *hpriv)
	kref_get(&hpriv->refcount);
}

void hl_hpriv_put(struct hl_fpriv *hpriv)
int hl_hpriv_put(struct hl_fpriv *hpriv)
{
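	/* kref_put() returns 1 only when this call released the final
	 * reference (i.e. hpriv_release ran); callers use the new return
	 * value to detect that the device is still held by live CS or
	 * memory mappings.
	 */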
	kref_put(&hpriv->refcount, hpriv_release);
	return kref_put(&hpriv->refcount, hpriv_release);
}

/*
@@ -103,10 +106,17 @@ static int hl_device_release(struct inode *inode, struct file *filp)
		return 0;
	}

	hl_cb_mgr_fini(hpriv->hdev, &hpriv->cb_mgr);
	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
	/* Each pending user interrupt holds the user's context, hence we
	 * must release them all before calling hl_ctx_mgr_fini().
	 */
	hl_release_pending_user_interrupts(hpriv->hdev);

	hl_hpriv_put(hpriv);
	hl_cb_mgr_fini(hdev, &hpriv->cb_mgr);
	hl_ctx_mgr_fini(hdev, &hpriv->ctx_mgr);

	if (!hl_hpriv_put(hpriv))
		dev_warn(hdev->dev,
			"Device is still in use because there are live CS and/or memory mappings\n");

	return 0;
}

@@ -283,7 +293,7 @@ static void device_hard_reset_pending(struct work_struct *work)
	struct hl_device *hdev = device_reset_work->hdev;
	int rc;

	rc = hl_device_reset(hdev, true, true);
	rc = hl_device_reset(hdev, HL_RESET_HARD | HL_RESET_FROM_RESET_THREAD);
	if ((rc == -EBUSY) && !hdev->device_fini_pending) {
		dev_info(hdev->dev,
			"Could not reset device. will try again in %u seconds",
@@ -311,11 +321,15 @@ static int device_early_init(struct hl_device *hdev)
	switch (hdev->asic_type) {
	case ASIC_GOYA:
		goya_set_asic_funcs(hdev);
		strlcpy(hdev->asic_name, "GOYA", sizeof(hdev->asic_name));
		strscpy(hdev->asic_name, "GOYA", sizeof(hdev->asic_name));
		break;
	case ASIC_GAUDI:
		gaudi_set_asic_funcs(hdev);
		sprintf(hdev->asic_name, "GAUDI");
		strscpy(hdev->asic_name, "GAUDI", sizeof(hdev->asic_name));
		break;
	case ASIC_GAUDI_SEC:
		gaudi_set_asic_funcs(hdev);
		strscpy(hdev->asic_name, "GAUDI SEC", sizeof(hdev->asic_name));
		break;
	default:
		dev_err(hdev->dev, "Unrecognized ASIC type %d\n",

@@ -334,7 +348,7 @@ static int device_early_init(struct hl_device *hdev)
	if (hdev->asic_prop.completion_queues_count) {
		hdev->cq_wq = kcalloc(hdev->asic_prop.completion_queues_count,
					sizeof(*hdev->cq_wq),
					GFP_ATOMIC);
					GFP_KERNEL);
		if (!hdev->cq_wq) {
			rc = -ENOMEM;
			goto asid_fini;
@@ -358,24 +372,24 @@ static int device_early_init(struct hl_device *hdev)
		goto free_cq_wq;
	}

	hdev->hl_chip_info = kzalloc(sizeof(struct hwmon_chip_info),
					GFP_KERNEL);
	if (!hdev->hl_chip_info) {
	hdev->sob_reset_wq = alloc_workqueue("hl-sob-reset", WQ_UNBOUND, 0);
	if (!hdev->sob_reset_wq) {
		dev_err(hdev->dev,
			"Failed to allocate SOB reset workqueue\n");
		rc = -ENOMEM;
		goto free_eq_wq;
	}

	hdev->idle_busy_ts_arr = kmalloc_array(HL_IDLE_BUSY_TS_ARR_SIZE,
					sizeof(struct hl_device_idle_busy_ts),
					(GFP_KERNEL | __GFP_ZERO));
	if (!hdev->idle_busy_ts_arr) {
	hdev->hl_chip_info = kzalloc(sizeof(struct hwmon_chip_info),
					GFP_KERNEL);
	if (!hdev->hl_chip_info) {
		rc = -ENOMEM;
		goto free_chip_info;
		goto free_sob_reset_wq;
	}

	rc = hl_mmu_if_set_funcs(hdev);
	if (rc)
		goto free_idle_busy_ts_arr;
		goto free_chip_info;

	hl_cb_mgr_init(&hdev->kernel_cb_mgr);

@@ -404,10 +418,10 @@ static int device_early_init(struct hl_device *hdev)

free_cb_mgr:
	hl_cb_mgr_fini(hdev, &hdev->kernel_cb_mgr);
free_idle_busy_ts_arr:
	kfree(hdev->idle_busy_ts_arr);
free_chip_info:
	kfree(hdev->hl_chip_info);
free_sob_reset_wq:
	destroy_workqueue(hdev->sob_reset_wq);
free_eq_wq:
	destroy_workqueue(hdev->eq_wq);
free_cq_wq:
@@ -441,9 +455,9 @@ static void device_early_fini(struct hl_device *hdev)

	hl_cb_mgr_fini(hdev, &hdev->kernel_cb_mgr);

	kfree(hdev->idle_busy_ts_arr);
	kfree(hdev->hl_chip_info);

	destroy_workqueue(hdev->sob_reset_wq);
	destroy_workqueue(hdev->eq_wq);
	destroy_workqueue(hdev->device_reset_work.wq);

@@ -485,7 +499,7 @@ static void hl_device_heartbeat(struct work_struct *work)
		goto reschedule;

	dev_err(hdev->dev, "Device heartbeat failed!\n");
	hl_device_reset(hdev, true, false);
	hl_device_reset(hdev, HL_RESET_HARD | HL_RESET_HEARTBEAT);

	return;

@@ -561,100 +575,24 @@ static void device_late_fini(struct hl_device *hdev)
	hdev->late_init_done = false;
}

uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms)
int hl_device_utilization(struct hl_device *hdev, u32 *utilization)
{
	struct hl_device_idle_busy_ts *ts;
	ktime_t zero_ktime, curr = ktime_get();
	u32 overlap_cnt = 0, last_index = hdev->idle_busy_ts_idx;
	s64 period_us, last_start_us, last_end_us, last_busy_time_us,
		total_busy_time_us = 0, total_busy_time_ms;
	u64 max_power, curr_power, dc_power, dividend;
	int rc;

	zero_ktime = ktime_set(0, 0);
	period_us = period_ms * USEC_PER_MSEC;
	ts = &hdev->idle_busy_ts_arr[last_index];
	max_power = hdev->asic_prop.max_power_default;
	dc_power = hdev->asic_prop.dc_power_default;
	rc = hl_fw_cpucp_power_get(hdev, &curr_power);

	/* check case that device is currently in idle */
	if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime) &&
			!ktime_compare(ts->idle_to_busy_ts, zero_ktime)) {
	if (rc)
		return rc;

		last_index--;
		/* Handle case idle_busy_ts_idx was 0 */
		if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE)
			last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1;
	curr_power = clamp(curr_power, dc_power, max_power);

		ts = &hdev->idle_busy_ts_arr[last_index];
	}
	dividend = (curr_power - dc_power) * 100;
	*utilization = (u32) div_u64(dividend, (max_power - dc_power));

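	/* Utilization is now estimated from power draw: the current power is
	 * clamped to [dc_power, max_power] and expressed as a percentage of
	 * the dynamic range above the idle (DC) power, i.e.
	 *
	 *   utilization = 100 * (curr - dc) / (max - dc)
	 *
	 * replacing the idle/busy timestamp bookkeeping removed below.
	 */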
	while (overlap_cnt < HL_IDLE_BUSY_TS_ARR_SIZE) {
		/* Check if we are in last sample case. i.e. if the sample
		 * begun before the sampling period. This could be a real
		 * sample or 0 so need to handle both cases
		 */
		last_start_us = ktime_to_us(
			ktime_sub(curr, ts->idle_to_busy_ts));

		if (last_start_us > period_us) {

			/* First check two cases:
			 * 1. If the device is currently busy
			 * 2. If the device was idle during the whole sampling
			 *    period
			 */

			if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime)) {
				/* Check if the device is currently busy */
				if (ktime_compare(ts->idle_to_busy_ts,
						zero_ktime))
					return 100;

				/* We either didn't have any activity or we
				 * reached an entry which is 0. Either way,
				 * exit and return what was accumulated so far
				 */
				break;
			}

			/* If sample has finished, check it is relevant */
			last_end_us = ktime_to_us(
				ktime_sub(curr, ts->busy_to_idle_ts));

			if (last_end_us > period_us)
				break;

			/* It is relevant so add it but with adjustment */
			last_busy_time_us = ktime_to_us(
				ktime_sub(ts->busy_to_idle_ts,
				ts->idle_to_busy_ts));
			total_busy_time_us += last_busy_time_us -
					(last_start_us - period_us);
			break;
		}

		/* Check if the sample is finished or still open */
		if (ktime_compare(ts->busy_to_idle_ts, zero_ktime))
			last_busy_time_us = ktime_to_us(
				ktime_sub(ts->busy_to_idle_ts,
				ts->idle_to_busy_ts));
		else
			last_busy_time_us = ktime_to_us(
				ktime_sub(curr, ts->idle_to_busy_ts));

		total_busy_time_us += last_busy_time_us;

		last_index--;
		/* Handle case idle_busy_ts_idx was 0 */
		if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE)
			last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1;

		ts = &hdev->idle_busy_ts_arr[last_index];

		overlap_cnt++;
	}

	total_busy_time_ms = DIV_ROUND_UP_ULL(total_busy_time_us,
						USEC_PER_MSEC);

	return DIV_ROUND_UP_ULL(total_busy_time_ms * 100, period_ms);
	return 0;
}

/*
@@ -809,7 +747,7 @@ int hl_device_resume(struct hl_device *hdev)
	hdev->disabled = false;
	atomic_set(&hdev->in_reset, 0);

	rc = hl_device_reset(hdev, true, false);
	rc = hl_device_reset(hdev, HL_RESET_HARD);
	if (rc) {
		dev_err(hdev->dev, "Failed to reset device during resume\n");
		goto disable_device;

@@ -915,9 +853,7 @@ static void device_disable_open_processes(struct hl_device *hdev)
 * hl_device_reset - reset the device
 *
 * @hdev: pointer to habanalabs device structure
 * @hard_reset: should we do hard reset to all engines or just reset the
 *              compute/dma engines
 * @from_hard_reset_thread: is the caller the hard-reset thread
 * @flags: reset flags.
 *
 * Block future CS and wait for pending CS to be enqueued
 * Call ASIC H/W fini
@@ -929,9 +865,10 @@ static void device_disable_open_processes(struct hl_device *hdev)
 *
 * Returns 0 for success or an error on failure.
 */
int hl_device_reset(struct hl_device *hdev, bool hard_reset,
			bool from_hard_reset_thread)
int hl_device_reset(struct hl_device *hdev, u32 flags)
{
	u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {0};
	bool hard_reset, from_hard_reset_thread;
	int i, rc;

	if (!hdev->init_done) {

@@ -940,6 +877,9 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
		return 0;
	}

	hard_reset = (flags & HL_RESET_HARD) != 0;
	from_hard_reset_thread = (flags & HL_RESET_FROM_RESET_THREAD) != 0;

	if ((!hard_reset) && (!hdev->supports_soft_reset)) {
		dev_dbg(hdev->dev, "Doing hard-reset instead of soft-reset\n");
		hard_reset = true;
@@ -960,7 +900,11 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
	if (rc)
		return 0;

	if (hard_reset) {
	/*
	 * If the reset is due to a heartbeat failure, the device CPU is not
	 * responsive, in which case there is no point in sending it a
	 * PCI-disable message.
	 */
	if (hard_reset && !(flags & HL_RESET_HEARTBEAT)) {
		/* Disable PCI access from device F/W so it won't send
		 * us additional interrupts. We disable MSI/MSI-X at
		 * the halt_engines function and we can't have the F/W
@@ -1030,6 +974,11 @@ again:
	/* Go over all the queues, release all CS and their jobs */
	hl_cs_rollback_all(hdev);

	/* Release all pending user interrupts, each pending user interrupt
	 * holds a reference to user context
	 */
	hl_release_pending_user_interrupts(hdev);

kill_processes:
	if (hard_reset) {
		/* Kill processes here after CS rollback. This is because the
@@ -1078,14 +1027,6 @@ kill_processes:
	for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++)
		hl_cq_reset(hdev, &hdev->completion_queue[i]);

	hdev->idle_busy_ts_idx = 0;
	hdev->idle_busy_ts_arr[0].busy_to_idle_ts = ktime_set(0, 0);
	hdev->idle_busy_ts_arr[0].idle_to_busy_ts = ktime_set(0, 0);

	if (hdev->cs_active_cnt)
		dev_crit(hdev->dev, "CS active cnt %d is not 0 during reset\n",
			hdev->cs_active_cnt);

	mutex_lock(&hdev->fpriv_list_lock);

	/* Make sure the context switch phase will run again */

@@ -1151,6 +1092,16 @@ kill_processes:
		goto out_err;
	}

	/* If device is not idle fail the reset process */
	if (!hdev->asic_funcs->is_device_idle(hdev, idle_mask,
			HL_BUSY_ENGINES_MASK_EXT_SIZE, NULL)) {
		dev_err(hdev->dev,
			"device is not idle (mask %#llx %#llx) after reset\n",
			idle_mask[0], idle_mask[1]);
		rc = -EIO;
		goto out_err;
	}

	/* Check that the communication with the device is working */
	rc = hdev->asic_funcs->test_queues(hdev);
	if (rc) {
@@ -1235,7 +1186,7 @@ out_err:
 */
int hl_device_init(struct hl_device *hdev, struct class *hclass)
{
	int i, rc, cq_cnt, cq_ready_cnt;
	int i, rc, cq_cnt, user_interrupt_cnt, cq_ready_cnt;
	char *name;
	bool add_cdev_sysfs_on_err = false;

@@ -1274,13 +1225,26 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
	if (rc)
		goto free_dev_ctrl;

	user_interrupt_cnt = hdev->asic_prop.user_interrupt_count;

	if (user_interrupt_cnt) {
		hdev->user_interrupt = kcalloc(user_interrupt_cnt,
				sizeof(*hdev->user_interrupt),
				GFP_KERNEL);

		if (!hdev->user_interrupt) {
			rc = -ENOMEM;
			goto early_fini;
		}
	}

	/*
	 * Start calling ASIC initialization. First S/W then H/W and finally
	 * late init
	 */
	rc = hdev->asic_funcs->sw_init(hdev);
	if (rc)
		goto early_fini;
		goto user_interrupts_fini;

	/*
	 * Initialize the H/W queues. Must be done before hw_init, because
@@ -1478,6 +1442,8 @@ hw_queues_destroy:
	hl_hw_queues_destroy(hdev);
sw_fini:
	hdev->asic_funcs->sw_fini(hdev);
user_interrupts_fini:
	kfree(hdev->user_interrupt);
early_fini:
	device_early_fini(hdev);
free_dev_ctrl:

@@ -1609,6 +1575,7 @@ void hl_device_fini(struct hl_device *hdev)
	for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++)
		hl_cq_fini(hdev, &hdev->completion_queue[i]);
	kfree(hdev->completion_queue);
	kfree(hdev->user_interrupt);

	hl_hw_queues_destroy(hdev);

@@ -293,6 +293,7 @@ static int fw_read_errors(struct hl_device *hdev, u32 boot_err0_reg,
					u32 cpu_security_boot_status_reg)
{
	u32 err_val, security_val;
	bool err_exists = false;

	/* Some of the firmware status codes are deprecated in newer f/w
	 * versions. In those versions, the errors are reported
@@ -307,48 +308,102 @@ static int fw_read_errors(struct hl_device *hdev, u32 boot_err0_reg,
	if (!(err_val & CPU_BOOT_ERR0_ENABLED))
		return 0;

	if (err_val & CPU_BOOT_ERR0_DRAM_INIT_FAIL)
	if (err_val & CPU_BOOT_ERR0_DRAM_INIT_FAIL) {
		dev_err(hdev->dev,
			"Device boot error - DRAM initialization failed\n");
	if (err_val & CPU_BOOT_ERR0_FIT_CORRUPTED)
		dev_err(hdev->dev, "Device boot error - FIT image corrupted\n");
	if (err_val & CPU_BOOT_ERR0_TS_INIT_FAIL)
		dev_err(hdev->dev,
			"Device boot error - Thermal Sensor initialization failed\n");
	if (err_val & CPU_BOOT_ERR0_DRAM_SKIPPED)
		dev_warn(hdev->dev,
			"Device boot warning - Skipped DRAM initialization\n");

	if (err_val & CPU_BOOT_ERR0_BMC_WAIT_SKIPPED) {
		if (hdev->bmc_enable)
			dev_warn(hdev->dev,
				"Device boot error - Skipped waiting for BMC\n");
		else
			err_val &= ~CPU_BOOT_ERR0_BMC_WAIT_SKIPPED;
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_NIC_DATA_NOT_RDY)
	if (err_val & CPU_BOOT_ERR0_FIT_CORRUPTED) {
		dev_err(hdev->dev, "Device boot error - FIT image corrupted\n");
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_TS_INIT_FAIL) {
		dev_err(hdev->dev,
			"Device boot error - Thermal Sensor initialization failed\n");
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_DRAM_SKIPPED) {
		dev_warn(hdev->dev,
			"Device boot warning - Skipped DRAM initialization\n");
		/* This is a warning so we don't want it to disable the
		 * device
		 */
		err_val &= ~CPU_BOOT_ERR0_DRAM_SKIPPED;
	}

	if (err_val & CPU_BOOT_ERR0_BMC_WAIT_SKIPPED) {
		if (hdev->bmc_enable) {
			dev_err(hdev->dev,
				"Device boot error - Skipped waiting for BMC\n");
			err_exists = true;
		} else {
			dev_info(hdev->dev,
				"Device boot message - Skipped waiting for BMC\n");
			/* This is an info so we don't want it to disable the
			 * device
			 */
			err_val &= ~CPU_BOOT_ERR0_BMC_WAIT_SKIPPED;
		}
	}

	if (err_val & CPU_BOOT_ERR0_NIC_DATA_NOT_RDY) {
		dev_err(hdev->dev,
			"Device boot error - Serdes data from BMC not available\n");
	if (err_val & CPU_BOOT_ERR0_NIC_FW_FAIL)
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_NIC_FW_FAIL) {
		dev_err(hdev->dev,
			"Device boot error - NIC F/W initialization failed\n");
	if (err_val & CPU_BOOT_ERR0_SECURITY_NOT_RDY)
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_SECURITY_NOT_RDY) {
		dev_warn(hdev->dev,
			"Device boot warning - security not ready\n");
	if (err_val & CPU_BOOT_ERR0_SECURITY_FAIL)
		/* This is a warning so we don't want it to disable the
		 * device
		 */
		err_val &= ~CPU_BOOT_ERR0_SECURITY_NOT_RDY;
	}

	if (err_val & CPU_BOOT_ERR0_SECURITY_FAIL) {
		dev_err(hdev->dev, "Device boot error - security failure\n");
	if (err_val & CPU_BOOT_ERR0_EFUSE_FAIL)
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_EFUSE_FAIL) {
		dev_err(hdev->dev, "Device boot error - eFuse failure\n");
	if (err_val & CPU_BOOT_ERR0_PLL_FAIL)
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_PLL_FAIL) {
		dev_err(hdev->dev, "Device boot error - PLL failure\n");
		err_exists = true;
	}

	if (err_val & CPU_BOOT_ERR0_DEVICE_UNUSABLE_FAIL) {
		dev_err(hdev->dev,
			"Device boot error - device unusable\n");
		err_exists = true;
	}

	security_val = RREG32(cpu_security_boot_status_reg);
	if (security_val & CPU_BOOT_DEV_STS0_ENABLED)
		dev_dbg(hdev->dev, "Device security status %#x\n",
				security_val);

	if (err_val & ~CPU_BOOT_ERR0_ENABLED)
	if (!err_exists && (err_val & ~CPU_BOOT_ERR0_ENABLED)) {
		dev_err(hdev->dev,
			"Device boot error - unknown error 0x%08x\n",
			err_val);
		err_exists = true;
	}

	if (err_exists)
		return -EIO;

	return 0;
@@ -419,6 +474,73 @@ out:
	return rc;
}

static int hl_fw_send_msi_info_msg(struct hl_device *hdev)
{
	struct cpucp_array_data_packet *pkt;
	size_t total_pkt_size, data_size;
	u64 result;
	int rc;

	/* skip sending this info for unsupported ASICs */
	if (!hdev->asic_funcs->get_msi_info)
		return 0;

	data_size = CPUCP_NUM_OF_MSI_TYPES * sizeof(u32);
	total_pkt_size = sizeof(struct cpucp_array_data_packet) + data_size;

	/* data should be aligned to 8 bytes so that CPU-CP can copy it */
	total_pkt_size = (total_pkt_size + 0x7) & ~0x7;
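	/* The expression above rounds total_pkt_size up to the next multiple
	 * of 8: adding 0x7 carries any non-aligned value past the next
	 * boundary and masking with ~0x7 clears the three low bits.
	 */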

	/* total_pkt_size is cast to u16 later on */
	if (total_pkt_size > USHRT_MAX) {
		dev_err(hdev->dev, "CPUCP array data is too big\n");
		return -EINVAL;
	}

	pkt = kzalloc(total_pkt_size, GFP_KERNEL);
	if (!pkt)
		return -ENOMEM;

	pkt->length = cpu_to_le32(CPUCP_NUM_OF_MSI_TYPES);

	hdev->asic_funcs->get_msi_info((u32 *)&pkt->data);

	pkt->cpucp_pkt.ctl = cpu_to_le32(CPUCP_PACKET_MSI_INFO_SET <<
						CPUCP_PKT_CTL_OPCODE_SHIFT);

	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *)pkt,
						total_pkt_size, 0, &result);

	/*
	 * In case the packet result is invalid, the FW does not support this
	 * feature and will use default/hard-coded MSI values; no reason to
	 * stop the boot
	 */
	if (rc && result == cpucp_packet_invalid)
		rc = 0;

	if (rc)
		dev_err(hdev->dev, "failed to send CPUCP array data\n");

	kfree(pkt);

	return rc;
}

int hl_fw_cpucp_handshake(struct hl_device *hdev,
				u32 cpu_security_boot_status_reg,
				u32 boot_err0_reg)
{
	int rc;

	rc = hl_fw_cpucp_info_get(hdev, cpu_security_boot_status_reg,
					boot_err0_reg);
	if (rc)
		return rc;

	return hl_fw_send_msi_info_msg(hdev);
}

int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size)
{
	struct cpucp_packet pkt = {};

@@ -539,18 +661,63 @@ int hl_fw_cpucp_total_energy_get(struct hl_device *hdev, u64 *total_energy)
	return rc;
}

int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u16 pll_index,
int get_used_pll_index(struct hl_device *hdev, enum pll_index input_pll_index,
				enum pll_index *pll_index)
{
	struct asic_fixed_properties *prop = &hdev->asic_prop;
	u8 pll_byte, pll_bit_off;
	bool dynamic_pll;

	if (input_pll_index >= PLL_MAX) {
		dev_err(hdev->dev, "PLL index %d is out of range\n",
			input_pll_index);
		return -EINVAL;
	}

	dynamic_pll = prop->fw_security_status_valid &&
		(prop->fw_app_security_map & CPU_BOOT_DEV_STS0_DYN_PLL_EN);

	if (!dynamic_pll) {
		/*
		 * In case we are working with legacy FW (where each ASIC has
		 * a unique PLL numbering), extract the legacy numbering
		 */
		*pll_index = hdev->legacy_pll_map[input_pll_index];
		return 0;
	}

	/* PLL map is a u8 array */
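	/* The map is a bitmap stored in bytes: index >> 3 selects the byte
	 * holding this PLL's bit and index & 0x7 selects the bit within that
	 * byte, so BIT(pll_bit_off) below tests whether the FW advertises
	 * this PLL.
	 */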
	pll_byte = prop->cpucp_info.pll_map[input_pll_index >> 3];
	pll_bit_off = input_pll_index & 0x7;

	if (!(pll_byte & BIT(pll_bit_off))) {
		dev_err(hdev->dev, "PLL index %d is not supported\n",
			input_pll_index);
		return -EINVAL;
	}

	*pll_index = input_pll_index;

	return 0;
}

int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, enum pll_index pll_index,
		u16 *pll_freq_arr)
{
	struct cpucp_packet pkt;
	enum pll_index used_pll_idx;
	u64 result;
	int rc;

	rc = get_used_pll_index(hdev, pll_index, &used_pll_idx);
	if (rc)
		return rc;

	memset(&pkt, 0, sizeof(pkt));

	pkt.ctl = cpu_to_le32(CPUCP_PACKET_PLL_INFO_GET <<
				CPUCP_PKT_CTL_OPCODE_SHIFT);
	pkt.pll_type = __cpu_to_le16(pll_index);
	pkt.pll_type = __cpu_to_le16((u16)used_pll_idx);

	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
			HL_CPUCP_INFO_TIMEOUT_USEC, &result);

@@ -565,6 +732,29 @@ int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u16 pll_index,
	return rc;
}

int hl_fw_cpucp_power_get(struct hl_device *hdev, u64 *power)
{
	struct cpucp_packet pkt;
	u64 result;
	int rc;

	memset(&pkt, 0, sizeof(pkt));

	pkt.ctl = cpu_to_le32(CPUCP_PACKET_POWER_GET <<
				CPUCP_PKT_CTL_OPCODE_SHIFT);

	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
			HL_CPUCP_INFO_TIMEOUT_USEC, &result);
	if (rc) {
		dev_err(hdev->dev, "Failed to read power, error %d\n", rc);
		return rc;
	}

	*power = result;

	return rc;
}

static void detect_cpu_boot_status(struct hl_device *hdev, u32 status)
{
	/* Some of the status codes below are deprecated in newer f/w
@@ -623,7 +813,11 @@ int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg,
	u32 status, security_status;
	int rc;

	if (!hdev->cpu_enable)
	/* The pldm check was added for cases in which we use preboot on PLDM
	 * and want to load the boot fit, but can't wait for preboot because
	 * it runs very slowly
	 */
	if (!(hdev->fw_components & FW_TYPE_PREBOOT_CPU) || hdev->pldm)
		return 0;

	/* Need to check two possible scenarios:

@@ -677,16 +871,16 @@ int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg,
	if (security_status & CPU_BOOT_DEV_STS0_ENABLED) {
		prop->fw_security_status_valid = 1;

		/* FW security should be derived from PCI ID, we keep this
		 * check for backward compatibility
		 */
		if (security_status & CPU_BOOT_DEV_STS0_SECURITY_EN)
			prop->fw_security_disabled = false;
		else
			prop->fw_security_disabled = true;

		if (security_status & CPU_BOOT_DEV_STS0_FW_HARD_RST_EN)
			prop->hard_reset_done_by_fw = true;
	} else {
		prop->fw_security_status_valid = 0;
		prop->fw_security_disabled = true;
	}

	dev_dbg(hdev->dev, "Firmware preboot security status %#x\n",
@@ -710,7 +904,7 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
	u32 status;
	int rc;

	if (!(hdev->fw_loading & FW_TYPE_BOOT_CPU))
	if (!(hdev->fw_components & FW_TYPE_BOOT_CPU))
		return 0;

	dev_info(hdev->dev, "Going to wait for device boot (up to %lds)\n",

@@ -801,7 +995,7 @@ int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
		goto out;
	}

	if (!(hdev->fw_loading & FW_TYPE_LINUX)) {
	if (!(hdev->fw_components & FW_TYPE_LINUX)) {
		dev_info(hdev->dev, "Skip loading Linux F/W\n");
		goto out;
	}

@@ -19,6 +19,7 @@
#include <linux/dma-direction.h>
#include <linux/scatterlist.h>
#include <linux/hashtable.h>
#include <linux/debugfs.h>
#include <linux/bitfield.h>
#include <linux/genalloc.h>
#include <linux/sched/signal.h>

@@ -61,7 +62,7 @@

#define HL_SIM_MAX_TIMEOUT_US	10000000 /* 10s */

#define HL_IDLE_BUSY_TS_ARR_SIZE	4096
#define HL_COMMON_USER_INTERRUPT_ID	0xFFF

/* Memory */
#define MEM_HASH_TABLE_BITS	7 /* 1 << 7 buckets */
@@ -102,6 +103,23 @@ enum hl_mmu_page_table_location {

#define HL_MAX_DCORES		4

/*
 * Reset Flags
 *
 * - HL_RESET_HARD
 *       If set do hard reset to all engines. If not set reset just
 *       compute/DMA engines.
 *
 * - HL_RESET_FROM_RESET_THREAD
 *       Set if the caller is the hard-reset thread
 *
 * - HL_RESET_HEARTBEAT
 *       Set if reset is due to heartbeat
 */
#define HL_RESET_HARD			(1 << 0)
#define HL_RESET_FROM_RESET_THREAD	(1 << 1)
#define HL_RESET_HEARTBEAT		(1 << 2)
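/* These bits replace the old (hard_reset, from_hard_reset_thread) bool pair
 * of hl_device_reset(); callers OR them together, e.g. the heartbeat path
 * calls hl_device_reset(hdev, HL_RESET_HARD | HL_RESET_HEARTBEAT).
 */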

#define HL_MAX_SOBS_PER_MONITOR	8

/**

@@ -169,15 +187,19 @@ enum hl_fw_component {
};

/**
 * enum hl_fw_types - F/W types to load
 * enum hl_fw_types - F/W types present in the system
 * @FW_TYPE_LINUX: Linux image for device CPU
 * @FW_TYPE_BOOT_CPU: Boot image for device CPU
 * @FW_TYPE_PREBOOT_CPU: Indicates pre-loaded CPUs are present in the system
 *                       (preboot, ppboot etc...)
 * @FW_TYPE_ALL_TYPES: Mask for all types
 */
enum hl_fw_types {
	FW_TYPE_LINUX = 0x1,
	FW_TYPE_BOOT_CPU = 0x2,
	FW_TYPE_ALL_TYPES = (FW_TYPE_LINUX | FW_TYPE_BOOT_CPU)
	FW_TYPE_PREBOOT_CPU = 0x4,
	FW_TYPE_ALL_TYPES =
		(FW_TYPE_LINUX | FW_TYPE_BOOT_CPU | FW_TYPE_PREBOOT_CPU)
};

/**
@@ -368,6 +390,7 @@ struct hl_mmu_properties {
 * @dram_size: DRAM total size.
 * @dram_pci_bar_size: size of PCI bar towards DRAM.
 * @max_power_default: max power of the device after reset
 * @dc_power_default: power consumed by the device in mode idle.
 * @dram_size_for_default_page_mapping: DRAM size needed to map to avoid page
 *                                      fault.
 * @pcie_dbi_base_address: Base address of the PCIE_DBI block.

@@ -412,6 +435,7 @@ struct hl_mmu_properties {
 * @first_available_user_msix_interrupt: first available msix interrupt
 *                                       reserved for the user
 * @first_available_cq: first available CQ for the user.
 * @user_interrupt_count: number of user interrupts.
 * @tpc_enabled_mask: which TPCs are enabled.
 * @completion_queues_count: number of completion queues.
 * @fw_security_disabled: true if security measures are disabled in firmware,

@@ -421,6 +445,7 @@ struct hl_mmu_properties {
 * @dram_supports_virtual_memory: is there an MMU towards the DRAM
 * @hard_reset_done_by_fw: true if firmware is handling hard reset flow
 * @num_functional_hbms: number of functional HBMs in each DCORE.
 * @iatu_done_by_fw: true if iATU configuration is being done by FW.
 */
struct asic_fixed_properties {
	struct hw_queue_properties	*hw_queues_props;

@@ -439,6 +464,7 @@ struct asic_fixed_properties {
	u64				dram_size;
	u64				dram_pci_bar_size;
	u64				max_power_default;
	u64				dc_power_default;
	u64				dram_size_for_default_page_mapping;
	u64				pcie_dbi_base_address;
	u64				pcie_aux_dbi_reg_addr;

@@ -475,6 +501,7 @@ struct asic_fixed_properties {
	u16				first_available_user_mon[HL_MAX_DCORES];
	u16				first_available_user_msix_interrupt;
	u16				first_available_cq[HL_MAX_DCORES];
	u16				user_interrupt_count;
	u8				tpc_enabled_mask;
	u8				completion_queues_count;
	u8				fw_security_disabled;

@@ -482,6 +509,7 @@ struct asic_fixed_properties {
	u8				dram_supports_virtual_memory;
	u8				hard_reset_done_by_fw;
	u8				num_functional_hbms;
	u8				iatu_done_by_fw;
};

/**
@@ -503,6 +531,7 @@ struct hl_fence {

/**
 * struct hl_cs_compl - command submission completion object.
 * @sob_reset_work: workqueue object to run SOB reset flow.
 * @base_fence: hl fence object.
 * @lock: spinlock to protect fence.
 * @hdev: habanalabs device structure.

@@ -513,6 +542,7 @@ struct hl_fence {
 * @sob_group: the SOB group that is used in this collective wait CS.
 */
struct hl_cs_compl {
	struct work_struct	sob_reset_work;
	struct hl_fence		base_fence;
	spinlock_t		lock;
	struct hl_device	*hdev;
@ -689,6 +719,31 @@ struct hl_cq {
|
|||
atomic_t free_slots_cnt;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct hl_user_interrupt - holds user interrupt information
|
||||
* @hdev: pointer to the device structure
|
||||
* @wait_list_head: head to the list of user threads pending on this interrupt
|
||||
* @wait_list_lock: protects wait_list_head
|
||||
* @interrupt_id: msix interrupt id
|
||||
*/
|
||||
struct hl_user_interrupt {
|
||||
struct hl_device *hdev;
|
||||
struct list_head wait_list_head;
|
||||
spinlock_t wait_list_lock;
|
||||
u32 interrupt_id;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct hl_user_pending_interrupt - holds a context to a user thread
|
||||
* pending on an interrupt
|
||||
* @wait_list_node: node in the list of user threads pending on an interrupt
|
||||
* @fence: hl fence object for interrupt completion
|
||||
*/
|
||||
struct hl_user_pending_interrupt {
|
||||
struct list_head wait_list_node;
|
||||
struct hl_fence fence;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct hl_eq - describes the event queue (single one per device)
|
||||
* @hdev: pointer to the device structure
|
||||
|
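The two structures added in the hunk above carry the new user-interrupt wait machinery: a waiting thread hangs an hl_user_pending_interrupt node on the interrupt's wait_list_head and blocks on the embedded fence, which the MSI-X handler later completes. A minimal sketch of the enqueue side, using only the fields shown in this diff; the helper name and its locking context are illustrative, not the driver's actual wait path:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Illustrative enqueue of one waiter on a user interrupt. The IRQ
 * handler would walk wait_list_head under the same lock and complete
 * each node's fence.
 */
static void queue_user_waiter(struct hl_user_interrupt *intr,
			      struct hl_user_pending_interrupt *pend)
{
	unsigned long flags;

	spin_lock_irqsave(&intr->wait_list_lock, flags);
	list_add_tail(&pend->wait_list_node, &intr->wait_list_head);
	spin_unlock_irqrestore(&intr->wait_list_lock, flags);
}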
@@ -713,11 +768,13 @@ struct hl_eq {
 * @ASIC_INVALID: Invalid ASIC type.
 * @ASIC_GOYA: Goya device.
 * @ASIC_GAUDI: Gaudi device.
+ * @ASIC_GAUDI_SEC: Gaudi secured device (HL-2000).
 */
enum hl_asic_type {
	ASIC_INVALID,
	ASIC_GOYA,
-	ASIC_GAUDI
+	ASIC_GAUDI,
+	ASIC_GAUDI_SEC
};

struct hl_cs_parser;
@@ -802,8 +859,12 @@ enum div_select_defs {
 * @update_eq_ci: update event queue CI.
 * @context_switch: called upon ASID context switch.
 * @restore_phase_topology: clear all SOBs and MONs.
- * @debugfs_read32: debug interface for reading u32 from DRAM/SRAM.
- * @debugfs_write32: debug interface for writing u32 to DRAM/SRAM.
+ * @debugfs_read32: debug interface for reading u32 from DRAM/SRAM/Host memory.
+ * @debugfs_write32: debug interface for writing u32 to DRAM/SRAM/Host memory.
+ * @debugfs_read64: debug interface for reading u64 from DRAM/SRAM/Host memory.
+ * @debugfs_write64: debug interface for writing u64 to DRAM/SRAM/Host memory.
+ * @debugfs_read_dma: debug interface for reading up to 2MB from the device's
+ *                    internal memory via DMA engine.
 * @add_device_attr: add ASIC specific device attributes.
 * @handle_eqe: handle event queue entry (IRQ) from CPU-CP.
 * @set_pll_profile: change PLL profile (manual/automatic).
@@ -919,10 +980,16 @@ struct hl_asic_funcs {
	void (*update_eq_ci)(struct hl_device *hdev, u32 val);
	int (*context_switch)(struct hl_device *hdev, u32 asid);
	void (*restore_phase_topology)(struct hl_device *hdev);
-	int (*debugfs_read32)(struct hl_device *hdev, u64 addr, u32 *val);
-	int (*debugfs_write32)(struct hl_device *hdev, u64 addr, u32 val);
-	int (*debugfs_read64)(struct hl_device *hdev, u64 addr, u64 *val);
-	int (*debugfs_write64)(struct hl_device *hdev, u64 addr, u64 val);
+	int (*debugfs_read32)(struct hl_device *hdev, u64 addr,
+				bool user_address, u32 *val);
+	int (*debugfs_write32)(struct hl_device *hdev, u64 addr,
+				bool user_address, u32 val);
+	int (*debugfs_read64)(struct hl_device *hdev, u64 addr,
+				bool user_address, u64 *val);
+	int (*debugfs_write64)(struct hl_device *hdev, u64 addr,
+				bool user_address, u64 val);
+	int (*debugfs_read_dma)(struct hl_device *hdev, u64 addr, u32 size,
+				void *blob_addr);
	void (*add_device_attr)(struct hl_device *hdev,
				struct attribute_group *dev_attr_grp);
	void (*handle_eqe)(struct hl_device *hdev,
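All four debugfs accessors gain a user_address flag, so the ASIC-specific implementation can tell whether addr is a host user-space virtual address or a device DRAM/SRAM address, and the new debugfs_read_dma hook backs the data_dma blob for bulk reads. A hedged sketch of a caller, assuming the usual hdev->asic_funcs dispatch table; the wrapper itself is not driver code:

static int dbg_peek32(struct hl_device *hdev, u64 addr, bool user_address)
{
	u32 val;
	int rc;

	/* dispatch to the ASIC-specific implementation */
	rc = hdev->asic_funcs->debugfs_read32(hdev, addr, user_address, &val);
	if (rc)
		return rc;

	dev_info(hdev->dev, "0x%llx = 0x%x\n", addr, val);
	return 0;
}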
@@ -986,6 +1053,7 @@ struct hl_asic_funcs {
	int (*hw_block_mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
			u32 block_id, u32 block_size);
	void (*enable_events_from_fw)(struct hl_device *hdev);
+	void (*get_msi_info)(u32 *table);
};

@@ -1070,9 +1138,11 @@ struct hl_pending_cb {
 * @mem_hash_lock: protects the mem_hash.
 * @mmu_lock: protects the MMU page tables. Any change to the PGT, modifying the
 *            MMU hash or walking the PGT requires taking this lock.
+ * @hw_block_list_lock: protects the HW block memory list.
 * @debugfs_list: node in debugfs list of contexts.
 * @pending_cb_list: list of pending command buffers waiting to be sent upon
 *                   next user command submission context.
+ * @hw_block_mem_list: list of HW block virtual mapped addresses.
 * @cs_counters: context command submission counters.
 * @cb_va_pool: device VA pool for command buffers which are mapped to the
 *              device's MMU.
@@ -1109,8 +1179,10 @@ struct hl_ctx {
	struct hl_va_range *va_range[HL_VA_RANGE_TYPE_MAX];
	struct mutex mem_hash_lock;
	struct mutex mmu_lock;
+	struct mutex hw_block_list_lock;
	struct list_head debugfs_list;
	struct list_head pending_cb_list;
+	struct list_head hw_block_mem_list;
	struct hl_cs_counters_atomic cs_counters;
	struct gen_pool *cb_va_pool;
	u64 cs_sequence;
@@ -1185,6 +1257,7 @@ struct hl_userptr {
 * @sequence: the sequence number of this CS.
 * @staged_sequence: the sequence of the staged submission this CS is part of,
 *                   relevant only if staged_cs is set.
+ * @timeout_jiffies: cs timeout in jiffies.
 * @type: CS_TYPE_*.
 * @submitted: true if CS was submitted to H/W.
 * @completed: true if CS was completed by device.

@@ -1213,6 +1286,7 @@ struct hl_cs {
	struct list_head debugfs_list;
	u64 sequence;
	u64 staged_sequence;
+	u64 timeout_jiffies;
	enum hl_cs_type type;
	u8 submitted;
	u8 completed;
@@ -1329,6 +1403,23 @@ struct hl_vm_hash_node {
	void *ptr;
};

+/**
+ * struct hl_vm_hw_block_list_node - list element from user virtual address to
+ *                                   HW block id.
+ * @node: node to hang on the list in context object.
+ * @ctx: the context this node belongs to.
+ * @vaddr: virtual address of the HW block.
+ * @size: size of the block.
+ * @id: HW block id (handle).
+ */
+struct hl_vm_hw_block_list_node {
+	struct list_head node;
+	struct hl_ctx *ctx;
+	unsigned long vaddr;
+	u32 size;
+	u32 id;
+};
+
/**
 * struct hl_vm_phys_pg_pack - physical page pack.
 * @vm_type: describes the type of the virtual area descriptor.
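The new list node ties a user virtual address back to the HW block it maps. A sketch of how a mapping could be recorded on the per-context list, guarded by the hw_block_list_lock introduced above; allocation and field names follow this diff, but the helper itself is illustrative:

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

static int track_hw_block(struct hl_ctx *ctx, unsigned long vaddr,
			  u32 size, u32 id)
{
	struct hl_vm_hw_block_list_node *lnode;

	lnode = kzalloc(sizeof(*lnode), GFP_KERNEL);
	if (!lnode)
		return -ENOMEM;

	lnode->ctx = ctx;
	lnode->vaddr = vaddr;
	lnode->size = size;
	lnode->id = id;

	/* publish the mapping on the context's list */
	mutex_lock(&ctx->hw_block_list_lock);
	list_add_tail(&lnode->node, &ctx->hw_block_mem_list);
	mutex_unlock(&ctx->hw_block_list_lock);

	return 0;
}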
@@ -1490,12 +1581,13 @@ struct hl_debugfs_entry {
 * @userptr_spinlock: protects userptr_list.
 * @ctx_mem_hash_list: list of available contexts with MMU mappings.
 * @ctx_mem_hash_spinlock: protects ctx_mem_hash_list.
+ * @blob_desc: descriptor of blob
 * @addr: next address to read/write from/to in read/write32.
 * @mmu_addr: next virtual address to translate to physical address in mmu_show.
 * @mmu_asid: ASID to use while translating in mmu_show.
 * @i2c_bus: generic u8 debugfs file for bus value to use in i2c_data_read.
- * @i2c_bus: generic u8 debugfs file for address value to use in i2c_data_read.
- * @i2c_bus: generic u8 debugfs file for register value to use in i2c_data_read.
+ * @i2c_addr: generic u8 debugfs file for address value to use in i2c_data_read.
+ * @i2c_reg: generic u8 debugfs file for register value to use in i2c_data_read.
 */
struct hl_dbg_device_entry {
	struct dentry *root;
@@ -1513,6 +1605,7 @@ struct hl_dbg_device_entry {
	spinlock_t userptr_spinlock;
	struct list_head ctx_mem_hash_list;
	spinlock_t ctx_mem_hash_spinlock;
+	struct debugfs_blob_wrapper blob_desc;
	u64 addr;
	u64 mmu_addr;
	u32 mmu_asid;
@@ -1683,16 +1776,6 @@ struct hl_device_reset_work {
	struct hl_device *hdev;
};

-/**
- * struct hl_device_idle_busy_ts - used for calculating device utilization rate.
- * @idle_to_busy_ts: timestamp where device changed from idle to busy.
- * @busy_to_idle_ts: timestamp where device changed from busy to idle.
- */
-struct hl_device_idle_busy_ts {
-	ktime_t idle_to_busy_ts;
-	ktime_t busy_to_idle_ts;
-};
-
/**
 * struct hr_mmu_hop_addrs - used for holding per-device host-resident mmu hop
 *                           information.
@@ -1821,9 +1904,16 @@ struct hl_mmu_funcs {
 * @asic_name: ASIC specific name.
 * @asic_type: ASIC specific type.
 * @completion_queue: array of hl_cq.
+ * @user_interrupt: array of hl_user_interrupt. upon the corresponding user
+ *                  interrupt, driver will monitor the list of fences
+ *                  registered to this interrupt.
+ * @common_user_interrupt: common user interrupt for all user interrupts.
+ *                         upon any user interrupt, driver will monitor the
+ *                         list of fences registered to this common structure.
 * @cq_wq: work queues of completion queues for executing work in process
 *         context.
 * @eq_wq: work queue of event queue for executing work in process context.
+ * @sob_reset_wq: work queue for sob reset executions.
 * @kernel_ctx: Kernel driver context structure.
 * @kernel_queues: array of hl_hw_queue.
 * @cs_mirror_list: CS mirror list for TDR.
@@ -1857,11 +1947,11 @@ struct hl_mmu_funcs {
 *                   when a user opens the device
 * @fpriv_list_lock: protects the fpriv_list
 * @compute_ctx: current compute context executing.
- * @idle_busy_ts_arr: array to hold time stamps of transitions from idle to busy
- *                    and vice-versa
 * @aggregated_cs_counters: aggregated cs counters among all contexts
 * @mmu_priv: device-specific MMU data.
 * @mmu_func: device-related MMU functions.
+ * @legacy_pll_map: map holding map between dynamic (common) PLL indexes and
+ *                  static (asic specific) PLL indexes.
 * @dram_used_mem: current DRAM memory consumption.
 * @timeout_jiffies: device CS timeout value.
 * @max_power: the max power of the device, as configured by the sysadmin. This

@@ -1874,13 +1964,10 @@ struct hl_mmu_funcs {
 * @curr_pll_profile: current PLL profile.
 * @card_type: Various ASICs have several card types. This indicates the card
 *             type of the current device.
- * @cs_active_cnt: number of active command submissions on this device (active
- *                 means already in H/W queues)
 * @major: habanalabs kernel driver major.
 * @high_pll: high PLL profile frequency.
 * @soft_reset_cnt: number of soft reset since the driver was loaded.
 * @hard_reset_cnt: number of hard reset since the driver was loaded.
- * @idle_busy_ts_idx: index of current entry in idle_busy_ts_arr
 * @clk_throttling_reason: bitmask represents the current clk throttling reasons
 * @id: device minor.
 * @id_control: minor of the control device
@@ -1937,8 +2024,11 @@ struct hl_device {
	char status[HL_DEV_STS_MAX][HL_STR_MAX];
	enum hl_asic_type asic_type;
	struct hl_cq *completion_queue;
+	struct hl_user_interrupt *user_interrupt;
+	struct hl_user_interrupt common_user_interrupt;
	struct workqueue_struct **cq_wq;
	struct workqueue_struct *eq_wq;
+	struct workqueue_struct *sob_reset_wq;
	struct hl_ctx *kernel_ctx;
	struct hl_hw_queue *kernel_queues;
	struct list_head cs_mirror_list;

@@ -1976,13 +2066,13 @@ struct hl_device {

	struct hl_ctx *compute_ctx;

-	struct hl_device_idle_busy_ts *idle_busy_ts_arr;
-
	struct hl_cs_counters_atomic aggregated_cs_counters;

	struct hl_mmu_priv mmu_priv;
	struct hl_mmu_funcs mmu_func[MMU_NUM_PGT_LOCATIONS];

+	enum pll_index *legacy_pll_map;
+
	atomic64_t dram_used_mem;
	u64 timeout_jiffies;
	u64 max_power;
@@ -1990,12 +2080,10 @@ struct hl_device {
	atomic_t in_reset;
	enum hl_pll_frequency curr_pll_profile;
	enum cpucp_card_types card_type;
-	int cs_active_cnt;
	u32 major;
	u32 high_pll;
	u32 soft_reset_cnt;
	u32 hard_reset_cnt;
-	u32 idle_busy_ts_idx;
	u32 clk_throttling_reason;
	u16 id;
	u16 id_control;

@@ -2029,10 +2117,9 @@ struct hl_device {

	/* Parameters for bring-up */
	u64 nic_ports_mask;
-	u64 fw_loading;
+	u64 fw_components;
	u8 mmu_enable;
	u8 mmu_huge_page_opt;
-	u8 cpu_enable;
	u8 reset_pcilink;
	u8 cpu_queues_enable;
	u8 pldm;

@@ -2043,6 +2130,7 @@ struct hl_device {
	u8 bmc_enable;
	u8 rl_enable;
	u8 reset_on_preboot_fail;
+	u8 reset_upon_device_release;
};

@@ -2157,6 +2245,8 @@ void hl_cq_reset(struct hl_device *hdev, struct hl_cq *q);
void hl_eq_reset(struct hl_device *hdev, struct hl_eq *q);
irqreturn_t hl_irq_handler_cq(int irq, void *arg);
irqreturn_t hl_irq_handler_eq(int irq, void *arg);
+irqreturn_t hl_irq_handler_user_cq(int irq, void *arg);
+irqreturn_t hl_irq_handler_default(int irq, void *arg);
u32 hl_cq_inc_ptr(u32 ptr);

int hl_asid_init(struct hl_device *hdev);

@@ -2178,12 +2268,11 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass);
void hl_device_fini(struct hl_device *hdev);
int hl_device_suspend(struct hl_device *hdev);
int hl_device_resume(struct hl_device *hdev);
-int hl_device_reset(struct hl_device *hdev, bool hard_reset,
-			bool from_hard_reset_thread);
+int hl_device_reset(struct hl_device *hdev, u32 flags);
void hl_hpriv_get(struct hl_fpriv *hpriv);
-void hl_hpriv_put(struct hl_fpriv *hpriv);
+int hl_hpriv_put(struct hl_fpriv *hpriv);
int hl_device_set_frequency(struct hl_device *hdev, enum hl_pll_frequency freq);
-uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms);
+int hl_device_utilization(struct hl_device *hdev, u32 *utilization);

int hl_build_hwmon_channel_info(struct hl_device *hdev,
		struct cpucp_sensor *sensors_arr);
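Three signature changes land in the hunk above: hl_device_reset() now takes a flags bitmask instead of two booleans, hl_hpriv_put() reports whether the last reference was dropped, and hl_device_utilization() returns an error code and writes the rate through an out-parameter. A sketch of a post-rework reset caller; the flag names and bit values are assumptions for illustration, they are not shown in this excerpt:

/* assumed flag bits, for illustration only */
#define HL_RESET_HARD			(1 << 0)
#define HL_RESET_FROM_RESET_THREAD	(1 << 1)

static void example_hard_reset(struct hl_device *hdev)
{
	/* one bitmask replaces (true, false) in the old signature */
	hl_device_reset(hdev, HL_RESET_HARD);
}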
@@ -2235,6 +2324,9 @@ void hl_vm_ctx_fini(struct hl_ctx *ctx);
int hl_vm_init(struct hl_device *hdev);
void hl_vm_fini(struct hl_device *hdev);

+void hl_hw_block_mem_init(struct hl_ctx *ctx);
+void hl_hw_block_mem_fini(struct hl_ctx *ctx);
+
u64 hl_reserve_va_block(struct hl_device *hdev, struct hl_ctx *ctx,
		enum hl_va_range_type type, u32 size, u32 alignment);
int hl_unreserve_va_block(struct hl_device *hdev, struct hl_ctx *ctx,
@@ -2287,13 +2379,19 @@ int hl_fw_send_heartbeat(struct hl_device *hdev);
int hl_fw_cpucp_info_get(struct hl_device *hdev,
			u32 cpu_security_boot_status_reg,
			u32 boot_err0_reg);
+int hl_fw_cpucp_handshake(struct hl_device *hdev,
+			u32 cpu_security_boot_status_reg,
+			u32 boot_err0_reg);
int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size);
int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev,
		struct hl_info_pci_counters *counters);
int hl_fw_cpucp_total_energy_get(struct hl_device *hdev,
			u64 *total_energy);
-int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u16 pll_index,
+int get_used_pll_index(struct hl_device *hdev, enum pll_index input_pll_index,
+			enum pll_index *pll_index);
+int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, enum pll_index pll_index,
		u16 *pll_freq_arr);
+int hl_fw_cpucp_power_get(struct hl_device *hdev, u64 *power);
int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
			u32 msg_to_cpu_reg, u32 cpu_msg_status_reg,
			u32 cpu_security_boot_status_reg, u32 boot_err0_reg,
@@ -2304,6 +2402,7 @@ int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg,

int hl_pci_bars_map(struct hl_device *hdev, const char * const name[3],
			bool is_wc[3]);
+int hl_pci_elbi_read(struct hl_device *hdev, u64 addr, u32 *data);
int hl_pci_iatu_write(struct hl_device *hdev, u32 addr, u32 data);
int hl_pci_set_inbound_region(struct hl_device *hdev, u8 region,
		struct hl_inbound_pci_region *pci_region);

@@ -2312,8 +2411,10 @@ int hl_pci_set_outbound_region(struct hl_device *hdev,
int hl_pci_init(struct hl_device *hdev);
void hl_pci_fini(struct hl_device *hdev);

-long hl_get_frequency(struct hl_device *hdev, u32 pll_index, bool curr);
-void hl_set_frequency(struct hl_device *hdev, u32 pll_index, u64 freq);
+long hl_get_frequency(struct hl_device *hdev, enum pll_index pll_index,
+			bool curr);
+void hl_set_frequency(struct hl_device *hdev, enum pll_index pll_index,
+			u64 freq);
int hl_get_temperature(struct hl_device *hdev,
			int sensor_index, u32 attr, long *value);
int hl_set_temperature(struct hl_device *hdev,
@@ -2334,6 +2435,7 @@ int hl_set_voltage(struct hl_device *hdev,
			int sensor_index, u32 attr, long value);
int hl_set_current(struct hl_device *hdev,
			int sensor_index, u32 attr, long value);
+void hl_release_pending_user_interrupts(struct hl_device *hdev);

#ifdef CONFIG_DEBUG_FS

@@ -2434,7 +2536,7 @@ long hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
long hl_ioctl_control(struct file *filep, unsigned int cmd, unsigned long arg);
int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data);
int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data);
-int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data);
+int hl_wait_ioctl(struct hl_fpriv *hpriv, void *data);
int hl_mem_ioctl(struct hl_fpriv *hpriv, void *data);

#endif /* HABANALABSP_H_ */

@@ -27,13 +27,13 @@ static struct class *hl_class;
static DEFINE_IDR(hl_devs_idr);
static DEFINE_MUTEX(hl_devs_idr_lock);

-static int timeout_locked = 5;
+static int timeout_locked = 30;
static int reset_on_lockup = 1;
static int memory_scrub = 1;

module_param(timeout_locked, int, 0444);
MODULE_PARM_DESC(timeout_locked,
-	"Device lockup timeout in seconds (0 = disabled, default 5s)");
+	"Device lockup timeout in seconds (0 = disabled, default 30s)");

module_param(reset_on_lockup, int, 0444);
MODULE_PARM_DESC(reset_on_lockup,
@@ -47,10 +47,12 @@ MODULE_PARM_DESC(memory_scrub,

#define PCI_IDS_GOYA		0x0001
#define PCI_IDS_GAUDI		0x1000
+#define PCI_IDS_GAUDI_SEC	0x1010

static const struct pci_device_id ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_HABANALABS, PCI_IDS_GOYA), },
	{ PCI_DEVICE(PCI_VENDOR_ID_HABANALABS, PCI_IDS_GAUDI), },
+	{ PCI_DEVICE(PCI_VENDOR_ID_HABANALABS, PCI_IDS_GAUDI_SEC), },
	{ 0, }
};
MODULE_DEVICE_TABLE(pci, ids);

@@ -74,6 +76,9 @@ static enum hl_asic_type get_asic_type(u16 device)
	case PCI_IDS_GAUDI:
		asic_type = ASIC_GAUDI;
		break;
+	case PCI_IDS_GAUDI_SEC:
+		asic_type = ASIC_GAUDI_SEC;
+		break;
	default:
		asic_type = ASIC_INVALID;
		break;
@@ -82,6 +87,16 @@ static enum hl_asic_type get_asic_type(u16 device)
	return asic_type;
}

+static bool is_asic_secured(enum hl_asic_type asic_type)
+{
+	switch (asic_type) {
+	case ASIC_GAUDI_SEC:
+		return true;
+	default:
+		return false;
+	}
+}
+
/*
 * hl_device_open - open function for habanalabs device
 *
@@ -234,8 +249,7 @@ out_err:

static void set_driver_behavior_per_device(struct hl_device *hdev)
{
-	hdev->cpu_enable = 1;
-	hdev->fw_loading = FW_TYPE_ALL_TYPES;
+	hdev->fw_components = FW_TYPE_ALL_TYPES;
	hdev->cpu_queues_enable = 1;
	hdev->heartbeat = 1;
	hdev->mmu_enable = 1;

@@ -288,6 +302,12 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
		hdev->asic_type = asic_type;
	}

+	if (pdev)
+		hdev->asic_prop.fw_security_disabled =
+				!is_asic_secured(pdev->device);
+	else
+		hdev->asic_prop.fw_security_disabled = true;
+
	/* Assign status description string */
	strncpy(hdev->status[HL_DEVICE_STATUS_MALFUNCTION],
				"disabled", HL_STR_MAX);

@@ -226,19 +226,14 @@ static int device_utilization(struct hl_device *hdev, struct hl_info_args *args)
	struct hl_info_device_utilization device_util = {0};
	u32 max_size = args->return_size;
	void __user *out = (void __user *) (uintptr_t) args->return_pointer;
+	int rc;

	if ((!max_size) || (!out))
		return -EINVAL;

-	if ((args->period_ms < 100) || (args->period_ms > 1000) ||
-		(args->period_ms % 100)) {
-		dev_err(hdev->dev,
-			"period %u must be between 100 - 1000 and must be divisible by 100\n",
-			args->period_ms);
+	rc = hl_device_utilization(hdev, &device_util.utilization);
+	if (rc)
		return -EINVAL;
-	}
-
-	device_util.utilization = hl_device_utilization(hdev, args->period_ms);

	return copy_to_user(out, &device_util,
		min((size_t) max_size, sizeof(device_util))) ? -EFAULT : 0;
@@ -446,6 +441,25 @@ static int pll_frequency_info(struct hl_fpriv *hpriv, struct hl_info_args *args)
		min((size_t) max_size, sizeof(freq_info))) ? -EFAULT : 0;
}

+static int power_info(struct hl_fpriv *hpriv, struct hl_info_args *args)
+{
+	struct hl_device *hdev = hpriv->hdev;
+	u32 max_size = args->return_size;
+	struct hl_power_info power_info = {0};
+	void __user *out = (void __user *) (uintptr_t) args->return_pointer;
+	int rc;
+
+	if ((!max_size) || (!out))
+		return -EINVAL;
+
+	rc = hl_fw_cpucp_power_get(hdev, &power_info.power);
+	if (rc)
+		return rc;
+
+	return copy_to_user(out, &power_info,
+		min((size_t) max_size, sizeof(power_info))) ? -EFAULT : 0;
+}
+
static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data,
				struct device *dev)
{
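The new power_info() handler is reached through the INFO ioctl. A hypothetical user-space query of the new op; the uapi names (hl_info_args, hl_power_info, HL_INFO_POWER, HL_IOCTL_INFO) follow the driver's public header, but treat the exact layout and include path as assumptions:

#include <stdint.h>
#include <sys/ioctl.h>
#include <misc/habanalabs.h> /* installed uapi header, path assumed */

static int query_power(int fd, uint64_t *power)
{
	struct hl_power_info info = {0};
	struct hl_info_args args = {0};

	args.op = HL_INFO_POWER;
	args.return_pointer = (uint64_t) (uintptr_t) &info;
	args.return_size = sizeof(info);

	if (ioctl(fd, HL_IOCTL_INFO, &args))
		return -1; /* errno holds the reason */

	*power = info.power;
	return 0;
}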
@@ -526,6 +540,9 @@ static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data,
	case HL_INFO_PLL_FREQUENCY:
		return pll_frequency_info(hpriv, args);

+	case HL_INFO_POWER:
+		return power_info(hpriv, args);
+
	default:
		dev_err(dev, "Invalid request %d\n", args->op);
		rc = -ENOTTY;

@@ -596,7 +613,7 @@ static const struct hl_ioctl_desc hl_ioctls[] = {
	HL_IOCTL_DEF(HL_IOCTL_INFO, hl_info_ioctl),
	HL_IOCTL_DEF(HL_IOCTL_CB, hl_cb_ioctl),
	HL_IOCTL_DEF(HL_IOCTL_CS, hl_cs_ioctl),
-	HL_IOCTL_DEF(HL_IOCTL_WAIT_CS, hl_cs_wait_ioctl),
+	HL_IOCTL_DEF(HL_IOCTL_WAIT_CS, hl_wait_ioctl),
	HL_IOCTL_DEF(HL_IOCTL_MEMORY, hl_mem_ioctl),
	HL_IOCTL_DEF(HL_IOCTL_DEBUG, hl_debug_ioctl)
};
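With this change the single WAIT_CS ioctl entry point also serves the new user-interrupt waits. A hypothetical shape of the unified dispatcher; the flag name and union layout are assumptions, and hl_interrupt_wait_ioctl stands in for whatever handler serves the interrupt path:

int hl_wait_ioctl(struct hl_fpriv *hpriv, void *data)
{
	union hl_wait_cs_args *args = data;

	/* assumed flag selecting the user-interrupt wait path */
	if (args->in.flags & HL_WAIT_CS_FLAGS_INTERRUPT)
		return hl_interrupt_wait_ioctl(hpriv, data);

	/* otherwise fall back to the classic CS-fence wait */
	return hl_cs_wait_ioctl(hpriv, data);
}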