Char/Misc driver patches for 5.5-rc1
Here is the big set of char/misc and other driver patches for 5.5-rc1.

Loads of different things in here, this feels like the catch-all of driver
subsystems these days. Full details are in the shortlog, but nothing major
overall, just lots of driver updates and additions.

All of these have been in linux-next for a while with no reported issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCXd6ewA8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+ymNXACfebVkDrFOH9EqDgFArPvZ1i9EmZ4AoLbE1Wki
ftJApk+Ov1BT2TvClOza
=cXqg
-----END PGP SIGNATURE-----

Merge tag 'char-misc-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char/misc and other driver patches for 5.5-rc1

  Loads of different things in here, this feels like the catch-all of
  driver subsystems these days. Full details are in the shortlog, but
  nothing major overall, just lots of driver updates and additions.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (198 commits)
  char: Fix Kconfig indentation, continued
  habanalabs: add more protection of device during reset
  habanalabs: flush EQ workers in hard reset
  habanalabs: make the reset code more consistent
  habanalabs: expose reset counters via existing INFO IOCTL
  habanalabs: make code more concise
  habanalabs: use defines for F/W files
  habanalabs: remove prints on successful device initialization
  habanalabs: remove unnecessary checks
  habanalabs: invalidate MMU cache only once
  habanalabs: skip VA block list update in reset flow
  habanalabs: optimize MMU unmap
  habanalabs: prevent read/write from/to the device during hard reset
  habanalabs: split MMU properties to PCI/DRAM
  habanalabs: re-factor MMU masks and documentation
  habanalabs: type specific MMU cache invalidation
  habanalabs: re-factor memory module code
  habanalabs: export uapi defines to user-space
  habanalabs: don't print error when queues are full
  habanalabs: increase max jobs number to 512
  ...
This commit is contained in: commit 8f56e4ebe0

@@ -1,25 +1,25 @@
-What:           /sys/bus/platform/devices/fsi-master/rescan
+What:           /sys/bus/platform/devices/../fsi-master/fsi0/rescan
 Date:           May 2017
 KernelVersion:  4.12
-Contact:        cbostic@linux.vnet.ibm.com
+Contact:        linux-fsi@lists.ozlabs.org
 Description:
                 Initiates an FSI master scan for all connected slave devices
                 on its links.

-What:           /sys/bus/platform/devices/fsi-master/break
+What:           /sys/bus/platform/devices/../fsi-master/fsi0/break
 Date:           May 2017
 KernelVersion:  4.12
-Contact:        cbostic@linux.vnet.ibm.com
+Contact:        linux-fsi@lists.ozlabs.org
 Description:
                 Sends an FSI BREAK command on a master's communication
                 link to any connected slaves. A BREAK resets a connected
                 device's logic and preps it to receive further commands
                 from the master.

-What:           /sys/bus/platform/devices/fsi-master/slave@00:00/term
+What:           /sys/bus/platform/devices/../fsi-master/fsi0/slave@00:00/term
 Date:           May 2017
 KernelVersion:  4.12
-Contact:        cbostic@linux.vnet.ibm.com
+Contact:        linux-fsi@lists.ozlabs.org
 Description:
                 Sends an FSI terminate command from the master to its
                 connected slave. A terminate resets the slave's state machines
@@ -29,10 +29,10 @@ Description:
                 ongoing operation in case of an expired 'Master Time Out'
                 timer.

-What:           /sys/bus/platform/devices/fsi-master/slave@00:00/raw
+What:           /sys/bus/platform/devices/../fsi-master/fsi0/slave@00:00/raw
 Date:           May 2017
 KernelVersion:  4.12
-Contact:        cbostic@linux.vnet.ibm.com
+Contact:        linux-fsi@lists.ozlabs.org
 Description:
                 Provides a means of reading/writing a 32 bit value from/to a
                 specified FSI bus address.
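The 'raw' attribute is easiest to picture from user space. A minimal sketch,
assuming the attribute behaves as a seekable binary file whose offset selects
the FSI bus address (the device path follows the ABI entry above; the 0x1028
address is an arbitrary illustration, not taken from this diff):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *path =
                    "/sys/bus/platform/devices/fsi-master/slave@00:00/raw";
            uint32_t val;
            int fd = open(path, O_RDONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* offset 0x1028 is an example FSI bus address */
            if (pread(fd, &val, sizeof(val), 0x1028) != sizeof(val)) {
                    perror("pread");
                    close(fd);
                    return 1;
            }
            printf("0x%08x\n", val);
            close(fd);
            return 0;
    }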
@@ -4,7 +4,7 @@ KernelVersion:  3.10
 Contact:        Samuel Ortiz <sameo@linux.intel.com>
                 linux-mei@linux.intel.com
 Description:    Stores the same MODALIAS value emitted by uevent
-                Format: mei:<mei device name>:<device uuid>:
+                Format: mei:<mei device name>:<device uuid>:<protocol version>

 What:           /sys/bus/mei/devices/.../name
 Date:           May 2015
@@ -26,3 +26,24 @@ KernelVersion:  4.3
 Contact:        Tomas Winkler <tomas.winkler@intel.com>
 Description:    Stores mei client protocol version
                 Format: %d
+
+What:           /sys/bus/mei/devices/.../max_conn
+Date:           Nov 2019
+KernelVersion:  5.5
+Contact:        Tomas Winkler <tomas.winkler@intel.com>
+Description:    Stores mei client maximum number of connections
+                Format: %d
+
+What:           /sys/bus/mei/devices/.../fixed
+Date:           Nov 2019
+KernelVersion:  5.5
+Contact:        Tomas Winkler <tomas.winkler@intel.com>
+Description:    Stores mei client fixed address, if any
+                Format: %d
+
+What:           /sys/bus/mei/devices/.../max_len
+Date:           Nov 2019
+KernelVersion:  5.5
+Contact:        Tomas Winkler <tomas.winkler@intel.com>
+Description:    Stores mei client maximum message length
+                Format: %d
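These attributes are plain-text integers, so a client application can, for
example, read max_len before sizing its send buffer. A small sketch (the
sysfs device directory name is passed in rather than invented here):

    #include <stdio.h>

    /* Read one integer attribute of a mei client device given its
     * sysfs directory name in argv[1]; sketch only. */
    static long mei_attr(const char *dev, const char *attr)
    {
            char path[256];
            long val = -1;
            FILE *f;

            snprintf(path, sizeof(path), "/sys/bus/mei/devices/%s/%s",
                     dev, attr);
            f = fopen(path, "r");
            if (!f)
                    return -1;
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(int argc, char **argv)
    {
            if (argc < 2)
                    return 1;
            printf("max message %ld bytes, max connections %ld\n",
                   mei_attr(argv[1], "max_len"),
                   mei_attr(argv[1], "max_conn"));
            return 0;
    }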
@@ -80,6 +80,14 @@ Contact:        thunderbolt-software@lists.01.org
 Description:    This attribute contains 1 if Thunderbolt device was already
                 authorized on boot and 0 otherwise.

+What:           /sys/bus/thunderbolt/devices/.../generation
+Date:           Jan 2020
+KernelVersion:  5.5
+Contact:        Christian Kellner <christian@kellner.me>
+Description:    This attribute contains the generation of the Thunderbolt
+                controller associated with the device. It will contain 4
+                for USB4.
+
 What:           /sys/bus/thunderbolt/devices/.../key
 Date:           Sep 2017
 KernelVersion:  4.13
@@ -104,6 +112,34 @@ Contact:        thunderbolt-software@lists.01.org
 Description:    This attribute contains name of this device extracted from
                 the device DROM.

+What:           /sys/bus/thunderbolt/devices/.../rx_speed
+Date:           Jan 2020
+KernelVersion:  5.5
+Contact:        Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:    This attribute reports the device RX speed per lane.
+                All RX lanes run at the same speed.
+
+What:           /sys/bus/thunderbolt/devices/.../rx_lanes
+Date:           Jan 2020
+KernelVersion:  5.5
+Contact:        Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:    This attribute reports the number of RX lanes the device is
+                using simultaneously through its upstream port.
+
+What:           /sys/bus/thunderbolt/devices/.../tx_speed
+Date:           Jan 2020
+KernelVersion:  5.5
+Contact:        Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:    This attribute reports the TX speed per lane.
+                All TX lanes run at the same speed.
+
+What:           /sys/bus/thunderbolt/devices/.../tx_lanes
+Date:           Jan 2020
+KernelVersion:  5.5
+Contact:        Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:    This attribute reports the number of TX lanes the device is
+                using simultaneously through its upstream port.
+
 What:           /sys/bus/thunderbolt/devices/.../vendor
 Date:           Sep 2017
 KernelVersion:  4.13
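Because the speed attributes are per lane, the aggregate receive bandwidth is
rx_speed multiplied by rx_lanes. A hedged sketch (the device directory name
is illustrative, and the exact formatting of rx_speed is assumed to start
with a numeric Gb/s value, which this diff does not spell out):

    #include <stdio.h>

    static double read_num(const char *path)
    {
            double v = 0;
            FILE *f = fopen(path, "r");

            if (f) {
                    fscanf(f, "%lf", &v);
                    fclose(f);
            }
            return v;
    }

    int main(void)
    {
            const char *base = "/sys/bus/thunderbolt/devices/0-1";
            char p[128];
            double speed, lanes;

            snprintf(p, sizeof(p), "%s/rx_speed", base);
            speed = read_num(p);
            snprintf(p, sizeof(p), "%s/rx_lanes", base);
            lanes = read_num(p);
            printf("total RX bandwidth: %.1f Gb/s\n", speed * lanes);
            return 0;
    }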
@@ -80,3 +80,13 @@ Description:    Display the ME device state.
                 DISABLED
                 POWER_DOWN
                 POWER_UP
+
+What:           /sys/class/mei/meiN/trc
+Date:           Nov 2019
+KernelVersion:  5.5
+Contact:        Tomas Winkler <tomas.winkler@intel.com>
+Description:    Display trc status register content
+
+                The ME FW writes Glitch Detection HW (TRC)
+                status information into the trc status register
+                for BIOS and OS to monitor fw health.
@@ -106,3 +106,135 @@ KernelVersion:  5.4
 Contact:        Wu Hao <hao.wu@intel.com>
 Description:    Read-only. Read this file to get the second error detected by
                 hardware.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/name
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. Read this file to get the name of the hwmon
+                device. It supports the values:
+                    'dfl_fme_thermal' - thermal hwmon device name
+                    'dfl_fme_power'   - power hwmon device name
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_input
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns FPGA device temperature in millidegrees
+                Celsius.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_max
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns hardware threshold1 temperature in
+                millidegrees Celsius. If temperature rises at or above this
+                threshold, hardware starts 50% or 90% throttling (see
+                'temp1_max_policy').
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_crit
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns hardware threshold2 temperature in
+                millidegrees Celsius. If temperature rises at or above this
+                threshold, hardware starts 100% throttling.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_emergency
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns hardware trip threshold temperature in
+                millidegrees Celsius. If temperature rises at or above this
+                threshold, a fatal event will be triggered to the board
+                management controller (BMC) to shut down the FPGA.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_max_alarm
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-only. It returns 1 if temperature is currently at or above
+                hardware threshold1 (see 'temp1_max'), otherwise 0.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_crit_alarm
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-only. It returns 1 if temperature is currently at or above
+                hardware threshold2 (see 'temp1_crit'), otherwise 0.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/temp1_max_policy
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. Read this file to get the policy of hardware
+                threshold1 (see 'temp1_max'). It only supports two values
+                (policies):
+                    0 - AP2 state (90% throttling)
+                    1 - AP1 state (50% throttling)
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_input
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns current FPGA power consumption in uW.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_max
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Write. Read this file to get current hardware power
+                threshold1 in uW. If power consumption rises at or above
+                this threshold, hardware starts 50% throttling.
+                Write this file to set current hardware power threshold1 in
+                uW. As the hardware only accepts values in Watts, the input
+                value is rounded down to whole Watts (the sub-Watt part is
+                discarded) and clamped to the range 0 to 127 Watts. The write
+                fails with -EINVAL if the input cannot be parsed.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_crit
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Write. Read this file to get current hardware power
+                threshold2 in uW. If power consumption rises at or above
+                this threshold, hardware starts 90% throttling.
+                Write this file to set current hardware power threshold2 in
+                uW. As the hardware only accepts values in Watts, the input
+                value is rounded down to whole Watts (the sub-Watt part is
+                discarded) and clamped to the range 0 to 127 Watts. The write
+                fails with -EINVAL if the input cannot be parsed.
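The rounding and clamping described above is simple integer arithmetic; a
quick illustration of what the driver stores for a given microwatt input
(a sketch using the 0-127 W limit quoted in these entries):

    #include <stdio.h>

    #define PWR_THRESHOLD_MAX 127 /* Watts, per the ABI text above */

    /* Microwatts in, whole Watts stored: the sub-Watt part is
     * discarded and the result is clamped to 0..127 W. */
    static long threshold_watts(long microwatts)
    {
            long w = microwatts / 1000000; /* round down to whole Watts */

            if (w < 0)
                    w = 0;
            if (w > PWR_THRESHOLD_MAX)
                    w = PWR_THRESHOLD_MAX;
            return w;
    }

    int main(void)
    {
            printf("%ld\n", threshold_watts(30500000));  /* -> 30 W */
            printf("%ld\n", threshold_watts(999999999)); /* -> 127 W */
            return 0;
    }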
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_max_alarm
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-only. It returns 1 if power consumption is currently at or
+                above hardware threshold1 (see 'power1_max'), otherwise 0.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_crit_alarm
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-only. It returns 1 if power consumption is currently at or
+                above hardware threshold2 (see 'power1_crit'), otherwise 0.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_xeon_limit
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns power limit for XEON in uW.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_fpga_limit
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-Only. It returns power limit for FPGA in uW.
+
+What:           /sys/bus/platform/devices/dfl-fme.0/hwmon/hwmonX/power1_ltr
+Date:           October 2019
+KernelVersion:  5.5
+Contact:        Wu Hao <hao.wu@intel.com>
+Description:    Read-only. Read this file to get current Latency Tolerance
+                Reporting (ltr) value. It returns 1 if all Accelerated
+                Function Units (AFUs) can tolerate latency >= 40us for memory
+                access or 0 if any AFU is latency sensitive (< 40us).
@@ -87,6 +87,15 @@ its hardware characteristics.

    * port or ports: see "Graph bindings for Coresight" below.

+* Optional properties for all components:
+
+   * arm,coresight-loses-context-with-cpu : boolean. Indicates that the
+     hardware will lose register context on CPU power down (e.g. CPUIdle).
+     An example of where this may be needed is a system which contains a
+     coresight component and CPU in the same power domain. When the CPU
+     powers down the coresight component also powers down and loses its
+     context. This property is currently only used for the ETM 4.x driver.
+
 * Optional properties for ETM/PTMs:

    * arm,cp14: must be present if the system accesses ETM/PTM management
@@ -0,0 +1,24 @@
+Device-tree bindings for AST2600 FSI master
+-------------------------------------------
+
+The AST2600 contains two identical FSI masters. They share a clock and have
+separate interrupt lines and output pins.
+
+Required properties:
+ - compatible: "aspeed,ast2600-fsi-master"
+ - reg: base address and length
+ - clocks: phandle and clock number
+ - interrupts: platform dependent interrupt description
+ - pinctrl-0: phandle to pinctrl node
+ - pinctrl-names: pinctrl state
+
+Examples:
+
+    fsi-master {
+        compatible = "aspeed,ast2600-fsi-master", "fsi-master";
+        reg = <0x1e79b000 0x94>;
+        interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
+        pinctrl-names = "default";
+        pinctrl-0 = <&pinctrl_fsi1_default>;
+        clocks = <&syscon ASPEED_CLK_GATE_FSICLK>;
+    };
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interconnect/qcom,msm8974.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm MSM8974 Network-On-Chip Interconnect
+
+maintainers:
+  - Brian Masney <masneyb@onstation.org>
+
+description: |
+  The Qualcomm MSM8974 interconnect providers support setting system
+  bandwidth requirements between various network-on-chip fabrics.
+
+properties:
+  reg:
+    maxItems: 1
+
+  compatible:
+    enum:
+      - qcom,msm8974-bimc
+      - qcom,msm8974-cnoc
+      - qcom,msm8974-mmssnoc
+      - qcom,msm8974-ocmemnoc
+      - qcom,msm8974-pnoc
+      - qcom,msm8974-snoc
+
+  '#interconnect-cells':
+    const: 1
+
+  clock-names:
+    items:
+      - const: bus
+      - const: bus_a
+
+  clocks:
+    items:
+      - description: Bus Clock
+      - description: Bus A Clock
+
+required:
+  - compatible
+  - reg
+  - '#interconnect-cells'
+  - clock-names
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,rpmcc.h>
+
+    bimc: interconnect@fc380000 {
+        reg = <0xfc380000 0x6a000>;
+        compatible = "qcom,msm8974-bimc";
+        #interconnect-cells = <1>;
+        clock-names = "bus", "bus_a";
+        clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
+                 <&rpmcc RPM_SMD_BIMC_A_CLK>;
+    };
@@ -0,0 +1,25 @@
+Rockchip internal OTP (One Time Programmable) memory device tree bindings
+
+Required properties:
+- compatible: Should be one of the following.
+  - "rockchip,px30-otp" - for PX30 SoCs.
+  - "rockchip,rk3308-otp" - for RK3308 SoCs.
+- reg: Should contain the registers location and size
+- clocks: Must contain an entry for each entry in clock-names.
+- clock-names: Should be "otp", "apb_pclk" and "phy".
+- resets: Must contain an entry for each entry in reset-names.
+  See ../../reset/reset.txt for details.
+- reset-names: Should be "phy".
+
+See nvmem.txt for more information.
+
+Example:
+    otp: otp@ff290000 {
+        compatible = "rockchip,px30-otp";
+        reg = <0x0 0xff290000 0x0 0x4000>;
+        #address-cells = <1>;
+        #size-cells = <1>;
+        clocks = <&cru SCLK_OTP_USR>, <&cru PCLK_OTP_NS>,
+                 <&cru PCLK_OTP_PHY>;
+        clock-names = "otp", "apb_pclk", "phy";
+    };
@@ -0,0 +1,39 @@
+= Spreadtrum eFuse device tree bindings =
+
+Required properties:
+- compatible: Should be "sprd,ums312-efuse".
+- reg: Specify the address offset of the efuse controller.
+- clock-names: Should be "enable".
+- clocks: The phandle and specifier referencing the controller's clock.
+- hwlocks: Reference to a phandle of a hwlock provider node.
+
+= Data cells =
+Child nodes of the eFuse node, with bindings as described in
+bindings/nvmem/nvmem.txt
+
+Example:
+
+    ap_efuse: efuse@32240000 {
+        compatible = "sprd,ums312-efuse";
+        reg = <0 0x32240000 0 0x10000>;
+        clock-names = "enable";
+        hwlocks = <&hwlock 8>;
+        clocks = <&aonapb_gate CLK_EFUSE_EB>;
+
+        /* Data cells */
+        thermal_calib: calib@10 {
+            reg = <0x10 0x2>;
+        };
+    };
+
+= Data consumers =
+Device nodes which consume nvmem data cells.
+
+Example:
+
+    thermal {
+        ...
+
+        nvmem-cells = <&thermal_calib>;
+        nvmem-cell-names = "calibration";
+    };
@@ -0,0 +1,47 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2019 Ondrej Jirman <megous@megous.com>
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/phy/allwinner,sun50i-h6-usb3-phy.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Allwinner H6 USB3 PHY
+
+maintainers:
+  - Ondrej Jirman <megous@megous.com>
+
+properties:
+  compatible:
+    enum:
+      - allwinner,sun50i-h6-usb3-phy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  resets:
+    maxItems: 1
+
+  "#phy-cells":
+    const: 0
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - resets
+  - "#phy-cells"
+
+examples:
+  - |
+    #include <dt-bindings/clock/sun50i-h6-ccu.h>
+    #include <dt-bindings/reset/sun50i-h6-ccu.h>
+    phy@5210000 {
+        compatible = "allwinner,sun50i-h6-usb3-phy";
+        reg = <0x5210000 0x10000>;
+        clocks = <&ccu CLK_USB_PHY1>;
+        resets = <&ccu RST_USB_PHY1>;
+        #phy-cells = <0>;
+    };
@@ -2,6 +2,7 @@ ROCKCHIP USB2.0 PHY WITH INNO IP BLOCK

 Required properties (phy (parent) node):
  - compatible : should be one of the listed compatibles:
+        * "rockchip,px30-usb2phy"
         * "rockchip,rk3228-usb2phy"
         * "rockchip,rk3328-usb2phy"
         * "rockchip,rk3366-usb2phy"
@@ -14,7 +14,8 @@ Required properties:
               "qcom,msm8998-qmp-pcie-phy" for PCIe QMP phy on msm8998,
               "qcom,sdm845-qmp-usb3-phy" for USB3 QMP V3 phy on sdm845,
               "qcom,sdm845-qmp-usb3-uni-phy" for USB3 QMP V3 UNI phy on sdm845,
-              "qcom,sdm845-qmp-ufs-phy" for UFS QMP phy on sdm845.
+              "qcom,sdm845-qmp-ufs-phy" for UFS QMP phy on sdm845,
+              "qcom,sm8150-qmp-ufs-phy" for UFS QMP phy on sm8150.

 - reg:
   - index 0: address and length of register set for PHY's common
@@ -57,6 +58,8 @@ Required properties:
                "aux", "cfg_ahb", "ref", "com_aux".
               For "qcom,sdm845-qmp-ufs-phy" must contain:
                "ref", "ref_aux".
+              For "qcom,sm8150-qmp-ufs-phy" must contain:
+               "ref", "ref_aux".

 - resets: a list of phandles and reset controller specifier pairs,
           one for each entry in reset-names.
@@ -83,6 +86,8 @@ Required properties:
                "phy", "common".
               For "qcom,sdm845-qmp-ufs-phy": must contain:
                "ufsphy".
+              For "qcom,sm8150-qmp-ufs-phy": must contain:
+               "ufsphy".

 - vdda-phy-supply: Phandle to a regulator supply to PHY core block.
 - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/phy/rockchip,px30-dsi-dphy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip MIPI DPHY with additional LVDS/TTL modes
+
+maintainers:
+  - Heiko Stuebner <heiko@sntech.de>
+
+properties:
+  "#phy-cells":
+    const: 0
+
+  "#clock-cells":
+    const: 0
+
+  compatible:
+    enum:
+      - rockchip,px30-dsi-dphy
+      - rockchip,rk3128-dsi-dphy
+      - rockchip,rk3368-dsi-dphy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: PLL reference clock
+      - description: Module clock
+
+  clock-names:
+    items:
+      - const: ref
+      - const: pclk
+
+  power-domains:
+    maxItems: 1
+    description: phandle to the associated power domain
+
+  resets:
+    items:
+      - description: exclusive PHY reset line
+
+  reset-names:
+    items:
+      - const: apb
+
+required:
+  - "#phy-cells"
+  - "#clock-cells"
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+
+additionalProperties: false
+
+examples:
+  - |
+    dsi_dphy: phy@ff2e0000 {
+        compatible = "rockchip,px30-dsi-dphy";
+        reg = <0x0 0xff2e0000 0x0 0x10000>;
+        clocks = <&pmucru 13>, <&cru 12>;
+        clock-names = "ref", "pclk";
+        #clock-cells = <0>;
+        resets = <&cru 12>;
+        reset-names = "apb";
+        #phy-cells = <0>;
+    };
+
+...
@@ -108,6 +108,16 @@ More functions are exposed through sysfs
 error reporting sysfs interfaces allow the user to read errors detected by
 the hardware, and clear the logged errors.

+Power management (dfl_fme_power hwmon)
+    power management hwmon sysfs interfaces allow the user to read power
+    management information (power consumption, thresholds, threshold status,
+    limits, etc.) and configure power thresholds for different throttling
+    levels.
+
+Thermal management (dfl_fme_thermal hwmon)
+    thermal management hwmon sysfs interfaces allow the user to read thermal
+    management information (current temperature, thresholds, threshold
+    status, etc.).

 FIU - PORT
 ==========
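Since hwmon instance numbering is not stable across boots, user space
typically finds these devices by matching the 'name' attribute under
/sys/class/hwmon. A sketch (the paths follow the standard hwmon class
layout; everything else here is illustrative):

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    /* Find the hwmon whose name is "dfl_fme_power" and print its
     * power1_input reading (microwatts). Sketch only. */
    int main(void)
    {
            const char *want = "dfl_fme_power";
            DIR *d = opendir("/sys/class/hwmon");
            struct dirent *e;
            char path[256], name[64];
            FILE *f;

            if (!d)
                    return 1;
            while ((e = readdir(d))) {
                    snprintf(path, sizeof(path),
                             "/sys/class/hwmon/%s/name", e->d_name);
                    f = fopen(path, "r");
                    if (!f)
                            continue;
                    if (fscanf(f, "%63s", name) == 1 && !strcmp(name, want)) {
                            fclose(f);
                            snprintf(path, sizeof(path),
                                     "/sys/class/hwmon/%s/power1_input",
                                     e->d_name);
                            f = fopen(path, "r");
                            if (f) {
                                    long uw;

                                    if (fscanf(f, "%ld", &uw) == 1)
                                            printf("power: %ld uW\n", uw);
                                    fclose(f);
                            }
                            break;
                    }
                    fclose(f);
            }
            closedir(d);
            return 0;
    }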
@@ -44,7 +44,8 @@ Documentation/trace/stm.rst for more information on that.

 MSU can be configured to collect trace data into a system memory
 buffer, which can later on be read from its device nodes via read() or
-mmap() interface.
+mmap() interface and directed to a "software sink" driver that will
+consume the data and/or relay it further.

 On the whole, Intel(R) Trace Hub does not require any special
 userspace software to function; everything can be configured, started
@@ -122,3 +123,28 @@ In order to enable the host mode, set the 'host_mode' parameter of the
 will show up on the intel_th bus. Also, trace configuration and
 capture controlling attribute groups of the 'gth' device will not be
 exposed. The 'sth' device will operate as usual.
+
+Software Sinks
+--------------
+
+The Memory Storage Unit (MSU) driver provides an in-kernel API for
+drivers to register themselves as software sinks for the trace data.
+Such drivers can further export the data via other devices, such as
+USB device controllers or network cards.
+
+The API has two main parts:
+- notifying the software sink that a particular window is full, and
+  "locking" that window, that is, making it unavailable for the trace
+  collection; when this happens, the MSU driver will automatically
+  switch to the next window in the buffer if it is unlocked, or stop
+  the trace capture if it's not;
+- tracking the "locked" state of windows and providing a way for the
+  software sink driver to notify the MSU driver when a window is
+  unlocked and can be used again to collect trace data.
+
+An example sink driver, msu-sink, illustrates the implementation of a
+software sink. Functionally, it simply unlocks windows as soon as they
+are full, keeping the MSU running in circular buffer mode. Unlike the
+"multi" mode, it will fill out all the windows in the buffer as opposed
+to just the first one. It can be enabled by writing "sink" to the "mode"
+file (assuming msu-sink.ko is loaded).
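To give a feel for the shape such a sink takes, here is a heavily trimmed
sketch modeled on the msu-sink example; the struct name, the fields used,
and the registration step are assumptions based on that driver rather than
a documented API reference:

    #include <linux/intel_th.h>
    #include <linux/module.h>
    #include <linux/scatterlist.h>

    /* Sketch only: unlock every window as soon as it fills, keeping
     * the MSU in circular-buffer operation. Treat the callback names
     * and signatures as assumptions modeled on msu-sink.c. */
    static int sketch_ready(void *priv, struct sg_table *sgt, size_t bytes)
    {
            /* A real sink would push 'bytes' of trace data from 'sgt'
             * to its consumer (USB controller, NIC, ...) before the
             * window is handed back to the MSU driver for reuse. */
            return 0;
    }

    static const struct msu_buffer sketch_mbuf = {
            .name  = "sketch",
            .ready = sketch_ready,
            /* a real driver also provides the assign/unassign and
             * window allocation hooks, and registers itself with the
             * MSU core at module init (see msu-sink.c) */
    };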
@@ -65,6 +65,7 @@
 #include <linux/ratelimit.h>
 #include <linux/syscalls.h>
 #include <linux/task_work.h>
+#include <linux/sizes.h>

 #include <uapi/linux/android/binder.h>
 #include <uapi/linux/android/binderfs.h>
@@ -92,11 +93,6 @@ static atomic_t binder_last_id;
 static int proc_show(struct seq_file *m, void *unused);
 DEFINE_SHOW_ATTRIBUTE(proc);

-/* This is only defined in include/asm-arm/sizes.h */
-#ifndef SZ_1K
-#define SZ_1K 0x400
-#endif
-
 #define FORBIDDEN_MMAP_FLAGS (VM_WRITE)

 enum {
@@ -268,7 +268,6 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
            alloc->pages_high = index + 1;

        trace_binder_alloc_page_end(alloc, index);
-       /* vm_insert_page does not seem to increment the refcount */
    }
    if (mm) {
        up_read(&mm->mmap_sem);
@@ -277,8 +276,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
    return 0;

 free_range:
-   for (page_addr = end - PAGE_SIZE; page_addr >= start;
-        page_addr -= PAGE_SIZE) {
+   for (page_addr = end - PAGE_SIZE; 1; page_addr -= PAGE_SIZE) {
        bool ret;
        size_t index;

@@ -291,6 +289,8 @@ free_range:
        WARN_ON(!ret);

        trace_binder_free_lru_end(alloc, index);
+       if (page_addr == start)
+           break;
        continue;

 err_vm_insert_page_failed:
@@ -298,7 +298,8 @@ err_vm_insert_page_failed:
        page->page_ptr = NULL;
 err_alloc_page_failed:
 err_page_ptr_cleared:
-       ;
+       if (page_addr == start)
+           break;
    }
 err_no_vma:
    if (mm) {
@@ -681,17 +682,17 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
    struct binder_buffer *buffer;

    mutex_lock(&binder_alloc_mmap_lock);
-   if (alloc->buffer) {
+   if (alloc->buffer_size) {
        ret = -EBUSY;
        failure_string = "already mapped";
        goto err_already_mapped;
    }

-   alloc->buffer = (void __user *)vma->vm_start;
-   mutex_unlock(&binder_alloc_mmap_lock);
-
    alloc->buffer_size = min_t(unsigned long, vma->vm_end - vma->vm_start,
                   SZ_4M);
+   mutex_unlock(&binder_alloc_mmap_lock);
+
+   alloc->buffer = (void __user *)vma->vm_start;

    alloc->pages = kcalloc(alloc->buffer_size / PAGE_SIZE,
                   sizeof(alloc->pages[0]),
@@ -722,8 +723,9 @@ err_alloc_buf_struct_failed:
    kfree(alloc->pages);
    alloc->pages = NULL;
 err_alloc_pages_failed:
-   mutex_lock(&binder_alloc_mmap_lock);
    alloc->buffer = NULL;
+   mutex_lock(&binder_alloc_mmap_lock);
+   alloc->buffer_size = 0;
 err_already_mapped:
    mutex_unlock(&binder_alloc_mmap_lock);
    binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
@@ -841,14 +843,20 @@ void binder_alloc_print_pages(struct seq_file *m,
    int free = 0;

    mutex_lock(&alloc->mutex);
-   for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-       page = &alloc->pages[i];
-       if (!page->page_ptr)
-           free++;
-       else if (list_empty(&page->lru))
-           active++;
-       else
-           lru++;
+   /*
+    * Make sure the binder_alloc is fully initialized, otherwise we might
+    * read inconsistent state.
+    */
+   if (binder_alloc_get_vma(alloc) != NULL) {
+       for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+           page = &alloc->pages[i];
+           if (!page->page_ptr)
+               free++;
+           else if (list_empty(&page->lru))
+               active++;
+           else
+               lru++;
+       }
    }
    mutex_unlock(&alloc->mutex);
    seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
@@ -439,8 +439,8 @@ config RAW_DRIVER
      Once bound, I/O against /dev/raw/rawN uses efficient zero-copy I/O.
      See the raw(8) manpage for more details.

-         Applications should preferably open the device (eg /dev/hda1)
-         with the O_DIRECT flag.
+      Applications should preferably open the device (eg /dev/hda1)
+      with the O_DIRECT flag.

 config MAX_RAW_DEVS
    int "Maximum number of RAW devices to support (1-65536)"
@@ -63,7 +63,7 @@ config AGP_AMD64
      This option gives you AGP support for the GLX component of
      X using the on-CPU northbridge of the AMD Athlon64/Opteron CPUs.
      You still need an external AGP bridge like the AMD 8151, VIA
-         K8T400M, SiS755. It may also support other AGP bridges when loaded
+      K8T400M, SiS755. It may also support other AGP bridges when loaded
      with agp_try_unsupported=1.

 config AGP_INTEL
@@ -386,17 +386,17 @@ config HW_RANDOM_MESON
      If unsure, say Y.

 config HW_RANDOM_CAVIUM
-       tristate "Cavium ThunderX Random Number Generator support"
-       depends on HW_RANDOM && PCI && (ARM64 || (COMPILE_TEST && 64BIT))
-       default HW_RANDOM
-       ---help---
-         This driver provides kernel-side support for the Random Number
-         Generator hardware found on Cavium SoCs.
+   tristate "Cavium ThunderX Random Number Generator support"
+   depends on HW_RANDOM && PCI && (ARM64 || (COMPILE_TEST && 64BIT))
+   default HW_RANDOM
+   ---help---
+     This driver provides kernel-side support for the Random Number
+     Generator hardware found on Cavium SoCs.

-         To compile this driver as a module, choose M here: the
-         module will be called cavium_rng.
+     To compile this driver as a module, choose M here: the
+     module will be called cavium_rng.

-         If unsure, say Y.
+     If unsure, say Y.

 config HW_RANDOM_MTK
    tristate "Mediatek Random Number Generator support"
@@ -4,38 +4,38 @@
 #

 menuconfig IPMI_HANDLER
-       tristate 'IPMI top-level message handler'
-       depends on HAS_IOMEM
-       select IPMI_DMI_DECODE if DMI
-       help
-         This enables the central IPMI message handler, required for IPMI
-         to work.
+   tristate 'IPMI top-level message handler'
+   depends on HAS_IOMEM
+   select IPMI_DMI_DECODE if DMI
+   help
+     This enables the central IPMI message handler, required for IPMI
+     to work.

-         IPMI is a standard for managing sensors (temperature,
-         voltage, etc.) in a system.
+     IPMI is a standard for managing sensors (temperature,
+     voltage, etc.) in a system.

-         See <file:Documentation/IPMI.txt> for more details on the driver.
+     See <file:Documentation/IPMI.txt> for more details on the driver.

-         If unsure, say N.
+     If unsure, say N.

 config IPMI_DMI_DECODE
-       select IPMI_PLAT_DATA
-       bool
+   select IPMI_PLAT_DATA
+   bool

 config IPMI_PLAT_DATA
-       bool
+   bool

 if IPMI_HANDLER

 config IPMI_PANIC_EVENT
-       bool 'Generate a panic event to all BMCs on a panic'
-       help
-         When a panic occurs, this will cause the IPMI message handler to,
-         by default, generate an IPMI event describing the panic to each
-         interface registered with the message handler. This is always
-         available, the module parameter for ipmi_msghandler named
-         panic_op can be set to "event" to choose this value, this config
-         simply causes the default value to be set to "event".
+   bool 'Generate a panic event to all BMCs on a panic'
+   help
+     When a panic occurs, this will cause the IPMI message handler to,
+     by default, generate an IPMI event describing the panic to each
+     interface registered with the message handler. This is always
+     available, the module parameter for ipmi_msghandler named
+     panic_op can be set to "event" to choose this value, this config
+     simply causes the default value to be set to "event".

 config IPMI_PANIC_STRING
@@ -54,43 +54,43 @@ config IPMI_PANIC_STRING
      causes the default value to be set to "string".

 config IPMI_DEVICE_INTERFACE
-       tristate 'Device interface for IPMI'
-       help
-         This provides an IOCTL interface to the IPMI message handler so
-         userland processes may use IPMI.  It supports poll() and select().
+   tristate 'Device interface for IPMI'
+   help
+     This provides an IOCTL interface to the IPMI message handler so
+     userland processes may use IPMI.  It supports poll() and select().

 config IPMI_SI
-       tristate 'IPMI System Interface handler'
-       select IPMI_PLAT_DATA
-       help
-         Provides a driver for System Interfaces (KCS, SMIC, BT).
-         Currently, only KCS and SMIC are supported.  If
-         you are using IPMI, you should probably say "y" here.
+   tristate 'IPMI System Interface handler'
+   select IPMI_PLAT_DATA
+   help
+     Provides a driver for System Interfaces (KCS, SMIC, BT).
+     Currently, only KCS and SMIC are supported.  If
+     you are using IPMI, you should probably say "y" here.

 config IPMI_SSIF
-       tristate 'IPMI SMBus handler (SSIF)'
-       select I2C
-       help
-         Provides a driver for a SMBus interface to a BMC, meaning that you
-         have a driver that must be accessed over an I2C bus instead of a
-         standard interface.  This module requires I2C support.
+   tristate 'IPMI SMBus handler (SSIF)'
+   select I2C
+   help
+     Provides a driver for a SMBus interface to a BMC, meaning that you
+     have a driver that must be accessed over an I2C bus instead of a
+     standard interface.  This module requires I2C support.

 config IPMI_POWERNV
-       depends on PPC_POWERNV
-       tristate 'POWERNV (OPAL firmware) IPMI interface'
-       help
-         Provides a driver for OPAL firmware-based IPMI interfaces.
+   depends on PPC_POWERNV
+   tristate 'POWERNV (OPAL firmware) IPMI interface'
+   help
+     Provides a driver for OPAL firmware-based IPMI interfaces.

 config IPMI_WATCHDOG
-       tristate 'IPMI Watchdog Timer'
-       help
-         This enables the IPMI watchdog timer.
+   tristate 'IPMI Watchdog Timer'
+   help
+     This enables the IPMI watchdog timer.

 config IPMI_POWEROFF
-       tristate 'IPMI Poweroff'
-       help
-         This enables a function to power off the system with IPMI if
-         the IPMI management controller is capable of this.
+   tristate 'IPMI Poweroff'
+   help
+     This enables a function to power off the system with IPMI if
+     the IPMI management controller is capable of this.

 endif # IPMI_HANDLER

@@ -126,7 +126,7 @@ config NPCM7XX_KCS_IPMI_BMC

 config ASPEED_BT_IPMI_BMC
    depends on ARCH_ASPEED || COMPILE_TEST
-       depends on REGMAP && REGMAP_MMIO && MFD_SYSCON
+   depends on REGMAP && REGMAP_MMIO && MFD_SYSCON
    tristate "BT IPMI bmc driver"
    help
      Provides a driver for the BT (Block Transfer) IPMI interface
@@ -713,6 +713,10 @@ static int lp_set_timeout64(unsigned int minor, void __user *arg)
    if (copy_from_user(karg, arg, sizeof(karg)))
        return -EFAULT;

+   /* sparc64 suseconds_t is 32-bit only */
+   if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
+       karg[1] >>= 32;
+
    return lp_set_timeout(minor, karg[0], karg[1]);
 }
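The shift works because sparc64 is big-endian and its suseconds_t is a
32-bit int followed by padding, so the value lands in the upper half of the
64-bit slot copied from user space. A small host-side sketch of the same
extraction (meaningful when run on, or reasoned about for, a big-endian
machine; on little-endian hosts the layout differs):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            struct {                /* sparc64 user layout (big-endian) */
                    int32_t tv_usec;
                    int32_t pad;
            } user = { 500000, 0 };
            uint64_t slot;

            memcpy(&slot, &user, sizeof(slot));
            /* On a big-endian host, tv_usec occupies the high 32 bits: */
            printf("usec = %lld\n", (long long)(slot >> 32));
            return 0;
    }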
@@ -619,20 +619,27 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
        if (copy_from_user(time32, argp, sizeof(time32)))
            return -EFAULT;

+       if ((time32[0] < 0) || (time32[1] < 0))
+           return -EINVAL;
+
        return pp_set_timeout(pp->pdev, time32[0], time32[1]);

    case PPSETTIME64:
        if (copy_from_user(time64, argp, sizeof(time64)))
            return -EFAULT;

+       if ((time64[0] < 0) || (time64[1] < 0))
+           return -EINVAL;
+
+       if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
+           time64[1] >>= 32;
+
        return pp_set_timeout(pp->pdev, time64[0], time64[1]);

    case PPGETTIME32:
        jiffies_to_timespec64(pp->pdev->timeout, &ts);
        time32[0] = ts.tv_sec;
        time32[1] = ts.tv_nsec / NSEC_PER_USEC;
-       if ((time32[0] < 0) || (time32[1] < 0))
-           return -EINVAL;

        if (copy_to_user(argp, time32, sizeof(time32)))
            return -EFAULT;
@@ -643,8 +650,9 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
        jiffies_to_timespec64(pp->pdev->timeout, &ts);
        time64[0] = ts.tv_sec;
        time64[1] = ts.tv_nsec / NSEC_PER_USEC;
-       if ((time64[0] < 0) || (time64[1] < 0))
-           return -EINVAL;

+       if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall())
+           time64[1] <<= 32;
+
        if (copy_to_user(argp, time64, sizeof(time64)))
            return -EFAULT;
@@ -116,7 +116,6 @@ static int xilly_drv_probe(struct platform_device *op)
    struct xilly_endpoint *endpoint;
    int rc;
    int irq;
-   struct resource *res;
    struct xilly_endpoint_hardware *ephw = &of_hw;

    if (of_property_read_bool(dev->of_node, "dma-coherent"))
@@ -129,9 +128,7 @@ static int xilly_drv_probe(struct platform_device *op)

    dev_set_drvdata(dev, endpoint);

-   res = platform_get_resource(op, IORESOURCE_MEM, 0);
-   endpoint->registers = devm_ioremap_resource(dev, res);
-
+   endpoint->registers = devm_platform_ioremap_resource(op, 0);
    if (IS_ERR(endpoint->registers))
        return PTR_ERR(endpoint->registers);
@@ -338,6 +338,7 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
    struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
    struct cht_wc_extcon_data *ext;
    unsigned long mask = ~(CHT_WC_PWRSRC_VBUS | CHT_WC_PWRSRC_USBID_MASK);
+   int pwrsrc_sts, id;
    int irq, ret;

    irq = platform_get_irq(pdev, 0);
@@ -387,8 +388,19 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
        goto disable_sw_control;
    }

-   /* Route D+ and D- to PMIC for initial charger detection */
-   cht_wc_extcon_set_phymux(ext, MUX_SEL_PMIC);
+   ret = regmap_read(ext->regmap, CHT_WC_PWRSRC_STS, &pwrsrc_sts);
+   if (ret) {
+       dev_err(ext->dev, "Error reading pwrsrc status: %d\n", ret);
+       goto disable_sw_control;
+   }
+
+   /*
+    * If no USB host or device connected, route D+ and D- to PMIC for
+    * initial charger detection
+    */
+   id = cht_wc_extcon_get_id(ext, pwrsrc_sts);
+   if (id != INTEL_USB_ID_GND)
+       cht_wc_extcon_set_phymux(ext, MUX_SEL_PMIC);

    /* Get initial state */
    cht_wc_extcon_pwrsrc_event(ext);
@@ -65,6 +65,10 @@ struct sm5502_muic_info {
 /* Default value of SM5502 register to bring up MUIC device. */
 static struct reg_data sm5502_reg_data[] = {
    {
+       .reg = SM5502_REG_RESET,
+       .val = SM5502_REG_RESET_MASK,
+       .invert = true,
+   }, {
        .reg = SM5502_REG_CONTROL,
        .val = SM5502_REG_CONTROL_MASK_INT_MASK,
        .invert = false,
@@ -272,7 +276,7 @@ static int sm5502_muic_set_path(struct sm5502_muic_info *info,
 /* Return cable type of attached or detached accessories */
 static unsigned int sm5502_muic_get_cable_type(struct sm5502_muic_info *info)
 {
-   unsigned int cable_type = -1, adc, dev_type1;
+   unsigned int cable_type, adc, dev_type1;
    int ret;

    /* Read ADC value according to external cable or button */
@@ -237,6 +237,8 @@ enum sm5502_reg {
 #define DM_DP_SWITCH_UART ((DM_DP_CON_SWITCH_UART <<SM5502_REG_MANUAL_SW1_DP_SHIFT) \
            | (DM_DP_CON_SWITCH_UART <<SM5502_REG_MANUAL_SW1_DM_SHIFT))

+#define SM5502_REG_RESET_MASK (0x1)
+
 /* SM5502 Interrupts */
 enum sm5502_irq {
    /* INT1 */
@@ -20,7 +20,6 @@
 #define RSU_VERSION_MASK        GENMASK_ULL(63, 32)
 #define RSU_ERROR_LOCATION_MASK GENMASK_ULL(31, 0)
 #define RSU_ERROR_DETAIL_MASK   GENMASK_ULL(63, 32)
-#define RSU_FW_VERSION_MASK     GENMASK_ULL(15, 0)

 #define RSU_TIMEOUT (msecs_to_jiffies(SVC_RSU_REQUEST_TIMEOUT_MS))

@@ -109,9 +108,12 @@ static void rsu_command_callback(struct stratix10_svc_client *client,
 {
    struct stratix10_rsu_priv *priv = client->priv;

-   if (data->status != BIT(SVC_STATUS_RSU_OK))
-       dev_err(client->dev, "RSU returned status is %i\n",
-           data->status);
+   if (data->status == BIT(SVC_STATUS_RSU_NO_SUPPORT))
+       dev_warn(client->dev, "Secure FW doesn't support notify\n");
+   else if (data->status == BIT(SVC_STATUS_RSU_ERROR))
+       dev_err(client->dev, "Failure, returned status is %lu\n",
+           BIT(data->status));

    complete(&priv->completion);
 }

@@ -133,9 +135,11 @@ static void rsu_retry_callback(struct stratix10_svc_client *client,

    if (data->status == BIT(SVC_STATUS_RSU_OK))
        priv->retry_counter = *counter;
+   else if (data->status == BIT(SVC_STATUS_RSU_NO_SUPPORT))
+       dev_warn(client->dev, "Secure FW doesn't support retry\n");
    else
-       dev_err(client->dev, "Failed to get retry counter %i\n",
-           data->status);
+       dev_err(client->dev, "Failed to get retry counter %lu\n",
+           BIT(data->status));

    complete(&priv->completion);
 }
@@ -333,15 +337,10 @@ static ssize_t notify_store(struct device *dev,
        return ret;
    }

-   /* only 19.3 or late version FW supports retry counter feature */
-   if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) {
-       ret = rsu_send_msg(priv, COMMAND_RSU_RETRY,
-                  0, rsu_retry_callback);
-       if (ret) {
-           dev_err(dev,
-               "Error, getting RSU retry %i\n", ret);
-           return ret;
-       }
+   ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 0, rsu_retry_callback);
+   if (ret) {
+       dev_err(dev, "Error, getting RSU retry %i\n", ret);
+       return ret;
    }

    return count;
@@ -413,15 +412,10 @@ static int stratix10_rsu_probe(struct platform_device *pdev)
        stratix10_svc_free_channel(priv->chan);
    }

-   /* only 19.3 or late version FW supports retry counter feature */
-   if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) {
-       ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 0,
-                  rsu_retry_callback);
-       if (ret) {
-           dev_err(dev,
-               "Error, getting RSU retry %i\n", ret);
-           stratix10_svc_free_channel(priv->chan);
-       }
+   ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 0, rsu_retry_callback);
+   if (ret) {
+       dev_err(dev, "Error, getting RSU retry %i\n", ret);
+       stratix10_svc_free_channel(priv->chan);
    }

    return ret;
@@ -493,8 +493,24 @@ static int svc_normal_to_secure_thread(void *data)
            pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);
            break;
        default:
-           pr_warn("it shouldn't happen\n");
+           pr_warn("Secure firmware doesn't support...\n");
+
+           /*
+            * be compatible with older version firmware which
+            * doesn't support RSU notify or retry
+            */
+           if ((pdata->command == COMMAND_RSU_RETRY) ||
+               (pdata->command == COMMAND_RSU_NOTIFY)) {
+               cbdata->status =
+                   BIT(SVC_STATUS_RSU_NO_SUPPORT);
+               cbdata->kaddr1 = NULL;
+               cbdata->kaddr2 = NULL;
+               cbdata->kaddr3 = NULL;
+               pdata->chan->scl->receive_cb(
+                   pdata->chan->scl, cbdata);
+           }
            break;
        }
    };
@@ -156,7 +156,7 @@ config FPGA_DFL

 config FPGA_DFL_FME
    tristate "FPGA DFL FME Driver"
-   depends on FPGA_DFL
+   depends on FPGA_DFL && HWMON
    help
      The FPGA Management Engine (FME) is a feature device implemented
      under Device Feature List (DFL) framework. Select this option to
@@ -14,6 +14,8 @@
 *  Henry Mitchel <henry.mitchel@intel.com>
 */

+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/uaccess.h>
@@ -181,6 +183,381 @@ static const struct dfl_feature_ops fme_hdr_ops = {
    .ioctl = fme_hdr_ioctl,
 };

+#define FME_THERM_THRESHOLD    0x8
+#define TEMP_THRESHOLD1        GENMASK_ULL(6, 0)
+#define TEMP_THRESHOLD1_EN     BIT_ULL(7)
+#define TEMP_THRESHOLD2        GENMASK_ULL(14, 8)
+#define TEMP_THRESHOLD2_EN     BIT_ULL(15)
+#define TRIP_THRESHOLD         GENMASK_ULL(30, 24)
+#define TEMP_THRESHOLD1_STATUS BIT_ULL(32)  /* threshold1 reached */
+#define TEMP_THRESHOLD2_STATUS BIT_ULL(33)  /* threshold2 reached */
+/* threshold1 policy: 0 - AP2 (90% throttle) / 1 - AP1 (50% throttle) */
+#define TEMP_THRESHOLD1_POLICY BIT_ULL(44)
+
+#define FME_THERM_RDSENSOR_FMT1    0x10
+#define FPGA_TEMPERATURE           GENMASK_ULL(6, 0)
+
+#define FME_THERM_CAP       0x20
+#define THERM_NO_THROTTLE   BIT_ULL(0)
+
+#define MD_PRE_DEG
+
+static bool fme_thermal_throttle_support(void __iomem *base)
+{
+   u64 v = readq(base + FME_THERM_CAP);
+
+   return FIELD_GET(THERM_NO_THROTTLE, v) ? false : true;
+}
+
+static umode_t thermal_hwmon_attrs_visible(const void *drvdata,
+                      enum hwmon_sensor_types type,
+                      u32 attr, int channel)
+{
+   const struct dfl_feature *feature = drvdata;
+
+   /* temperature is always supported, and check hardware cap for others */
+   if (attr == hwmon_temp_input)
+       return 0444;
+
+   return fme_thermal_throttle_support(feature->ioaddr) ? 0444 : 0;
+}
+
+static int thermal_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+                 u32 attr, int channel, long *val)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u64 v;
+
+   switch (attr) {
+   case hwmon_temp_input:
+       v = readq(feature->ioaddr + FME_THERM_RDSENSOR_FMT1);
+       *val = (long)(FIELD_GET(FPGA_TEMPERATURE, v) * 1000);
+       break;
+   case hwmon_temp_max:
+       v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+       *val = (long)(FIELD_GET(TEMP_THRESHOLD1, v) * 1000);
+       break;
+   case hwmon_temp_crit:
+       v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+       *val = (long)(FIELD_GET(TEMP_THRESHOLD2, v) * 1000);
+       break;
+   case hwmon_temp_emergency:
+       v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+       *val = (long)(FIELD_GET(TRIP_THRESHOLD, v) * 1000);
+       break;
+   case hwmon_temp_max_alarm:
+       v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+       *val = (long)FIELD_GET(TEMP_THRESHOLD1_STATUS, v);
+       break;
+   case hwmon_temp_crit_alarm:
+       v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+       *val = (long)FIELD_GET(TEMP_THRESHOLD2_STATUS, v);
+       break;
+   default:
+       return -EOPNOTSUPP;
+   }
+
+   return 0;
+}
+
+static const struct hwmon_ops thermal_hwmon_ops = {
+   .is_visible = thermal_hwmon_attrs_visible,
+   .read = thermal_hwmon_read,
+};
+
+static const struct hwmon_channel_info *thermal_hwmon_info[] = {
+   HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT | HWMON_T_EMERGENCY |
+                HWMON_T_MAX   | HWMON_T_MAX_ALARM |
+                HWMON_T_CRIT  | HWMON_T_CRIT_ALARM),
+   NULL
+};
+
+static const struct hwmon_chip_info thermal_hwmon_chip_info = {
+   .ops = &thermal_hwmon_ops,
+   .info = thermal_hwmon_info,
+};
+
+static ssize_t temp1_max_policy_show(struct device *dev,
+                    struct device_attribute *attr, char *buf)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u64 v;
+
+   v = readq(feature->ioaddr + FME_THERM_THRESHOLD);
+
+   return sprintf(buf, "%u\n",
+              (unsigned int)FIELD_GET(TEMP_THRESHOLD1_POLICY, v));
+}
+
+static DEVICE_ATTR_RO(temp1_max_policy);
+
+static struct attribute *thermal_extra_attrs[] = {
+   &dev_attr_temp1_max_policy.attr,
+   NULL,
+};
+
+static umode_t thermal_extra_attrs_visible(struct kobject *kobj,
+                      struct attribute *attr, int index)
+{
+   struct device *dev = kobj_to_dev(kobj);
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+
+   return fme_thermal_throttle_support(feature->ioaddr) ? attr->mode : 0;
+}
+
+static const struct attribute_group thermal_extra_group = {
+   .attrs      = thermal_extra_attrs,
+   .is_visible = thermal_extra_attrs_visible,
+};
+__ATTRIBUTE_GROUPS(thermal_extra);
+
+static int fme_thermal_mgmt_init(struct platform_device *pdev,
+                struct dfl_feature *feature)
+{
+   struct device *hwmon;
+
+   /*
+    * create hwmon to allow userspace monitoring temperature and other
+    * threshold information.
+    *
+    * temp1_input      -> FPGA device temperature
+    * temp1_max        -> hardware threshold 1 -> 50% or 90% throttling
+    * temp1_crit       -> hardware threshold 2 -> 100% throttling
+    * temp1_emergency  -> hardware trip_threshold to shutdown FPGA
+    * temp1_max_alarm  -> hardware threshold 1 alarm
+    * temp1_crit_alarm -> hardware threshold 2 alarm
+    *
+    * create device specific sysfs interfaces, e.g. read temp1_max_policy
+    * to understand the actual hardware throttling action (50% vs 90%).
+    *
+    * If hardware doesn't support automatic throttling per thresholds,
+    * then all above sysfs interfaces are not visible except temp1_input
+    * for temperature.
+    */
+   hwmon = devm_hwmon_device_register_with_info(&pdev->dev,
+                            "dfl_fme_thermal", feature,
+                            &thermal_hwmon_chip_info,
+                            thermal_extra_groups);
+   if (IS_ERR(hwmon)) {
+       dev_err(&pdev->dev, "Fail to register thermal hwmon\n");
+       return PTR_ERR(hwmon);
+   }
+
+   return 0;
+}
+
+static const struct dfl_feature_id fme_thermal_mgmt_id_table[] = {
+   {.id = FME_FEATURE_ID_THERMAL_MGMT,},
+   {0,}
+};
+
+static const struct dfl_feature_ops fme_thermal_mgmt_ops = {
+   .init = fme_thermal_mgmt_init,
+};
+
+#define FME_PWR_STATUS          0x8
+#define FME_LATENCY_TOLERANCE   BIT_ULL(18)
+#define PWR_CONSUMED            GENMASK_ULL(17, 0)
+
+#define FME_PWR_THRESHOLD       0x10
+#define PWR_THRESHOLD1          GENMASK_ULL(6, 0)   /* in Watts */
+#define PWR_THRESHOLD2          GENMASK_ULL(14, 8)  /* in Watts */
+#define PWR_THRESHOLD_MAX       0x7f                /* in Watts */
+#define PWR_THRESHOLD1_STATUS   BIT_ULL(16)
+#define PWR_THRESHOLD2_STATUS   BIT_ULL(17)
+
+#define FME_PWR_XEON_LIMIT      0x18
+#define XEON_PWR_LIMIT          GENMASK_ULL(14, 0)  /* in 0.1 Watts */
+#define XEON_PWR_EN             BIT_ULL(15)
+#define FME_PWR_FPGA_LIMIT      0x20
+#define FPGA_PWR_LIMIT          GENMASK_ULL(14, 0)  /* in 0.1 Watts */
+#define FPGA_PWR_EN             BIT_ULL(15)
+
+static int power_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+               u32 attr, int channel, long *val)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u64 v;
+
+   switch (attr) {
+   case hwmon_power_input:
+       v = readq(feature->ioaddr + FME_PWR_STATUS);
+       *val = (long)(FIELD_GET(PWR_CONSUMED, v) * 1000000);
+       break;
+   case hwmon_power_max:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       *val = (long)(FIELD_GET(PWR_THRESHOLD1, v) * 1000000);
+       break;
+   case hwmon_power_crit:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       *val = (long)(FIELD_GET(PWR_THRESHOLD2, v) * 1000000);
+       break;
+   case hwmon_power_max_alarm:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       *val = (long)FIELD_GET(PWR_THRESHOLD1_STATUS, v);
+       break;
+   case hwmon_power_crit_alarm:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       *val = (long)FIELD_GET(PWR_THRESHOLD2_STATUS, v);
+       break;
+   default:
+       return -EOPNOTSUPP;
+   }
+
+   return 0;
+}
+
+static int power_hwmon_write(struct device *dev, enum hwmon_sensor_types type,
+                u32 attr, int channel, long val)
+{
+   struct dfl_feature_platform_data *pdata = dev_get_platdata(dev->parent);
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   int ret = 0;
+   u64 v;
+
+   val = clamp_val(val / 1000000, 0, PWR_THRESHOLD_MAX);
+
+   mutex_lock(&pdata->lock);
+
+   switch (attr) {
+   case hwmon_power_max:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       v &= ~PWR_THRESHOLD1;
+       v |= FIELD_PREP(PWR_THRESHOLD1, val);
+       writeq(v, feature->ioaddr + FME_PWR_THRESHOLD);
+       break;
+   case hwmon_power_crit:
+       v = readq(feature->ioaddr + FME_PWR_THRESHOLD);
+       v &= ~PWR_THRESHOLD2;
+       v |= FIELD_PREP(PWR_THRESHOLD2, val);
+       writeq(v, feature->ioaddr + FME_PWR_THRESHOLD);
+       break;
+   default:
+       ret = -EOPNOTSUPP;
+       break;
+   }
+
+   mutex_unlock(&pdata->lock);
+
+   return ret;
+}
+
+static umode_t power_hwmon_attrs_visible(const void *drvdata,
+                    enum hwmon_sensor_types type,
+                    u32 attr, int channel)
+{
+   switch (attr) {
+   case hwmon_power_input:
+   case hwmon_power_max_alarm:
+   case hwmon_power_crit_alarm:
+       return 0444;
+   case hwmon_power_max:
+   case hwmon_power_crit:
+       return 0644;
+   }
+
+   return 0;
+}
+
+static const struct hwmon_ops power_hwmon_ops = {
+   .is_visible = power_hwmon_attrs_visible,
+   .read = power_hwmon_read,
+   .write = power_hwmon_write,
+};
+
+static const struct hwmon_channel_info *power_hwmon_info[] = {
+   HWMON_CHANNEL_INFO(power, HWMON_P_INPUT |
+                 HWMON_P_MAX  | HWMON_P_MAX_ALARM |
+                 HWMON_P_CRIT | HWMON_P_CRIT_ALARM),
+   NULL
+};
+
+static const struct hwmon_chip_info power_hwmon_chip_info = {
+   .ops = &power_hwmon_ops,
+   .info = power_hwmon_info,
+};
+
+static ssize_t power1_xeon_limit_show(struct device *dev,
+                     struct device_attribute *attr, char *buf)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u16 xeon_limit = 0;
+   u64 v;
+
+   v = readq(feature->ioaddr + FME_PWR_XEON_LIMIT);
+
+   if (FIELD_GET(XEON_PWR_EN, v))
+       xeon_limit = FIELD_GET(XEON_PWR_LIMIT, v);
+
+   return sprintf(buf, "%u\n", xeon_limit * 100000);
+}
+
+static ssize_t power1_fpga_limit_show(struct device *dev,
+                     struct device_attribute *attr, char *buf)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u16 fpga_limit = 0;
+   u64 v;
+
+   v = readq(feature->ioaddr + FME_PWR_FPGA_LIMIT);
+
+   if (FIELD_GET(FPGA_PWR_EN, v))
+       fpga_limit = FIELD_GET(FPGA_PWR_LIMIT, v);
+
+   return sprintf(buf, "%u\n", fpga_limit * 100000);
+}
+
+static ssize_t power1_ltr_show(struct device *dev,
+                  struct device_attribute *attr, char *buf)
+{
+   struct dfl_feature *feature = dev_get_drvdata(dev);
+   u64 v;
+
+   v = readq(feature->ioaddr + FME_PWR_STATUS);
+
+   return sprintf(buf, "%u\n",
+              (unsigned int)FIELD_GET(FME_LATENCY_TOLERANCE, v));
+}
+
+static DEVICE_ATTR_RO(power1_xeon_limit);
+static DEVICE_ATTR_RO(power1_fpga_limit);
+static DEVICE_ATTR_RO(power1_ltr);
+
+static struct attribute *power_extra_attrs[] = {
+   &dev_attr_power1_xeon_limit.attr,
+   &dev_attr_power1_fpga_limit.attr,
+   &dev_attr_power1_ltr.attr,
+   NULL
+};
+
+ATTRIBUTE_GROUPS(power_extra);
+
+static int fme_power_mgmt_init(struct platform_device *pdev,
+                  struct dfl_feature *feature)
+{
+   struct device *hwmon;
+
+   hwmon = devm_hwmon_device_register_with_info(&pdev->dev,
+                            "dfl_fme_power", feature,
+                            &power_hwmon_chip_info,
+                            power_extra_groups);
+   if (IS_ERR(hwmon)) {
+       dev_err(&pdev->dev, "Fail to register power hwmon\n");
+       return PTR_ERR(hwmon);
+   }
+
+   return 0;
+}
+
+static const struct dfl_feature_id fme_power_mgmt_id_table[] = {
+   {.id = FME_FEATURE_ID_POWER_MGMT,},
+   {0,}
+};
+
+static const struct dfl_feature_ops fme_power_mgmt_ops = {
+   .init = fme_power_mgmt_init,
+};
+
 static struct dfl_feature_driver fme_feature_drvs[] = {
    {
        .id_table = fme_hdr_id_table,
@ -194,6 +571,14 @@ static struct dfl_feature_driver fme_feature_drvs[] = {
|
|||
.id_table = fme_global_err_id_table,
|
||||
.ops = &fme_global_err_ops,
|
||||
},
|
||||
{
|
||||
.id_table = fme_thermal_mgmt_id_table,
|
||||
.ops = &fme_thermal_mgmt_ops,
|
||||
},
|
||||
{
|
||||
.id_table = fme_power_mgmt_id_table,
|
||||
.ops = &fme_power_mgmt_ops,
|
||||
},
|
||||
{
|
||||
.ops = NULL,
|
||||
},
|
||||
|
|
|
@ -578,10 +578,8 @@ static int zynq_fpga_probe(struct platform_device *pdev)
|
|||
init_completion(&priv->dma_done);
|
||||
|
||||
priv->irq = platform_get_irq(pdev, 0);
|
||||
if (priv->irq < 0) {
|
||||
dev_err(dev, "No IRQ available\n");
|
||||
if (priv->irq < 0)
|
||||
return priv->irq;
|
||||
}
|
||||
|
||||
priv->clk = devm_clk_get(dev, "ref_clk");
|
||||
if (IS_ERR(priv->clk)) {
|
||||
|
|
|
@ -53,6 +53,14 @@ config FSI_MASTER_AST_CF
|
|||
lines driven by the internal ColdFire coprocessor. This requires
|
||||
the corresponding machine specific ColdFire firmware to be available.
|
||||
|
||||
config FSI_MASTER_ASPEED
|
||||
tristate "FSI ASPEED master"
|
||||
help
|
||||
This option enables a FSI master that is present behind an OPB bridge
|
||||
in the AST2600.
|
||||
|
||||
Enable it for your BMC kernel in an OpenPower or IBM Power system.
|
||||
|
||||
config FSI_SCOM
|
||||
tristate "SCOM FSI client device driver"
|
||||
---help---
|
||||
|
|
|
@ -2,6 +2,7 @@
|
|||
|
||||
obj-$(CONFIG_FSI) += fsi-core.o
|
||||
obj-$(CONFIG_FSI_MASTER_HUB) += fsi-master-hub.o
|
||||
obj-$(CONFIG_FSI_MASTER_ASPEED) += fsi-master-aspeed.o
|
||||
obj-$(CONFIG_FSI_MASTER_GPIO) += fsi-master-gpio.o
|
||||
obj-$(CONFIG_FSI_MASTER_AST_CF) += fsi-master-ast-cf.o
|
||||
obj-$(CONFIG_FSI_SCOM) += fsi-scom.o
|
||||
|
|
|
@ -544,6 +544,31 @@ static int fsi_slave_scan(struct fsi_slave *slave)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static unsigned long aligned_access_size(size_t offset, size_t count)
|
||||
{
|
||||
unsigned long offset_unit, count_unit;
|
||||
|
||||
/* Criteria:
|
||||
*
|
||||
* 1. Access size must be less than or equal to the maximum access
|
||||
* width or the highest power-of-two factor of offset
|
||||
* 2. Access size must be less than or equal to the amount specified by
|
||||
* count
|
||||
*
|
||||
* The access width is optimal if we can calculate 1 to be strictly
|
||||
* equal while still satisfying 2.
|
||||
*/
|
||||
|
||||
/* Find 1 by the bottom bit of offset (with a 4 byte access cap) */
|
||||
offset_unit = BIT(__builtin_ctzl(offset | 4));
|
||||
|
||||
/* Find 2 by the top bit of count */
|
||||
count_unit = BIT(8 * sizeof(unsigned long) - 1 - __builtin_clzl(count));
|
||||
|
||||
/* Constrain the maximum access width to the minimum of both criteria */
|
||||
return BIT(__builtin_ctzl(offset_unit | count_unit));
|
||||
}
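To make the two criteria concrete, here is a small standalone harness that mirrors the function above outside the kernel (illustrative only; the expected outputs are noted in comments):

#include <stdio.h>
#include <stddef.h>

/* Userspace mirror of aligned_access_size() above, for experimentation. */
static unsigned long aligned_access_size(size_t offset, size_t count)
{
	unsigned long offset_unit = 1UL << __builtin_ctzl(offset | 4);
	unsigned long count_unit = 1UL <<
		(8 * sizeof(unsigned long) - 1 - __builtin_clzl(count));

	return 1UL << __builtin_ctzl(offset_unit | count_unit);
}

int main(void)
{
	/* offset 0, 7 bytes left: top bit of 7 is 4 -> 4-byte access */
	printf("%lu\n", aligned_access_size(0, 7));	/* prints 4 */
	/* offset 2: largest power-of-two factor is 2 -> halfword access */
	printf("%lu\n", aligned_access_size(2, 7));	/* prints 2 */
	/* offset 1: byte-aligned only -> single-byte access */
	printf("%lu\n", aligned_access_size(1, 4));	/* prints 1 */
	return 0;
}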

static ssize_t fsi_slave_sysfs_raw_read(struct file *file,
		struct kobject *kobj, struct bin_attribute *attr, char *buf,
		loff_t off, size_t count)

@@ -559,8 +584,7 @@ static ssize_t fsi_slave_sysfs_raw_read(struct file *file,
		return -EINVAL;

	for (total_len = 0; total_len < count; total_len += read_len) {
-		read_len = min_t(size_t, count, 4);
-		read_len -= off & 0x3;
+		read_len = aligned_access_size(off, count - total_len);

		rc = fsi_slave_read(slave, off, buf + total_len, read_len);
		if (rc)

@@ -587,8 +611,7 @@ static ssize_t fsi_slave_sysfs_raw_write(struct file *file,
		return -EINVAL;

	for (total_len = 0; total_len < count; total_len += write_len) {
-		write_len = min_t(size_t, count, 4);
-		write_len -= off & 0x3;
+		write_len = aligned_access_size(off, count - total_len);

		rc = fsi_slave_write(slave, off, buf + total_len, write_len);
		if (rc)

@@ -1241,6 +1264,19 @@ static ssize_t master_break_store(struct device *dev,

static DEVICE_ATTR(break, 0200, NULL, master_break_store);

+static struct attribute *master_attrs[] = {
+	&dev_attr_break.attr,
+	&dev_attr_rescan.attr,
+	NULL
+};
+
+ATTRIBUTE_GROUPS(master);
+
+static struct class fsi_master_class = {
+	.name = "fsi-master",
+	.dev_groups = master_groups,
+};
+
int fsi_master_register(struct fsi_master *master)
{
	int rc;

@@ -1249,6 +1285,7 @@ int fsi_master_register(struct fsi_master *master)
	mutex_init(&master->scan_lock);
	master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL);
	dev_set_name(&master->dev, "fsi%d", master->idx);
+	master->dev.class = &fsi_master_class;

	rc = device_register(&master->dev);
	if (rc) {

@@ -1256,20 +1293,6 @@ int fsi_master_register(struct fsi_master *master)
		return rc;
	}

-	rc = device_create_file(&master->dev, &dev_attr_rescan);
-	if (rc) {
-		device_del(&master->dev);
-		ida_simple_remove(&master_ida, master->idx);
-		return rc;
-	}
-
-	rc = device_create_file(&master->dev, &dev_attr_break);
-	if (rc) {
-		device_del(&master->dev);
-		ida_simple_remove(&master_ida, master->idx);
-		return rc;
-	}
-
	np = dev_of_node(&master->dev);
	if (!of_property_read_bool(np, "no-scan-on-init")) {
		mutex_lock(&master->scan_lock);

@@ -1350,8 +1373,15 @@ static int __init fsi_init(void)
	rc = bus_register(&fsi_bus_type);
	if (rc)
		goto fail_bus;

+	rc = class_register(&fsi_master_class);
+	if (rc)
+		goto fail_class;
+
	return 0;

+fail_class:
+	bus_unregister(&fsi_bus_type);
fail_bus:
	unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES);
	return rc;

@@ -1360,6 +1390,7 @@ postcore_initcall(fsi_init);

static void fsi_exit(void)
{
+	class_unregister(&fsi_master_class);
	bus_unregister(&fsi_bus_type);
	unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES);
	ida_destroy(&fsi_minor_ida);

@@ -0,0 +1,544 @@
// SPDX-License-Identifier: GPL-2.0-or-later
// Copyright (C) IBM Corporation 2018
// FSI master driver for AST2600

#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/fsi.h>
#include <linux/io.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/iopoll.h>

#include "fsi-master.h"

struct fsi_master_aspeed {
	struct fsi_master	master;
	struct device		*dev;
	void __iomem		*base;
	struct clk		*clk;
};

#define to_fsi_master_aspeed(m) \
	container_of(m, struct fsi_master_aspeed, master)

/* Control register (size 0x400) */
static const u32 ctrl_base = 0x80000000;

static const u32 fsi_base = 0xa0000000;

#define OPB_FSI_VER	0x00
#define OPB_TRIGGER	0x04
#define OPB_CTRL_BASE	0x08
#define OPB_FSI_BASE	0x0c
#define OPB_CLK_SYNC	0x3c
#define OPB_IRQ_CLEAR	0x40
#define OPB_IRQ_MASK	0x44
#define OPB_IRQ_STATUS	0x48

#define OPB0_SELECT	0x10
#define OPB0_RW		0x14
#define OPB0_XFER_SIZE	0x18
#define OPB0_FSI_ADDR	0x1c
#define OPB0_FSI_DATA_W	0x20
#define OPB0_STATUS	0x80
#define OPB0_FSI_DATA_R	0x84

#define OPB0_WRITE_ORDER1	0x4c
#define OPB0_WRITE_ORDER2	0x50
#define OPB1_WRITE_ORDER1	0x54
#define OPB1_WRITE_ORDER2	0x58
#define OPB0_READ_ORDER1	0x5c
#define OPB1_READ_ORDER2	0x60

#define OPB_RETRY_COUNTER	0x64

/* OPBn_STATUS */
#define STATUS_HALFWORD_ACK	BIT(0)
#define STATUS_FULLWORD_ACK	BIT(1)
#define STATUS_ERR_ACK		BIT(2)
#define STATUS_RETRY		BIT(3)
#define STATUS_TIMEOUT		BIT(4)

/* OPB_IRQ_MASK */
#define OPB1_XFER_ACK_EN	BIT(17)
#define OPB0_XFER_ACK_EN	BIT(16)

/* OPB_RW */
#define CMD_READ	BIT(0)
#define CMD_WRITE	0

/* OPBx_XFER_SIZE */
#define XFER_FULLWORD	(BIT(1) | BIT(0))
#define XFER_HALFWORD	(BIT(0))
#define XFER_BYTE	(0)

#define CREATE_TRACE_POINTS
#include <trace/events/fsi_master_aspeed.h>

#define FSI_LINK_ENABLE_SETUP_TIME	10	/* in mS */

#define DEFAULT_DIVISOR			14
#define OPB_POLL_TIMEOUT		10000

static int __opb_write(struct fsi_master_aspeed *aspeed, u32 addr,
		       u32 val, u32 transfer_size)
{
	void __iomem *base = aspeed->base;
	u32 reg, status;
	int ret;

	writel(CMD_WRITE, base + OPB0_RW);
	writel(transfer_size, base + OPB0_XFER_SIZE);
	writel(addr, base + OPB0_FSI_ADDR);
	writel(val, base + OPB0_FSI_DATA_W);
	writel(0x1, base + OPB_IRQ_CLEAR);
	writel(0x1, base + OPB_TRIGGER);

	ret = readl_poll_timeout(base + OPB_IRQ_STATUS, reg,
				(reg & OPB0_XFER_ACK_EN) != 0,
				0, OPB_POLL_TIMEOUT);

	status = readl(base + OPB0_STATUS);

	trace_fsi_master_aspeed_opb_write(addr, val, transfer_size, status, reg);

	/* Return error when poll timed out */
	if (ret)
		return ret;

	/* Command failed, master will reset */
	if (status & STATUS_ERR_ACK)
		return -EIO;

	return 0;
}
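Each OPB transaction is posted by loading the registers and writing OPB_TRIGGER, after which the driver spins on the IRQ status word. With a zero sleep interval, readl_poll_timeout() from <linux/iopoll.h> boils down to a busy-wait roughly like the sketch below (an approximation for the reader, not the macro's actual body):

/*
 * Sketch of readl_poll_timeout(base + OPB_IRQ_STATUS, reg,
 *                              (reg & OPB0_XFER_ACK_EN) != 0,
 *                              0, OPB_POLL_TIMEOUT):
 * poll until the OPB0 ack bit rises or ~10 ms elapse.
 */
static int opb_poll_ack_sketch(void __iomem *base)
{
	unsigned long timeout = jiffies + usecs_to_jiffies(OPB_POLL_TIMEOUT);
	u32 reg;

	for (;;) {
		reg = readl(base + OPB_IRQ_STATUS);
		if (reg & OPB0_XFER_ACK_EN)
			return 0;
		if (time_after(jiffies, timeout))
			/* one final read after the deadline, as iopoll does */
			return (readl(base + OPB_IRQ_STATUS) & OPB0_XFER_ACK_EN)
				? 0 : -ETIMEDOUT;
		cpu_relax();
	}
}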

static int opb_writeb(struct fsi_master_aspeed *aspeed, u32 addr, u8 val)
{
	return __opb_write(aspeed, addr, val, XFER_BYTE);
}

static int opb_writew(struct fsi_master_aspeed *aspeed, u32 addr, __be16 val)
{
	return __opb_write(aspeed, addr, (__force u16)val, XFER_HALFWORD);
}

static int opb_writel(struct fsi_master_aspeed *aspeed, u32 addr, __be32 val)
{
	return __opb_write(aspeed, addr, (__force u32)val, XFER_FULLWORD);
}

static int __opb_read(struct fsi_master_aspeed *aspeed, uint32_t addr,
		      u32 transfer_size, void *out)
{
	void __iomem *base = aspeed->base;
	u32 result, reg;
	int status, ret;

	writel(CMD_READ, base + OPB0_RW);
	writel(transfer_size, base + OPB0_XFER_SIZE);
	writel(addr, base + OPB0_FSI_ADDR);
	writel(0x1, base + OPB_IRQ_CLEAR);
	writel(0x1, base + OPB_TRIGGER);

	ret = readl_poll_timeout(base + OPB_IRQ_STATUS, reg,
				(reg & OPB0_XFER_ACK_EN) != 0,
				0, OPB_POLL_TIMEOUT);

	status = readl(base + OPB0_STATUS);

	result = readl(base + OPB0_FSI_DATA_R);

	trace_fsi_master_aspeed_opb_read(addr, transfer_size, result,
					 readl(base + OPB0_STATUS),
					 reg);

	/* Return error when poll timed out */
	if (ret)
		return ret;

	/* Command failed, master will reset */
	if (status & STATUS_ERR_ACK)
		return -EIO;

	if (out) {
		switch (transfer_size) {
		case XFER_BYTE:
			*(u8 *)out = result;
			break;
		case XFER_HALFWORD:
			*(u16 *)out = result;
			break;
		case XFER_FULLWORD:
			*(u32 *)out = result;
			break;
		default:
			return -EINVAL;
		}

	}

	return 0;
}

static int opb_readl(struct fsi_master_aspeed *aspeed, uint32_t addr, __be32 *out)
{
	return __opb_read(aspeed, addr, XFER_FULLWORD, out);
}

static int opb_readw(struct fsi_master_aspeed *aspeed, uint32_t addr, __be16 *out)
{
	return __opb_read(aspeed, addr, XFER_HALFWORD, (void *)out);
}

static int opb_readb(struct fsi_master_aspeed *aspeed, uint32_t addr, u8 *out)
{
	return __opb_read(aspeed, addr, XFER_BYTE, (void *)out);
}

static int check_errors(struct fsi_master_aspeed *aspeed, int err)
{
	int ret;

	if (trace_fsi_master_aspeed_opb_error_enabled()) {
		__be32 mresp0, mstap0, mesrb0;

		opb_readl(aspeed, ctrl_base + FSI_MRESP0, &mresp0);
		opb_readl(aspeed, ctrl_base + FSI_MSTAP0, &mstap0);
		opb_readl(aspeed, ctrl_base + FSI_MESRB0, &mesrb0);

		trace_fsi_master_aspeed_opb_error(
				be32_to_cpu(mresp0),
				be32_to_cpu(mstap0),
				be32_to_cpu(mesrb0));
	}

	if (err == -EIO) {
		/* Check MAEB (0x70) ? */

		/* Then clear errors in master */
		ret = opb_writel(aspeed, ctrl_base + FSI_MRESP0,
				cpu_to_be32(FSI_MRESP_RST_ALL_MASTER));
		if (ret) {
			/* TODO: log? return different code? */
			return ret;
		}
		/* TODO: confirm that 0x70 was okay */
	}

	/* This will pass through timeout errors */
	return err;
}

static int aspeed_master_read(struct fsi_master *master, int link,
			      uint8_t id, uint32_t addr, void *val, size_t size)
{
	struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
	int ret;

	if (id != 0)
		return -EINVAL;

	addr += link * FSI_HUB_LINK_SIZE;

	switch (size) {
	case 1:
		ret = opb_readb(aspeed, fsi_base + addr, val);
		break;
	case 2:
		ret = opb_readw(aspeed, fsi_base + addr, val);
		break;
	case 4:
		ret = opb_readl(aspeed, fsi_base + addr, val);
		break;
	default:
		return -EINVAL;
	}

	ret = check_errors(aspeed, ret);
	if (ret)
		return ret;

	return 0;
}

static int aspeed_master_write(struct fsi_master *master, int link,
			       uint8_t id, uint32_t addr, const void *val, size_t size)
{
	struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
	int ret;

	if (id != 0)
		return -EINVAL;

	addr += link * FSI_HUB_LINK_SIZE;

	switch (size) {
	case 1:
		ret = opb_writeb(aspeed, fsi_base + addr, *(u8 *)val);
		break;
	case 2:
		ret = opb_writew(aspeed, fsi_base + addr, *(__be16 *)val);
		break;
	case 4:
		ret = opb_writel(aspeed, fsi_base + addr, *(__be32 *)val);
		break;
	default:
		return -EINVAL;
	}

	ret = check_errors(aspeed, ret);
	if (ret)
		return ret;

	return 0;
}

static int aspeed_master_link_enable(struct fsi_master *master, int link)
{
	struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
	int idx, bit, ret;
	__be32 reg, result;

	idx = link / 32;
	bit = link % 32;

	reg = cpu_to_be32(0x80000000 >> bit);

	ret = opb_writel(aspeed, ctrl_base + FSI_MSENP0 + (4 * idx), reg);
	if (ret)
		return ret;

	mdelay(FSI_LINK_ENABLE_SETUP_TIME);

	ret = opb_readl(aspeed, ctrl_base + FSI_MENP0 + (4 * idx), &result);
	if (ret)
		return ret;

	if (result != reg) {
		dev_err(aspeed->dev, "%s failed: %08x\n", __func__, result);
		return -EIO;
	}

	return 0;
}

static int aspeed_master_term(struct fsi_master *master, int link, uint8_t id)
{
	uint32_t addr;
	__be32 cmd;

	addr = 0x4;
	cmd = cpu_to_be32(0xecc00000);

	return aspeed_master_write(master, link, id, addr, &cmd, 4);
}

static int aspeed_master_break(struct fsi_master *master, int link)
{
	uint32_t addr;
	__be32 cmd;

	addr = 0x0;
	cmd = cpu_to_be32(0xc0de0000);

	return aspeed_master_write(master, link, 0, addr, &cmd, 4);
}

static void aspeed_master_release(struct device *dev)
{
	struct fsi_master_aspeed *aspeed =
		to_fsi_master_aspeed(dev_to_fsi_master(dev));

	kfree(aspeed);
}

/* mmode encoders */
static inline u32 fsi_mmode_crs0(u32 x)
{
	return (x & FSI_MMODE_CRS0MASK) << FSI_MMODE_CRS0SHFT;
}

static inline u32 fsi_mmode_crs1(u32 x)
{
	return (x & FSI_MMODE_CRS1MASK) << FSI_MMODE_CRS1SHFT;
}
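With DEFAULT_DIVISOR = 14, the two encoders contribute fixed constants to the mode word programmed in aspeed_master_init() below; a quick worked expansion of the arithmetic:

/*
 * Worked expansion (illustration, not part of the driver):
 *   fsi_mmode_crs0(14) = (14 & 0x3ff) << 18 = 0x00380000
 *   fsi_mmode_crs1(14) = (14 & 0x3ff) <<  8 = 0x00000e00
 * aspeed_master_init() ORs these constants into FSI_MMODE along with
 * the ECRC/EPC/RELA enables and the P8 timeout LSB.
 */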

static int aspeed_master_init(struct fsi_master_aspeed *aspeed)
{
	__be32 reg;

	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK
			| FSI_MRESP_RST_MCR | FSI_MRESP_RST_PYE);
	opb_writel(aspeed, ctrl_base + FSI_MRESP0, reg);

	/* Initialize the MFSI (hub master) engine */
	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK
			| FSI_MRESP_RST_MCR | FSI_MRESP_RST_PYE);
	opb_writel(aspeed, ctrl_base + FSI_MRESP0, reg);

	reg = cpu_to_be32(FSI_MECTRL_EOAE | FSI_MECTRL_P8_AUTO_TERM);
	opb_writel(aspeed, ctrl_base + FSI_MECTRL, reg);

	reg = cpu_to_be32(FSI_MMODE_ECRC | FSI_MMODE_EPC | FSI_MMODE_RELA
			| fsi_mmode_crs0(DEFAULT_DIVISOR)
			| fsi_mmode_crs1(DEFAULT_DIVISOR)
			| FSI_MMODE_P8_TO_LSB);
	opb_writel(aspeed, ctrl_base + FSI_MMODE, reg);

	reg = cpu_to_be32(0xffff0000);
	opb_writel(aspeed, ctrl_base + FSI_MDLYR, reg);

	reg = cpu_to_be32(~0);
	opb_writel(aspeed, ctrl_base + FSI_MSENP0, reg);

	/* Leave enabled long enough for master logic to set up */
	mdelay(FSI_LINK_ENABLE_SETUP_TIME);

	opb_writel(aspeed, ctrl_base + FSI_MCENP0, reg);

	opb_readl(aspeed, ctrl_base + FSI_MAEB, NULL);

	reg = cpu_to_be32(FSI_MRESP_RST_ALL_MASTER | FSI_MRESP_RST_ALL_LINK);
	opb_writel(aspeed, ctrl_base + FSI_MRESP0, reg);

	opb_readl(aspeed, ctrl_base + FSI_MLEVP0, NULL);

	/* Reset the master bridge */
	reg = cpu_to_be32(FSI_MRESB_RST_GEN);
	opb_writel(aspeed, ctrl_base + FSI_MRESB0, reg);

	reg = cpu_to_be32(FSI_MRESB_RST_ERR);
	opb_writel(aspeed, ctrl_base + FSI_MRESB0, reg);

	return 0;
}

static int fsi_master_aspeed_probe(struct platform_device *pdev)
{
	struct fsi_master_aspeed *aspeed;
	struct resource *res;
	int rc, links, reg;
	__be32 raw;

	aspeed = devm_kzalloc(&pdev->dev, sizeof(*aspeed), GFP_KERNEL);
	if (!aspeed)
		return -ENOMEM;

	aspeed->dev = &pdev->dev;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	aspeed->base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(aspeed->base))
		return PTR_ERR(aspeed->base);

	aspeed->clk = devm_clk_get(aspeed->dev, NULL);
	if (IS_ERR(aspeed->clk)) {
		dev_err(aspeed->dev, "couldn't get clock\n");
		return PTR_ERR(aspeed->clk);
	}
	rc = clk_prepare_enable(aspeed->clk);
	if (rc) {
		dev_err(aspeed->dev, "couldn't enable clock\n");
		return rc;
	}

	writel(0x1, aspeed->base + OPB_CLK_SYNC);
	writel(OPB1_XFER_ACK_EN | OPB0_XFER_ACK_EN,
			aspeed->base + OPB_IRQ_MASK);

	/* TODO: determine an appropriate value */
	writel(0x10, aspeed->base + OPB_RETRY_COUNTER);

	writel(ctrl_base, aspeed->base + OPB_CTRL_BASE);
	writel(fsi_base, aspeed->base + OPB_FSI_BASE);

	/* Set read data order */
	writel(0x00030b1b, aspeed->base + OPB0_READ_ORDER1);

	/* Set write data order */
	writel(0x0011101b, aspeed->base + OPB0_WRITE_ORDER1);
	writel(0x0c330f3f, aspeed->base + OPB0_WRITE_ORDER2);

	/*
	 * Select OPB0 for all operations.
	 * Will need to be reworked when enabling DMA or anything that uses
	 * OPB1.
	 */
	writel(0x1, aspeed->base + OPB0_SELECT);

	rc = opb_readl(aspeed, ctrl_base + FSI_MVER, &raw);
	if (rc) {
		dev_err(&pdev->dev, "failed to read hub version\n");
		return rc;
	}

	reg = be32_to_cpu(raw);
	links = (reg >> 8) & 0xff;
	dev_info(&pdev->dev, "hub version %08x (%d links)\n", reg, links);

	aspeed->master.dev.parent = &pdev->dev;
	aspeed->master.dev.release = aspeed_master_release;
	aspeed->master.dev.of_node = of_node_get(dev_of_node(&pdev->dev));

	aspeed->master.n_links = links;
	aspeed->master.read = aspeed_master_read;
	aspeed->master.write = aspeed_master_write;
	aspeed->master.send_break = aspeed_master_break;
	aspeed->master.term = aspeed_master_term;
	aspeed->master.link_enable = aspeed_master_link_enable;

	dev_set_drvdata(&pdev->dev, aspeed);

	aspeed_master_init(aspeed);

	rc = fsi_master_register(&aspeed->master);
	if (rc)
		goto err_release;

	/* At this point, fsi_master_register performs the device_initialize(),
	 * and holds the sole reference on master.dev. This means the device
	 * will be freed (via ->release) during any subsequent call to
	 * fsi_master_unregister. We add our own reference to it here, so we
	 * can perform cleanup (in _remove()) without it being freed before
	 * we're ready.
	 */
	get_device(&aspeed->master.dev);
	return 0;

err_release:
	clk_disable_unprepare(aspeed->clk);
	return rc;
}
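The comment above describes a common kref-style lifetime rule: registration hands the sole reference to the driver core, so the driver grabs one more to keep the structure alive through its own remove path. A stripped-down toy illustration of that pattern (invented types, not the driver's code):

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct device refcounting: get/put around a release hook. */
struct toy_dev {
	int refs;
	void (*release)(struct toy_dev *);
};

static void toy_get(struct toy_dev *d) { d->refs++; }

static void toy_put(struct toy_dev *d)
{
	if (--d->refs == 0)
		d->release(d);	/* last reference dropped: free backing memory */
}

static void toy_release(struct toy_dev *d)
{
	printf("released\n");
	free(d);
}

int main(void)
{
	struct toy_dev *d = malloc(sizeof(*d));

	d->refs = 1;		/* "register" holds the sole reference */
	d->release = toy_release;
	toy_get(d);		/* driver's extra reference, as in probe() */
	toy_put(d);		/* "unregister" drops its reference... */
	toy_put(d);		/* ...and the driver drops its own when done */
	return 0;
}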

static int fsi_master_aspeed_remove(struct platform_device *pdev)
{
	struct fsi_master_aspeed *aspeed = platform_get_drvdata(pdev);

	fsi_master_unregister(&aspeed->master);
	clk_disable_unprepare(aspeed->clk);

	return 0;
}

static const struct of_device_id fsi_master_aspeed_match[] = {
	{ .compatible = "aspeed,ast2600-fsi-master" },
	{ },
};

static struct platform_driver fsi_master_aspeed_driver = {
	.driver = {
		.name		= "fsi-master-aspeed",
		.of_match_table	= fsi_master_aspeed_match,
	},
	.probe	= fsi_master_aspeed_probe,
	.remove	= fsi_master_aspeed_remove,
};

module_platform_driver(fsi_master_aspeed_driver);
MODULE_LICENSE("GPL");

@@ -13,53 +13,7 @@

#include "fsi-master.h"

-/* Control Registers */
-#define FSI_MMODE		0x0		/* R/W: mode */
-#define FSI_MDLYR		0x4		/* R/W: delay */
-#define FSI_MCRSP		0x8		/* R/W: clock rate */
-#define FSI_MENP0		0x10		/* R/W: enable */
-#define FSI_MLEVP0		0x18		/* R: plug detect */
-#define FSI_MSENP0		0x18		/* S: Set enable */
-#define FSI_MCENP0		0x20		/* C: Clear enable */
-#define FSI_MAEB		0x70		/* R: Error address */
-#define FSI_MVER		0x74		/* R: master version/type */
-#define FSI_MRESP0		0xd0		/* W: Port reset */
-#define FSI_MESRB0		0x1d0		/* R: Master error status */
-#define FSI_MRESB0		0x1d0		/* W: Reset bridge */
-#define FSI_MECTRL		0x2e0		/* W: Error control */
-
-/* MMODE: Mode control */
-#define FSI_MMODE_EIP		0x80000000	/* Enable interrupt polling */
-#define FSI_MMODE_ECRC		0x40000000	/* Enable error recovery */
-#define FSI_MMODE_EPC		0x10000000	/* Enable parity checking */
-#define FSI_MMODE_P8_TO_LSB	0x00000010	/* Timeout value LSB */
-						/*   MSB=1, LSB=0 is 0.8 ms */
-						/*   MSB=0, LSB=1 is 0.9 ms */
-#define FSI_MMODE_CRS0SHFT	18		/* Clk rate selection 0 shift */
-#define FSI_MMODE_CRS0MASK	0x3ff		/* Clk rate selection 0 mask */
-#define FSI_MMODE_CRS1SHFT	8		/* Clk rate selection 1 shift */
-#define FSI_MMODE_CRS1MASK	0x3ff		/* Clk rate selection 1 mask */
-
-/* MRESB: Reset brindge */
-#define FSI_MRESB_RST_GEN	0x80000000	/* General reset */
-#define FSI_MRESB_RST_ERR	0x40000000	/* Error Reset */
-
-/* MRESB: Reset port */
-#define FSI_MRESP_RST_ALL_MASTER 0x20000000	/* Reset all FSI masters */
-#define FSI_MRESP_RST_ALL_LINK	0x10000000	/* Reset all FSI port contr. */
-#define FSI_MRESP_RST_MCR	0x08000000	/* Reset FSI master reg. */
-#define FSI_MRESP_RST_PYE	0x04000000	/* Reset FSI parity error */
-#define FSI_MRESP_RST_ALL	0xfc000000	/* Reset any error */
-
-/* MECTRL: Error control */
-#define FSI_MECTRL_EOAE		0x8000		/* Enable machine check when */
-						/*   master 0 in error */
-#define FSI_MECTRL_P8_AUTO_TERM	0x4000		/* Auto terminate */
-
#define FSI_ENGID_HUB_MASTER	0x1c
-#define FSI_HUB_LINK_OFFSET	0x80000
-#define FSI_HUB_LINK_SIZE	0x80000
-#define FSI_HUB_MASTER_MAX_LINKS	8

#define FSI_LINK_ENABLE_SETUP_TIME	10	/* in mS */

@@ -12,6 +12,71 @@
#include <linux/device.h>
#include <linux/mutex.h>

+/*
+ * Master registers
+ *
+ * These are used by hardware masters, such as the one in the FSP2, AST2600 and
+ * the hub master in POWER processors.
+ */
+
+/* Control Registers */
+#define FSI_MMODE		0x0		/* R/W: mode */
+#define FSI_MDLYR		0x4		/* R/W: delay */
+#define FSI_MCRSP		0x8		/* R/W: clock rate */
+#define FSI_MENP0		0x10		/* R/W: enable */
+#define FSI_MLEVP0		0x18		/* R: plug detect */
+#define FSI_MSENP0		0x18		/* S: Set enable */
+#define FSI_MCENP0		0x20		/* C: Clear enable */
+#define FSI_MAEB		0x70		/* R: Error address */
+#define FSI_MVER		0x74		/* R: master version/type */
+#define FSI_MSTAP0		0xd0		/* R: Port status */
+#define FSI_MRESP0		0xd0		/* W: Port reset */
+#define FSI_MESRB0		0x1d0		/* R: Master error status */
+#define FSI_MRESB0		0x1d0		/* W: Reset bridge */
+#define FSI_MSCSB0		0x1d4		/* R: Master sub command stack */
+#define FSI_MATRB0		0x1d8		/* R: Master address trace */
+#define FSI_MDTRB0		0x1dc		/* R: Master data trace */
+#define FSI_MECTRL		0x2e0		/* W: Error control */
+
+/* MMODE: Mode control */
+#define FSI_MMODE_EIP		0x80000000	/* Enable interrupt polling */
+#define FSI_MMODE_ECRC		0x40000000	/* Enable error recovery */
+#define FSI_MMODE_RELA		0x20000000	/* Enable relative address commands */
+#define FSI_MMODE_EPC		0x10000000	/* Enable parity checking */
+#define FSI_MMODE_P8_TO_LSB	0x00000010	/* Timeout value LSB */
+						/*   MSB=1, LSB=0 is 0.8 ms */
+						/*   MSB=0, LSB=1 is 0.9 ms */
+#define FSI_MMODE_CRS0SHFT	18		/* Clk rate selection 0 shift */
+#define FSI_MMODE_CRS0MASK	0x3ff		/* Clk rate selection 0 mask */
+#define FSI_MMODE_CRS1SHFT	8		/* Clk rate selection 1 shift */
+#define FSI_MMODE_CRS1MASK	0x3ff		/* Clk rate selection 1 mask */
+
+/* MRESB: Reset bridge */
+#define FSI_MRESB_RST_GEN	0x80000000	/* General reset */
+#define FSI_MRESB_RST_ERR	0x40000000	/* Error Reset */
+
+/* MRESP: Reset port */
+#define FSI_MRESP_RST_ALL_MASTER 0x20000000	/* Reset all FSI masters */
+#define FSI_MRESP_RST_ALL_LINK	0x10000000	/* Reset all FSI port contr. */
+#define FSI_MRESP_RST_MCR	0x08000000	/* Reset FSI master reg. */
+#define FSI_MRESP_RST_PYE	0x04000000	/* Reset FSI parity error */
+#define FSI_MRESP_RST_ALL	0xfc000000	/* Reset any error */
+
+/* MECTRL: Error control */
+#define FSI_MECTRL_EOAE		0x8000		/* Enable machine check when */
+						/*   master 0 in error */
+#define FSI_MECTRL_P8_AUTO_TERM	0x4000		/* Auto terminate */
+
+#define FSI_HUB_LINK_OFFSET	0x80000
+#define FSI_HUB_LINK_SIZE	0x80000
+#define FSI_HUB_MASTER_MAX_LINKS	8
+
+/*
+ * Protocol definitions
+ *
+ * These are used by low level masters that bit-bang out the protocol
+ */
+
+/* Various protocol delays */
+#define FSI_ECHO_DELAY_CLOCKS	16	/* Number clocks for echo delay */
+#define FSI_SEND_DELAY_CLOCKS	16	/* Number clocks for send delay */

@@ -47,6 +112,12 @@
/* fsi-master definition and flags */
#define FSI_MASTER_FLAG_SWCLOCK	0x1

+/*
+ * Structures and function prototypes
+ *
+ * These are common to all masters
+ */
+
struct fsi_master {
	struct device	dev;
	int		idx;

@@ -211,3 +211,4 @@ MODULE_AUTHOR("Andreas Werner <andreas.werner@men.de>");
MODULE_DESCRIPTION("MEN 16z127 GPIO Controller");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("mcb:16z127");
+MODULE_IMPORT_NS(MCB);

@@ -361,9 +361,6 @@ static int gb_connection_hd_cport_quiesce(struct gb_connection *connection)
	if (connection->mode_switch)
		peer_space += sizeof(struct gb_operation_msg_hdr);

-	if (!hd->driver->cport_quiesce)
-		return 0;
-
	ret = hd->driver->cport_quiesce(hd, connection->hd_cport_id,
					peer_space,
					GB_CONNECTION_CPORT_QUIESCE_TIMEOUT);

@@ -4,6 +4,7 @@
#
menuconfig CORESIGHT
	bool "CoreSight Tracing Support"
	depends on ARM || ARM64
+	depends on OF || ACPI
	select ARM_AMBA
	select PERF_EVENTS

@@ -217,6 +217,7 @@ static ssize_t reset_store(struct device *dev,
	/* No start-stop filtering for ViewInst */
	config->vissctlr = 0x0;
+	config->vipcssctlr = 0x0;

	/* Disable seq events */
	for (i = 0; i < drvdata->nrseqstate-1; i++)

@@ -238,6 +239,7 @@ static ssize_t reset_store(struct device *dev,
	for (i = 0; i < drvdata->nr_resource; i++)
		config->res_ctrl[i] = 0x0;

+	config->ss_idx = 0x0;
	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
		config->ss_ctrl[i] = 0x0;
		config->ss_pe_cmp[i] = 0x0;

@@ -296,8 +298,6 @@ static ssize_t mode_store(struct device *dev,
	spin_lock(&drvdata->spinlock);
	config->mode = val & ETMv4_MODE_ALL;
-	etm4_set_mode_exclude(drvdata,
-			      config->mode & ETM_MODE_EXCLUDE ? true : false);

	if (drvdata->instrp0 == true) {
		/* start by clearing instruction P0 field */

@@ -652,10 +652,13 @@ static ssize_t cyc_threshold_store(struct device *dev,
	if (kstrtoul(buf, 16, &val))
		return -EINVAL;

+	/* mask off max threshold before checking min value */
+	val &= ETM_CYC_THRESHOLD_MASK;
	if (val < drvdata->ccitmin)
		return -EINVAL;

-	config->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
+	config->ccctlr = val;
	return size;
}
static DEVICE_ATTR_RW(cyc_threshold);

@@ -686,14 +689,16 @@ static ssize_t bb_ctrl_store(struct device *dev,
		return -EINVAL;
	if (!drvdata->nr_addr_cmp)
		return -EINVAL;

	/*
-	 * Bit[7:0] selects which address range comparator is used for
-	 * branch broadcast control.
+	 * Bit[8] controls include(1) / exclude(0), bits[0-7] select
+	 * individual range comparators. If include then at least 1
+	 * range must be selected.
	 */
-	if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
+	if ((val & BIT(8)) && (BMVAL(val, 0, 7) == 0))
		return -EINVAL;

-	config->bb_ctrl = val;
+	config->bb_ctrl = val & GENMASK(8, 0);
	return size;
}
static DEVICE_ATTR_RW(bb_ctrl);
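The reworked check inverts the old validation: rather than rejecting comparator indices above nr_addr_cmp, it rejects an include request (bit 8 set) that selects no comparator at all, then masks the stored value down to the nine architecturally defined bits. A standalone sketch of the accept/reject rule with hypothetical inputs:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the accept/reject rule in bb_ctrl_store(); illustrative only. */
static bool bb_ctrl_ok(uint32_t val)
{
	bool include = val & (1u << 8);
	uint32_t comps = val & 0xff;	/* bits[7:0]: selected range comparators */

	return !(include && comps == 0);
}

int main(void)
{
	/* 0x100: include mode, no comparators -> rejected */
	/* 0x101: include mode, comparator 0   -> accepted */
	/* 0x001: exclude mode, comparator 0   -> accepted */
	uint32_t samples[] = { 0x100, 0x101, 0x001 };

	for (unsigned int i = 0; i < 3; i++)
		printf("%#x -> %s\n", samples[i],
		       bb_ctrl_ok(samples[i]) ? "accepted" : "rejected");
	return 0;
}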

@@ -738,7 +743,7 @@ static ssize_t s_exlevel_vinst_show(struct device *dev,
	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
	struct etmv4_config *config = &drvdata->config;

-	val = BMVAL(config->vinst_ctrl, 16, 19);
+	val = (config->vinst_ctrl & ETM_EXLEVEL_S_VICTLR_MASK) >> 16;
	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}

@@ -754,8 +759,8 @@ static ssize_t s_exlevel_vinst_store(struct device *dev,
		return -EINVAL;

	spin_lock(&drvdata->spinlock);
-	/* clear all EXLEVEL_S bits (bit[18] is never implemented) */
-	config->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19));
+	/* clear all EXLEVEL_S bits */
+	config->vinst_ctrl &= ~(ETM_EXLEVEL_S_VICTLR_MASK);
	/* enable instruction tracing for corresponding exception level */
	val &= drvdata->s_ex_level;
	config->vinst_ctrl |= (val << 16);

@@ -773,7 +778,7 @@ static ssize_t ns_exlevel_vinst_show(struct device *dev,
	struct etmv4_config *config = &drvdata->config;

	/* EXLEVEL_NS, bits[23:20] */
-	val = BMVAL(config->vinst_ctrl, 20, 23);
+	val = (config->vinst_ctrl & ETM_EXLEVEL_NS_VICTLR_MASK) >> 20;
	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}

@@ -789,8 +794,8 @@ static ssize_t ns_exlevel_vinst_store(struct device *dev,
		return -EINVAL;

	spin_lock(&drvdata->spinlock);
-	/* clear EXLEVEL_NS bits (bit[23] is never implemented */
-	config->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22));
+	/* clear EXLEVEL_NS bits */
+	config->vinst_ctrl &= ~(ETM_EXLEVEL_NS_VICTLR_MASK);
	/* enable instruction tracing for corresponding exception level */
	val &= drvdata->ns_ex_level;
	config->vinst_ctrl |= (val << 20);

@@ -966,8 +971,12 @@ static ssize_t addr_range_store(struct device *dev,
	unsigned long val1, val2;
	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
	struct etmv4_config *config = &drvdata->config;
+	int elements, exclude;

-	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+	elements = sscanf(buf, "%lx %lx %x", &val1, &val2, &exclude);
+
+	/* exclude is optional, but needs at least two parameters */
+	if (elements < 2)
		return -EINVAL;
	/* lower address comparator cannot have a higher address value */
	if (val1 > val2)

@@ -995,9 +1004,11 @@ static ssize_t addr_range_store(struct device *dev,
	/*
	 * Program include or exclude control bits for vinst or vdata
	 * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
+	 * use supplied value, or default to bit set in 'mode'
	 */
-	etm4_set_mode_exclude(drvdata,
-			      config->mode & ETM_MODE_EXCLUDE ? true : false);
+	if (elements != 3)
+		exclude = config->mode & ETM_MODE_EXCLUDE;
+	etm4_set_mode_exclude(drvdata, exclude ? true : false);

	spin_unlock(&drvdata->spinlock);
	return size;

@@ -1054,8 +1065,6 @@ static ssize_t addr_start_store(struct device *dev,
	config->addr_val[idx] = (u64)val;
	config->addr_type[idx] = ETM_ADDR_TYPE_START;
	config->vissctlr |= BIT(idx);
-	/* SSSTATUS, bit[9] - turn on start/stop logic */
-	config->vinst_ctrl |= BIT(9);
	spin_unlock(&drvdata->spinlock);
	return size;
}

@@ -1111,8 +1120,6 @@ static ssize_t addr_stop_store(struct device *dev,
	config->addr_val[idx] = (u64)val;
	config->addr_type[idx] = ETM_ADDR_TYPE_STOP;
	config->vissctlr |= BIT(idx + 16);
-	/* SSSTATUS, bit[9] - turn on start/stop logic */
-	config->vinst_ctrl |= BIT(9);
	spin_unlock(&drvdata->spinlock);
	return size;
}

@@ -1228,6 +1235,131 @@ static ssize_t addr_context_store(struct device *dev,
}
static DEVICE_ATTR_RW(addr_context);

+static ssize_t addr_exlevel_s_ns_show(struct device *dev,
+				      struct device_attribute *attr,
+				      char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	val = BMVAL(config->addr_acc[idx], 8, 14);
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_exlevel_s_ns_store(struct device *dev,
+				       struct device_attribute *attr,
+				       const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 0, &val))
+		return -EINVAL;
+
+	if (val & ~((GENMASK(14, 8) >> 8)))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	/* clear Exlevel_ns & Exlevel_s bits[14:12, 11:8], bit[15] is res0 */
+	config->addr_acc[idx] &= ~(GENMASK(14, 8));
+	config->addr_acc[idx] |= (val << 8);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_exlevel_s_ns);
+
+static const char * const addr_type_names[] = {
+	"unused",
+	"single",
+	"range",
+	"start",
+	"stop"
+};
+
+static ssize_t addr_cmp_view_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	u8 idx, addr_type;
+	unsigned long addr_v, addr_v2, addr_ctrl;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+	int size = 0;
+	bool exclude = false;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	addr_v = config->addr_val[idx];
+	addr_ctrl = config->addr_acc[idx];
+	addr_type = config->addr_type[idx];
+	if (addr_type == ETM_ADDR_TYPE_RANGE) {
+		if (idx & 0x1) {
+			idx -= 1;
+			addr_v2 = addr_v;
+			addr_v = config->addr_val[idx];
+		} else {
+			addr_v2 = config->addr_val[idx + 1];
+		}
+		exclude = config->viiectlr & BIT(idx / 2 + 16);
+	}
+	spin_unlock(&drvdata->spinlock);
+	if (addr_type) {
+		size = scnprintf(buf, PAGE_SIZE, "addr_cmp[%i] %s %#lx", idx,
+				 addr_type_names[addr_type], addr_v);
+		if (addr_type == ETM_ADDR_TYPE_RANGE) {
+			size += scnprintf(buf + size, PAGE_SIZE - size,
+					  " %#lx %s", addr_v2,
+					  exclude ? "exclude" : "include");
+		}
+		size += scnprintf(buf + size, PAGE_SIZE - size,
+				  " ctrl(%#lx)\n", addr_ctrl);
+	} else {
+		size = scnprintf(buf, PAGE_SIZE, "addr_cmp[%i] unused\n", idx);
+	}
+	return size;
+}
+static DEVICE_ATTR_RO(addr_cmp_view);
+
+static ssize_t vinst_pe_cmp_start_stop_show(struct device *dev,
+					    struct device_attribute *attr,
+					    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (!drvdata->nr_pe_cmp)
+		return -EINVAL;
+	val = config->vipcssctlr;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static ssize_t vinst_pe_cmp_start_stop_store(struct device *dev,
+					     struct device_attribute *attr,
+					     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (!drvdata->nr_pe_cmp)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	config->vipcssctlr = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(vinst_pe_cmp_start_stop);
+
static ssize_t seq_idx_show(struct device *dev,
			    struct device_attribute *attr,
			    char *buf)

@@ -1324,8 +1456,8 @@ static ssize_t seq_event_store(struct device *dev,
	spin_lock(&drvdata->spinlock);
	idx = config->seq_idx;
-	/* RST, bits[7:0] */
-	config->seq_ctrl[idx] = val & 0xFF;
+	/* Seq control has two masks B[15:8] F[7:0] */
+	config->seq_ctrl[idx] = val & 0xFFFF;
	spin_unlock(&drvdata->spinlock);
	return size;
}

@@ -1580,12 +1712,129 @@ static ssize_t res_ctrl_store(struct device *dev,
	if (idx % 2 != 0)
		/* PAIRINV, bit[21] */
		val &= ~BIT(21);
-	config->res_ctrl[idx] = val;
+	config->res_ctrl[idx] = val & GENMASK(21, 0);
	spin_unlock(&drvdata->spinlock);
	return size;
}
static DEVICE_ATTR_RW(res_ctrl);

+static ssize_t sshot_idx_show(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->ss_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t sshot_idx_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->nr_ss_cmp)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	config->ss_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(sshot_idx);
+
+static ssize_t sshot_ctrl_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	val = config->ss_ctrl[config->ss_idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t sshot_ctrl_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->ss_idx;
+	config->ss_ctrl[idx] = val & GENMASK(24, 0);
+	/* must clear bit 31 in related status register on programming */
+	config->ss_status[idx] &= ~BIT(31);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(sshot_ctrl);
+
+static ssize_t sshot_status_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	val = config->ss_status[config->ss_idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(sshot_status);
+
+static ssize_t sshot_pe_ctrl_show(struct device *dev,
+				  struct device_attribute *attr,
+				  char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	val = config->ss_pe_cmp[config->ss_idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t sshot_pe_ctrl_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->ss_idx;
+	config->ss_pe_cmp[idx] = val & GENMASK(7, 0);
+	/* must clear bit 31 in related status register on programming */
+	config->ss_status[idx] &= ~BIT(31);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(sshot_pe_ctrl);
+
static ssize_t ctxid_idx_show(struct device *dev,
			      struct device_attribute *attr,
			      char *buf)
@ -1714,6 +1963,7 @@ static ssize_t ctxid_masks_store(struct device *dev,
|
|||
unsigned long val1, val2, mask;
|
||||
struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
|
||||
struct etmv4_config *config = &drvdata->config;
|
||||
int nr_inputs;
|
||||
|
||||
/*
|
||||
* Don't use contextID tracing if coming from a PID namespace. See
|
||||
|
@ -1729,7 +1979,9 @@ static ssize_t ctxid_masks_store(struct device *dev,
|
|||
*/
|
||||
if (!drvdata->ctxid_size || !drvdata->numcidc)
|
||||
return -EINVAL;
|
||||
if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
|
||||
/* one mask if <= 4 comparators, two for up to 8 */
|
||||
nr_inputs = sscanf(buf, "%lx %lx", &val1, &val2);
|
||||
if ((drvdata->numcidc > 4) && (nr_inputs != 2))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock(&drvdata->spinlock);
|
||||
|
@ -1903,6 +2155,7 @@ static ssize_t vmid_masks_store(struct device *dev,
|
|||
unsigned long val1, val2, mask;
|
||||
struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
|
||||
struct etmv4_config *config = &drvdata->config;
|
||||
int nr_inputs;
|
||||
|
||||
/*
|
||||
* only implemented when vmid tracing is enabled, i.e. at least one
|
||||
|
@ -1910,7 +2163,9 @@ static ssize_t vmid_masks_store(struct device *dev,
|
|||
*/
|
||||
if (!drvdata->vmid_size || !drvdata->numvmidc)
|
||||
return -EINVAL;
|
||||
if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
|
||||
/* one mask if <= 4 comparators, two for up to 8 */
|
||||
nr_inputs = sscanf(buf, "%lx %lx", &val1, &val2);
|
||||
if ((drvdata->numvmidc > 4) && (nr_inputs != 2))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock(&drvdata->spinlock);
|
||||
|
@ -2033,6 +2288,13 @@ static struct attribute *coresight_etmv4_attrs[] = {
|
|||
&dev_attr_addr_stop.attr,
|
||||
&dev_attr_addr_ctxtype.attr,
|
||||
&dev_attr_addr_context.attr,
|
||||
&dev_attr_addr_exlevel_s_ns.attr,
|
||||
&dev_attr_addr_cmp_view.attr,
|
||||
&dev_attr_vinst_pe_cmp_start_stop.attr,
|
||||
&dev_attr_sshot_idx.attr,
|
||||
&dev_attr_sshot_ctrl.attr,
|
||||
&dev_attr_sshot_pe_ctrl.attr,
|
||||
&dev_attr_sshot_status.attr,
|
||||
&dev_attr_seq_idx.attr,
|
||||
&dev_attr_seq_state.attr,
|
||||
&dev_attr_seq_event.attr,
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
#include <linux/stat.h>
|
||||
#include <linux/clk.h>
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/cpu_pm.h>
|
||||
#include <linux/coresight.h>
|
||||
#include <linux/coresight-pmu.h>
|
||||
#include <linux/pm_wakeup.h>
|
||||
|
@ -26,6 +27,7 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <linux/perf_event.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/property.h>
|
||||
#include <asm/sections.h>
|
||||
#include <asm/local.h>
|
||||
#include <asm/virt.h>
|
||||
|
@ -37,6 +39,15 @@ static int boot_enable;
|
|||
module_param(boot_enable, int, 0444);
|
||||
MODULE_PARM_DESC(boot_enable, "Enable tracing on boot");
|
||||
|
||||
#define PARAM_PM_SAVE_FIRMWARE 0 /* save self-hosted state as per firmware */
|
||||
#define PARAM_PM_SAVE_NEVER 1 /* never save any state */
|
||||
#define PARAM_PM_SAVE_SELF_HOSTED 2 /* save self-hosted state only */
|
||||
|
||||
static int pm_save_enable = PARAM_PM_SAVE_FIRMWARE;
|
||||
module_param(pm_save_enable, int, 0444);
|
||||
MODULE_PARM_DESC(pm_save_enable,
|
||||
"Save/restore state on power down: 1 = never, 2 = self-hosted");
|
||||
|
||||
/* The number of ETMv4 currently registered */
|
||||
static int etm4_count;
|
||||
static struct etmv4_drvdata *etmdrvdata[NR_CPUS];
|
||||
|
@ -54,6 +65,14 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
|
|||
isb();
|
||||
}
|
||||
|
||||
static void etm4_os_lock(struct etmv4_drvdata *drvdata)
|
||||
{
|
||||
/* Writing 0x1 to TRCOSLAR locks the trace registers */
|
||||
writel_relaxed(0x1, drvdata->base + TRCOSLAR);
|
||||
drvdata->os_unlock = false;
|
||||
isb();
|
||||
}
|
||||
|
||||
static bool etm4_arch_supported(u8 arch)
|
||||
{
|
||||
/* Mask out the minor version number */
|
||||
|
@ -149,6 +168,9 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
|
|||
drvdata->base + TRCRSCTLRn(i));
|
||||
|
||||
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
|
||||
/* always clear status bit on restart if using single-shot */
|
||||
if (config->ss_ctrl[i] || config->ss_pe_cmp[i])
|
||||
config->ss_status[i] &= ~BIT(31);
|
||||
writel_relaxed(config->ss_ctrl[i],
|
||||
drvdata->base + TRCSSCCRn(i));
|
||||
writel_relaxed(config->ss_status[i],
|
||||
|
@ -448,6 +470,9 @@ static void etm4_disable_hw(void *info)
|
|||
{
|
||||
u32 control;
|
||||
struct etmv4_drvdata *drvdata = info;
|
||||
struct etmv4_config *config = &drvdata->config;
|
||||
struct device *etm_dev = &drvdata->csdev->dev;
|
||||
int i;
|
||||
|
||||
CS_UNLOCK(drvdata->base);
|
||||
|
||||
|
@ -470,6 +495,18 @@ static void etm4_disable_hw(void *info)
|
|||
isb();
|
||||
writel_relaxed(control, drvdata->base + TRCPRGCTLR);
|
||||
|
||||
/* wait for TRCSTATR.PMSTABLE to go to '1' */
|
||||
if (coresight_timeout(drvdata->base, TRCSTATR,
|
||||
TRCSTATR_PMSTABLE_BIT, 1))
|
||||
dev_err(etm_dev,
|
||||
"timeout while waiting for PM stable Trace Status\n");
|
||||
|
||||
/* read the status of the single shot comparators */
|
||||
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
|
||||
config->ss_status[i] =
|
||||
readl_relaxed(drvdata->base + TRCSSCSRn(i));
|
||||
}
|
||||
|
||||
coresight_disclaim_device_unlocked(drvdata->base);
|
||||
|
||||
CS_LOCK(drvdata->base);
|
||||
|
@ -576,6 +613,7 @@ static void etm4_init_arch_data(void *info)
|
|||
u32 etmidr4;
|
||||
u32 etmidr5;
|
||||
struct etmv4_drvdata *drvdata = info;
|
||||
int i;
|
||||
|
||||
/* Make sure all registers are accessible */
|
||||
etm4_os_unlock(drvdata);
|
||||
|
@ -629,6 +667,7 @@ static void etm4_init_arch_data(void *info)
|
|||
* TRCARCHMAJ, bits[11:8] architecture major versin number
|
||||
	 */
	drvdata->arch = BMVAL(etmidr1, 4, 11);
	drvdata->config.arch = drvdata->arch;

	/* maximum size of resources */
	etmidr2 = readl_relaxed(drvdata->base + TRCIDR2);

@@ -698,9 +737,14 @@ static void etm4_init_arch_data(void *info)
	drvdata->nr_resource = BMVAL(etmidr4, 16, 19) + 1;
	/*
	 * NUMSSCC, bits[23:20] the number of single-shot
	 * comparator control for tracing
	 * comparator control for tracing. Read any status regs as these
	 * also contain RO capability data.
	 */
	drvdata->nr_ss_cmp = BMVAL(etmidr4, 20, 23);
	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
		drvdata->config.ss_status[i] =
			readl_relaxed(drvdata->base + TRCSSCSRn(i));
	}
	/* NUMCIDC, bits[27:24] number of Context ID comparators for tracing */
	drvdata->numcidc = BMVAL(etmidr4, 24, 27);
	/* NUMVMIDC, bits[31:28] number of VMID comparators for tracing */

@@ -780,6 +824,7 @@ static u64 etm4_get_ns_access_type(struct etmv4_config *config)
static u64 etm4_get_access_type(struct etmv4_config *config)
{
	u64 access_type = etm4_get_ns_access_type(config);
	u64 s_hyp = (config->arch & 0x0f) >= 0x4 ? ETM_EXLEVEL_S_HYP : 0;

	/*
	 * EXLEVEL_S, bits[11:8], don't trace anything happening

@@ -787,7 +832,8 @@ static u64 etm4_get_access_type(struct etmv4_config *config)
	 */
	access_type |= (ETM_EXLEVEL_S_APP |
			ETM_EXLEVEL_S_OS |
			ETM_EXLEVEL_S_HYP);
			s_hyp |
			ETM_EXLEVEL_S_MON);

	return access_type;
}

@@ -865,6 +911,7 @@ static void etm4_set_default_filter(struct etmv4_config *config)
	 * in the started state
	 */
	config->vinst_ctrl |= BIT(9);
	config->mode |= ETM_MODE_VIEWINST_STARTSTOP;

	/* No start-stop filtering for ViewInst */
	config->vissctlr = 0x0;

@@ -1085,6 +1132,288 @@ static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
	drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
}

#ifdef CONFIG_CPU_PM
static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
{
	int i, ret = 0;
	struct etmv4_save_state *state;
	struct device *etm_dev = &drvdata->csdev->dev;

	/*
	 * As recommended by 3.4.1 ("The procedure when powering down the PE")
	 * of ARM IHI 0064D
	 */
	dsb(sy);
	isb();

	CS_UNLOCK(drvdata->base);

	/* Lock the OS lock to disable trace and external debugger access */
	etm4_os_lock(drvdata);

	/* wait for TRCSTATR.PMSTABLE to go up */
	if (coresight_timeout(drvdata->base, TRCSTATR,
			      TRCSTATR_PMSTABLE_BIT, 1)) {
		dev_err(etm_dev,
			"timeout while waiting for PM Stable Status\n");
		etm4_os_unlock(drvdata);
		ret = -EBUSY;
		goto out;
	}

	state = drvdata->save_state;

	state->trcprgctlr = readl(drvdata->base + TRCPRGCTLR);
	state->trcprocselr = readl(drvdata->base + TRCPROCSELR);
	state->trcconfigr = readl(drvdata->base + TRCCONFIGR);
	state->trcauxctlr = readl(drvdata->base + TRCAUXCTLR);
	state->trceventctl0r = readl(drvdata->base + TRCEVENTCTL0R);
	state->trceventctl1r = readl(drvdata->base + TRCEVENTCTL1R);
	state->trcstallctlr = readl(drvdata->base + TRCSTALLCTLR);
	state->trctsctlr = readl(drvdata->base + TRCTSCTLR);
	state->trcsyncpr = readl(drvdata->base + TRCSYNCPR);
	state->trcccctlr = readl(drvdata->base + TRCCCCTLR);
	state->trcbbctlr = readl(drvdata->base + TRCBBCTLR);
	state->trctraceidr = readl(drvdata->base + TRCTRACEIDR);
	state->trcqctlr = readl(drvdata->base + TRCQCTLR);

	state->trcvictlr = readl(drvdata->base + TRCVICTLR);
	state->trcviiectlr = readl(drvdata->base + TRCVIIECTLR);
	state->trcvissctlr = readl(drvdata->base + TRCVISSCTLR);
	state->trcvipcssctlr = readl(drvdata->base + TRCVIPCSSCTLR);
	state->trcvdctlr = readl(drvdata->base + TRCVDCTLR);
	state->trcvdsacctlr = readl(drvdata->base + TRCVDSACCTLR);
	state->trcvdarcctlr = readl(drvdata->base + TRCVDARCCTLR);

	for (i = 0; i < drvdata->nrseqstate; i++)
		state->trcseqevr[i] = readl(drvdata->base + TRCSEQEVRn(i));

	state->trcseqrstevr = readl(drvdata->base + TRCSEQRSTEVR);
	state->trcseqstr = readl(drvdata->base + TRCSEQSTR);
	state->trcextinselr = readl(drvdata->base + TRCEXTINSELR);

	for (i = 0; i < drvdata->nr_cntr; i++) {
		state->trccntrldvr[i] = readl(drvdata->base + TRCCNTRLDVRn(i));
		state->trccntctlr[i] = readl(drvdata->base + TRCCNTCTLRn(i));
		state->trccntvr[i] = readl(drvdata->base + TRCCNTVRn(i));
	}

	for (i = 0; i < drvdata->nr_resource * 2; i++)
		state->trcrsctlr[i] = readl(drvdata->base + TRCRSCTLRn(i));

	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
		state->trcssccr[i] = readl(drvdata->base + TRCSSCCRn(i));
		state->trcsscsr[i] = readl(drvdata->base + TRCSSCSRn(i));
		state->trcsspcicr[i] = readl(drvdata->base + TRCSSPCICRn(i));
	}

	for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
		state->trcacvr[i] = readl(drvdata->base + TRCACVRn(i));
		state->trcacatr[i] = readl(drvdata->base + TRCACATRn(i));
	}

	/*
	 * Data trace stream is architecturally prohibited for A profile cores
	 * so we don't save (or later restore) trcdvcvr and trcdvcmr - As per
	 * section 1.3.4 ("Possible functional configurations of an ETMv4 trace
	 * unit") of ARM IHI 0064D.
	 */

	for (i = 0; i < drvdata->numcidc; i++)
		state->trccidcvr[i] = readl(drvdata->base + TRCCIDCVRn(i));

	for (i = 0; i < drvdata->numvmidc; i++)
		state->trcvmidcvr[i] = readl(drvdata->base + TRCVMIDCVRn(i));

	state->trccidcctlr0 = readl(drvdata->base + TRCCIDCCTLR0);
	state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);

	state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR0);
	state->trcvmidcctlr1 = readl(drvdata->base + TRCVMIDCCTLR1);

	state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR);

	state->trcpdcr = readl(drvdata->base + TRCPDCR);

	/* wait for TRCSTATR.IDLE to go up */
	if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) {
		dev_err(etm_dev,
			"timeout while waiting for Idle Trace Status\n");
		etm4_os_unlock(drvdata);
		ret = -EBUSY;
		goto out;
	}

	drvdata->state_needs_restore = true;

	/*
	 * Power can be removed from the trace unit now. We do this to
	 * potentially save power on systems that respect the TRCPDCR_PU
	 * despite requesting software to save/restore state.
	 */
	writel_relaxed((state->trcpdcr & ~TRCPDCR_PU),
		       drvdata->base + TRCPDCR);

out:
	CS_LOCK(drvdata->base);
	return ret;
}
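
Both bail-out paths above use coresight_timeout(base, offset, bit, value), which returns 0 once the selected TRCSTATR bit reaches the requested value and non-zero on timeout. A minimal sketch of equivalent polling (the iteration budget and the udelay() step size are assumptions about the helper, not its exact internals):

	static int example_poll_status_bit(void __iomem *base, u32 offset,
					   int position, int value)
	{
		int i;

		for (i = 100; i > 0; i--) {	/* bounded, roughly 100us budget */
			u32 val = readl_relaxed(base + offset);

			/* done when the selected bit matches the wanted state */
			if (!!(val & BIT(position)) == value)
				return 0;
			udelay(1);
		}
		return -EAGAIN;	/* the callers above turn this into -EBUSY */
	}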

static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
{
	int i;
	struct etmv4_save_state *state = drvdata->save_state;

	CS_UNLOCK(drvdata->base);

	writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);

	writel_relaxed(state->trcprgctlr, drvdata->base + TRCPRGCTLR);
	writel_relaxed(state->trcprocselr, drvdata->base + TRCPROCSELR);
	writel_relaxed(state->trcconfigr, drvdata->base + TRCCONFIGR);
	writel_relaxed(state->trcauxctlr, drvdata->base + TRCAUXCTLR);
	writel_relaxed(state->trceventctl0r, drvdata->base + TRCEVENTCTL0R);
	writel_relaxed(state->trceventctl1r, drvdata->base + TRCEVENTCTL1R);
	writel_relaxed(state->trcstallctlr, drvdata->base + TRCSTALLCTLR);
	writel_relaxed(state->trctsctlr, drvdata->base + TRCTSCTLR);
	writel_relaxed(state->trcsyncpr, drvdata->base + TRCSYNCPR);
	writel_relaxed(state->trcccctlr, drvdata->base + TRCCCCTLR);
	writel_relaxed(state->trcbbctlr, drvdata->base + TRCBBCTLR);
	writel_relaxed(state->trctraceidr, drvdata->base + TRCTRACEIDR);
	writel_relaxed(state->trcqctlr, drvdata->base + TRCQCTLR);

	writel_relaxed(state->trcvictlr, drvdata->base + TRCVICTLR);
	writel_relaxed(state->trcviiectlr, drvdata->base + TRCVIIECTLR);
	writel_relaxed(state->trcvissctlr, drvdata->base + TRCVISSCTLR);
	writel_relaxed(state->trcvipcssctlr, drvdata->base + TRCVIPCSSCTLR);
	writel_relaxed(state->trcvdctlr, drvdata->base + TRCVDCTLR);
	writel_relaxed(state->trcvdsacctlr, drvdata->base + TRCVDSACCTLR);
	writel_relaxed(state->trcvdarcctlr, drvdata->base + TRCVDARCCTLR);

	for (i = 0; i < drvdata->nrseqstate; i++)
		writel_relaxed(state->trcseqevr[i],
			       drvdata->base + TRCSEQEVRn(i));

	writel_relaxed(state->trcseqrstevr, drvdata->base + TRCSEQRSTEVR);
	writel_relaxed(state->trcseqstr, drvdata->base + TRCSEQSTR);
	writel_relaxed(state->trcextinselr, drvdata->base + TRCEXTINSELR);

	for (i = 0; i < drvdata->nr_cntr; i++) {
		writel_relaxed(state->trccntrldvr[i],
			       drvdata->base + TRCCNTRLDVRn(i));
		writel_relaxed(state->trccntctlr[i],
			       drvdata->base + TRCCNTCTLRn(i));
		writel_relaxed(state->trccntvr[i],
			       drvdata->base + TRCCNTVRn(i));
	}

	for (i = 0; i < drvdata->nr_resource * 2; i++)
		writel_relaxed(state->trcrsctlr[i],
			       drvdata->base + TRCRSCTLRn(i));

	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
		writel_relaxed(state->trcssccr[i],
			       drvdata->base + TRCSSCCRn(i));
		writel_relaxed(state->trcsscsr[i],
			       drvdata->base + TRCSSCSRn(i));
		writel_relaxed(state->trcsspcicr[i],
			       drvdata->base + TRCSSPCICRn(i));
	}

	for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
		writel_relaxed(state->trcacvr[i],
			       drvdata->base + TRCACVRn(i));
		writel_relaxed(state->trcacatr[i],
			       drvdata->base + TRCACATRn(i));
	}

	for (i = 0; i < drvdata->numcidc; i++)
		writel_relaxed(state->trccidcvr[i],
			       drvdata->base + TRCCIDCVRn(i));

	for (i = 0; i < drvdata->numvmidc; i++)
		writel_relaxed(state->trcvmidcvr[i],
			       drvdata->base + TRCVMIDCVRn(i));

	writel_relaxed(state->trccidcctlr0, drvdata->base + TRCCIDCCTLR0);
	writel_relaxed(state->trccidcctlr1, drvdata->base + TRCCIDCCTLR1);

	writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR0);
	writel_relaxed(state->trcvmidcctlr1, drvdata->base + TRCVMIDCCTLR1);

	writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);

	writel_relaxed(state->trcpdcr, drvdata->base + TRCPDCR);

	drvdata->state_needs_restore = false;

	/*
	 * As recommended by section 4.3.7 ("Synchronization when using the
	 * memory-mapped interface") of ARM IHI 0064D
	 */
	dsb(sy);
	isb();

	/* Unlock the OS lock to re-enable trace and external debug access */
	etm4_os_unlock(drvdata);
	CS_LOCK(drvdata->base);
}

static int etm4_cpu_pm_notify(struct notifier_block *nb, unsigned long cmd,
			      void *v)
{
	struct etmv4_drvdata *drvdata;
	unsigned int cpu = smp_processor_id();

	if (!etmdrvdata[cpu])
		return NOTIFY_OK;

	drvdata = etmdrvdata[cpu];

	if (!drvdata->save_state)
		return NOTIFY_OK;

	if (WARN_ON_ONCE(drvdata->cpu != cpu))
		return NOTIFY_BAD;

	switch (cmd) {
	case CPU_PM_ENTER:
		/* save the state if self-hosted coresight is in use */
		if (local_read(&drvdata->mode))
			if (etm4_cpu_save(drvdata))
				return NOTIFY_BAD;
		break;
	case CPU_PM_EXIT:
		/* fallthrough */
	case CPU_PM_ENTER_FAILED:
		if (drvdata->state_needs_restore)
			etm4_cpu_restore(drvdata);
		break;
	default:
		return NOTIFY_DONE;
	}

	return NOTIFY_OK;
}

static struct notifier_block etm4_cpu_pm_nb = {
	.notifier_call = etm4_cpu_pm_notify,
};

static int etm4_cpu_pm_register(void)
{
	return cpu_pm_register_notifier(&etm4_cpu_pm_nb);
}

static void etm4_cpu_pm_unregister(void)
{
	cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
}
#else
static int etm4_cpu_pm_register(void) { return 0; }
static void etm4_cpu_pm_unregister(void) { }
#endif
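
The notifier above is driven by the kernel's CPU PM chain: a cpuidle or suspend path calls cpu_pm_enter() before a context-losing low-power state and cpu_pm_exit() on the way back out, and those calls are what deliver CPU_PM_ENTER and CPU_PM_EXIT to etm4_cpu_pm_notify(). A minimal sketch of that flow (the idle callback itself is hypothetical; cpu_pm_enter()/cpu_pm_exit() are the real entry points):

	static int example_idle_enter(void)
	{
		int ret;

		ret = cpu_pm_enter();	/* fires CPU_PM_ENTER -> etm4_cpu_save() */
		if (ret)
			return ret;	/* a NOTIFY_BAD save aborts the entry and
					 * CPU_PM_ENTER_FAILED is broadcast instead */

		/* ... enter the context-losing low-power state here ... */

		cpu_pm_exit();		/* fires CPU_PM_EXIT -> etm4_cpu_restore() */
		return 0;
	}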

static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
{
	int ret;

@@ -1101,6 +1430,17 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)

	dev_set_drvdata(dev, drvdata);

	if (pm_save_enable == PARAM_PM_SAVE_FIRMWARE)
		pm_save_enable = coresight_loses_context_with_cpu(dev) ?
			       PARAM_PM_SAVE_SELF_HOSTED : PARAM_PM_SAVE_NEVER;

	if (pm_save_enable != PARAM_PM_SAVE_NEVER) {
		drvdata->save_state = devm_kmalloc(dev,
				sizeof(struct etmv4_save_state), GFP_KERNEL);
		if (!drvdata->save_state)
			return -ENOMEM;
	}

	/* Validity for the resource is already checked by the AMBA core */
	base = devm_ioremap_resource(dev, res);
	if (IS_ERR(base))

@@ -1135,6 +1475,10 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
		if (ret < 0)
			goto err_arch_supported;
		hp_online = ret;

		ret = etm4_cpu_pm_register();
		if (ret)
			goto err_arch_supported;
	}

	cpus_read_unlock();

@@ -1185,6 +1529,8 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)

err_arch_supported:
	if (--etm4_count == 0) {
		etm4_cpu_pm_unregister();

		cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
		if (hp_online)
			cpuhp_remove_state_nocalls(hp_online);

@@ -1211,6 +1557,7 @@ static const struct amba_id etm4_ids[] = {
	CS_AMBA_UCI_ID(0x000f0211, uci_id_etm4),/* Qualcomm Kryo */
	CS_AMBA_ID(0x000bb802),			/* Qualcomm Kryo 385 Cortex-A55 */
	CS_AMBA_ID(0x000bb803),			/* Qualcomm Kryo 385 Cortex-A75 */
	CS_AMBA_UCI_ID(0x000cc0af, uci_id_etm4),/* Marvell ThunderX2 */
	{},
};

@@ -175,22 +175,28 @@
					 ETM_MODE_EXCL_USER)

#define TRCSTATR_IDLE_BIT		0
#define TRCSTATR_PMSTABLE_BIT		1
#define ETM_DEFAULT_ADDR_COMP		0

/* PowerDown Control Register bits */
#define TRCPDCR_PU			BIT(3)

/* secure state access levels */
/* secure state access levels - TRCACATRn */
#define ETM_EXLEVEL_S_APP		BIT(8)
#define ETM_EXLEVEL_S_OS		BIT(9)
#define ETM_EXLEVEL_S_NA		BIT(10)
#define ETM_EXLEVEL_S_HYP		BIT(11)
/* non-secure state access levels */
#define ETM_EXLEVEL_S_HYP		BIT(10)
#define ETM_EXLEVEL_S_MON		BIT(11)
/* non-secure state access levels - TRCACATRn */
#define ETM_EXLEVEL_NS_APP		BIT(12)
#define ETM_EXLEVEL_NS_OS		BIT(13)
#define ETM_EXLEVEL_NS_HYP		BIT(14)
#define ETM_EXLEVEL_NS_NA		BIT(15)

/* secure / non secure masks - TRCVICTLR, IDR3 */
#define ETM_EXLEVEL_S_VICTLR_MASK	GENMASK(19, 16)
/* NS MON (EL3) mode never implemented */
#define ETM_EXLEVEL_NS_VICTLR_MASK	GENMASK(22, 20)

/**
 * struct etmv4_config - configuration information related to an ETMv4
 * @mode:	Controls various modes supported by this ETM.

@@ -221,6 +227,7 @@
 * @cntr_val:	Sets or returns the value for a counter.
 * @res_idx:	Resource index selector.
 * @res_ctrl:	Controls the selection of the resources in the trace unit.
 * @ss_idx:	Single-shot index selector.
 * @ss_ctrl:	Controls the corresponding single-shot comparator resource.
 * @ss_status:	The status of the corresponding single-shot comparator.
 * @ss_pe_cmp:	Selects the PE comparator inputs for Single-shot control.

@@ -237,6 +244,7 @@
 * @vmid_mask0:	VM ID comparator mask for comparator 0-3.
 * @vmid_mask1:	VM ID comparator mask for comparator 4-7.
 * @ext_inp:	External input selection.
 * @arch:	ETM architecture version (for arch dependent config).
 */
struct etmv4_config {
	u32		mode;

@@ -263,6 +271,7 @@ struct etmv4_config {
	u32		cntr_val[ETMv4_MAX_CNTR];
	u8		res_idx;
	u32		res_ctrl[ETM_MAX_RES_SEL];
	u8		ss_idx;
	u32		ss_ctrl[ETM_MAX_SS_CMP];
	u32		ss_status[ETM_MAX_SS_CMP];
	u32		ss_pe_cmp[ETM_MAX_SS_CMP];

@@ -279,6 +288,66 @@ struct etmv4_config {
	u32		vmid_mask0;
	u32		vmid_mask1;
	u32		ext_inp;
	u8		arch;
};

/**
 * struct etmv4_save_state - state to be preserved when ETM is without power
 */
struct etmv4_save_state {
	u32	trcprgctlr;
	u32	trcprocselr;
	u32	trcconfigr;
	u32	trcauxctlr;
	u32	trceventctl0r;
	u32	trceventctl1r;
	u32	trcstallctlr;
	u32	trctsctlr;
	u32	trcsyncpr;
	u32	trcccctlr;
	u32	trcbbctlr;
	u32	trctraceidr;
	u32	trcqctlr;

	u32	trcvictlr;
	u32	trcviiectlr;
	u32	trcvissctlr;
	u32	trcvipcssctlr;
	u32	trcvdctlr;
	u32	trcvdsacctlr;
	u32	trcvdarcctlr;

	u32	trcseqevr[ETM_MAX_SEQ_STATES];
	u32	trcseqrstevr;
	u32	trcseqstr;
	u32	trcextinselr;
	u32	trccntrldvr[ETMv4_MAX_CNTR];
	u32	trccntctlr[ETMv4_MAX_CNTR];
	u32	trccntvr[ETMv4_MAX_CNTR];

	u32	trcrsctlr[ETM_MAX_RES_SEL * 2];

	u32	trcssccr[ETM_MAX_SS_CMP];
	u32	trcsscsr[ETM_MAX_SS_CMP];
	u32	trcsspcicr[ETM_MAX_SS_CMP];

	u64	trcacvr[ETM_MAX_SINGLE_ADDR_CMP];
	u64	trcacatr[ETM_MAX_SINGLE_ADDR_CMP];
	u64	trccidcvr[ETMv4_MAX_CTXID_CMP];
	u32	trcvmidcvr[ETM_MAX_VMID_CMP];
	u32	trccidcctlr0;
	u32	trccidcctlr1;
	u32	trcvmidcctlr0;
	u32	trcvmidcctlr1;

	u32	trcclaimset;

	u32	cntr_val[ETMv4_MAX_CNTR];
	u32	seq_state;
	u32	vinst_ctrl;
	u32	ss_status[ETM_MAX_SS_CMP];

	u32	trcpdcr;
};

/**

@@ -336,6 +405,8 @@ struct etmv4_config {
 * @atbtrig:	If the implementation can support ATB triggers
 * @lpoverride:	If the implementation can support low-power state over.
 * @config:	structure holding configuration parameters.
 * @save_state:	State to be preserved across power loss
 * @state_needs_restore: True when there is context to restore after PM exit
 */
struct etmv4_drvdata {
	void __iomem			*base;

@@ -381,6 +452,8 @@ struct etmv4_drvdata {
	bool				atbtrig;
	bool				lpoverride;
	struct etmv4_config		config;
	struct etmv4_save_state		*save_state;
	bool				state_needs_restore;
};

/* Address comparator access types */

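These renumbered bits pair with the etm4_get_access_type() hunk earlier in this series: secure HYP moves from BIT(11) to BIT(10), BIT(11) becomes the new secure MONITOR level, and HYP is only excluded from tracing when the ETM architecture minor revision implies secure EL2 exists. A worked sketch of the resulting mask (illustrative helper, not driver code):

	static u64 example_secure_exlevel_mask(u8 arch)
	{
		/* minor revision >= 4 (e.g. arch == 0x44, ETMv4.4): secure EL2 */
		u64 s_hyp = (arch & 0x0f) >= 0x4 ? ETM_EXLEVEL_S_HYP : 0;

		/* 0x44 -> BIT(8)|BIT(9)|BIT(10)|BIT(11) == 0xf00;
		 * 0x41 -> BIT(8)|BIT(9)|BIT(11)         == 0xb00 */
		return ETM_EXLEVEL_S_APP | ETM_EXLEVEL_S_OS | s_hyp |
		       ETM_EXLEVEL_S_MON;
	}
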
@@ -38,12 +38,14 @@ DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel");
 * @atclk:	optional clock for the core parts of the funnel.
 * @csdev:	component vitals needed by the framework.
 * @priority:	port selection order.
 * @spinlock:	serialize enable/disable operations.
 */
struct funnel_drvdata {
	void __iomem		*base;
	struct clk		*atclk;
	struct coresight_device	*csdev;
	unsigned long		priority;
	spinlock_t		spinlock;
};

static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port)

@@ -76,11 +78,21 @@ static int funnel_enable(struct coresight_device *csdev, int inport,
{
	int rc = 0;
	struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	unsigned long flags;
	bool first_enable = false;

	if (drvdata->base)
		rc = dynamic_funnel_enable_hw(drvdata, inport);

	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (atomic_read(&csdev->refcnt[inport]) == 0) {
		if (drvdata->base)
			rc = dynamic_funnel_enable_hw(drvdata, inport);
		if (!rc)
			first_enable = true;
	}
	if (!rc)
		atomic_inc(&csdev->refcnt[inport]);
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	if (first_enable)
		dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", inport);
	return rc;
}

@@ -107,11 +119,19 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
			   int outport)
{
	struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	unsigned long flags;
	bool last_disable = false;

	if (drvdata->base)
		dynamic_funnel_disable_hw(drvdata, inport);
	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (atomic_dec_return(&csdev->refcnt[inport]) == 0) {
		if (drvdata->base)
			dynamic_funnel_disable_hw(drvdata, inport);
		last_disable = true;
	}
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport);
	if (last_disable)
		dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport);
}

static const struct coresight_ops_link funnel_link_ops = {

@@ -233,6 +253,7 @@ static int funnel_probe(struct device *dev, struct resource *res)
	}
	dev->platform_data = pdata;

	spin_lock_init(&drvdata->spinlock);
	desc.type = CORESIGHT_DEV_TYPE_LINK;
	desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG;
	desc.ops = &funnel_cs_ops;

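This funnel change, and the replicator and TMC-ETF hunks below, all fix the same race: the hardware used to be programmed outside the spinlock and unconditionally, even when other users already held the port. The common shape, reduced to a hedged generic sketch (names are illustrative):

	static int example_refcounted_enable(atomic_t *refcnt, spinlock_t *lock,
					     int (*hw_enable)(void *), void *hw)
	{
		unsigned long flags;
		int rc = 0;

		spin_lock_irqsave(lock, flags);
		if (atomic_read(refcnt) == 0)	/* only the first user programs hw */
			rc = hw_enable(hw);
		if (!rc)
			atomic_inc(refcnt);	/* every successful caller is counted */
		spin_unlock_irqrestore(lock, flags);

		return rc;
	}

Disable mirrors it: atomic_dec_return() reaching zero gates the hardware teardown, so only the last user switches the port off.
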
@@ -31,11 +31,13 @@ DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator");
 *		whether this one is programmable or not.
 * @atclk:	optional clock for the core parts of the replicator.
 * @csdev:	component vitals needed by the framework
 * @spinlock:	serialize enable/disable operations.
 */
struct replicator_drvdata {
	void __iomem		*base;
	struct clk		*atclk;
	struct coresight_device	*csdev;
	spinlock_t		spinlock;
};

static void dynamic_replicator_reset(struct replicator_drvdata *drvdata)

@@ -97,10 +99,22 @@ static int replicator_enable(struct coresight_device *csdev, int inport,
{
	int rc = 0;
	struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	unsigned long flags;
	bool first_enable = false;

	if (drvdata->base)
		rc = dynamic_replicator_enable(drvdata, inport, outport);
	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (atomic_read(&csdev->refcnt[outport]) == 0) {
		if (drvdata->base)
			rc = dynamic_replicator_enable(drvdata, inport,
						       outport);
		if (!rc)
			first_enable = true;
	}
	if (!rc)
		atomic_inc(&csdev->refcnt[outport]);
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	if (first_enable)
		dev_dbg(&csdev->dev, "REPLICATOR enabled\n");
	return rc;
}

@@ -137,10 +151,19 @@ static void replicator_disable(struct coresight_device *csdev, int inport,
			       int outport)
{
	struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	unsigned long flags;
	bool last_disable = false;

	if (drvdata->base)
		dynamic_replicator_disable(drvdata, inport, outport);
	dev_dbg(&csdev->dev, "REPLICATOR disabled\n");
	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (atomic_dec_return(&csdev->refcnt[outport]) == 0) {
		if (drvdata->base)
			dynamic_replicator_disable(drvdata, inport, outport);
		last_disable = true;
	}
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	if (last_disable)
		dev_dbg(&csdev->dev, "REPLICATOR disabled\n");
}

static const struct coresight_ops_link replicator_link_ops = {

@@ -225,6 +248,7 @@ static int replicator_probe(struct device *dev, struct resource *res)
	}
	dev->platform_data = pdata;

	spin_lock_init(&drvdata->spinlock);
	desc.type = CORESIGHT_DEV_TYPE_LINK;
	desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT;
	desc.ops = &replicator_cs_ops;

@@ -334,9 +334,10 @@ static int tmc_disable_etf_sink(struct coresight_device *csdev)
static int tmc_enable_etf_link(struct coresight_device *csdev,
			       int inport, int outport)
{
	int ret;
	int ret = 0;
	unsigned long flags;
	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	bool first_enable = false;

	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (drvdata->reading) {

@@ -344,12 +345,18 @@ static int tmc_enable_etf_link(struct coresight_device *csdev,
		return -EBUSY;
	}

	ret = tmc_etf_enable_hw(drvdata);
	if (atomic_read(&csdev->refcnt[0]) == 0) {
		ret = tmc_etf_enable_hw(drvdata);
		if (!ret) {
			drvdata->mode = CS_MODE_SYSFS;
			first_enable = true;
		}
	}
	if (!ret)
		drvdata->mode = CS_MODE_SYSFS;
		atomic_inc(&csdev->refcnt[0]);
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	if (!ret)
	if (first_enable)
		dev_dbg(&csdev->dev, "TMC-ETF enabled\n");
	return ret;
}

@@ -359,6 +366,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
{
	unsigned long flags;
	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	bool last_disable = false;

	spin_lock_irqsave(&drvdata->spinlock, flags);
	if (drvdata->reading) {

@@ -366,11 +374,15 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
		return;
	}

	tmc_etf_disable_hw(drvdata);
	drvdata->mode = CS_MODE_DISABLED;
	if (atomic_dec_return(&csdev->refcnt[0]) == 0) {
		tmc_etf_disable_hw(drvdata);
		drvdata->mode = CS_MODE_DISABLED;
		last_disable = true;
	}
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	dev_dbg(&csdev->dev, "TMC-ETF disabled\n");
	if (last_disable)
		dev_dbg(&csdev->dev, "TMC-ETF disabled\n");
}

static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,

@@ -253,9 +253,9 @@ static int coresight_enable_link(struct coresight_device *csdev,
				 struct coresight_device *parent,
				 struct coresight_device *child)
{
	int ret;
	int ret = 0;
	int link_subtype;
	int refport, inport, outport;
	int inport, outport;

	if (!parent || !child)
		return -EINVAL;

@@ -264,29 +264,17 @@ static int coresight_enable_link(struct coresight_device *csdev,
	outport = coresight_find_link_outport(csdev, child);
	link_subtype = csdev->subtype.link_subtype;

	if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG)
		refport = inport;
	else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT)
		refport = outport;
	else
		refport = 0;
	if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG && inport < 0)
		return inport;
	if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT && outport < 0)
		return outport;

	if (refport < 0)
		return refport;
	if (link_ops(csdev)->enable)
		ret = link_ops(csdev)->enable(csdev, inport, outport);
	if (!ret)
		csdev->enable = true;

	if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
		if (link_ops(csdev)->enable) {
			ret = link_ops(csdev)->enable(csdev, inport, outport);
			if (ret) {
				atomic_dec(&csdev->refcnt[refport]);
				return ret;
			}
		}
	}

	csdev->enable = true;

	return 0;
	return ret;
}

static void coresight_disable_link(struct coresight_device *csdev,

@@ -295,7 +283,7 @@ static void coresight_disable_link(struct coresight_device *csdev,
{
	int i, nr_conns;
	int link_subtype;
	int refport, inport, outport;
	int inport, outport;

	if (!parent || !child)
		return;

@@ -305,20 +293,15 @@ static void coresight_disable_link(struct coresight_device *csdev,
	link_subtype = csdev->subtype.link_subtype;

	if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) {
		refport = inport;
		nr_conns = csdev->pdata->nr_inport;
	} else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) {
		refport = outport;
		nr_conns = csdev->pdata->nr_outport;
	} else {
		refport = 0;
		nr_conns = 1;
	}

	if (atomic_dec_return(&csdev->refcnt[refport]) == 0) {
		if (link_ops(csdev)->disable)
			link_ops(csdev)->disable(csdev, inport, outport);
	}
	if (link_ops(csdev)->disable)
		link_ops(csdev)->disable(csdev, inport, outport);

	for (i = 0; i < nr_conns; i++)
		if (atomic_read(&csdev->refcnt[i]) != 0)

@@ -1308,6 +1291,12 @@ static inline int coresight_search_device_idx(struct coresight_dev_list *dict,
	return -ENOENT;
}

bool coresight_loses_context_with_cpu(struct device *dev)
{
	return fwnode_property_present(dev_fwnode(dev),
				       "arm,coresight-loses-context-with-cpu");
}

/*
 * coresight_alloc_device_name - Get an index for a given device in the
 * device index list specific to a driver. An index is allocated for a

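coresight_loses_context_with_cpu() only tests a firmware property, so opting a platform into the self-hosted save/restore added above is purely a firmware description change. A hypothetical devicetree fragment (node name, address and unit values are invented for illustration; the property string is the one tested here):

	etm@7c4000 {
		compatible = "arm,coresight-etm4x", "arm,primecell";
		reg = <0x7c4000 0x1000>;
		arm,coresight-loses-context-with-cpu;
	};
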
@@ -649,10 +649,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
	}

	err = intel_th_device_add_resources(thdev, res, subdev->nres);
	if (err) {
		put_device(&thdev->dev);
	if (err)
		goto fail_put_device;
	}

	if (subdev->type == INTEL_TH_OUTPUT) {
		if (subdev->mknode)

@@ -667,10 +665,8 @@ intel_th_subdevice_alloc(struct intel_th *th,
	}

	err = device_add(&thdev->dev);
	if (err) {
		put_device(&thdev->dev);
	if (err)
		goto fail_free_res;
	}

	/* need switch driver to be loaded to enumerate the rest */
	if (subdev->type == INTEL_TH_SWITCH && !req) {

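Both hunks replace an inline put_device()-plus-return with jumps to shared labels, the kernel's usual error-unwind idiom: acquire in order, release in reverse from labels at the bottom, with a single point that drops the device reference. A hedged sketch of the resulting shape (the example_* helpers are placeholders, not intel_th functions):

	static int example_add_resources(struct device *dev) { return 0; }	/* placeholder */
	static void example_free_resources(struct device *dev) { }		/* placeholder */

	static int example_register(struct device *dev)
	{
		int err;

		err = example_add_resources(dev);
		if (err)
			goto fail_put_device;

		err = device_add(dev);
		if (err)
			goto fail_free_res;

		return 0;

	fail_free_res:
		example_free_resources(dev);	/* undo in reverse order */
	fail_put_device:
		put_device(dev);		/* single place the ref is dropped */
		return err;
	}
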
@@ -209,6 +209,16 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},
	{
		/* Ice Lake CPU */
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x8a29),
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},
	{
		/* Tiger Lake CPU */
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9a33),
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},
	{
		/* Tiger Lake PCH */
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6),

@@ -345,7 +345,11 @@ void stp_policy_unbind(struct stp_policy *policy)
	stm->policy = NULL;
	policy->stm = NULL;

	/*
	 * Drop the reference on the protocol driver and lose the link.
	 */
	stm_put_protocol(stm->pdrv);
	stm->pdrv = NULL;
	stm_put_device(stm);
}

@@ -167,3 +167,4 @@ MODULE_AUTHOR("Johannes Thumshirn <johannes.thumshirn@men.de>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("IIO ADC driver for MEN 16z188 ADC Core");
MODULE_ALIAS("mcb:16z188");
MODULE_IMPORT_NS(MCB);

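The new MODULE_IMPORT_NS(MCB) line pairs with the MCB core exporting its symbols into the MCB symbol namespace (the symbol-namespace machinery introduced in v5.4). A sketch of the two sides, with an illustrative exported symbol name:

	/* provider side (MCB core): place a symbol in the MCB namespace */
	EXPORT_SYMBOL_NS(example_mcb_func, MCB);	/* symbol name illustrative */

	/* consumer side (this driver), as in the hunk above */
	MODULE_IMPORT_NS(MCB);

Without the import, modpost flags the use of a namespaced symbol and the module is refused at load time.
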
@@ -5,6 +5,15 @@ config INTERCONNECT_QCOM
	help
	  Support for Qualcomm's Network-on-Chip interconnect hardware.

config INTERCONNECT_QCOM_MSM8974
	tristate "Qualcomm MSM8974 interconnect driver"
	depends on INTERCONNECT_QCOM
	depends on QCOM_SMD_RPM
	select INTERCONNECT_QCOM_SMD_RPM
	help
	  This is a driver for the Qualcomm Network-on-Chip on msm8974-based
	  platforms.

config INTERCONNECT_QCOM_QCS404
	tristate "Qualcomm QCS404 interconnect driver"
	depends on INTERCONNECT_QCOM

@@ -1,9 +1,11 @@
# SPDX-License-Identifier: GPL-2.0

qnoc-msm8974-objs			:= msm8974.o
qnoc-qcs404-objs			:= qcs404.o
qnoc-sdm845-objs			:= sdm845.o
icc-smd-rpm-objs			:= smd-rpm.o

obj-$(CONFIG_INTERCONNECT_QCOM_MSM8974) += qnoc-msm8974.o
obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += icc-smd-rpm.o

@@ -0,0 +1,784 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2019 Brian Masney <masneyb@onstation.org>
 *
 * Based on MSM bus code from downstream MSM kernel sources.
 * Copyright (c) 2012-2013 The Linux Foundation. All rights reserved.
 *
 * Based on qcs404.c
 * Copyright (C) 2019 Linaro Ltd
 *
 * Here's a rough representation that shows the various buses that form the
 * Network On Chip (NOC) for the msm8974:
 *
 *                         Multimedia Subsystem (MMSS)
 *         |----------+-----------------------------------+-----------|
 *                    |                                    |
 *                    |                                    |
 *        Config      |                     Bus Interface  |  Memory Controller
 *       |------------+-+-----------|       |------------+-+-----------|
 *                      |                                 |
 *                      |                                 |
 *                      |             System              |
 *     |--------------+-+---------------------------------+-+-------------|
 *                    |                                    |
 *                    |                                    |
 *       Peripheral   |               On Chip              |  Memory (OCMEM)
 *      |------------+-------------|       |------------+-------------|
 */

#include <dt-bindings/interconnect/qcom,msm8974.h>
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/interconnect-provider.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#include "smd-rpm.h"

enum {
	MSM8974_BIMC_MAS_AMPSS_M0 = 1,
	MSM8974_BIMC_MAS_AMPSS_M1,
	MSM8974_BIMC_MAS_MSS_PROC,
	MSM8974_BIMC_TO_MNOC,
	MSM8974_BIMC_TO_SNOC,
	MSM8974_BIMC_SLV_EBI_CH0,
	MSM8974_BIMC_SLV_AMPSS_L2,
	MSM8974_CNOC_MAS_RPM_INST,
	MSM8974_CNOC_MAS_RPM_DATA,
	MSM8974_CNOC_MAS_RPM_SYS,
	MSM8974_CNOC_MAS_DEHR,
	MSM8974_CNOC_MAS_QDSS_DAP,
	MSM8974_CNOC_MAS_SPDM,
	MSM8974_CNOC_MAS_TIC,
	MSM8974_CNOC_SLV_CLK_CTL,
	MSM8974_CNOC_SLV_CNOC_MSS,
	MSM8974_CNOC_SLV_SECURITY,
	MSM8974_CNOC_SLV_TCSR,
	MSM8974_CNOC_SLV_TLMM,
	MSM8974_CNOC_SLV_CRYPTO_0_CFG,
	MSM8974_CNOC_SLV_CRYPTO_1_CFG,
	MSM8974_CNOC_SLV_IMEM_CFG,
	MSM8974_CNOC_SLV_MESSAGE_RAM,
	MSM8974_CNOC_SLV_BIMC_CFG,
	MSM8974_CNOC_SLV_BOOT_ROM,
	MSM8974_CNOC_SLV_PMIC_ARB,
	MSM8974_CNOC_SLV_SPDM_WRAPPER,
	MSM8974_CNOC_SLV_DEHR_CFG,
	MSM8974_CNOC_SLV_MPM,
	MSM8974_CNOC_SLV_QDSS_CFG,
	MSM8974_CNOC_SLV_RBCPR_CFG,
	MSM8974_CNOC_SLV_RBCPR_QDSS_APU_CFG,
	MSM8974_CNOC_TO_SNOC,
	MSM8974_CNOC_SLV_CNOC_ONOC_CFG,
	MSM8974_CNOC_SLV_CNOC_MNOC_MMSS_CFG,
	MSM8974_CNOC_SLV_CNOC_MNOC_CFG,
	MSM8974_CNOC_SLV_PNOC_CFG,
	MSM8974_CNOC_SLV_SNOC_MPU_CFG,
	MSM8974_CNOC_SLV_SNOC_CFG,
	MSM8974_CNOC_SLV_EBI1_DLL_CFG,
	MSM8974_CNOC_SLV_PHY_APU_CFG,
	MSM8974_CNOC_SLV_EBI1_PHY_CFG,
	MSM8974_CNOC_SLV_RPM,
	MSM8974_CNOC_SLV_SERVICE_CNOC,
	MSM8974_MNOC_MAS_GRAPHICS_3D,
	MSM8974_MNOC_MAS_JPEG,
	MSM8974_MNOC_MAS_MDP_PORT0,
	MSM8974_MNOC_MAS_VIDEO_P0,
	MSM8974_MNOC_MAS_VIDEO_P1,
	MSM8974_MNOC_MAS_VFE,
	MSM8974_MNOC_TO_CNOC,
	MSM8974_MNOC_TO_BIMC,
	MSM8974_MNOC_SLV_CAMERA_CFG,
	MSM8974_MNOC_SLV_DISPLAY_CFG,
	MSM8974_MNOC_SLV_OCMEM_CFG,
	MSM8974_MNOC_SLV_CPR_CFG,
	MSM8974_MNOC_SLV_CPR_XPU_CFG,
	MSM8974_MNOC_SLV_MISC_CFG,
	MSM8974_MNOC_SLV_MISC_XPU_CFG,
	MSM8974_MNOC_SLV_VENUS_CFG,
	MSM8974_MNOC_SLV_GRAPHICS_3D_CFG,
	MSM8974_MNOC_SLV_MMSS_CLK_CFG,
	MSM8974_MNOC_SLV_MMSS_CLK_XPU_CFG,
	MSM8974_MNOC_SLV_MNOC_MPU_CFG,
	MSM8974_MNOC_SLV_ONOC_MPU_CFG,
	MSM8974_MNOC_SLV_SERVICE_MNOC,
	MSM8974_OCMEM_NOC_TO_OCMEM_VNOC,
	MSM8974_OCMEM_MAS_JPEG_OCMEM,
	MSM8974_OCMEM_MAS_MDP_OCMEM,
	MSM8974_OCMEM_MAS_VIDEO_P0_OCMEM,
	MSM8974_OCMEM_MAS_VIDEO_P1_OCMEM,
	MSM8974_OCMEM_MAS_VFE_OCMEM,
	MSM8974_OCMEM_MAS_CNOC_ONOC_CFG,
	MSM8974_OCMEM_SLV_SERVICE_ONOC,
	MSM8974_OCMEM_VNOC_TO_SNOC,
	MSM8974_OCMEM_VNOC_TO_OCMEM_NOC,
	MSM8974_OCMEM_VNOC_MAS_GFX3D,
	MSM8974_OCMEM_SLV_OCMEM,
	MSM8974_PNOC_MAS_PNOC_CFG,
	MSM8974_PNOC_MAS_SDCC_1,
	MSM8974_PNOC_MAS_SDCC_3,
	MSM8974_PNOC_MAS_SDCC_4,
	MSM8974_PNOC_MAS_SDCC_2,
	MSM8974_PNOC_MAS_TSIF,
	MSM8974_PNOC_MAS_BAM_DMA,
	MSM8974_PNOC_MAS_BLSP_2,
	MSM8974_PNOC_MAS_USB_HSIC,
	MSM8974_PNOC_MAS_BLSP_1,
	MSM8974_PNOC_MAS_USB_HS,
	MSM8974_PNOC_TO_SNOC,
	MSM8974_PNOC_SLV_SDCC_1,
	MSM8974_PNOC_SLV_SDCC_3,
	MSM8974_PNOC_SLV_SDCC_2,
	MSM8974_PNOC_SLV_SDCC_4,
	MSM8974_PNOC_SLV_TSIF,
	MSM8974_PNOC_SLV_BAM_DMA,
	MSM8974_PNOC_SLV_BLSP_2,
	MSM8974_PNOC_SLV_USB_HSIC,
	MSM8974_PNOC_SLV_BLSP_1,
	MSM8974_PNOC_SLV_USB_HS,
	MSM8974_PNOC_SLV_PDM,
	MSM8974_PNOC_SLV_PERIPH_APU_CFG,
	MSM8974_PNOC_SLV_PNOC_MPU_CFG,
	MSM8974_PNOC_SLV_PRNG,
	MSM8974_PNOC_SLV_SERVICE_PNOC,
	MSM8974_SNOC_MAS_LPASS_AHB,
	MSM8974_SNOC_MAS_QDSS_BAM,
	MSM8974_SNOC_MAS_SNOC_CFG,
	MSM8974_SNOC_TO_BIMC,
	MSM8974_SNOC_TO_CNOC,
	MSM8974_SNOC_TO_PNOC,
	MSM8974_SNOC_TO_OCMEM_VNOC,
	MSM8974_SNOC_MAS_CRYPTO_CORE0,
	MSM8974_SNOC_MAS_CRYPTO_CORE1,
	MSM8974_SNOC_MAS_LPASS_PROC,
	MSM8974_SNOC_MAS_MSS,
	MSM8974_SNOC_MAS_MSS_NAV,
	MSM8974_SNOC_MAS_OCMEM_DMA,
	MSM8974_SNOC_MAS_WCSS,
	MSM8974_SNOC_MAS_QDSS_ETR,
	MSM8974_SNOC_MAS_USB3,
	MSM8974_SNOC_SLV_AMPSS,
	MSM8974_SNOC_SLV_LPASS,
	MSM8974_SNOC_SLV_USB3,
	MSM8974_SNOC_SLV_WCSS,
	MSM8974_SNOC_SLV_OCIMEM,
	MSM8974_SNOC_SLV_SNOC_OCMEM,
	MSM8974_SNOC_SLV_SERVICE_SNOC,
	MSM8974_SNOC_SLV_QDSS_STM,
};

#define RPM_BUS_MASTER_REQ	0x73616d62
#define RPM_BUS_SLAVE_REQ	0x766c7362

#define to_msm8974_icc_provider(_provider) \
	container_of(_provider, struct msm8974_icc_provider, provider)

static const struct clk_bulk_data msm8974_icc_bus_clocks[] = {
	{ .id = "bus" },
	{ .id = "bus_a" },
};

/**
 * struct msm8974_icc_provider - Qualcomm specific interconnect provider
 * @provider: generic interconnect provider
 * @bus_clks: the clk_bulk_data table of bus clocks
 * @num_clks: the total number of clk_bulk_data entries
 */
struct msm8974_icc_provider {
	struct icc_provider provider;
	struct clk_bulk_data *bus_clks;
	int num_clks;
};

#define MSM8974_ICC_MAX_LINKS	3

/**
 * struct msm8974_icc_node - Qualcomm specific interconnect nodes
 * @name: the node name used in debugfs
 * @id: a unique node identifier
 * @links: an array of nodes where we can go next while traversing
 * @num_links: the total number of @links
 * @buswidth: width of the interconnect between a node and the bus (bytes)
 * @mas_rpm_id: RPM ID for devices that are bus masters
 * @slv_rpm_id: RPM ID for devices that are bus slaves
 * @rate: current bus clock rate in Hz
 */
struct msm8974_icc_node {
	unsigned char *name;
	u16 id;
	u16 links[MSM8974_ICC_MAX_LINKS];
	u16 num_links;
	u16 buswidth;
	int mas_rpm_id;
	int slv_rpm_id;
	u64 rate;
};

struct msm8974_icc_desc {
	struct msm8974_icc_node **nodes;
	size_t num_nodes;
};

#define DEFINE_QNODE(_name, _id, _buswidth, _mas_rpm_id, _slv_rpm_id,	\
		     ...)						\
	static struct msm8974_icc_node _name = {			\
		.name = #_name,						\
		.id = _id,						\
		.buswidth = _buswidth,					\
		.mas_rpm_id = _mas_rpm_id,				\
		.slv_rpm_id = _slv_rpm_id,				\
		.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })),	\
		.links = { __VA_ARGS__ },				\
	}

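Since the variadic tail of DEFINE_QNODE() feeds both the links[] initializer and, through the (int[]){ } compound literal, num_links, a hand expansion of one of the invocations below may help. For example, DEFINE_QNODE(bimc_to_mnoc, MSM8974_BIMC_TO_MNOC, 8, 2, -1, MSM8974_BIMC_SLV_EBI_CH0) expands to:

	static struct msm8974_icc_node bimc_to_mnoc = {
		.name = "bimc_to_mnoc",
		.id = MSM8974_BIMC_TO_MNOC,
		.buswidth = 8,
		.mas_rpm_id = 2,
		.slv_rpm_id = -1,
		.num_links = 1,		/* ARRAY_SIZE of the (int[]) literal */
		.links = { MSM8974_BIMC_SLV_EBI_CH0 },
	};
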
DEFINE_QNODE(mas_ampss_m0, MSM8974_BIMC_MAS_AMPSS_M0, 8, 0, -1);
DEFINE_QNODE(mas_ampss_m1, MSM8974_BIMC_MAS_AMPSS_M1, 8, 0, -1);
DEFINE_QNODE(mas_mss_proc, MSM8974_BIMC_MAS_MSS_PROC, 8, 1, -1);
DEFINE_QNODE(bimc_to_mnoc, MSM8974_BIMC_TO_MNOC, 8, 2, -1, MSM8974_BIMC_SLV_EBI_CH0);
DEFINE_QNODE(bimc_to_snoc, MSM8974_BIMC_TO_SNOC, 8, 3, 2, MSM8974_SNOC_TO_BIMC, MSM8974_BIMC_SLV_EBI_CH0, MSM8974_BIMC_MAS_AMPSS_M0);
DEFINE_QNODE(slv_ebi_ch0, MSM8974_BIMC_SLV_EBI_CH0, 8, -1, 0);
DEFINE_QNODE(slv_ampss_l2, MSM8974_BIMC_SLV_AMPSS_L2, 8, -1, 1);

static struct msm8974_icc_node *msm8974_bimc_nodes[] = {
	[BIMC_MAS_AMPSS_M0] = &mas_ampss_m0,
	[BIMC_MAS_AMPSS_M1] = &mas_ampss_m1,
	[BIMC_MAS_MSS_PROC] = &mas_mss_proc,
	[BIMC_TO_MNOC] = &bimc_to_mnoc,
	[BIMC_TO_SNOC] = &bimc_to_snoc,
	[BIMC_SLV_EBI_CH0] = &slv_ebi_ch0,
	[BIMC_SLV_AMPSS_L2] = &slv_ampss_l2,
};

static struct msm8974_icc_desc msm8974_bimc = {
	.nodes = msm8974_bimc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_bimc_nodes),
};

DEFINE_QNODE(mas_rpm_inst, MSM8974_CNOC_MAS_RPM_INST, 8, 45, -1);
DEFINE_QNODE(mas_rpm_data, MSM8974_CNOC_MAS_RPM_DATA, 8, 46, -1);
DEFINE_QNODE(mas_rpm_sys, MSM8974_CNOC_MAS_RPM_SYS, 8, 47, -1);
DEFINE_QNODE(mas_dehr, MSM8974_CNOC_MAS_DEHR, 8, 48, -1);
DEFINE_QNODE(mas_qdss_dap, MSM8974_CNOC_MAS_QDSS_DAP, 8, 49, -1);
DEFINE_QNODE(mas_spdm, MSM8974_CNOC_MAS_SPDM, 8, 50, -1);
DEFINE_QNODE(mas_tic, MSM8974_CNOC_MAS_TIC, 8, 51, -1);
DEFINE_QNODE(slv_clk_ctl, MSM8974_CNOC_SLV_CLK_CTL, 8, -1, 47);
DEFINE_QNODE(slv_cnoc_mss, MSM8974_CNOC_SLV_CNOC_MSS, 8, -1, 48);
DEFINE_QNODE(slv_security, MSM8974_CNOC_SLV_SECURITY, 8, -1, 49);
DEFINE_QNODE(slv_tcsr, MSM8974_CNOC_SLV_TCSR, 8, -1, 50);
DEFINE_QNODE(slv_tlmm, MSM8974_CNOC_SLV_TLMM, 8, -1, 51);
DEFINE_QNODE(slv_crypto_0_cfg, MSM8974_CNOC_SLV_CRYPTO_0_CFG, 8, -1, 52);
DEFINE_QNODE(slv_crypto_1_cfg, MSM8974_CNOC_SLV_CRYPTO_1_CFG, 8, -1, 53);
DEFINE_QNODE(slv_imem_cfg, MSM8974_CNOC_SLV_IMEM_CFG, 8, -1, 54);
DEFINE_QNODE(slv_message_ram, MSM8974_CNOC_SLV_MESSAGE_RAM, 8, -1, 55);
DEFINE_QNODE(slv_bimc_cfg, MSM8974_CNOC_SLV_BIMC_CFG, 8, -1, 56);
DEFINE_QNODE(slv_boot_rom, MSM8974_CNOC_SLV_BOOT_ROM, 8, -1, 57);
DEFINE_QNODE(slv_pmic_arb, MSM8974_CNOC_SLV_PMIC_ARB, 8, -1, 59);
DEFINE_QNODE(slv_spdm_wrapper, MSM8974_CNOC_SLV_SPDM_WRAPPER, 8, -1, 60);
DEFINE_QNODE(slv_dehr_cfg, MSM8974_CNOC_SLV_DEHR_CFG, 8, -1, 61);
DEFINE_QNODE(slv_mpm, MSM8974_CNOC_SLV_MPM, 8, -1, 62);
DEFINE_QNODE(slv_qdss_cfg, MSM8974_CNOC_SLV_QDSS_CFG, 8, -1, 63);
DEFINE_QNODE(slv_rbcpr_cfg, MSM8974_CNOC_SLV_RBCPR_CFG, 8, -1, 64);
DEFINE_QNODE(slv_rbcpr_qdss_apu_cfg, MSM8974_CNOC_SLV_RBCPR_QDSS_APU_CFG, 8, -1, 65);
DEFINE_QNODE(cnoc_to_snoc, MSM8974_CNOC_TO_SNOC, 8, 52, 75);
DEFINE_QNODE(slv_cnoc_onoc_cfg, MSM8974_CNOC_SLV_CNOC_ONOC_CFG, 8, -1, 68);
DEFINE_QNODE(slv_cnoc_mnoc_mmss_cfg, MSM8974_CNOC_SLV_CNOC_MNOC_MMSS_CFG, 8, -1, 58);
DEFINE_QNODE(slv_cnoc_mnoc_cfg, MSM8974_CNOC_SLV_CNOC_MNOC_CFG, 8, -1, 66);
DEFINE_QNODE(slv_pnoc_cfg, MSM8974_CNOC_SLV_PNOC_CFG, 8, -1, 69);
DEFINE_QNODE(slv_snoc_mpu_cfg, MSM8974_CNOC_SLV_SNOC_MPU_CFG, 8, -1, 67);
DEFINE_QNODE(slv_snoc_cfg, MSM8974_CNOC_SLV_SNOC_CFG, 8, -1, 70);
DEFINE_QNODE(slv_ebi1_dll_cfg, MSM8974_CNOC_SLV_EBI1_DLL_CFG, 8, -1, 71);
DEFINE_QNODE(slv_phy_apu_cfg, MSM8974_CNOC_SLV_PHY_APU_CFG, 8, -1, 72);
DEFINE_QNODE(slv_ebi1_phy_cfg, MSM8974_CNOC_SLV_EBI1_PHY_CFG, 8, -1, 73);
DEFINE_QNODE(slv_rpm, MSM8974_CNOC_SLV_RPM, 8, -1, 74);
DEFINE_QNODE(slv_service_cnoc, MSM8974_CNOC_SLV_SERVICE_CNOC, 8, -1, 76);

static struct msm8974_icc_node *msm8974_cnoc_nodes[] = {
	[CNOC_MAS_RPM_INST] = &mas_rpm_inst,
	[CNOC_MAS_RPM_DATA] = &mas_rpm_data,
	[CNOC_MAS_RPM_SYS] = &mas_rpm_sys,
	[CNOC_MAS_DEHR] = &mas_dehr,
	[CNOC_MAS_QDSS_DAP] = &mas_qdss_dap,
	[CNOC_MAS_SPDM] = &mas_spdm,
	[CNOC_MAS_TIC] = &mas_tic,
	[CNOC_SLV_CLK_CTL] = &slv_clk_ctl,
	[CNOC_SLV_CNOC_MSS] = &slv_cnoc_mss,
	[CNOC_SLV_SECURITY] = &slv_security,
	[CNOC_SLV_TCSR] = &slv_tcsr,
	[CNOC_SLV_TLMM] = &slv_tlmm,
	[CNOC_SLV_CRYPTO_0_CFG] = &slv_crypto_0_cfg,
	[CNOC_SLV_CRYPTO_1_CFG] = &slv_crypto_1_cfg,
	[CNOC_SLV_IMEM_CFG] = &slv_imem_cfg,
	[CNOC_SLV_MESSAGE_RAM] = &slv_message_ram,
	[CNOC_SLV_BIMC_CFG] = &slv_bimc_cfg,
	[CNOC_SLV_BOOT_ROM] = &slv_boot_rom,
	[CNOC_SLV_PMIC_ARB] = &slv_pmic_arb,
	[CNOC_SLV_SPDM_WRAPPER] = &slv_spdm_wrapper,
	[CNOC_SLV_DEHR_CFG] = &slv_dehr_cfg,
	[CNOC_SLV_MPM] = &slv_mpm,
	[CNOC_SLV_QDSS_CFG] = &slv_qdss_cfg,
	[CNOC_SLV_RBCPR_CFG] = &slv_rbcpr_cfg,
	[CNOC_SLV_RBCPR_QDSS_APU_CFG] = &slv_rbcpr_qdss_apu_cfg,
	[CNOC_TO_SNOC] = &cnoc_to_snoc,
	[CNOC_SLV_CNOC_ONOC_CFG] = &slv_cnoc_onoc_cfg,
	[CNOC_SLV_CNOC_MNOC_MMSS_CFG] = &slv_cnoc_mnoc_mmss_cfg,
	[CNOC_SLV_CNOC_MNOC_CFG] = &slv_cnoc_mnoc_cfg,
	[CNOC_SLV_PNOC_CFG] = &slv_pnoc_cfg,
	[CNOC_SLV_SNOC_MPU_CFG] = &slv_snoc_mpu_cfg,
	[CNOC_SLV_SNOC_CFG] = &slv_snoc_cfg,
	[CNOC_SLV_EBI1_DLL_CFG] = &slv_ebi1_dll_cfg,
	[CNOC_SLV_PHY_APU_CFG] = &slv_phy_apu_cfg,
	[CNOC_SLV_EBI1_PHY_CFG] = &slv_ebi1_phy_cfg,
	[CNOC_SLV_RPM] = &slv_rpm,
	[CNOC_SLV_SERVICE_CNOC] = &slv_service_cnoc,
};

static struct msm8974_icc_desc msm8974_cnoc = {
	.nodes = msm8974_cnoc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_cnoc_nodes),
};

DEFINE_QNODE(mas_graphics_3d, MSM8974_MNOC_MAS_GRAPHICS_3D, 16, 6, -1, MSM8974_MNOC_TO_BIMC);
DEFINE_QNODE(mas_jpeg, MSM8974_MNOC_MAS_JPEG, 16, 7, -1, MSM8974_MNOC_TO_BIMC);
DEFINE_QNODE(mas_mdp_port0, MSM8974_MNOC_MAS_MDP_PORT0, 16, 8, -1, MSM8974_MNOC_TO_BIMC);
DEFINE_QNODE(mas_video_p0, MSM8974_MNOC_MAS_VIDEO_P0, 16, 9, -1);
DEFINE_QNODE(mas_video_p1, MSM8974_MNOC_MAS_VIDEO_P1, 16, 10, -1);
DEFINE_QNODE(mas_vfe, MSM8974_MNOC_MAS_VFE, 16, 11, -1, MSM8974_MNOC_TO_BIMC);
DEFINE_QNODE(mnoc_to_cnoc, MSM8974_MNOC_TO_CNOC, 16, 4, -1);
DEFINE_QNODE(mnoc_to_bimc, MSM8974_MNOC_TO_BIMC, 16, -1, 16, MSM8974_BIMC_TO_MNOC);
DEFINE_QNODE(slv_camera_cfg, MSM8974_MNOC_SLV_CAMERA_CFG, 16, -1, 3);
DEFINE_QNODE(slv_display_cfg, MSM8974_MNOC_SLV_DISPLAY_CFG, 16, -1, 4);
DEFINE_QNODE(slv_ocmem_cfg, MSM8974_MNOC_SLV_OCMEM_CFG, 16, -1, 5);
DEFINE_QNODE(slv_cpr_cfg, MSM8974_MNOC_SLV_CPR_CFG, 16, -1, 6);
DEFINE_QNODE(slv_cpr_xpu_cfg, MSM8974_MNOC_SLV_CPR_XPU_CFG, 16, -1, 7);
DEFINE_QNODE(slv_misc_cfg, MSM8974_MNOC_SLV_MISC_CFG, 16, -1, 8);
DEFINE_QNODE(slv_misc_xpu_cfg, MSM8974_MNOC_SLV_MISC_XPU_CFG, 16, -1, 9);
DEFINE_QNODE(slv_venus_cfg, MSM8974_MNOC_SLV_VENUS_CFG, 16, -1, 10);
DEFINE_QNODE(slv_graphics_3d_cfg, MSM8974_MNOC_SLV_GRAPHICS_3D_CFG, 16, -1, 11);
DEFINE_QNODE(slv_mmss_clk_cfg, MSM8974_MNOC_SLV_MMSS_CLK_CFG, 16, -1, 12);
DEFINE_QNODE(slv_mmss_clk_xpu_cfg, MSM8974_MNOC_SLV_MMSS_CLK_XPU_CFG, 16, -1, 13);
DEFINE_QNODE(slv_mnoc_mpu_cfg, MSM8974_MNOC_SLV_MNOC_MPU_CFG, 16, -1, 14);
DEFINE_QNODE(slv_onoc_mpu_cfg, MSM8974_MNOC_SLV_ONOC_MPU_CFG, 16, -1, 15);
DEFINE_QNODE(slv_service_mnoc, MSM8974_MNOC_SLV_SERVICE_MNOC, 16, -1, 17);

static struct msm8974_icc_node *msm8974_mnoc_nodes[] = {
	[MNOC_MAS_GRAPHICS_3D] = &mas_graphics_3d,
	[MNOC_MAS_JPEG] = &mas_jpeg,
	[MNOC_MAS_MDP_PORT0] = &mas_mdp_port0,
	[MNOC_MAS_VIDEO_P0] = &mas_video_p0,
	[MNOC_MAS_VIDEO_P1] = &mas_video_p1,
	[MNOC_MAS_VFE] = &mas_vfe,
	[MNOC_TO_CNOC] = &mnoc_to_cnoc,
	[MNOC_TO_BIMC] = &mnoc_to_bimc,
	[MNOC_SLV_CAMERA_CFG] = &slv_camera_cfg,
	[MNOC_SLV_DISPLAY_CFG] = &slv_display_cfg,
	[MNOC_SLV_OCMEM_CFG] = &slv_ocmem_cfg,
	[MNOC_SLV_CPR_CFG] = &slv_cpr_cfg,
	[MNOC_SLV_CPR_XPU_CFG] = &slv_cpr_xpu_cfg,
	[MNOC_SLV_MISC_CFG] = &slv_misc_cfg,
	[MNOC_SLV_MISC_XPU_CFG] = &slv_misc_xpu_cfg,
	[MNOC_SLV_VENUS_CFG] = &slv_venus_cfg,
	[MNOC_SLV_GRAPHICS_3D_CFG] = &slv_graphics_3d_cfg,
	[MNOC_SLV_MMSS_CLK_CFG] = &slv_mmss_clk_cfg,
	[MNOC_SLV_MMSS_CLK_XPU_CFG] = &slv_mmss_clk_xpu_cfg,
	[MNOC_SLV_MNOC_MPU_CFG] = &slv_mnoc_mpu_cfg,
	[MNOC_SLV_ONOC_MPU_CFG] = &slv_onoc_mpu_cfg,
	[MNOC_SLV_SERVICE_MNOC] = &slv_service_mnoc,
};

static struct msm8974_icc_desc msm8974_mnoc = {
	.nodes = msm8974_mnoc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_mnoc_nodes),
};

DEFINE_QNODE(ocmem_noc_to_ocmem_vnoc, MSM8974_OCMEM_NOC_TO_OCMEM_VNOC, 16, 54, 78, MSM8974_OCMEM_SLV_OCMEM);
DEFINE_QNODE(mas_jpeg_ocmem, MSM8974_OCMEM_MAS_JPEG_OCMEM, 16, 13, -1);
DEFINE_QNODE(mas_mdp_ocmem, MSM8974_OCMEM_MAS_MDP_OCMEM, 16, 14, -1);
DEFINE_QNODE(mas_video_p0_ocmem, MSM8974_OCMEM_MAS_VIDEO_P0_OCMEM, 16, 15, -1);
DEFINE_QNODE(mas_video_p1_ocmem, MSM8974_OCMEM_MAS_VIDEO_P1_OCMEM, 16, 16, -1);
DEFINE_QNODE(mas_vfe_ocmem, MSM8974_OCMEM_MAS_VFE_OCMEM, 16, 17, -1);
DEFINE_QNODE(mas_cnoc_onoc_cfg, MSM8974_OCMEM_MAS_CNOC_ONOC_CFG, 16, 12, -1);
DEFINE_QNODE(slv_service_onoc, MSM8974_OCMEM_SLV_SERVICE_ONOC, 16, -1, 19);
DEFINE_QNODE(slv_ocmem, MSM8974_OCMEM_SLV_OCMEM, 16, -1, 18);

/* Virtual NoC is needed for connection to OCMEM */
DEFINE_QNODE(ocmem_vnoc_to_onoc, MSM8974_OCMEM_VNOC_TO_OCMEM_NOC, 16, 56, 79, MSM8974_OCMEM_NOC_TO_OCMEM_VNOC);
DEFINE_QNODE(ocmem_vnoc_to_snoc, MSM8974_OCMEM_VNOC_TO_SNOC, 8, 57, 80);
DEFINE_QNODE(mas_v_ocmem_gfx3d, MSM8974_OCMEM_VNOC_MAS_GFX3D, 8, 55, -1, MSM8974_OCMEM_VNOC_TO_OCMEM_NOC);

static struct msm8974_icc_node *msm8974_onoc_nodes[] = {
	[OCMEM_NOC_TO_OCMEM_VNOC] = &ocmem_noc_to_ocmem_vnoc,
	[OCMEM_MAS_JPEG_OCMEM] = &mas_jpeg_ocmem,
	[OCMEM_MAS_MDP_OCMEM] = &mas_mdp_ocmem,
	[OCMEM_MAS_VIDEO_P0_OCMEM] = &mas_video_p0_ocmem,
	[OCMEM_MAS_VIDEO_P1_OCMEM] = &mas_video_p1_ocmem,
	[OCMEM_MAS_VFE_OCMEM] = &mas_vfe_ocmem,
	[OCMEM_MAS_CNOC_ONOC_CFG] = &mas_cnoc_onoc_cfg,
	[OCMEM_SLV_SERVICE_ONOC] = &slv_service_onoc,
	[OCMEM_VNOC_TO_SNOC] = &ocmem_vnoc_to_snoc,
	[OCMEM_VNOC_TO_OCMEM_NOC] = &ocmem_vnoc_to_onoc,
	[OCMEM_VNOC_MAS_GFX3D] = &mas_v_ocmem_gfx3d,
	[OCMEM_SLV_OCMEM] = &slv_ocmem,
};

static struct msm8974_icc_desc msm8974_onoc = {
	.nodes = msm8974_onoc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_onoc_nodes),
};

DEFINE_QNODE(mas_pnoc_cfg, MSM8974_PNOC_MAS_PNOC_CFG, 8, 43, -1);
DEFINE_QNODE(mas_sdcc_1, MSM8974_PNOC_MAS_SDCC_1, 8, 33, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_sdcc_3, MSM8974_PNOC_MAS_SDCC_3, 8, 34, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_sdcc_4, MSM8974_PNOC_MAS_SDCC_4, 8, 36, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_sdcc_2, MSM8974_PNOC_MAS_SDCC_2, 8, 35, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_tsif, MSM8974_PNOC_MAS_TSIF, 8, 37, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_bam_dma, MSM8974_PNOC_MAS_BAM_DMA, 8, 38, -1);
DEFINE_QNODE(mas_blsp_2, MSM8974_PNOC_MAS_BLSP_2, 8, 39, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_usb_hsic, MSM8974_PNOC_MAS_USB_HSIC, 8, 40, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_blsp_1, MSM8974_PNOC_MAS_BLSP_1, 8, 41, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(mas_usb_hs, MSM8974_PNOC_MAS_USB_HS, 8, 42, -1, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(pnoc_to_snoc, MSM8974_PNOC_TO_SNOC, 8, 44, 45, MSM8974_SNOC_TO_PNOC, MSM8974_PNOC_SLV_PRNG);
DEFINE_QNODE(slv_sdcc_1, MSM8974_PNOC_SLV_SDCC_1, 8, -1, 31);
DEFINE_QNODE(slv_sdcc_3, MSM8974_PNOC_SLV_SDCC_3, 8, -1, 32);
DEFINE_QNODE(slv_sdcc_2, MSM8974_PNOC_SLV_SDCC_2, 8, -1, 33);
DEFINE_QNODE(slv_sdcc_4, MSM8974_PNOC_SLV_SDCC_4, 8, -1, 34);
DEFINE_QNODE(slv_tsif, MSM8974_PNOC_SLV_TSIF, 8, -1, 35);
DEFINE_QNODE(slv_bam_dma, MSM8974_PNOC_SLV_BAM_DMA, 8, -1, 36);
DEFINE_QNODE(slv_blsp_2, MSM8974_PNOC_SLV_BLSP_2, 8, -1, 37);
DEFINE_QNODE(slv_usb_hsic, MSM8974_PNOC_SLV_USB_HSIC, 8, -1, 38);
DEFINE_QNODE(slv_blsp_1, MSM8974_PNOC_SLV_BLSP_1, 8, -1, 39);
DEFINE_QNODE(slv_usb_hs, MSM8974_PNOC_SLV_USB_HS, 8, -1, 40);
DEFINE_QNODE(slv_pdm, MSM8974_PNOC_SLV_PDM, 8, -1, 41);
DEFINE_QNODE(slv_periph_apu_cfg, MSM8974_PNOC_SLV_PERIPH_APU_CFG, 8, -1, 42);
DEFINE_QNODE(slv_pnoc_mpu_cfg, MSM8974_PNOC_SLV_PNOC_MPU_CFG, 8, -1, 43);
DEFINE_QNODE(slv_prng, MSM8974_PNOC_SLV_PRNG, 8, -1, 44, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(slv_service_pnoc, MSM8974_PNOC_SLV_SERVICE_PNOC, 8, -1, 46);

static struct msm8974_icc_node *msm8974_pnoc_nodes[] = {
	[PNOC_MAS_PNOC_CFG] = &mas_pnoc_cfg,
	[PNOC_MAS_SDCC_1] = &mas_sdcc_1,
	[PNOC_MAS_SDCC_3] = &mas_sdcc_3,
	[PNOC_MAS_SDCC_4] = &mas_sdcc_4,
	[PNOC_MAS_SDCC_2] = &mas_sdcc_2,
	[PNOC_MAS_TSIF] = &mas_tsif,
	[PNOC_MAS_BAM_DMA] = &mas_bam_dma,
	[PNOC_MAS_BLSP_2] = &mas_blsp_2,
	[PNOC_MAS_USB_HSIC] = &mas_usb_hsic,
	[PNOC_MAS_BLSP_1] = &mas_blsp_1,
	[PNOC_MAS_USB_HS] = &mas_usb_hs,
	[PNOC_TO_SNOC] = &pnoc_to_snoc,
	[PNOC_SLV_SDCC_1] = &slv_sdcc_1,
	[PNOC_SLV_SDCC_3] = &slv_sdcc_3,
	[PNOC_SLV_SDCC_2] = &slv_sdcc_2,
	[PNOC_SLV_SDCC_4] = &slv_sdcc_4,
	[PNOC_SLV_TSIF] = &slv_tsif,
	[PNOC_SLV_BAM_DMA] = &slv_bam_dma,
	[PNOC_SLV_BLSP_2] = &slv_blsp_2,
	[PNOC_SLV_USB_HSIC] = &slv_usb_hsic,
	[PNOC_SLV_BLSP_1] = &slv_blsp_1,
	[PNOC_SLV_USB_HS] = &slv_usb_hs,
	[PNOC_SLV_PDM] = &slv_pdm,
	[PNOC_SLV_PERIPH_APU_CFG] = &slv_periph_apu_cfg,
	[PNOC_SLV_PNOC_MPU_CFG] = &slv_pnoc_mpu_cfg,
	[PNOC_SLV_PRNG] = &slv_prng,
	[PNOC_SLV_SERVICE_PNOC] = &slv_service_pnoc,
};

static struct msm8974_icc_desc msm8974_pnoc = {
	.nodes = msm8974_pnoc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_pnoc_nodes),
};

DEFINE_QNODE(mas_lpass_ahb, MSM8974_SNOC_MAS_LPASS_AHB, 8, 18, -1);
DEFINE_QNODE(mas_qdss_bam, MSM8974_SNOC_MAS_QDSS_BAM, 8, 19, -1);
DEFINE_QNODE(mas_snoc_cfg, MSM8974_SNOC_MAS_SNOC_CFG, 8, 20, -1);
DEFINE_QNODE(snoc_to_bimc, MSM8974_SNOC_TO_BIMC, 8, 21, 24, MSM8974_BIMC_TO_SNOC);
DEFINE_QNODE(snoc_to_cnoc, MSM8974_SNOC_TO_CNOC, 8, 22, 25);
DEFINE_QNODE(snoc_to_pnoc, MSM8974_SNOC_TO_PNOC, 8, 29, 28, MSM8974_PNOC_TO_SNOC);
DEFINE_QNODE(snoc_to_ocmem_vnoc, MSM8974_SNOC_TO_OCMEM_VNOC, 8, 53, 77, MSM8974_OCMEM_VNOC_TO_OCMEM_NOC);
DEFINE_QNODE(mas_crypto_core0, MSM8974_SNOC_MAS_CRYPTO_CORE0, 8, 23, -1, MSM8974_SNOC_TO_BIMC);
DEFINE_QNODE(mas_crypto_core1, MSM8974_SNOC_MAS_CRYPTO_CORE1, 8, 24, -1);
DEFINE_QNODE(mas_lpass_proc, MSM8974_SNOC_MAS_LPASS_PROC, 8, 25, -1, MSM8974_SNOC_TO_OCMEM_VNOC);
DEFINE_QNODE(mas_mss, MSM8974_SNOC_MAS_MSS, 8, 26, -1);
DEFINE_QNODE(mas_mss_nav, MSM8974_SNOC_MAS_MSS_NAV, 8, 27, -1);
DEFINE_QNODE(mas_ocmem_dma, MSM8974_SNOC_MAS_OCMEM_DMA, 8, 28, -1);
DEFINE_QNODE(mas_wcss, MSM8974_SNOC_MAS_WCSS, 8, 30, -1);
DEFINE_QNODE(mas_qdss_etr, MSM8974_SNOC_MAS_QDSS_ETR, 8, 31, -1);
DEFINE_QNODE(mas_usb3, MSM8974_SNOC_MAS_USB3, 8, 32, -1, MSM8974_SNOC_TO_BIMC);
DEFINE_QNODE(slv_ampss, MSM8974_SNOC_SLV_AMPSS, 8, -1, 20);
DEFINE_QNODE(slv_lpass, MSM8974_SNOC_SLV_LPASS, 8, -1, 21);
DEFINE_QNODE(slv_usb3, MSM8974_SNOC_SLV_USB3, 8, -1, 22);
DEFINE_QNODE(slv_wcss, MSM8974_SNOC_SLV_WCSS, 8, -1, 23);
DEFINE_QNODE(slv_ocimem, MSM8974_SNOC_SLV_OCIMEM, 8, -1, 26);
DEFINE_QNODE(slv_snoc_ocmem, MSM8974_SNOC_SLV_SNOC_OCMEM, 8, -1, 27);
DEFINE_QNODE(slv_service_snoc, MSM8974_SNOC_SLV_SERVICE_SNOC, 8, -1, 29);
DEFINE_QNODE(slv_qdss_stm, MSM8974_SNOC_SLV_QDSS_STM, 8, -1, 30);

static struct msm8974_icc_node *msm8974_snoc_nodes[] = {
	[SNOC_MAS_LPASS_AHB] = &mas_lpass_ahb,
	[SNOC_MAS_QDSS_BAM] = &mas_qdss_bam,
	[SNOC_MAS_SNOC_CFG] = &mas_snoc_cfg,
	[SNOC_TO_BIMC] = &snoc_to_bimc,
	[SNOC_TO_CNOC] = &snoc_to_cnoc,
	[SNOC_TO_PNOC] = &snoc_to_pnoc,
	[SNOC_TO_OCMEM_VNOC] = &snoc_to_ocmem_vnoc,
	[SNOC_MAS_CRYPTO_CORE0] = &mas_crypto_core0,
	[SNOC_MAS_CRYPTO_CORE1] = &mas_crypto_core1,
	[SNOC_MAS_LPASS_PROC] = &mas_lpass_proc,
	[SNOC_MAS_MSS] = &mas_mss,
	[SNOC_MAS_MSS_NAV] = &mas_mss_nav,
	[SNOC_MAS_OCMEM_DMA] = &mas_ocmem_dma,
	[SNOC_MAS_WCSS] = &mas_wcss,
	[SNOC_MAS_QDSS_ETR] = &mas_qdss_etr,
	[SNOC_MAS_USB3] = &mas_usb3,
	[SNOC_SLV_AMPSS] = &slv_ampss,
	[SNOC_SLV_LPASS] = &slv_lpass,
	[SNOC_SLV_USB3] = &slv_usb3,
	[SNOC_SLV_WCSS] = &slv_wcss,
	[SNOC_SLV_OCIMEM] = &slv_ocimem,
	[SNOC_SLV_SNOC_OCMEM] = &slv_snoc_ocmem,
	[SNOC_SLV_SERVICE_SNOC] = &slv_service_snoc,
	[SNOC_SLV_QDSS_STM] = &slv_qdss_stm,
};

static struct msm8974_icc_desc msm8974_snoc = {
	.nodes = msm8974_snoc_nodes,
	.num_nodes = ARRAY_SIZE(msm8974_snoc_nodes),
};

static int msm8974_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
				 u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
{
	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);

	return 0;
}
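
Worked example of the aggregation hook above: because it sums the averages and takes the max of the peaks, three requests of (avg, peak) = (100, 400), (200, 300) and (50, 350) accumulate to agg_avg = 350 and agg_peak = 400. As a sketch:

	static void example_aggregate(void)
	{
		u32 agg_avg = 0, agg_peak = 0;

		/* node and tag are unused by msm8974_icc_aggregate() */
		msm8974_icc_aggregate(NULL, 0, 100, 400, &agg_avg, &agg_peak);
		msm8974_icc_aggregate(NULL, 0, 200, 300, &agg_avg, &agg_peak);
		msm8974_icc_aggregate(NULL, 0,  50, 350, &agg_avg, &agg_peak);
		/* agg_avg == 350 (sum), agg_peak == 400 (max) */
	}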

static void msm8974_icc_rpm_smd_send(struct device *dev, int rsc_type,
				     char *name, int id, u64 val)
{
	int ret;

	if (id == -1)
		return;

	/*
	 * Setting the bandwidth requests for some nodes fails and this same
	 * behavior occurs on the downstream MSM 3.4 kernel sources based on
	 * errors like this in that kernel:
	 *
	 *   msm_rpm_get_error_from_ack(): RPM NACK Unsupported resource
	 *   AXI: msm_bus_rpm_req(): RPM: Ack failed
	 *   AXI: msm_bus_rpm_commit_arb(): RPM: Req fail: mas:32, bw:240000000
	 *
	 * Since there's no publicly available documentation for this hardware,
	 * and the bandwidth for some nodes in the path can be set properly,
	 * let's not return an error.
	 */
	ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE, rsc_type, id,
				    val);
	if (ret)
		dev_dbg(dev, "Cannot set bandwidth for node %s (%d): %d\n",
			name, id, ret);
}

static int msm8974_icc_set(struct icc_node *src, struct icc_node *dst)
{
	struct msm8974_icc_node *src_qn, *dst_qn;
	struct msm8974_icc_provider *qp;
	u64 sum_bw, max_peak_bw, rate;
	u32 agg_avg = 0, agg_peak = 0;
	struct icc_provider *provider;
	struct icc_node *n;
	int ret, i;

	src_qn = src->data;
	dst_qn = dst->data;
	provider = src->provider;
	qp = to_msm8974_icc_provider(provider);

	list_for_each_entry(n, &provider->nodes, node_list)
		msm8974_icc_aggregate(n, 0, n->avg_bw, n->peak_bw,
				      &agg_avg, &agg_peak);

	sum_bw = icc_units_to_bps(agg_avg);
	max_peak_bw = icc_units_to_bps(agg_peak);

	/* Set bandwidth on source node */
	msm8974_icc_rpm_smd_send(provider->dev, RPM_BUS_MASTER_REQ,
				 src_qn->name, src_qn->mas_rpm_id, sum_bw);

	msm8974_icc_rpm_smd_send(provider->dev, RPM_BUS_SLAVE_REQ,
				 src_qn->name, src_qn->slv_rpm_id, sum_bw);

	/* Set bandwidth on destination node */
	msm8974_icc_rpm_smd_send(provider->dev, RPM_BUS_MASTER_REQ,
				 dst_qn->name, dst_qn->mas_rpm_id, sum_bw);

	msm8974_icc_rpm_smd_send(provider->dev, RPM_BUS_SLAVE_REQ,
				 dst_qn->name, dst_qn->slv_rpm_id, sum_bw);

	rate = max(sum_bw, max_peak_bw);

	do_div(rate, src_qn->buswidth);

	if (src_qn->rate == rate)
		return 0;

	for (i = 0; i < qp->num_clks; i++) {
		ret = clk_set_rate(qp->bus_clks[i].clk, rate);
		if (ret) {
			dev_err(provider->dev, "%s clk_set_rate error: %d\n",
				qp->bus_clks[i].id, ret);
			ret = 0;
		}
	}

	src_qn->rate = rate;

	return 0;
}
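
The clock-rate arithmetic in msm8974_icc_set() is easy to misread, so here it is spelled out with illustrative numbers (icc_units_to_bps() scales the framework's kB/s request units to bytes per second):

	/* Suppose aggregation yields agg_avg = 100000 and agg_peak = 150000:
	 *   sum_bw      = 100000 * 1000 = 100000000 B/s
	 *   max_peak_bw = 150000 * 1000 = 150000000 B/s
	 *   rate = max(sum_bw, max_peak_bw) / buswidth(8) = 18750000 Hz
	 * i.e. the bus clock is asked for the byte rate divided by the
	 * port width, and only reprogrammed when the value changes.
	 */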

static int msm8974_icc_probe(struct platform_device *pdev)
{
	const struct msm8974_icc_desc *desc;
	struct msm8974_icc_node **qnodes;
	struct msm8974_icc_provider *qp;
	struct device *dev = &pdev->dev;
	struct icc_onecell_data *data;
	struct icc_provider *provider;
	struct icc_node *node;
	size_t num_nodes, i;
	int ret;

	/* wait for the RPM proxy */
	if (!qcom_icc_rpm_smd_available())
		return -EPROBE_DEFER;

	desc = of_device_get_match_data(dev);
	if (!desc)
		return -EINVAL;

	qnodes = desc->nodes;
	num_nodes = desc->num_nodes;

	qp = devm_kzalloc(dev, sizeof(*qp), GFP_KERNEL);
	if (!qp)
		return -ENOMEM;

	data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
			    GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	qp->bus_clks = devm_kmemdup(dev, msm8974_icc_bus_clocks,
				    sizeof(msm8974_icc_bus_clocks), GFP_KERNEL);
	if (!qp->bus_clks)
		return -ENOMEM;

	qp->num_clks = ARRAY_SIZE(msm8974_icc_bus_clocks);
	ret = devm_clk_bulk_get(dev, qp->num_clks, qp->bus_clks);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks);
	if (ret)
		return ret;

	provider = &qp->provider;
	INIT_LIST_HEAD(&provider->nodes);
	provider->dev = dev;
	provider->set = msm8974_icc_set;
	provider->aggregate = msm8974_icc_aggregate;
	provider->xlate = of_icc_xlate_onecell;
	provider->data = data;

	ret = icc_provider_add(provider);
	if (ret) {
		dev_err(dev, "error adding interconnect provider: %d\n", ret);
		goto err_disable_clks;
	}

	for (i = 0; i < num_nodes; i++) {
		size_t j;

		node = icc_node_create(qnodes[i]->id);
		if (IS_ERR(node)) {
			ret = PTR_ERR(node);
			goto err_del_icc;
		}

		node->name = qnodes[i]->name;
		node->data = qnodes[i];
		icc_node_add(node, provider);

		dev_dbg(dev, "registered node %s\n", node->name);

		/* populate links */
		for (j = 0; j < qnodes[i]->num_links; j++)
			icc_link_create(node, qnodes[i]->links[j]);

		data->nodes[i] = node;
	}
	data->num_nodes = num_nodes;

	platform_set_drvdata(pdev, qp);

	return 0;

err_del_icc:
	list_for_each_entry(node, &provider->nodes, node_list) {
		icc_node_del(node);
		icc_node_destroy(node->id);
	}
	icc_provider_del(provider);

err_disable_clks:
	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);

	return ret;
}

static int msm8974_icc_remove(struct platform_device *pdev)
{
	struct msm8974_icc_provider *qp = platform_get_drvdata(pdev);
	struct icc_provider *provider = &qp->provider;
	struct icc_node *n;

	list_for_each_entry(n, &provider->nodes, node_list) {
		icc_node_del(n);
		icc_node_destroy(n->id);
	}
	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);

	return icc_provider_del(provider);
}

static const struct of_device_id msm8974_noc_of_match[] = {
	{ .compatible = "qcom,msm8974-bimc", .data = &msm8974_bimc},
	{ .compatible = "qcom,msm8974-cnoc", .data = &msm8974_cnoc},
	{ .compatible = "qcom,msm8974-mmssnoc", .data = &msm8974_mnoc},
	{ .compatible = "qcom,msm8974-ocmemnoc", .data = &msm8974_onoc},
	{ .compatible = "qcom,msm8974-pnoc", .data = &msm8974_pnoc},
	{ .compatible = "qcom,msm8974-snoc", .data = &msm8974_snoc},
	{ },
};
MODULE_DEVICE_TABLE(of, msm8974_noc_of_match);

static struct platform_driver msm8974_noc_driver = {
	.probe = msm8974_icc_probe,
	.remove = msm8974_icc_remove,
	.driver = {
		.name = "qnoc-msm8974",
		.of_match_table = msm8974_noc_of_match,
	},
};
module_platform_driver(msm8974_noc_driver);

MODULE_DESCRIPTION("Qualcomm MSM8974 NoC driver");
MODULE_AUTHOR("Brian Masney <masneyb@onstation.org>");
MODULE_LICENSE("GPL v2");
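
For context, a leaf driver consumes a provider like this one through the generic interconnect API. A minimal sketch of the consumer side (the "usb-mem" path name and the dev pointer are made up for illustration):

	#include <linux/interconnect.h>

	struct icc_path *path;

	path = of_icc_get(dev, "usb-mem");	/* look up a DT-described path */
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request 100 MB/s average, 200 MB/s peak (units are kB/s); this
	 * lands in msm8974_icc_aggregate()/msm8974_icc_set() for every
	 * node along the path. */
	icc_set_bw(path, 100000, 200000);

	icc_put(path);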

@@ -191,7 +191,7 @@ int __mcb_register_driver(struct mcb_driver *drv, struct module *owner,
 
 	return driver_register(&drv->driver);
 }
-EXPORT_SYMBOL_GPL(__mcb_register_driver);
+EXPORT_SYMBOL_NS_GPL(__mcb_register_driver, MCB);
 
 /**
  * mcb_unregister_driver() - Unregister a @mcb_driver from the system

@@ -203,7 +203,7 @@ void mcb_unregister_driver(struct mcb_driver *drv)
 {
 	driver_unregister(&drv->driver);
 }
-EXPORT_SYMBOL_GPL(mcb_unregister_driver);
+EXPORT_SYMBOL_NS_GPL(mcb_unregister_driver, MCB);
 
 static void mcb_release_dev(struct device *dev)
 {

@@ -249,7 +249,7 @@ out:
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(mcb_device_register);
+EXPORT_SYMBOL_NS_GPL(mcb_device_register, MCB);
 
 static void mcb_free_bus(struct device *dev)
 {

@@ -301,7 +301,7 @@ err_free:
 	kfree(bus);
 	return ERR_PTR(rc);
 }
-EXPORT_SYMBOL_GPL(mcb_alloc_bus);
+EXPORT_SYMBOL_NS_GPL(mcb_alloc_bus, MCB);
 
 static int __mcb_devices_unregister(struct device *dev, void *data)
 {

@@ -323,7 +323,7 @@ void mcb_release_bus(struct mcb_bus *bus)
 {
 	mcb_devices_unregister(bus);
 }
-EXPORT_SYMBOL_GPL(mcb_release_bus);
+EXPORT_SYMBOL_NS_GPL(mcb_release_bus, MCB);
 
 /**
  * mcb_bus_put() - Increment refcnt

@@ -338,7 +338,7 @@ struct mcb_bus *mcb_bus_get(struct mcb_bus *bus)
 
 	return bus;
 }
-EXPORT_SYMBOL_GPL(mcb_bus_get);
+EXPORT_SYMBOL_NS_GPL(mcb_bus_get, MCB);
 
 /**
  * mcb_bus_put() - Decrement refcnt

@@ -351,7 +351,7 @@ void mcb_bus_put(struct mcb_bus *bus)
 	if (bus)
 		put_device(&bus->dev);
 }
-EXPORT_SYMBOL_GPL(mcb_bus_put);
+EXPORT_SYMBOL_NS_GPL(mcb_bus_put, MCB);
 
 /**
  * mcb_alloc_dev() - Allocate a device

@@ -371,7 +371,7 @@ struct mcb_device *mcb_alloc_dev(struct mcb_bus *bus)
 
 	return dev;
 }
-EXPORT_SYMBOL_GPL(mcb_alloc_dev);
+EXPORT_SYMBOL_NS_GPL(mcb_alloc_dev, MCB);
 
 /**
  * mcb_free_dev() - Free @mcb_device

@@ -383,7 +383,7 @@ void mcb_free_dev(struct mcb_device *dev)
 {
 	kfree(dev);
 }
-EXPORT_SYMBOL_GPL(mcb_free_dev);
+EXPORT_SYMBOL_NS_GPL(mcb_free_dev, MCB);
 
 static int __mcb_bus_add_devices(struct device *dev, void *data)
 {

@@ -412,7 +412,7 @@ void mcb_bus_add_devices(const struct mcb_bus *bus)
 {
 	bus_for_each_dev(&mcb_bus_type, NULL, NULL, __mcb_bus_add_devices);
 }
-EXPORT_SYMBOL_GPL(mcb_bus_add_devices);
+EXPORT_SYMBOL_NS_GPL(mcb_bus_add_devices, MCB);
 
 /**
  * mcb_get_resource() - get a resource for a mcb device

@@ -428,7 +428,7 @@ struct resource *mcb_get_resource(struct mcb_device *dev, unsigned int type)
 	else
 		return NULL;
 }
-EXPORT_SYMBOL_GPL(mcb_get_resource);
+EXPORT_SYMBOL_NS_GPL(mcb_get_resource, MCB);
 
 /**
  * mcb_request_mem() - Request memory

@@ -454,7 +454,7 @@ struct resource *mcb_request_mem(struct mcb_device *dev, const char *name)
 
 	return mem;
 }
-EXPORT_SYMBOL_GPL(mcb_request_mem);
+EXPORT_SYMBOL_NS_GPL(mcb_request_mem, MCB);
 
 /**
  * mcb_release_mem() - Release memory requested by device

@@ -469,7 +469,7 @@ void mcb_release_mem(struct resource *mem)
 	size = resource_size(mem);
 	release_mem_region(mem->start, size);
 }
-EXPORT_SYMBOL_GPL(mcb_release_mem);
+EXPORT_SYMBOL_NS_GPL(mcb_release_mem, MCB);
 
 static int __mcb_get_irq(struct mcb_device *dev)
 {

@@ -495,7 +495,7 @@ int mcb_get_irq(struct mcb_device *dev)
 
 	return __mcb_get_irq(dev);
 }
-EXPORT_SYMBOL_GPL(mcb_get_irq);
+EXPORT_SYMBOL_NS_GPL(mcb_get_irq, MCB);
 
 static int mcb_init(void)
 {

@@ -168,3 +168,4 @@ module_exit(mcb_lpc_exit);
 MODULE_AUTHOR("Andreas Werner <andreas.werner@men.de>");
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("MCB over LPC support");
+MODULE_IMPORT_NS(MCB);

@@ -253,4 +253,4 @@ free_header:
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(chameleon_parse_cells);
+EXPORT_SYMBOL_NS_GPL(chameleon_parse_cells, MCB);

@@ -131,3 +131,4 @@ module_pci_driver(mcb_pci_driver);
 MODULE_AUTHOR("Johannes Thumshirn <johannes.thumshirn@men.de>");
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("MCB over PCI support");
+MODULE_IMPORT_NS(MCB);
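
The EXPORT_SYMBOL_NS_GPL() conversions above move the MCB exports into a symbol namespace, so any out-of-tree or in-tree module linking against them must now declare the import explicitly. A sketch of a hypothetical consumer (not a file in this series):

	#include <linux/module.h>
	#include <linux/mcb.h>

	/* Without this, modpost complains roughly: "module uses symbol
	 * ... from namespace MCB, but does not import it", and the
	 * module is refused at load time. */
	MODULE_IMPORT_NS(MCB);

	MODULE_LICENSE("GPL");

The MODULE_IMPORT_NS() lines added to mcb-lpc.c and mcb-pci.c in the hunks above are exactly this pattern applied to the MCB carrier modules themselves.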

@@ -8,7 +8,6 @@ menu "Misc devices"
 config SENSORS_LIS3LV02D
 	tristate
 	depends on INPUT
-	select INPUT_POLLDEV
 
 config AD525X_DPOT
 	tristate "Analog Devices Digital Potentiometers"

@@ -326,14 +325,14 @@ config SENSORS_TSL2550
 	  will be called tsl2550.
 
 config SENSORS_BH1770
-         tristate "BH1770GLC / SFH7770 combined ALS - Proximity sensor"
-         depends on I2C
-         ---help---
-           Say Y here if you want to build a driver for BH1770GLC (ROHM) or
+	tristate "BH1770GLC / SFH7770 combined ALS - Proximity sensor"
+	depends on I2C
+	---help---
+	  Say Y here if you want to build a driver for BH1770GLC (ROHM) or
 	  SFH7770 (Osram) combined ambient light and proximity sensor chip.
 
-           To compile this driver as a module, choose M here: the
-           module will be called bh1770glc. If unsure, say N here.
+	  To compile this driver as a module, choose M here: the
+	  module will be called bh1770glc. If unsure, say N here.
 
 config SENSORS_APDS990X
 	tristate "APDS990X combined als and proximity sensors"

@@ -438,8 +437,8 @@ config PCI_ENDPOINT_TEST
 	select CRC32
 	tristate "PCI Endpoint Test driver"
 	---help---
-           Enable this configuration option to enable the host side test driver
-           for PCI Endpoint.
+	   Enable this configuration option to enable the host side test driver
+	   for PCI Endpoint.
 
 config XILINX_SDFEC
 	tristate "Xilinx SDFEC 16"

@@ -109,7 +109,6 @@ static int __init tc_probe(struct platform_device *pdev)
 	struct atmel_tc *tc;
 	struct clk *clk;
 	int irq;
-	struct resource *r;
 	unsigned int i;
 
 	if (of_get_child_count(pdev->dev.of_node))

@@ -133,8 +132,7 @@ static int __init tc_probe(struct platform_device *pdev)
 	if (IS_ERR(tc->slow_clk))
 		return PTR_ERR(tc->slow_clk);
 
-	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	tc->regs = devm_ioremap_resource(&pdev->dev, r);
+	tc->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(tc->regs))
 		return PTR_ERR(tc->regs);
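
The tclib change is the common mechanical conversion: devm_platform_ioremap_resource(pdev, index) wraps the platform_get_resource() + devm_ioremap_resource() pair in one call, so the local struct resource * can be dropped. The equivalence, as a sketch:

	/* before */
	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, r);

	/* after: same semantics, one call, no local 'r' needed */
	base = devm_platform_ioremap_resource(pdev, 0);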

@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_MISC_ALCOR_PCI)	+= alcor_pci.o
 obj-$(CONFIG_MISC_RTSX_PCI)	+= rtsx_pci.o
-rtsx_pci-objs := rtsx_pcr.o rts5209.o rts5229.o rtl8411.o rts5227.o rts5249.o rts5260.o
+rtsx_pci-objs := rtsx_pcr.o rts5209.o rts5229.o rtl8411.o rts5227.o rts5249.o rts5260.o rts5261.o
 obj-$(CONFIG_MISC_RTSX_USB)	+= rtsx_usb.o

@@ -191,7 +191,6 @@ static int sd_set_sample_push_timing_sd30(struct rtsx_pcr *pcr)
 
 static int rts5260_card_power_on(struct rtsx_pcr *pcr, int card)
 {
-	int err = 0;
 	struct rtsx_cr_option *option = &pcr->option;
 
 	if (option->ocp_en)

@@ -231,7 +230,7 @@ static int rts5260_card_power_on(struct rtsx_pcr *pcr, int card)
 
 	rtsx_pci_write_register(pcr, REG_PRE_RW_MODE, EN_INFINITE_MODE, 0);
 
-	return err;
+	return 0;
 }
 
 static int rts5260_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)

@@ -0,0 +1,792 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* Driver for Realtek PCI-Express card reader
 *
 * Copyright(c) 2018-2019 Realtek Semiconductor Corp. All rights reserved.
 *
 * Author:
 *   Rui FENG <rui_feng@realsil.com.cn>
 *   Wei WANG <wei_wang@realsil.com.cn>
 */

#include <linux/module.h>
#include <linux/delay.h>
#include <linux/rtsx_pci.h>

#include "rts5261.h"
#include "rtsx_pcr.h"

static u8 rts5261_get_ic_version(struct rtsx_pcr *pcr)
{
	u8 val;

	rtsx_pci_read_register(pcr, DUMMY_REG_RESET_0, &val);
	return val & IC_VERSION_MASK;
}

static void rts5261_fill_driving(struct rtsx_pcr *pcr, u8 voltage)
{
	u8 driving_3v3[4][3] = {
		{0x13, 0x13, 0x13},
		{0x96, 0x96, 0x96},
		{0x7F, 0x7F, 0x7F},
		{0x96, 0x96, 0x96},
	};
	u8 driving_1v8[4][3] = {
		{0x99, 0x99, 0x99},
		{0x3A, 0x3A, 0x3A},
		{0xE6, 0xE6, 0xE6},
		{0xB3, 0xB3, 0xB3},
	};
	u8 (*driving)[3], drive_sel;

	if (voltage == OUTPUT_3V3) {
		driving = driving_3v3;
		drive_sel = pcr->sd30_drive_sel_3v3;
	} else {
		driving = driving_1v8;
		drive_sel = pcr->sd30_drive_sel_1v8;
	}

	rtsx_pci_write_register(pcr, SD30_CLK_DRIVE_SEL,
			0xFF, driving[drive_sel][0]);

	rtsx_pci_write_register(pcr, SD30_CMD_DRIVE_SEL,
			0xFF, driving[drive_sel][1]);

	rtsx_pci_write_register(pcr, SD30_DAT_DRIVE_SEL,
			0xFF, driving[drive_sel][2]);
}

static void rtsx5261_fetch_vendor_settings(struct rtsx_pcr *pcr)
{
	u32 reg;

	/* 0x814~0x817 */
	rtsx_pci_read_config_dword(pcr, PCR_SETTING_REG2, &reg);
	pcr_dbg(pcr, "Cfg 0x%x: 0x%x\n", PCR_SETTING_REG2, reg);

	if (!rts5261_vendor_setting_valid(reg)) {
		pcr_dbg(pcr, "skip fetch vendor setting\n");
		return;
	}

	pcr->card_drive_sel &= 0x3F;
	pcr->card_drive_sel |= rts5261_reg_to_card_drive_sel(reg);

	if (rts5261_reg_check_reverse_socket(reg))
		pcr->flags |= PCR_REVERSE_SOCKET;

	/* 0x724~0x727 */
	rtsx_pci_read_config_dword(pcr, PCR_SETTING_REG1, &reg);
	pcr_dbg(pcr, "Cfg 0x%x: 0x%x\n", PCR_SETTING_REG1, reg);

	pcr->aspm_en = rts5261_reg_to_aspm(reg);
	pcr->sd30_drive_sel_1v8 = rts5261_reg_to_sd30_drive_sel_1v8(reg);
	pcr->sd30_drive_sel_3v3 = rts5261_reg_to_sd30_drive_sel_3v3(reg);
}

static void rts5261_force_power_down(struct rtsx_pcr *pcr, u8 pm_state)
{
	/* Set relink_time to 0 */
	rtsx_pci_write_register(pcr, AUTOLOAD_CFG_BASE + 1, MASK_8_BIT_DEF, 0);
	rtsx_pci_write_register(pcr, AUTOLOAD_CFG_BASE + 2, MASK_8_BIT_DEF, 0);
	rtsx_pci_write_register(pcr, AUTOLOAD_CFG_BASE + 3,
			RELINK_TIME_MASK, 0);

	if (pm_state == HOST_ENTER_S3)
		rtsx_pci_write_register(pcr, pcr->reg_pm_ctrl3,
				D3_DELINK_MODE_EN, D3_DELINK_MODE_EN);

	rtsx_pci_write_register(pcr, RTS5261_REG_FPDCTL,
			SSC_POWER_DOWN, SSC_POWER_DOWN);
}

static int rts5261_enable_auto_blink(struct rtsx_pcr *pcr)
{
	return rtsx_pci_write_register(pcr, OLT_LED_CTL,
			LED_SHINE_MASK, LED_SHINE_EN);
}

static int rts5261_disable_auto_blink(struct rtsx_pcr *pcr)
{
	return rtsx_pci_write_register(pcr, OLT_LED_CTL,
			LED_SHINE_MASK, LED_SHINE_DISABLE);
}

static int rts5261_turn_on_led(struct rtsx_pcr *pcr)
{
	return rtsx_pci_write_register(pcr, GPIO_CTL,
			0x02, 0x02);
}

static int rts5261_turn_off_led(struct rtsx_pcr *pcr)
{
	return rtsx_pci_write_register(pcr, GPIO_CTL,
			0x02, 0x00);
}

/* SD Pull Control Enable:
 *     SD_DAT[3:0] ==> pull up
 *     SD_CD       ==> pull up
 *     SD_WP       ==> pull up
 *     SD_CMD      ==> pull up
 *     SD_CLK      ==> pull down
 */
static const u32 rts5261_sd_pull_ctl_enable_tbl[] = {
	RTSX_REG_PAIR(CARD_PULL_CTL2, 0xAA),
	RTSX_REG_PAIR(CARD_PULL_CTL3, 0xE9),
	0,
};

/* SD Pull Control Disable:
 *     SD_DAT[3:0] ==> pull down
 *     SD_CD       ==> pull up
 *     SD_WP       ==> pull down
 *     SD_CMD      ==> pull down
 *     SD_CLK      ==> pull down
 */
static const u32 rts5261_sd_pull_ctl_disable_tbl[] = {
	RTSX_REG_PAIR(CARD_PULL_CTL2, 0x55),
	RTSX_REG_PAIR(CARD_PULL_CTL3, 0xD5),
	0,
};

static int rts5261_sd_set_sample_push_timing_sd30(struct rtsx_pcr *pcr)
{
	rtsx_pci_write_register(pcr, SD_CFG1, SD_MODE_SELECT_MASK
		| SD_ASYNC_FIFO_NOT_RST, SD_30_MODE | SD_ASYNC_FIFO_NOT_RST);
	rtsx_pci_write_register(pcr, CLK_CTL, CLK_LOW_FREQ, CLK_LOW_FREQ);
	rtsx_pci_write_register(pcr, CARD_CLK_SOURCE, 0xFF,
			CRC_VAR_CLK0 | SD30_FIX_CLK | SAMPLE_VAR_CLK1);
	rtsx_pci_write_register(pcr, CLK_CTL, CLK_LOW_FREQ, 0);

	return 0;
}

static int rts5261_card_power_on(struct rtsx_pcr *pcr, int card)
{
	struct rtsx_cr_option *option = &pcr->option;

	if (option->ocp_en)
		rtsx_pci_enable_ocp(pcr);

	rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG1,
			RTS5261_LDO1_TUNE_MASK, RTS5261_LDO1_33);
	rtsx_pci_write_register(pcr, RTS5261_LDO1233318_POW_CTL,
			RTS5261_LDO1_POWERON, RTS5261_LDO1_POWERON);

	rtsx_pci_write_register(pcr, RTS5261_LDO1233318_POW_CTL,
			RTS5261_LDO3318_POWERON, RTS5261_LDO3318_POWERON);

	msleep(20);

	rtsx_pci_write_register(pcr, CARD_OE, SD_OUTPUT_EN, SD_OUTPUT_EN);

	/* Initialize SD_CFG1 register */
	rtsx_pci_write_register(pcr, SD_CFG1, 0xFF,
			SD_CLK_DIVIDE_128 | SD_20_MODE | SD_BUS_WIDTH_1BIT);

	rtsx_pci_write_register(pcr, SD_SAMPLE_POINT_CTL,
			0xFF, SD20_RX_POS_EDGE);
	rtsx_pci_write_register(pcr, SD_PUSH_POINT_CTL, 0xFF, 0);
	rtsx_pci_write_register(pcr, CARD_STOP, SD_STOP | SD_CLR_ERR,
			SD_STOP | SD_CLR_ERR);

	/* Reset SD_CFG3 register */
	rtsx_pci_write_register(pcr, SD_CFG3, SD30_CLK_END_EN, 0);
	rtsx_pci_write_register(pcr, REG_SD_STOP_SDCLK_CFG,
			SD30_CLK_STOP_CFG_EN | SD30_CLK_STOP_CFG1 |
			SD30_CLK_STOP_CFG0, 0);

	if (pcr->extra_caps & EXTRA_CAPS_SD_SDR50 ||
	    pcr->extra_caps & EXTRA_CAPS_SD_SDR104)
		rts5261_sd_set_sample_push_timing_sd30(pcr);

	return 0;
}

static int rts5261_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
{
	int err;
	u16 val = 0;

	rtsx_pci_write_register(pcr, RTS5261_CARD_PWR_CTL,
			RTS5261_PUPDC, RTS5261_PUPDC);

	switch (voltage) {
	case OUTPUT_3V3:
		rtsx_pci_read_phy_register(pcr, PHY_TUNE, &val);
		val |= PHY_TUNE_SDBUS_33;
		err = rtsx_pci_write_phy_register(pcr, PHY_TUNE, val);
		if (err < 0)
			return err;

		rtsx_pci_write_register(pcr, RTS5261_DV3318_CFG,
				RTS5261_DV3318_TUNE_MASK, RTS5261_DV3318_33);
		rtsx_pci_write_register(pcr, SD_PAD_CTL,
				SD_IO_USING_1V8, 0);
		break;
	case OUTPUT_1V8:
		rtsx_pci_read_phy_register(pcr, PHY_TUNE, &val);
		val &= ~PHY_TUNE_SDBUS_33;
		err = rtsx_pci_write_phy_register(pcr, PHY_TUNE, val);
		if (err < 0)
			return err;

		rtsx_pci_write_register(pcr, RTS5261_DV3318_CFG,
				RTS5261_DV3318_TUNE_MASK, RTS5261_DV3318_18);
		rtsx_pci_write_register(pcr, SD_PAD_CTL,
				SD_IO_USING_1V8, SD_IO_USING_1V8);
		break;
	default:
		return -EINVAL;
	}

	/* set pad drive */
	rts5261_fill_driving(pcr, voltage);

	return 0;
}

static void rts5261_stop_cmd(struct rtsx_pcr *pcr)
{
	rtsx_pci_writel(pcr, RTSX_HCBCTLR, STOP_CMD);
	rtsx_pci_writel(pcr, RTSX_HDBCTLR, STOP_DMA);
	rtsx_pci_write_register(pcr, RTS5260_DMA_RST_CTL_0,
			RTS5260_DMA_RST | RTS5260_ADMA3_RST,
			RTS5260_DMA_RST | RTS5260_ADMA3_RST);
	rtsx_pci_write_register(pcr, RBCTL, RB_FLUSH, RB_FLUSH);
}

static void rts5261_card_before_power_off(struct rtsx_pcr *pcr)
{
	rts5261_stop_cmd(pcr);
	rts5261_switch_output_voltage(pcr, OUTPUT_3V3);
}

static void rts5261_enable_ocp(struct rtsx_pcr *pcr)
{
	u8 val = 0;

	val = SD_OCP_INT_EN | SD_DETECT_EN;
	rtsx_pci_write_register(pcr, REG_OCPCTL, 0xFF, val);
}

static void rts5261_disable_ocp(struct rtsx_pcr *pcr)
{
	u8 mask = 0;

	mask = SD_OCP_INT_EN | SD_DETECT_EN;
	rtsx_pci_write_register(pcr, REG_OCPCTL, mask, 0);
	rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG0,
			RTS5261_LDO1_OCP_EN | RTS5261_LDO1_OCP_LMT_EN, 0);
}

static int rts5261_card_power_off(struct rtsx_pcr *pcr, int card)
{
	int err = 0;

	rts5261_card_before_power_off(pcr);
	err = rtsx_pci_write_register(pcr, RTS5261_LDO1233318_POW_CTL,
			RTS5261_LDO_POWERON_MASK, 0);

	if (pcr->option.ocp_en)
		rtsx_pci_disable_ocp(pcr);

	return err;
}

static void rts5261_init_ocp(struct rtsx_pcr *pcr)
{
	struct rtsx_cr_option *option = &pcr->option;

	if (option->ocp_en) {
		u8 mask, val;

		rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG0,
			RTS5261_LDO1_OCP_EN | RTS5261_LDO1_OCP_LMT_EN,
			RTS5261_LDO1_OCP_EN | RTS5261_LDO1_OCP_LMT_EN);

		rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG0,
			RTS5261_LDO1_OCP_THD_MASK, option->sd_800mA_ocp_thd);

		rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG0,
			RTS5261_LDO1_OCP_LMT_THD_MASK,
			RTS5261_LDO1_LMT_THD_2000);

		mask = SD_OCP_GLITCH_MASK;
		val = pcr->hw_param.ocp_glitch;
		rtsx_pci_write_register(pcr, REG_OCPGLITCH, mask, val);

		rts5261_enable_ocp(pcr);
	} else {
		rtsx_pci_write_register(pcr, RTS5261_LDO1_CFG0,
			RTS5261_LDO1_OCP_EN | RTS5261_LDO1_OCP_LMT_EN, 0);
	}
}

static void rts5261_clear_ocpstat(struct rtsx_pcr *pcr)
{
	u8 mask = 0;
	u8 val = 0;

	mask = SD_OCP_INT_CLR | SD_OC_CLR;
	val = SD_OCP_INT_CLR | SD_OC_CLR;

	rtsx_pci_write_register(pcr, REG_OCPCTL, mask, val);

	udelay(10);
	rtsx_pci_write_register(pcr, REG_OCPCTL, mask, 0);
}

static void rts5261_process_ocp(struct rtsx_pcr *pcr)
{
	if (!pcr->option.ocp_en)
		return;

	rtsx_pci_get_ocpstat(pcr, &pcr->ocp_stat);

	if (pcr->ocp_stat & (SD_OC_NOW | SD_OC_EVER)) {
		rts5261_card_power_off(pcr, RTSX_SD_CARD);
		rtsx_pci_write_register(pcr, CARD_OE, SD_OUTPUT_EN, 0);
		rts5261_clear_ocpstat(pcr);
		pcr->ocp_stat = 0;
	}
}

static int rts5261_init_from_hw(struct rtsx_pcr *pcr)
{
	int retval;
	u32 lval, i;
	u8 valid, efuse_valid, tmp;

	rtsx_pci_write_register(pcr, RTS5261_REG_PME_FORCE_CTL,
		REG_EFUSE_POR | REG_EFUSE_POWER_MASK,
		REG_EFUSE_POR | REG_EFUSE_POWERON);
	udelay(1);
	rtsx_pci_write_register(pcr, RTS5261_EFUSE_ADDR,
		RTS5261_EFUSE_ADDR_MASK, 0x00);
	rtsx_pci_write_register(pcr, RTS5261_EFUSE_CTL,
		RTS5261_EFUSE_ENABLE | RTS5261_EFUSE_MODE_MASK,
		RTS5261_EFUSE_ENABLE);

	/* Wait transfer end */
	for (i = 0; i < MAX_RW_REG_CNT; i++) {
		rtsx_pci_read_register(pcr, RTS5261_EFUSE_CTL, &tmp);
		if ((tmp & 0x80) == 0)
			break;
	}
	rtsx_pci_read_register(pcr, RTS5261_EFUSE_READ_DATA, &tmp);
	efuse_valid = ((tmp & 0x0C) >> 2);
	pcr_dbg(pcr, "Load efuse valid: 0x%x\n", efuse_valid);

	if (efuse_valid == 0) {
		retval = rtsx_pci_read_config_dword(pcr,
			PCR_SETTING_REG2, &lval);
		if (retval != 0)
			pcr_dbg(pcr, "read 0x814 DW fail\n");
		pcr_dbg(pcr, "DW from 0x814: 0x%x\n", lval);
		/* 0x816 */
		valid = (u8)((lval >> 16) & 0x03);
		pcr_dbg(pcr, "0x816: %d\n", valid);
	}
	rtsx_pci_write_register(pcr, RTS5261_REG_PME_FORCE_CTL,
		REG_EFUSE_POR, 0);
	pcr_dbg(pcr, "Disable efuse por!\n");

	rtsx_pci_read_config_dword(pcr, PCR_SETTING_REG2, &lval);
	lval = lval & 0x00FFFFFF;
	retval = rtsx_pci_write_config_dword(pcr, PCR_SETTING_REG2, lval);
	if (retval != 0)
		pcr_dbg(pcr, "write config fail\n");

	return retval;
}

static void rts5261_init_from_cfg(struct rtsx_pcr *pcr)
{
	u32 lval;
	struct rtsx_cr_option *option = &pcr->option;

	rtsx_pci_read_config_dword(pcr, PCR_ASPM_SETTING_REG1, &lval);

	if (lval & ASPM_L1_1_EN_MASK)
		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
	else
		rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN);

	if (lval & ASPM_L1_2_EN_MASK)
		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
	else
		rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN);

	if (lval & PM_L1_1_EN_MASK)
		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
	else
		rtsx_clear_dev_flag(pcr, PM_L1_1_EN);

	if (lval & PM_L1_2_EN_MASK)
		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
	else
		rtsx_clear_dev_flag(pcr, PM_L1_2_EN);

	rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0xFF, 0);
	if (option->ltr_en) {
		u16 val;

		pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &val);
		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
			option->ltr_enabled = true;
			option->ltr_active = true;
			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
		} else {
			option->ltr_enabled = false;
		}
	}

	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
				| PM_L1_1_EN | PM_L1_2_EN))
		option->force_clkreq_0 = false;
	else
		option->force_clkreq_0 = true;
}

static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
{
	struct rtsx_cr_option *option = &pcr->option;

	rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG1,
			CD_RESUME_EN_MASK, CD_RESUME_EN_MASK);

	rts5261_init_from_cfg(pcr);
	rts5261_init_from_hw(pcr);

	/* power off efuse */
	rtsx_pci_write_register(pcr, RTS5261_REG_PME_FORCE_CTL,
			REG_EFUSE_POWER_MASK, REG_EFUSE_POWEROFF);
	rtsx_pci_write_register(pcr, L1SUB_CONFIG1,
			AUX_CLK_ACTIVE_SEL_MASK, MAC_CKSW_DONE);
	rtsx_pci_write_register(pcr, L1SUB_CONFIG3, 0xFF, 0);

	rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG4,
			RTS5261_AUX_CLK_16M_EN, 0);

	/* Release PRSNT# */
	rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG4,
			RTS5261_FORCE_PRSNT_LOW, 0);
	rtsx_pci_write_register(pcr, FUNC_FORCE_CTL,
			FUNC_FORCE_UPME_XMT_DBG, FUNC_FORCE_UPME_XMT_DBG);

	rtsx_pci_write_register(pcr, PCLK_CTL,
			PCLK_MODE_SEL, PCLK_MODE_SEL);

	rtsx_pci_write_register(pcr, PM_EVENT_DEBUG, PME_DEBUG_0, PME_DEBUG_0);
	rtsx_pci_write_register(pcr, PM_CLK_FORCE_CTL, CLK_PM_EN, CLK_PM_EN);

	/* LED shine disabled, set initial shine cycle period */
	rtsx_pci_write_register(pcr, OLT_LED_CTL, 0x0F, 0x02);

	/* Configure driving */
	rts5261_fill_driving(pcr, OUTPUT_3V3);

	/*
	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
	 * to drive low, and we forcibly request clock.
	 */
	if (option->force_clkreq_0)
		rtsx_pci_write_register(pcr, PETXCFG,
				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
	else
		rtsx_pci_write_register(pcr, PETXCFG,
				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);

	rtsx_pci_write_register(pcr, pcr->reg_pm_ctrl3, 0x10, 0x00);
	rtsx_pci_write_register(pcr, RTS5261_REG_PME_FORCE_CTL,
			FORCE_PM_CONTROL | FORCE_PM_VALUE, FORCE_PM_CONTROL);

	/* Clear Enter RTD3_cold Information*/
	rtsx_pci_write_register(pcr, RTS5261_FW_CTL,
			RTS5261_INFORM_RTD3_COLD, 0);

	return 0;
}

static void rts5261_enable_aspm(struct rtsx_pcr *pcr, bool enable)
{
	struct rtsx_cr_option *option = &pcr->option;
	u8 val = 0;

	if (pcr->aspm_enabled == enable)
		return;

	if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
		val = pcr->aspm_en;
		rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
				ASPM_MASK_NEG, val);
	} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
		u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;

		val = FORCE_ASPM_CTL0;
		val |= (pcr->aspm_en & 0x02);
		rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
		val = pcr->aspm_en;
		rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
				ASPM_MASK_NEG, val);
	}
	pcr->aspm_enabled = enable;
}

static void rts5261_disable_aspm(struct rtsx_pcr *pcr, bool enable)
{
	struct rtsx_cr_option *option = &pcr->option;
	u8 val = 0;

	if (pcr->aspm_enabled == enable)
		return;

	if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
		val = 0;
		rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
				ASPM_MASK_NEG, val);
	} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
		u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;

		val = 0;
		rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
				ASPM_MASK_NEG, val);
		val = FORCE_ASPM_CTL0;
		rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
	}
	rtsx_pci_write_register(pcr, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
	udelay(10);
	pcr->aspm_enabled = enable;
}

static void rts5261_set_aspm(struct rtsx_pcr *pcr, bool enable)
{
	if (enable)
		rts5261_enable_aspm(pcr, true);
	else
		rts5261_disable_aspm(pcr, false);
}

static void rts5261_set_l1off_cfg_sub_d0(struct rtsx_pcr *pcr, int active)
{
	struct rtsx_cr_option *option = &pcr->option;
	int aspm_L1_1, aspm_L1_2;
	u8 val = 0;

	aspm_L1_1 = rtsx_check_dev_flag(pcr, ASPM_L1_1_EN);
	aspm_L1_2 = rtsx_check_dev_flag(pcr, ASPM_L1_2_EN);

	if (active) {
		/* run, latency: 60us */
		if (aspm_L1_1)
			val = option->ltr_l1off_snooze_sspwrgate;
	} else {
		/* l1off, latency: 300us */
		if (aspm_L1_2)
			val = option->ltr_l1off_sspwrgate;
	}

	rtsx_set_l1off_sub(pcr, val);
}

static const struct pcr_ops rts5261_pcr_ops = {
	.fetch_vendor_settings = rtsx5261_fetch_vendor_settings,
	.turn_on_led = rts5261_turn_on_led,
	.turn_off_led = rts5261_turn_off_led,
	.extra_init_hw = rts5261_extra_init_hw,
	.enable_auto_blink = rts5261_enable_auto_blink,
	.disable_auto_blink = rts5261_disable_auto_blink,
	.card_power_on = rts5261_card_power_on,
	.card_power_off = rts5261_card_power_off,
	.switch_output_voltage = rts5261_switch_output_voltage,
	.force_power_down = rts5261_force_power_down,
	.stop_cmd = rts5261_stop_cmd,
	.set_aspm = rts5261_set_aspm,
	.set_l1off_cfg_sub_d0 = rts5261_set_l1off_cfg_sub_d0,
	.enable_ocp = rts5261_enable_ocp,
	.disable_ocp = rts5261_disable_ocp,
	.init_ocp = rts5261_init_ocp,
	.process_ocp = rts5261_process_ocp,
	.clear_ocpstat = rts5261_clear_ocpstat,
};

static inline u8 double_ssc_depth(u8 depth)
{
	return ((depth > 1) ? (depth - 1) : depth);
}

int rts5261_pci_switch_clock(struct rtsx_pcr *pcr, unsigned int card_clock,
		u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk)
{
	int err, clk;
	u8 n, clk_divider, mcu_cnt, div;
	static const u8 depth[] = {
		[RTSX_SSC_DEPTH_4M] = RTS5261_SSC_DEPTH_4M,
		[RTSX_SSC_DEPTH_2M] = RTS5261_SSC_DEPTH_2M,
		[RTSX_SSC_DEPTH_1M] = RTS5261_SSC_DEPTH_1M,
		[RTSX_SSC_DEPTH_500K] = RTS5261_SSC_DEPTH_512K,
	};

	if (initial_mode) {
		/* We use 250k(around) here, in initial stage */
		clk_divider = SD_CLK_DIVIDE_128;
		card_clock = 30000000;
	} else {
		clk_divider = SD_CLK_DIVIDE_0;
	}
	err = rtsx_pci_write_register(pcr, SD_CFG1,
			SD_CLK_DIVIDE_MASK, clk_divider);
	if (err < 0)
		return err;

	card_clock /= 1000000;
	pcr_dbg(pcr, "Switch card clock to %dMHz\n", card_clock);

	clk = card_clock;
	if (!initial_mode && double_clk)
		clk = card_clock * 2;
	pcr_dbg(pcr, "Internal SSC clock: %dMHz (cur_clock = %d)\n",
		clk, pcr->cur_clock);

	if (clk == pcr->cur_clock)
		return 0;

	if (pcr->ops->conv_clk_and_div_n)
		n = (u8)pcr->ops->conv_clk_and_div_n(clk, CLK_TO_DIV_N);
	else
		n = (u8)(clk - 4);
	if ((clk <= 4) || (n > 396))
		return -EINVAL;

	mcu_cnt = (u8)(125/clk + 3);
	if (mcu_cnt > 15)
		mcu_cnt = 15;

	div = CLK_DIV_1;
	while ((n < MIN_DIV_N_PCR - 4) && (div < CLK_DIV_8)) {
		if (pcr->ops->conv_clk_and_div_n) {
			int dbl_clk = pcr->ops->conv_clk_and_div_n(n,
					DIV_N_TO_CLK) * 2;
			n = (u8)pcr->ops->conv_clk_and_div_n(dbl_clk,
					CLK_TO_DIV_N);
		} else {
			n = (n + 4) * 2 - 4;
		}
		div++;
	}

	n = (n / 2);
	pcr_dbg(pcr, "n = %d, div = %d\n", n, div);

	ssc_depth = depth[ssc_depth];
	if (double_clk)
		ssc_depth = double_ssc_depth(ssc_depth);

	if (ssc_depth) {
		if (div == CLK_DIV_2) {
			if (ssc_depth > 1)
				ssc_depth -= 1;
			else
				ssc_depth = RTS5261_SSC_DEPTH_8M;
		} else if (div == CLK_DIV_4) {
			if (ssc_depth > 2)
				ssc_depth -= 2;
			else
				ssc_depth = RTS5261_SSC_DEPTH_8M;
		} else if (div == CLK_DIV_8) {
			if (ssc_depth > 3)
				ssc_depth -= 3;
			else
				ssc_depth = RTS5261_SSC_DEPTH_8M;
		}
	} else {
		ssc_depth = 0;
	}
	pcr_dbg(pcr, "ssc_depth = %d\n", ssc_depth);

	rtsx_pci_init_cmd(pcr);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CLK_CTL,
			CLK_LOW_FREQ, CLK_LOW_FREQ);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CLK_DIV,
			0xFF, (div << 4) | mcu_cnt);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL1, SSC_RSTB, 0);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL2,
			SSC_DEPTH_MASK, ssc_depth);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_DIV_N_0, 0xFF, n);
	rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL1, SSC_RSTB, SSC_RSTB);
	if (vpclk) {
		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK0_CTL,
				PHASE_NOT_RESET, 0);
		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK1_CTL,
				PHASE_NOT_RESET, 0);
		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK0_CTL,
				PHASE_NOT_RESET, PHASE_NOT_RESET);
		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SD_VPCLK1_CTL,
				PHASE_NOT_RESET, PHASE_NOT_RESET);
	}

	err = rtsx_pci_send_cmd(pcr, 2000);
	if (err < 0)
		return err;

	/* Wait SSC clock stable */
	udelay(SSC_CLOCK_STABLE_WAIT);
	err = rtsx_pci_write_register(pcr, CLK_CTL, CLK_LOW_FREQ, 0);
	if (err < 0)
		return err;

	pcr->cur_clock = clk;
	return 0;
}
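
A worked pass through the divider logic above, since the units are easy to trip over (illustrative numbers; the fallback n = clk - 4 path is assumed, i.e. no conv_clk_and_div_n op is installed):

	/* card_clock = 208 MHz requested, double_clk = false:
	 *   clk = 208              (MHz, after the /1000000)
	 *   n   = clk - 4 = 204    (within the n <= 396 limit)
	 *   mcu_cnt = 125/208 + 3 = 3
	 *   while loop not taken (n is not below MIN_DIV_N_PCR - 4)
	 *   SSC_DIV_N_0 gets n / 2 = 102, CLK_DIV gets div = CLK_DIV_1
	 * When n would fall too low, the loop doubles n and bumps div
	 * (CLK_DIV_2/4/8); the ssc_depth -= 1/2/3 adjustments then keep
	 * the effective spread-spectrum depth roughly constant.
	 */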

void rts5261_init_params(struct rtsx_pcr *pcr)
{
	struct rtsx_cr_option *option = &pcr->option;
	struct rtsx_hw_param *hw_param = &pcr->hw_param;

	pcr->extra_caps = EXTRA_CAPS_SD_SDR50 | EXTRA_CAPS_SD_SDR104;
	pcr->num_slots = 1;
	pcr->ops = &rts5261_pcr_ops;

	pcr->flags = 0;
	pcr->card_drive_sel = RTSX_CARD_DRIVE_DEFAULT;
	pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B;
	pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B;
	pcr->aspm_en = ASPM_L1_EN;
	pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 27, 16);
	pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5);

	pcr->ic_version = rts5261_get_ic_version(pcr);
	pcr->sd_pull_ctl_enable_tbl = rts5261_sd_pull_ctl_enable_tbl;
	pcr->sd_pull_ctl_disable_tbl = rts5261_sd_pull_ctl_disable_tbl;

	pcr->reg_pm_ctrl3 = RTS5261_AUTOLOAD_CFG3;

	option->dev_flags = (LTR_L1SS_PWR_GATE_CHECK_CARD_EN
				| LTR_L1SS_PWR_GATE_EN);
	option->ltr_en = true;

	/* init latency of active, idle, L1OFF to 60us, 300us, 3ms */
	option->ltr_active_latency = LTR_ACTIVE_LATENCY_DEF;
	option->ltr_idle_latency = LTR_IDLE_LATENCY_DEF;
	option->ltr_l1off_latency = LTR_L1OFF_LATENCY_DEF;
	option->l1_snooze_delay = L1_SNOOZE_DELAY_DEF;
	option->ltr_l1off_sspwrgate = 0x7F;
	option->ltr_l1off_snooze_sspwrgate = 0x78;
	option->dev_aspm_mode = DEV_ASPM_DYNAMIC;

	option->ocp_en = 1;
	hw_param->interrupt_en |= SD_OC_INT_EN;
	hw_param->ocp_glitch = SD_OCP_GLITCH_800U;
	option->sd_800mA_ocp_thd = RTS5261_LDO1_OCP_THD_1040;
}

@@ -0,0 +1,233 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Driver for Realtek PCI-Express card reader
 *
 * Copyright(c) 2018-2019 Realtek Semiconductor Corp. All rights reserved.
 *
 * Author:
 *   Rui FENG <rui_feng@realsil.com.cn>
 *   Wei WANG <wei_wang@realsil.com.cn>
 */
#ifndef RTS5261_H
#define RTS5261_H

/*New add*/
#define rts5261_vendor_setting_valid(reg)	((reg) & 0x010000)
#define rts5261_reg_to_aspm(reg)		(((reg) >> 28) ^ 0x03)
#define rts5261_reg_check_reverse_socket(reg)	((reg) & 0x04)
#define rts5261_reg_to_card_drive_sel(reg)	((((reg) >> 6) & 0x01) << 6)
#define rts5261_reg_to_sd30_drive_sel_1v8(reg)	(((reg) >> 22) ^ 0x03)
#define rts5261_reg_to_sd30_drive_sel_3v3(reg)	(((reg) >> 16) ^ 0x03)

#define RTS5261_AUTOLOAD_CFG0		0xFF7B
#define RTS5261_AUTOLOAD_CFG1		0xFF7C
#define RTS5261_AUTOLOAD_CFG2		0xFF7D
#define RTS5261_AUTOLOAD_CFG3		0xFF7E
#define RTS5261_AUTOLOAD_CFG4		0xFF7F
#define RTS5261_FORCE_PRSNT_LOW		(1 << 6)
#define RTS5261_AUX_CLK_16M_EN		(1 << 5)

#define RTS5261_REG_VREF		0xFE97
#define RTS5261_PWD_SUSPND_EN		(1 << 4)

#define RTS5261_PAD_H3L1		0xFF79
#define PAD_GPIO_H3L1			(1 << 3)

/* SSC_CTL2 0xFC12 */
#define RTS5261_SSC_DEPTH_MASK		0x07
#define RTS5261_SSC_DEPTH_DISALBE	0x00
#define RTS5261_SSC_DEPTH_8M		0x01
#define RTS5261_SSC_DEPTH_4M		0x02
#define RTS5261_SSC_DEPTH_2M		0x03
#define RTS5261_SSC_DEPTH_1M		0x04
#define RTS5261_SSC_DEPTH_512K		0x05
#define RTS5261_SSC_DEPTH_256K		0x06
#define RTS5261_SSC_DEPTH_128K		0x07

/* efuse control register */
#define RTS5261_EFUSE_CTL		0xFC30
#define RTS5261_EFUSE_ENABLE		0x80
/* EFUSE_MODE: 0=READ 1=PROGRAM */
#define RTS5261_EFUSE_MODE_MASK		0x40
#define RTS5261_EFUSE_PROGRAM		0x40

#define RTS5261_EFUSE_ADDR		0xFC31
#define RTS5261_EFUSE_ADDR_MASK		0x3F

#define RTS5261_EFUSE_WRITE_DATA	0xFC32
#define RTS5261_EFUSE_READ_DATA		0xFC34

/* DMACTL 0xFE2C */
#define RTS5261_DMA_PACK_SIZE_MASK	0xF0

/* FW config info register */
#define RTS5261_FW_CFG_INFO0		0xFF50
#define RTS5261_FW_EXPRESS_TEST_MASK	(0x01<<0)
#define RTS5261_FW_EA_MODE_MASK		(0x01<<5)

/* FW config register */
#define RTS5261_FW_CFG0			0xFF54
#define RTS5261_FW_ENTER_EXPRESS	(0x01<<0)

#define RTS5261_FW_CFG1			0xFF55
#define RTS5261_SYS_CLK_SEL_MCU_CLK	(0x01<<7)
#define RTS5261_CRC_CLK_SEL_MCU_CLK	(0x01<<6)
#define RTS5261_FAKE_MCU_CLOCK_GATING	(0x01<<5)
/*MCU_bus_mode_sel: 0=real 8051 1=fake mcu*/
#define RTS5261_MCU_BUS_SEL_MASK	(0x01<<4)
/*MCU_clock_sel:VerA 00=aux16M 01=aux400K 1x=REFCLK100M*/
/*MCU_clock_sel:VerB 00=aux400K 01=aux16M 10=REFCLK100M*/
#define RTS5261_MCU_CLOCK_SEL_MASK	(0x03<<2)
#define RTS5261_MCU_CLOCK_SEL_16M	(0x01<<2)
#define RTS5261_MCU_CLOCK_GATING	(0x01<<1)
#define RTS5261_DRIVER_ENABLE_FW	(0x01<<0)

/* FW status register */
#define RTS5261_FW_STATUS		0xFF56
#define RTS5261_EXPRESS_LINK_FAIL_MASK	(0x01<<7)

/* FW control register */
#define RTS5261_FW_CTL			0xFF5F
#define RTS5261_INFORM_RTD3_COLD	(0x01<<5)

#define RTS5261_REG_FPDCTL		0xFF60

#define RTS5261_REG_LDO12_CFG		0xFF6E
#define RTS5261_LDO12_VO_TUNE_MASK	(0x07<<1)
#define RTS5261_LDO12_115		(0x03<<1)
#define RTS5261_LDO12_120		(0x04<<1)
#define RTS5261_LDO12_125		(0x05<<1)
#define RTS5261_LDO12_130		(0x06<<1)
#define RTS5261_LDO12_135		(0x07<<1)

/* LDO control register */
#define RTS5261_CARD_PWR_CTL		0xFD50
#define RTS5261_SD_CLK_ISO		(0x01<<7)
#define RTS5261_PAD_SD_DAT_FW_CTRL	(0x01<<6)
#define RTS5261_PUPDC			(0x01<<5)
#define RTS5261_SD_CMD_ISO		(0x01<<4)
#define RTS5261_SD_DAT_ISO_MASK		(0x0F<<0)

#define RTS5261_LDO1233318_POW_CTL	0xFF70
#define RTS5261_LDO3318_POWERON		(0x01<<3)
#define RTS5261_LDO3_POWERON		(0x01<<2)
#define RTS5261_LDO2_POWERON		(0x01<<1)
#define RTS5261_LDO1_POWERON		(0x01<<0)
#define RTS5261_LDO_POWERON_MASK	(0x0F<<0)

#define RTS5261_DV3318_CFG		0xFF71
#define RTS5261_DV3318_TUNE_MASK	(0x07<<4)
#define RTS5261_DV3318_18		(0x02<<4)
#define RTS5261_DV3318_19		(0x04<<4)
#define RTS5261_DV3318_33		(0x07<<4)

#define RTS5261_LDO1_CFG0		0xFF72
#define RTS5261_LDO1_OCP_THD_MASK	(0x07<<5)
#define RTS5261_LDO1_OCP_EN		(0x01<<4)
#define RTS5261_LDO1_OCP_LMT_THD_MASK	(0x03<<2)
#define RTS5261_LDO1_OCP_LMT_EN		(0x01<<1)

/* CRD6603-433 190319 request changed */
#define RTS5261_LDO1_OCP_THD_740	(0x00<<5)
#define RTS5261_LDO1_OCP_THD_800	(0x01<<5)
#define RTS5261_LDO1_OCP_THD_860	(0x02<<5)
#define RTS5261_LDO1_OCP_THD_920	(0x03<<5)
#define RTS5261_LDO1_OCP_THD_980	(0x04<<5)
#define RTS5261_LDO1_OCP_THD_1040	(0x05<<5)
#define RTS5261_LDO1_OCP_THD_1100	(0x06<<5)
#define RTS5261_LDO1_OCP_THD_1160	(0x07<<5)

#define RTS5261_LDO1_LMT_THD_450	(0x00<<2)
#define RTS5261_LDO1_LMT_THD_1000	(0x01<<2)
#define RTS5261_LDO1_LMT_THD_1500	(0x02<<2)
#define RTS5261_LDO1_LMT_THD_2000	(0x03<<2)

#define RTS5261_LDO1_CFG1		0xFF73
#define RTS5261_LDO1_TUNE_MASK		(0x07<<1)
#define RTS5261_LDO1_18			(0x05<<1)
#define RTS5261_LDO1_33			(0x07<<1)
#define RTS5261_LDO1_PWD_MASK		(0x01<<0)

#define RTS5261_LDO2_CFG0		0xFF74
#define RTS5261_LDO2_OCP_THD_MASK	(0x07<<5)
#define RTS5261_LDO2_OCP_EN		(0x01<<4)
#define RTS5261_LDO2_OCP_LMT_THD_MASK	(0x03<<2)
#define RTS5261_LDO2_OCP_LMT_EN		(0x01<<1)

#define RTS5261_LDO2_OCP_THD_620	(0x00<<5)
#define RTS5261_LDO2_OCP_THD_650	(0x01<<5)
#define RTS5261_LDO2_OCP_THD_680	(0x02<<5)
#define RTS5261_LDO2_OCP_THD_720	(0x03<<5)
#define RTS5261_LDO2_OCP_THD_750	(0x04<<5)
#define RTS5261_LDO2_OCP_THD_780	(0x05<<5)
#define RTS5261_LDO2_OCP_THD_810	(0x06<<5)
#define RTS5261_LDO2_OCP_THD_840	(0x07<<5)

#define RTS5261_LDO2_CFG1		0xFF75
#define RTS5261_LDO2_TUNE_MASK		(0x07<<1)
#define RTS5261_LDO2_18			(0x05<<1)
#define RTS5261_LDO2_33			(0x07<<1)
#define RTS5261_LDO2_PWD_MASK		(0x01<<0)

#define RTS5261_LDO3_CFG0		0xFF76
#define RTS5261_LDO3_OCP_THD_MASK	(0x07<<5)
#define RTS5261_LDO3_OCP_EN		(0x01<<4)
#define RTS5261_LDO3_OCP_LMT_THD_MASK	(0x03<<2)
#define RTS5261_LDO3_OCP_LMT_EN		(0x01<<1)

#define RTS5261_LDO3_OCP_THD_620	(0x00<<5)
#define RTS5261_LDO3_OCP_THD_650	(0x01<<5)
#define RTS5261_LDO3_OCP_THD_680	(0x02<<5)
#define RTS5261_LDO3_OCP_THD_720	(0x03<<5)
#define RTS5261_LDO3_OCP_THD_750	(0x04<<5)
#define RTS5261_LDO3_OCP_THD_780	(0x05<<5)
#define RTS5261_LDO3_OCP_THD_810	(0x06<<5)
#define RTS5261_LDO3_OCP_THD_840	(0x07<<5)

#define RTS5261_LDO3_CFG1		0xFF77
#define RTS5261_LDO3_TUNE_MASK		(0x07<<1)
#define RTS5261_LDO3_18			(0x05<<1)
#define RTS5261_LDO3_33			(0x07<<1)
#define RTS5261_LDO3_PWD_MASK		(0x01<<0)

#define RTS5261_REG_PME_FORCE_CTL	0xFF78
#define FORCE_PM_CONTROL		0x20
#define FORCE_PM_VALUE			0x10
#define REG_EFUSE_BYPASS		0x08
#define REG_EFUSE_POR			0x04
#define REG_EFUSE_POWER_MASK		0x03
#define REG_EFUSE_POWERON		0x03
#define REG_EFUSE_POWEROFF		0x00

/* Single LUN, support SD/SD EXPRESS */
#define DEFAULT_SINGLE			0
#define SD_LUN				1
#define SD_EXPRESS_LUN			2

/* For Change_FPGA_SSCClock Function */
#define MULTIPLY_BY_1			0x00
#define MULTIPLY_BY_2			0x01
#define MULTIPLY_BY_3			0x02
#define MULTIPLY_BY_4			0x03
#define MULTIPLY_BY_5			0x04
#define MULTIPLY_BY_6			0x05
#define MULTIPLY_BY_7			0x06
#define MULTIPLY_BY_8			0x07
#define MULTIPLY_BY_9			0x08
#define MULTIPLY_BY_10			0x09

#define DIVIDE_BY_2			0x01
#define DIVIDE_BY_3			0x02
#define DIVIDE_BY_4			0x03
#define DIVIDE_BY_5			0x04
#define DIVIDE_BY_6			0x05
#define DIVIDE_BY_7			0x06
#define DIVIDE_BY_8			0x07
#define DIVIDE_BY_9			0x08
#define DIVIDE_BY_10			0x09

int rts5261_pci_switch_clock(struct rtsx_pcr *pcr, unsigned int card_clock,
		u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk);

#endif /* RTS5261_H */
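
The reg_to/reg_check macros at the top of the header decode packed fields out of the vendor-setting config dwords; the ^ 0x03 undoes a two-bit field that the hardware stores inverted. A quick illustration of the bit plumbing (single made-up value; in the driver the valid bit comes from PCR_SETTING_REG2 and the ASPM field from PCR_SETTING_REG1):

	/* reg = 0x30010000:
	 *   rts5261_vendor_setting_valid(reg): reg & 0x010000 != 0, so
	 *     the vendor settings are taken rather than the defaults
	 *   rts5261_reg_to_aspm(reg): (0x30010000 >> 28) ^ 0x03
	 *     = 0x3 ^ 0x3 = 0x0, i.e. an all-ones hardware field
	 *     decodes to 0 once the inversion is undone
	 */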

@@ -22,6 +22,7 @@
 #include <asm/unaligned.h>
 
 #include "rtsx_pcr.h"
+#include "rts5261.h"
 
 static bool msi_en = true;
 module_param(msi_en, bool, S_IRUGO | S_IWUSR);

@@ -34,9 +35,6 @@ static struct mfd_cell rtsx_pcr_cells[] = {
 	[RTSX_SD_CARD] = {
 		.name = DRV_NAME_RTSX_PCI_SDMMC,
 	},
-	[RTSX_MS_CARD] = {
-		.name = DRV_NAME_RTSX_PCI_MS,
-	},
 };
 
 static const struct pci_device_id rtsx_pci_ids[] = {

@@ -51,6 +49,7 @@ static const struct pci_device_id rtsx_pci_ids[] = {
 	{ PCI_DEVICE(0x10EC, 0x524A), PCI_CLASS_OTHERS << 16, 0xFF0000 },
 	{ PCI_DEVICE(0x10EC, 0x525A), PCI_CLASS_OTHERS << 16, 0xFF0000 },
 	{ PCI_DEVICE(0x10EC, 0x5260), PCI_CLASS_OTHERS << 16, 0xFF0000 },
+	{ PCI_DEVICE(0x10EC, 0x5261), PCI_CLASS_OTHERS << 16, 0xFF0000 },
 	{ 0, }
 };

@@ -438,8 +437,16 @@ static void rtsx_pci_add_sg_tbl(struct rtsx_pcr *pcr,
 
 	if (end)
 		option |= RTSX_SG_END;
-	val = ((u64)addr << 32) | ((u64)len << 12) | option;
 
+	if (PCI_PID(pcr) == PID_5261) {
+		if (len > 0xFFFF)
+			val = ((u64)addr << 32) | (((u64)len & 0xFFFF) << 16)
+				| (((u64)len >> 16) << 6) | option;
+		else
+			val = ((u64)addr << 32) | ((u64)len << 16) | option;
+	} else {
+		val = ((u64)addr << 32) | ((u64)len << 12) | option;
+	}
 	put_unaligned_le64(val, ptr);
 	pcr->sgi++;
 }
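
The new PID_5261 branch repacks the scatter-gather descriptor because the RTS5261 moves the length field: bits 16 and up carry the low 16 bits of len, and bits 6..15 carry the overflow. A worked example (illustrative values only, not driver code):

	/* len = 0x12345 (> 0xFFFF), addr = 0x80000000, option = 0:
	 *   low 16 bits of len  -> 0x2345 << 16 = 0x23450000
	 *   high bits of len    -> 0x1    << 6  = 0x40
	 *   val = (0x80000000ULL << 32) | 0x23450000 | 0x40
	 * whereas every other supported chip keeps the original
	 * ((u64)len << 12) layout of the else branch.
	 */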
|
@ -684,7 +691,6 @@ int rtsx_pci_card_pull_ctl_disable(struct rtsx_pcr *pcr, int card)
|
|||
else
|
||||
return -EINVAL;
|
||||
|
||||
|
||||
return rtsx_pci_set_pull_ctl(pcr, tbl);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(rtsx_pci_card_pull_ctl_disable);
|
||||
|
@ -735,6 +741,10 @@ int rtsx_pci_switch_clock(struct rtsx_pcr *pcr, unsigned int card_clock,
|
|||
[RTSX_SSC_DEPTH_250K] = SSC_DEPTH_250K,
|
||||
};
|
||||
|
||||
if (PCI_PID(pcr) == PID_5261)
|
||||
return rts5261_pci_switch_clock(pcr, card_clock,
|
||||
ssc_depth, initial_mode, double_clk, vpclk);
|
||||
|
||||
if (initial_mode) {
|
||||
/* We use 250k(around) here, in initial stage */
|
||||
clk_divider = SD_CLK_DIVIDE_128;
|
||||
|
@ -1253,7 +1263,15 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
|
|||
rtsx_pci_enable_bus_int(pcr);
|
||||
|
||||
/* Power on SSC */
|
||||
err = rtsx_pci_write_register(pcr, FPDCTL, SSC_POWER_DOWN, 0);
|
||||
if (PCI_PID(pcr) == PID_5261) {
|
||||
/* Gating real mcu clock */
|
||||
err = rtsx_pci_write_register(pcr, RTS5261_FW_CFG1,
|
||||
RTS5261_MCU_CLOCK_GATING, 0);
|
||||
err = rtsx_pci_write_register(pcr, RTS5261_REG_FPDCTL,
|
||||
SSC_POWER_DOWN, 0);
|
||||
} else {
|
||||
err = rtsx_pci_write_register(pcr, FPDCTL, SSC_POWER_DOWN, 0);
|
||||
}
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
|
@ -1283,7 +1301,12 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
|
|||
/* Enable SSC Clock */
|
||||
rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL1,
|
||||
0xFF, SSC_8X_EN | SSC_SEL_4M);
|
||||
rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL2, 0xFF, 0x12);
|
||||
if (PCI_PID(pcr) == PID_5261)
|
||||
rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL2, 0xFF,
|
||||
RTS5261_SSC_DEPTH_2M);
|
||||
else
|
||||
rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, SSC_CTL2, 0xFF, 0x12);
|
||||
|
||||
/* Disable cd_pwr_save */
|
||||
rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, CHANGE_LINK_STATE, 0x16, 0x10);
|
||||
/* Clear Link Ready Interrupt */
|
||||
|
@ -1314,6 +1337,7 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
|
|||
case PID_524A:
|
||||
case PID_525A:
|
||||
case PID_5260:
|
||||
case PID_5261:
|
||||
rtsx_pci_write_register(pcr, PM_CLK_FORCE_CTL, 1, 1);
|
||||
break;
|
||||
default:
|
||||
|
@ -1393,9 +1417,14 @@ static int rtsx_pci_init_chip(struct rtsx_pcr *pcr)
|
|||
case 0x5286:
|
||||
rtl8402_init_params(pcr);
|
||||
break;
|
||||
|
||||
case 0x5260:
|
||||
rts5260_init_params(pcr);
|
||||
break;
|
||||
|
||||
case 0x5261:
|
||||
rts5261_init_params(pcr);
|
||||
break;
|
||||
}
|
||||
|
||||
pcr_dbg(pcr, "PID: 0x%04x, IC version: 0x%02x\n",
|
||||
|
|
|
@ -53,6 +53,7 @@ void rts524a_init_params(struct rtsx_pcr *pcr);
|
|||
void rts525a_init_params(struct rtsx_pcr *pcr);
|
||||
void rtl8411b_init_params(struct rtsx_pcr *pcr);
|
||||
void rts5260_init_params(struct rtsx_pcr *pcr);
|
||||
void rts5261_init_params(struct rtsx_pcr *pcr);
|
||||
|
||||
static inline u8 map_sd_drive(int idx)
|
||||
{
|
||||
|
|
|
@ -175,6 +175,10 @@ static int eeprom_probe(struct i2c_client *client,
|
|||
}
|
||||
}
|
||||
|
||||
/* Let the users know they are using deprecated driver */
|
||||
dev_notice(&client->dev,
|
||||
"eeprom driver is deprecated, please use at24 instead\n");
|
||||
|
||||
/* create the sysfs eeprom file */
|
||||
return sysfs_create_bin_file(&client->dev.kobj, &eeprom_attr);
|
||||
}
|
||||
|
|
|
@ -32,8 +32,9 @@
|
|||
#define FASTRPC_CTX_MAX (256)
|
||||
#define FASTRPC_INIT_HANDLE 1
|
||||
#define FASTRPC_CTXID_MASK (0xFF0)
|
||||
#define INIT_FILELEN_MAX (64 * 1024 * 1024)
|
||||
#define INIT_FILELEN_MAX (2 * 1024 * 1024)
|
||||
#define FASTRPC_DEVICE_NAME "fastrpc"
|
||||
#define ADSP_MMAP_ADD_PAGES 0x1000
|
||||
|
||||
/* Retrives number of input buffers from the scalars parameter */
|
||||
#define REMOTE_SCALARS_INBUFS(sc) (((sc) >> 16) & 0x0ff)
|
||||
|
@ -66,6 +67,8 @@
|
|||
/* Remote Method id table */
|
||||
#define FASTRPC_RMID_INIT_ATTACH 0
|
||||
#define FASTRPC_RMID_INIT_RELEASE 1
|
||||
#define FASTRPC_RMID_INIT_MMAP 4
|
||||
#define FASTRPC_RMID_INIT_MUNMAP 5
|
||||
#define FASTRPC_RMID_INIT_CREATE 6
|
||||
#define FASTRPC_RMID_INIT_CREATE_ATTR 7
|
||||
#define FASTRPC_RMID_INIT_CREATE_STATIC 8
|
||||
|
@@ -89,6 +92,23 @@ struct fastrpc_remote_arg {
 	u64 len;
 };
 
+struct fastrpc_mmap_rsp_msg {
+	u64 vaddr;
+};
+
+struct fastrpc_mmap_req_msg {
+	s32 pgid;
+	u32 flags;
+	u64 vaddr;
+	s32 num;
+};
+
+struct fastrpc_munmap_req_msg {
+	s32 pgid;
+	u64 vaddr;
+	u64 size;
+};
+
 struct fastrpc_msg {
 	int pid;		/* process group id */
 	int tid;		/* thread id */
@@ -123,6 +143,9 @@ struct fastrpc_buf {
 	/* Lock for dma buf attachments */
 	struct mutex lock;
 	struct list_head attachments;
+	/* mmap support */
+	struct list_head node; /* list of user requested mmaps */
+	uintptr_t raddr;
 };
 
 struct fastrpc_dma_buf_attachment {
@@ -192,6 +215,7 @@ struct fastrpc_user {
 	struct list_head user;
 	struct list_head maps;
 	struct list_head pending;
+	struct list_head mmaps;
 
 	struct fastrpc_channel_ctx *cctx;
 	struct fastrpc_session_ctx *sctx;
@@ -269,6 +293,7 @@ static int fastrpc_buf_alloc(struct fastrpc_user *fl, struct device *dev,
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&buf->attachments);
+	INIT_LIST_HEAD(&buf->node);
 	mutex_init(&buf->lock);
 
 	buf->fl = fl;
@@ -276,6 +301,7 @@ static int fastrpc_buf_alloc(struct fastrpc_user *fl, struct device *dev,
 	buf->phys = 0;
 	buf->size = size;
 	buf->dev = dev;
+	buf->raddr = 0;
 
 	buf->virt = dma_alloc_coherent(dev, buf->size, (dma_addr_t *)&buf->phys,
 				       GFP_KERNEL);
@@ -934,8 +960,13 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
 	if (err)
 		goto bail;
 
 	/* Wait for remote dsp to respond or time out */
-	err = wait_for_completion_interruptible(&ctx->work);
+	if (kernel) {
+		if (!wait_for_completion_timeout(&ctx->work, 10 * HZ))
+			err = -ETIMEDOUT;
+	} else {
+		err = wait_for_completion_interruptible(&ctx->work);
+	}
 
 	if (err)
 		goto bail;
@@ -954,12 +985,13 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
 	}
 
 bail:
-	/* We are done with this compute context, remove it from pending list */
-	spin_lock(&fl->lock);
-	list_del(&ctx->node);
-	spin_unlock(&fl->lock);
-	fastrpc_context_put(ctx);
-
+	if (err != -ERESTARTSYS && err != -ETIMEDOUT) {
+		/* We are done with this compute context */
+		spin_lock(&fl->lock);
+		list_del(&ctx->node);
+		spin_unlock(&fl->lock);
+		fastrpc_context_put(ctx);
+	}
 	if (err)
 		dev_dbg(fl->sctx->dev, "Error: Invoke Failed %d\n", err);
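
The two hunks above work together: a kernel-mode invoke now fails with
-ETIMEDOUT if the DSP does not answer within ten seconds, and the cleanup at
bail: deliberately skips the list_del()/put for -ERESTARTSYS and -ETIMEDOUT,
since the remote side may still complete the context after the local wait has
given up. A condensed sketch of the wait (names follow the function above;
illustration only, not the full function body):

    static int wait_sketch(struct fastrpc_invoke_ctx *ctx, bool kernel)
    {
    	int err = 0;

    	if (kernel) {
    		/* wait_for_completion_timeout() returns 0 on timeout */
    		if (!wait_for_completion_timeout(&ctx->work, 10 * HZ))
    			err = -ETIMEDOUT;
    	} else {
    		/* user waits stay interruptible: -ERESTARTSYS on signal */
    		err = wait_for_completion_interruptible(&ctx->work);
    	}

    	return err;
    }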
@@ -1131,6 +1163,7 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
 	struct fastrpc_channel_ctx *cctx = fl->cctx;
 	struct fastrpc_invoke_ctx *ctx, *n;
 	struct fastrpc_map *map, *m;
+	struct fastrpc_buf *buf, *b;
 	unsigned long flags;
 
 	fastrpc_release_current_dsp_process(fl);
@@ -1152,6 +1185,11 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
 		fastrpc_map_put(map);
 	}
 
+	list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+		list_del(&buf->node);
+		fastrpc_buf_free(buf);
+	}
+
 	fastrpc_session_free(cctx, fl->sctx);
 	fastrpc_channel_ctx_put(cctx);
 
@@ -1180,6 +1218,7 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
 	mutex_init(&fl->mutex);
 	INIT_LIST_HEAD(&fl->pending);
 	INIT_LIST_HEAD(&fl->maps);
+	INIT_LIST_HEAD(&fl->mmaps);
 	INIT_LIST_HEAD(&fl->user);
 	fl->tgid = current->tgid;
 	fl->cctx = cctx;
@@ -1285,6 +1324,148 @@ static int fastrpc_invoke(struct fastrpc_user *fl, char __user *argp)
 	return err;
 }
 
+static int fastrpc_req_munmap_impl(struct fastrpc_user *fl,
+				   struct fastrpc_req_munmap *req)
+{
+	struct fastrpc_invoke_args args[1] = { [0] = { 0 } };
+	struct fastrpc_buf *buf, *b;
+	struct fastrpc_munmap_req_msg req_msg;
+	struct device *dev = fl->sctx->dev;
+	int err;
+	u32 sc;
+
+	spin_lock(&fl->lock);
+	list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+		if ((buf->raddr == req->vaddrout) && (buf->size == req->size))
+			break;
+		buf = NULL;
+	}
+	spin_unlock(&fl->lock);
+
+	if (!buf) {
+		dev_err(dev, "mmap not in list\n");
+		return -EINVAL;
+	}
+
+	req_msg.pgid = fl->tgid;
+	req_msg.size = buf->size;
+	req_msg.vaddr = buf->raddr;
+
+	args[0].ptr = (u64) (uintptr_t) &req_msg;
+	args[0].length = sizeof(req_msg);
+
+	sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MUNMAP, 1, 0);
+	err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc,
+				      &args[0]);
+	if (!err) {
+		dev_dbg(dev, "unmmap\tpt 0x%09lx OK\n", buf->raddr);
+		spin_lock(&fl->lock);
+		list_del(&buf->node);
+		spin_unlock(&fl->lock);
+		fastrpc_buf_free(buf);
+	} else {
+		dev_err(dev, "unmmap\tpt 0x%09lx ERROR\n", buf->raddr);
+	}
+
+	return err;
+}
+
+static int fastrpc_req_munmap(struct fastrpc_user *fl, char __user *argp)
+{
+	struct fastrpc_req_munmap req;
+
+	if (copy_from_user(&req, argp, sizeof(req)))
+		return -EFAULT;
+
+	return fastrpc_req_munmap_impl(fl, &req);
+}
+
+static int fastrpc_req_mmap(struct fastrpc_user *fl, char __user *argp)
+{
+	struct fastrpc_invoke_args args[3] = { [0 ... 2] = { 0 } };
+	struct fastrpc_buf *buf = NULL;
+	struct fastrpc_mmap_req_msg req_msg;
+	struct fastrpc_mmap_rsp_msg rsp_msg;
+	struct fastrpc_req_munmap req_unmap;
+	struct fastrpc_phy_page pages;
+	struct fastrpc_req_mmap req;
+	struct device *dev = fl->sctx->dev;
+	int err;
+	u32 sc;
+
+	if (copy_from_user(&req, argp, sizeof(req)))
+		return -EFAULT;
+
+	if (req.flags != ADSP_MMAP_ADD_PAGES) {
+		dev_err(dev, "flag not supported 0x%x\n", req.flags);
+		return -EINVAL;
+	}
+
+	if (req.vaddrin) {
+		dev_err(dev, "adding user allocated pages is not supported\n");
+		return -EINVAL;
+	}
+
+	err = fastrpc_buf_alloc(fl, fl->sctx->dev, req.size, &buf);
+	if (err) {
+		dev_err(dev, "failed to allocate buffer\n");
+		return err;
+	}
+
+	req_msg.pgid = fl->tgid;
+	req_msg.flags = req.flags;
+	req_msg.vaddr = req.vaddrin;
+	req_msg.num = sizeof(pages);
+
+	args[0].ptr = (u64) (uintptr_t) &req_msg;
+	args[0].length = sizeof(req_msg);
+
+	pages.addr = buf->phys;
+	pages.size = buf->size;
+
+	args[1].ptr = (u64) (uintptr_t) &pages;
+	args[1].length = sizeof(pages);
+
+	args[2].ptr = (u64) (uintptr_t) &rsp_msg;
+	args[2].length = sizeof(rsp_msg);
+
+	sc = FASTRPC_SCALARS(FASTRPC_RMID_INIT_MMAP, 2, 1);
+	err = fastrpc_internal_invoke(fl, true, FASTRPC_INIT_HANDLE, sc,
+				      &args[0]);
+	if (err) {
+		dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size);
+		goto err_invoke;
+	}
+
+	/* update the buffer to be able to deallocate the memory on the DSP */
+	buf->raddr = (uintptr_t) rsp_msg.vaddr;
+
+	/* let the client know the address to use */
+	req.vaddrout = rsp_msg.vaddr;
+
+	spin_lock(&fl->lock);
+	list_add_tail(&buf->node, &fl->mmaps);
+	spin_unlock(&fl->lock);
+
+	if (copy_to_user((void __user *)argp, &req, sizeof(req))) {
+		/* unmap the memory and release the buffer */
+		req_unmap.vaddrout = buf->raddr;
+		req_unmap.size = buf->size;
+		fastrpc_req_munmap_impl(fl, &req_unmap);
+		return -EFAULT;
+	}
+
+	dev_dbg(dev, "mmap\t\tpt 0x%09lx OK [len 0x%08llx]\n",
+		buf->raddr, buf->size);
+
+	return 0;
+
+err_invoke:
+	fastrpc_buf_free(buf);
+
+	return err;
+}
+
 static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
 				 unsigned long arg)
 {
@@ -1305,6 +1486,12 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
 	case FASTRPC_IOCTL_ALLOC_DMA_BUFF:
 		err = fastrpc_dmabuf_alloc(fl, argp);
 		break;
+	case FASTRPC_IOCTL_MMAP:
+		err = fastrpc_req_mmap(fl, argp);
+		break;
+	case FASTRPC_IOCTL_MUNMAP:
+		err = fastrpc_req_munmap(fl, argp);
+		break;
 	default:
 		err = -ENOTTY;
 		break;
@@ -1430,8 +1617,8 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
 		return -ENOMEM;
 
 	data->miscdev.minor = MISC_DYNAMIC_MINOR;
-	data->miscdev.name = kasprintf(GFP_KERNEL, "fastrpc-%s",
-				       domains[domain_id]);
+	data->miscdev.name = devm_kasprintf(rdev, GFP_KERNEL, "fastrpc-%s",
+					    domains[domain_id]);
 	data->miscdev.fops = &fastrpc_fops;
 	err = misc_register(&data->miscdev);
 	if (err)

diff --git a/drivers/misc/habanalabs/command_submission.c b/drivers/misc/habanalabs/command_submission.c
@@ -65,6 +65,18 @@ static void cs_put(struct hl_cs *cs)
 	kref_put(&cs->refcount, cs_do_release);
 }
 
+static bool is_cb_patched(struct hl_device *hdev, struct hl_cs_job *job)
+{
+	/*
+	 * Patched CB is created for external queues jobs, and for H/W queues
+	 * jobs if the user CB was allocated by driver and MMU is disabled.
+	 */
+	return (job->queue_type == QUEUE_TYPE_EXT ||
+			(job->queue_type == QUEUE_TYPE_HW &&
+				job->is_kernel_allocated_cb &&
+				!hdev->mmu_enable));
+}
+
 /*
  * cs_parser - parse the user command submission
  *
@@ -91,11 +103,13 @@ static int cs_parser(struct hl_fpriv *hpriv, struct hl_cs_job *job)
 	parser.patched_cb = NULL;
 	parser.user_cb = job->user_cb;
 	parser.user_cb_size = job->user_cb_size;
-	parser.ext_queue = job->ext_queue;
+	parser.queue_type = job->queue_type;
+	parser.is_kernel_allocated_cb = job->is_kernel_allocated_cb;
 	job->patched_cb = NULL;
 
 	rc = hdev->asic_funcs->cs_parser(hdev, &parser);
-	if (job->ext_queue) {
+
+	if (is_cb_patched(hdev, job)) {
 		if (!rc) {
 			job->patched_cb = parser.patched_cb;
 			job->job_cb_size = parser.patched_cb_size;
@@ -124,7 +138,7 @@ static void free_job(struct hl_device *hdev, struct hl_cs_job *job)
 {
 	struct hl_cs *cs = job->cs;
 
-	if (job->ext_queue) {
+	if (is_cb_patched(hdev, job)) {
 		hl_userptr_delete_list(hdev, &job->userptr_list);
 
 		/*
@@ -140,6 +154,19 @@ static void free_job(struct hl_device *hdev, struct hl_cs_job *job)
 		}
 	}
 
+	/* For H/W queue jobs, if a user CB was allocated by driver and MMU is
+	 * enabled, the user CB isn't released in cs_parser() and thus should be
+	 * released here.
+	 */
+	if (job->queue_type == QUEUE_TYPE_HW &&
+			job->is_kernel_allocated_cb && hdev->mmu_enable) {
+		spin_lock(&job->user_cb->lock);
+		job->user_cb->cs_cnt--;
+		spin_unlock(&job->user_cb->lock);
+
+		hl_cb_put(job->user_cb);
+	}
+
 	/*
 	 * This is the only place where there can be multiple threads
 	 * modifying the list at the same time
@@ -150,7 +177,8 @@ static void free_job(struct hl_device *hdev, struct hl_cs_job *job)
 
 	hl_debugfs_remove_job(hdev, job);
 
-	if (job->ext_queue)
+	if (job->queue_type == QUEUE_TYPE_EXT ||
+			job->queue_type == QUEUE_TYPE_HW)
 		cs_put(cs);
 
 	kfree(job);
@@ -387,18 +415,13 @@ static void job_wq_completion(struct work_struct *work)
 	free_job(hdev, job);
 }
 
-static struct hl_cb *validate_queue_index(struct hl_device *hdev,
-					struct hl_cb_mgr *cb_mgr,
-					struct hl_cs_chunk *chunk,
-					bool *ext_queue)
+static int validate_queue_index(struct hl_device *hdev,
+				struct hl_cs_chunk *chunk,
+				enum hl_queue_type *queue_type,
+				bool *is_kernel_allocated_cb)
 {
 	struct asic_fixed_properties *asic = &hdev->asic_prop;
 	struct hw_queue_properties *hw_queue_prop;
-	u32 cb_handle;
-	struct hl_cb *cb;
-
-	/* Assume external queue */
-	*ext_queue = true;
 
 	hw_queue_prop = &asic->hw_queues_props[chunk->queue_index];
 
@@ -406,20 +429,29 @@ static struct hl_cb *validate_queue_index(struct hl_device *hdev,
 	    (hw_queue_prop->type == QUEUE_TYPE_NA)) {
 		dev_err(hdev->dev, "Queue index %d is invalid\n",
 			chunk->queue_index);
-		return NULL;
+		return -EINVAL;
 	}
 
 	if (hw_queue_prop->driver_only) {
 		dev_err(hdev->dev,
 			"Queue index %d is restricted for the kernel driver\n",
 			chunk->queue_index);
-		return NULL;
-	} else if (hw_queue_prop->type == QUEUE_TYPE_INT) {
-		*ext_queue = false;
-		return (struct hl_cb *) (uintptr_t) chunk->cb_handle;
+		return -EINVAL;
 	}
 
-	/* Retrieve CB object */
+	*queue_type = hw_queue_prop->type;
+	*is_kernel_allocated_cb = !!hw_queue_prop->requires_kernel_cb;
+
+	return 0;
+}
+
+static struct hl_cb *get_cb_from_cs_chunk(struct hl_device *hdev,
+					struct hl_cb_mgr *cb_mgr,
+					struct hl_cs_chunk *chunk)
+{
+	struct hl_cb *cb;
+	u32 cb_handle;
+
 	cb_handle = (u32) (chunk->cb_handle >> PAGE_SHIFT);
 
 	cb = hl_cb_get(hdev, cb_mgr, cb_handle);
@@ -444,7 +476,8 @@ release_cb:
 	return NULL;
 }
 
-struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev, bool ext_queue)
+struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev,
+		enum hl_queue_type queue_type, bool is_kernel_allocated_cb)
 {
 	struct hl_cs_job *job;
 
@@ -452,12 +485,14 @@ struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev, bool ext_queue)
 	if (!job)
 		return NULL;
 
-	job->ext_queue = ext_queue;
+	job->queue_type = queue_type;
+	job->is_kernel_allocated_cb = is_kernel_allocated_cb;
 
-	if (job->ext_queue) {
+	if (is_cb_patched(hdev, job))
 		INIT_LIST_HEAD(&job->userptr_list);
+
+	if (job->queue_type == QUEUE_TYPE_EXT)
 		INIT_WORK(&job->finish_work, job_wq_completion);
-	}
 
 	return job;
 }
@@ -470,7 +505,7 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 	struct hl_cs_job *job;
 	struct hl_cs *cs;
 	struct hl_cb *cb;
-	bool ext_queue_present = false;
+	bool int_queues_only = true;
 	u32 size_to_copy;
 	int rc, i, parse_cnt;
 
@@ -514,23 +549,33 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 	/* Validate ALL the CS chunks before submitting the CS */
 	for (i = 0, parse_cnt = 0 ; i < num_chunks ; i++, parse_cnt++) {
 		struct hl_cs_chunk *chunk = &cs_chunk_array[i];
-		bool ext_queue;
+		enum hl_queue_type queue_type;
+		bool is_kernel_allocated_cb;
 
-		cb = validate_queue_index(hdev, &hpriv->cb_mgr, chunk,
-					&ext_queue);
-		if (ext_queue) {
-			ext_queue_present = true;
+		rc = validate_queue_index(hdev, chunk, &queue_type,
+				&is_kernel_allocated_cb);
+		if (rc)
+			goto free_cs_object;
+
+		if (is_kernel_allocated_cb) {
+			cb = get_cb_from_cs_chunk(hdev, &hpriv->cb_mgr, chunk);
 			if (!cb) {
 				rc = -EINVAL;
 				goto free_cs_object;
 			}
+		} else {
+			cb = (struct hl_cb *) (uintptr_t) chunk->cb_handle;
 		}
 
-		job = hl_cs_allocate_job(hdev, ext_queue);
+		if (queue_type == QUEUE_TYPE_EXT || queue_type == QUEUE_TYPE_HW)
+			int_queues_only = false;
+
+		job = hl_cs_allocate_job(hdev, queue_type,
+						is_kernel_allocated_cb);
 		if (!job) {
 			dev_err(hdev->dev, "Failed to allocate a new job\n");
 			rc = -ENOMEM;
-			if (ext_queue)
+			if (is_kernel_allocated_cb)
 				goto release_cb;
 			else
 				goto free_cs_object;
@@ -540,7 +585,7 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 		job->cs = cs;
 		job->user_cb = cb;
 		job->user_cb_size = chunk->cb_size;
-		if (job->ext_queue)
+		if (is_kernel_allocated_cb)
 			job->job_cb_size = cb->size;
 		else
 			job->job_cb_size = chunk->cb_size;
@@ -553,10 +598,11 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 		/*
 		 * Increment CS reference. When CS reference is 0, CS is
 		 * done and can be signaled to user and free all its resources
-		 * Only increment for JOB on external queues, because only
-		 * for those JOBs we get completion
+		 * Only increment for JOB on external or H/W queues, because
+		 * only for those JOBs we get completion
 		 */
-		if (job->ext_queue)
+		if (job->queue_type == QUEUE_TYPE_EXT ||
+				job->queue_type == QUEUE_TYPE_HW)
 			cs_get(cs);
 
 		hl_debugfs_add_job(hdev, job);
@@ -570,9 +616,9 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 		}
 	}
 
-	if (!ext_queue_present) {
+	if (int_queues_only) {
 		dev_err(hdev->dev,
-			"Reject CS %d.%llu because no external queues jobs\n",
+			"Reject CS %d.%llu because only internal queues jobs are present\n",
 			cs->ctx->asid, cs->sequence);
 		rc = -EINVAL;
 		goto free_cs_object;
@@ -580,9 +626,10 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
 
 	rc = hl_hw_queue_schedule_cs(cs);
 	if (rc) {
-		dev_err(hdev->dev,
-			"Failed to submit CS %d.%llu to H/W queues, error %d\n",
-			cs->ctx->asid, cs->sequence, rc);
+		if (rc != -EAGAIN)
+			dev_err(hdev->dev,
+				"Failed to submit CS %d.%llu to H/W queues, error %d\n",
+				cs->ctx->asid, cs->sequence, rc);
 		goto free_cs_object;
 	}
 
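
To summarize the command-submission rework above: the single ext_queue flag
becomes an explicit queue type plus a CB-ownership bit. A recap of how a CS
chunk is now resolved, paraphrased from the hunks above (not new driver API):

    /*
     * queue type  is_kernel_allocated_cb  CB resolution              completion
     * ----------  ----------------------  -------------------------  ----------
     * EXT         1                       get_cb_from_cs_chunk()     cs_get/put
     * HW          per-ASIC property       handle or raw address      cs_get/put
     * INT         0                       raw address in cb_handle   none
     * CPU         (driver_only)           user CS rejected           n/a
     */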

diff --git a/drivers/misc/habanalabs/debugfs.c b/drivers/misc/habanalabs/debugfs.c
@@ -307,45 +307,57 @@ static inline u64 get_hop0_addr(struct hl_ctx *ctx)
 		(ctx->asid * ctx->hdev->asic_prop.mmu_hop_table_size);
 }
 
-static inline u64 get_hop0_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
-					u64 virt_addr)
+static inline u64 get_hopN_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
+					u64 virt_addr, u64 mask, u64 shift)
 {
 	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
-			((virt_addr & HOP0_MASK) >> HOP0_SHIFT);
+			((virt_addr & mask) >> shift);
 }
 
-static inline u64 get_hop1_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
-					u64 virt_addr)
+static inline u64 get_hop0_pte_addr(struct hl_ctx *ctx,
+					struct hl_mmu_properties *mmu_specs,
+					u64 hop_addr, u64 vaddr)
 {
-	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
-			((virt_addr & HOP1_MASK) >> HOP1_SHIFT);
+	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_specs->hop0_mask,
+					mmu_specs->hop0_shift);
 }
 
-static inline u64 get_hop2_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
-					u64 virt_addr)
+static inline u64 get_hop1_pte_addr(struct hl_ctx *ctx,
+					struct hl_mmu_properties *mmu_specs,
+					u64 hop_addr, u64 vaddr)
 {
-	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
-			((virt_addr & HOP2_MASK) >> HOP2_SHIFT);
+	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_specs->hop1_mask,
+					mmu_specs->hop1_shift);
 }
 
-static inline u64 get_hop3_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
-					u64 virt_addr)
+static inline u64 get_hop2_pte_addr(struct hl_ctx *ctx,
+					struct hl_mmu_properties *mmu_specs,
+					u64 hop_addr, u64 vaddr)
 {
-	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
-			((virt_addr & HOP3_MASK) >> HOP3_SHIFT);
+	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_specs->hop2_mask,
+					mmu_specs->hop2_shift);
 }
 
-static inline u64 get_hop4_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
-					u64 virt_addr)
+static inline u64 get_hop3_pte_addr(struct hl_ctx *ctx,
+					struct hl_mmu_properties *mmu_specs,
+					u64 hop_addr, u64 vaddr)
 {
-	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
-			((virt_addr & HOP4_MASK) >> HOP4_SHIFT);
+	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_specs->hop3_mask,
+					mmu_specs->hop3_shift);
 }
 
+static inline u64 get_hop4_pte_addr(struct hl_ctx *ctx,
+					struct hl_mmu_properties *mmu_specs,
+					u64 hop_addr, u64 vaddr)
+{
+	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_specs->hop4_mask,
+					mmu_specs->hop4_shift);
+}
+
 static inline u64 get_next_hop_addr(u64 curr_pte)
 {
 	if (curr_pte & PAGE_PRESENT_MASK)
-		return curr_pte & PHYS_ADDR_MASK;
+		return curr_pte & HOP_PHYS_ADDR_MASK;
 	else
 		return ULLONG_MAX;
 }
@@ -355,7 +367,10 @@ static int mmu_show(struct seq_file *s, void *data)
 	struct hl_debugfs_entry *entry = s->private;
 	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
 	struct hl_device *hdev = dev_entry->hdev;
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	struct hl_ctx *ctx;
+	bool is_dram_addr;
 
 	u64 hop0_addr = 0, hop0_pte_addr = 0, hop0_pte = 0,
 		hop1_addr = 0, hop1_pte_addr = 0, hop1_pte = 0,
@@ -377,33 +392,39 @@ static int mmu_show(struct seq_file *s, void *data)
 		return 0;
 	}
 
+	is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
+						prop->va_space_dram_start_address,
+						prop->va_space_dram_end_address);
+
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
+
 	mutex_lock(&ctx->mmu_lock);
 
 	/* the following lookup is copied from unmap() in mmu.c */
 
 	hop0_addr = get_hop0_addr(ctx);
-	hop0_pte_addr = get_hop0_pte_addr(ctx, hop0_addr, virt_addr);
+	hop0_pte_addr = get_hop0_pte_addr(ctx, mmu_prop, hop0_addr, virt_addr);
 	hop0_pte = hdev->asic_funcs->read_pte(hdev, hop0_pte_addr);
 	hop1_addr = get_next_hop_addr(hop0_pte);
 
 	if (hop1_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop1_pte_addr = get_hop1_pte_addr(ctx, hop1_addr, virt_addr);
+	hop1_pte_addr = get_hop1_pte_addr(ctx, mmu_prop, hop1_addr, virt_addr);
 	hop1_pte = hdev->asic_funcs->read_pte(hdev, hop1_pte_addr);
 	hop2_addr = get_next_hop_addr(hop1_pte);
 
 	if (hop2_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop2_pte_addr = get_hop2_pte_addr(ctx, hop2_addr, virt_addr);
+	hop2_pte_addr = get_hop2_pte_addr(ctx, mmu_prop, hop2_addr, virt_addr);
 	hop2_pte = hdev->asic_funcs->read_pte(hdev, hop2_pte_addr);
 	hop3_addr = get_next_hop_addr(hop2_pte);
 
 	if (hop3_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop3_pte_addr = get_hop3_pte_addr(ctx, hop3_addr, virt_addr);
+	hop3_pte_addr = get_hop3_pte_addr(ctx, mmu_prop, hop3_addr, virt_addr);
 	hop3_pte = hdev->asic_funcs->read_pte(hdev, hop3_pte_addr);
 
 	if (!(hop3_pte & LAST_MASK)) {
@@ -412,7 +433,8 @@ static int mmu_show(struct seq_file *s, void *data)
 		if (hop4_addr == ULLONG_MAX)
 			goto not_mapped;
 
-		hop4_pte_addr = get_hop4_pte_addr(ctx, hop4_addr, virt_addr);
+		hop4_pte_addr = get_hop4_pte_addr(ctx, mmu_prop, hop4_addr,
+							virt_addr);
 		hop4_pte = hdev->asic_funcs->read_pte(hdev, hop4_pte_addr);
 		if (!(hop4_pte & PAGE_PRESENT_MASK))
 			goto not_mapped;
@@ -506,6 +528,12 @@ static int engines_show(struct seq_file *s, void *data)
 	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
 	struct hl_device *hdev = dev_entry->hdev;
 
+	if (atomic_read(&hdev->in_reset)) {
+		dev_warn_ratelimited(hdev->dev,
+				"Can't check device idle during reset\n");
+		return 0;
+	}
+
 	hdev->asic_funcs->is_device_idle(hdev, NULL, s);
 
 	return 0;
@@ -534,41 +562,50 @@ static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
 				u64 *phys_addr)
 {
 	struct hl_ctx *ctx = hdev->compute_ctx;
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	u64 hop_addr, hop_pte_addr, hop_pte;
-	u64 offset_mask = HOP4_MASK | OFFSET_MASK;
+	u64 offset_mask = HOP4_MASK | FLAGS_MASK;
 	int rc = 0;
+	bool is_dram_addr;
 
 	if (!ctx) {
 		dev_err(hdev->dev, "no ctx available\n");
 		return -EINVAL;
 	}
 
+	is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
+						prop->va_space_dram_start_address,
+						prop->va_space_dram_end_address);
+
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
+
 	mutex_lock(&ctx->mmu_lock);
 
 	/* hop 0 */
 	hop_addr = get_hop0_addr(ctx);
-	hop_pte_addr = get_hop0_pte_addr(ctx, hop_addr, virt_addr);
+	hop_pte_addr = get_hop0_pte_addr(ctx, mmu_prop, hop_addr, virt_addr);
 	hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
 
 	/* hop 1 */
 	hop_addr = get_next_hop_addr(hop_pte);
 	if (hop_addr == ULLONG_MAX)
 		goto not_mapped;
-	hop_pte_addr = get_hop1_pte_addr(ctx, hop_addr, virt_addr);
+	hop_pte_addr = get_hop1_pte_addr(ctx, mmu_prop, hop_addr, virt_addr);
 	hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
 
 	/* hop 2 */
 	hop_addr = get_next_hop_addr(hop_pte);
 	if (hop_addr == ULLONG_MAX)
 		goto not_mapped;
-	hop_pte_addr = get_hop2_pte_addr(ctx, hop_addr, virt_addr);
+	hop_pte_addr = get_hop2_pte_addr(ctx, mmu_prop, hop_addr, virt_addr);
 	hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
 
 	/* hop 3 */
 	hop_addr = get_next_hop_addr(hop_pte);
 	if (hop_addr == ULLONG_MAX)
 		goto not_mapped;
-	hop_pte_addr = get_hop3_pte_addr(ctx, hop_addr, virt_addr);
+	hop_pte_addr = get_hop3_pte_addr(ctx, mmu_prop, hop_addr, virt_addr);
 	hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
 
 	if (!(hop_pte & LAST_MASK)) {
@@ -576,10 +613,11 @@ static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
 		hop_addr = get_next_hop_addr(hop_pte);
 		if (hop_addr == ULLONG_MAX)
 			goto not_mapped;
-		hop_pte_addr = get_hop4_pte_addr(ctx, hop_addr, virt_addr);
+		hop_pte_addr = get_hop4_pte_addr(ctx, mmu_prop, hop_addr,
+							virt_addr);
 		hop_pte = hdev->asic_funcs->read_pte(hdev, hop_pte_addr);
 
-		offset_mask = OFFSET_MASK;
+		offset_mask = FLAGS_MASK;
 	}
 
 	if (!(hop_pte & PAGE_PRESENT_MASK))
@@ -608,6 +646,11 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
 	u32 val;
 	ssize_t rc;
 
+	if (atomic_read(&hdev->in_reset)) {
+		dev_warn_ratelimited(hdev->dev, "Can't read during reset\n");
+		return 0;
+	}
+
 	if (*ppos)
 		return 0;
 
@@ -637,6 +680,11 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
 	u32 value;
 	ssize_t rc;
 
+	if (atomic_read(&hdev->in_reset)) {
+		dev_warn_ratelimited(hdev->dev, "Can't write during reset\n");
+		return 0;
+	}
+
 	rc = kstrtouint_from_user(buf, count, 16, &value);
 	if (rc)
 		return rc;
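
The debugfs walks above now pick &prop->dmmu or &prop->pmmu per address and
pass it to every hop helper. A minimal sketch of the generalized lookup this
enables, using only the formula from get_hopN_pte_addr() above (the helper
name and array form are illustrative):

    static inline u64 pte_addr_at_hop(struct hl_ctx *ctx,
    				  struct hl_mmu_properties *p,
    				  int hop, u64 hop_addr, u64 vaddr)
    {
    	const u64 masks[]  = { p->hop0_mask, p->hop1_mask, p->hop2_mask,
    			       p->hop3_mask, p->hop4_mask };
    	const u64 shifts[] = { p->hop0_shift, p->hop1_shift, p->hop2_shift,
    			       p->hop3_shift, p->hop4_shift };

    	return hop_addr + ctx->hdev->asic_prop.mmu_pte_size *
    			((vaddr & masks[hop]) >> shifts[hop]);
    }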

diff --git a/drivers/misc/habanalabs/device.c b/drivers/misc/habanalabs/device.c
@@ -42,12 +42,10 @@ static void hpriv_release(struct kref *ref)
 {
 	struct hl_fpriv *hpriv;
 	struct hl_device *hdev;
-	struct hl_ctx *ctx;
 
 	hpriv = container_of(ref, struct hl_fpriv, refcount);
 
 	hdev = hpriv->hdev;
-	ctx = hpriv->ctx;
 
 	put_pid(hpriv->taskpid);
 
@@ -889,13 +887,19 @@ again:
 	/* Go over all the queues, release all CS and their jobs */
 	hl_cs_rollback_all(hdev);
 
-	/* Kill processes here after CS rollback. This is because the process
-	 * can't really exit until all its CSs are done, which is what we
-	 * do in cs rollback
-	 */
-	if (from_hard_reset_thread)
+	if (hard_reset) {
+		/* Kill processes here after CS rollback. This is because the
+		 * process can't really exit until all its CSs are done, which
+		 * is what we do in cs rollback
+		 */
 		device_kill_open_processes(hdev);
 
+		/* Flush the Event queue workers to make sure no other thread is
+		 * reading or writing to registers during the reset
+		 */
+		flush_workqueue(hdev->eq_wq);
+	}
+
 	/* Release kernel context */
 	if ((hard_reset) && (hl_ctx_put(hdev->kernel_ctx) == 1))
 		hdev->kernel_ctx = NULL;

diff --git a/drivers/misc/habanalabs/firmware_if.c b/drivers/misc/habanalabs/firmware_if.c
|
|||
sizeof(test_pkt), HL_DEVICE_TIMEOUT_USEC, &result);
|
||||
|
||||
if (!rc) {
|
||||
if (result == ARMCP_PACKET_FENCE_VAL)
|
||||
dev_info(hdev->dev,
|
||||
"queue test on CPU queue succeeded\n");
|
||||
else
|
||||
if (result != ARMCP_PACKET_FENCE_VAL)
|
||||
dev_err(hdev->dev,
|
||||
"CPU queue test failed (0x%08lX)\n", result);
|
||||
} else {

diff --git a/drivers/misc/habanalabs/goya/goya.c b/drivers/misc/habanalabs/goya/goya.c
@@ -72,6 +72,9 @@
  *
  */
 
+#define GOYA_UBOOT_FW_FILE	"habanalabs/goya/goya-u-boot.bin"
+#define GOYA_LINUX_FW_FILE	"habanalabs/goya/goya-fit.itb"
+
 #define GOYA_MMU_REGS_NUM		63
 
 #define GOYA_DMA_POOL_BLK_SIZE		0x100 /* 256 bytes */
@@ -337,17 +340,20 @@ void goya_get_fixed_properties(struct hl_device *hdev)
 	for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++) {
 		prop->hw_queues_props[i].type = QUEUE_TYPE_EXT;
 		prop->hw_queues_props[i].driver_only = 0;
+		prop->hw_queues_props[i].requires_kernel_cb = 1;
 	}
 
 	for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES ; i++) {
 		prop->hw_queues_props[i].type = QUEUE_TYPE_CPU;
 		prop->hw_queues_props[i].driver_only = 1;
+		prop->hw_queues_props[i].requires_kernel_cb = 0;
 	}
 
 	for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES +
 			NUMBER_OF_INT_HW_QUEUES; i++) {
 		prop->hw_queues_props[i].type = QUEUE_TYPE_INT;
 		prop->hw_queues_props[i].driver_only = 0;
+		prop->hw_queues_props[i].requires_kernel_cb = 0;
 	}
 
 	for (; i < HL_MAX_QUEUES; i++)
@@ -377,6 +383,23 @@ void goya_get_fixed_properties(struct hl_device *hdev)
 	prop->mmu_hop0_tables_total_size = HOP0_TABLES_TOTAL_SIZE;
 	prop->dram_page_size = PAGE_SIZE_2MB;
 
+	prop->dmmu.hop0_shift = HOP0_SHIFT;
+	prop->dmmu.hop1_shift = HOP1_SHIFT;
+	prop->dmmu.hop2_shift = HOP2_SHIFT;
+	prop->dmmu.hop3_shift = HOP3_SHIFT;
+	prop->dmmu.hop4_shift = HOP4_SHIFT;
+	prop->dmmu.hop0_mask = HOP0_MASK;
+	prop->dmmu.hop1_mask = HOP1_MASK;
+	prop->dmmu.hop2_mask = HOP2_MASK;
+	prop->dmmu.hop3_mask = HOP3_MASK;
+	prop->dmmu.hop4_mask = HOP4_MASK;
+	prop->dmmu.huge_page_size = PAGE_SIZE_2MB;
+
+	/* No difference between PMMU and DMMU except of page size */
+	memcpy(&prop->pmmu, &prop->dmmu, sizeof(prop->dmmu));
+	prop->dmmu.page_size = PAGE_SIZE_2MB;
+	prop->pmmu.page_size = PAGE_SIZE_4KB;
+
 	prop->va_space_host_start_address = VA_HOST_SPACE_START;
 	prop->va_space_host_end_address = VA_HOST_SPACE_END;
 	prop->va_space_dram_start_address = VA_DDR_SPACE_START;
@@ -393,6 +416,9 @@ void goya_get_fixed_properties(struct hl_device *hdev)
 	prop->tpc_enabled_mask = TPC_ENABLED_MASK;
 	prop->pcie_dbi_base_address = mmPCIE_DBI_BASE;
 	prop->pcie_aux_dbi_reg_addr = CFG_BASE + mmPCIE_AUX_DBI;
+
+	strncpy(prop->armcp_info.card_name, GOYA_DEFAULT_CARD_NAME,
+		CARD_NAME_MAX_LEN);
 }
 
 /*
@@ -1454,6 +1480,9 @@ static void goya_init_golden_registers(struct hl_device *hdev)
 				1 << TPC0_NRTR_SCRAMB_EN_VAL_SHIFT);
 		WREG32(mmTPC0_NRTR_NON_LIN_SCRAMB + offset,
 				1 << TPC0_NRTR_NON_LIN_SCRAMB_EN_SHIFT);
+
+		WREG32_FIELD(TPC0_CFG_MSS_CONFIG, offset,
+				ICACHE_FETCH_LINE_NUM, 2);
 	}
 
 	WREG32(mmDMA_NRTR_SCRAMB_EN, 1 << DMA_NRTR_SCRAMB_EN_VAL_SHIFT);
@@ -1533,7 +1562,6 @@ static void goya_init_mme_cmdq(struct hl_device *hdev)
 	u32 mtr_base_lo, mtr_base_hi;
 	u32 so_base_lo, so_base_hi;
 	u32 gic_base_lo, gic_base_hi;
-	u64 qman_base_addr;
 
 	mtr_base_lo = lower_32_bits(CFG_BASE + mmSYNC_MNGR_MON_PAY_ADDRL_0);
 	mtr_base_hi = upper_32_bits(CFG_BASE + mmSYNC_MNGR_MON_PAY_ADDRL_0);
@@ -1545,9 +1573,6 @@ static void goya_init_mme_cmdq(struct hl_device *hdev)
 	gic_base_hi =
 		upper_32_bits(CFG_BASE + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR);
 
-	qman_base_addr = hdev->asic_prop.sram_base_address +
-			MME_QMAN_BASE_OFFSET;
-
 	WREG32(mmMME_CMDQ_CP_MSG_BASE0_ADDR_LO, mtr_base_lo);
 	WREG32(mmMME_CMDQ_CP_MSG_BASE0_ADDR_HI, mtr_base_hi);
 	WREG32(mmMME_CMDQ_CP_MSG_BASE1_ADDR_LO, so_base_lo);
@@ -2141,13 +2166,11 @@ static void goya_halt_engines(struct hl_device *hdev, bool hard_reset)
  */
 static int goya_push_uboot_to_device(struct hl_device *hdev)
 {
-	char fw_name[200];
 	void __iomem *dst;
 
-	snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-u-boot.bin");
 	dst = hdev->pcie_bar[SRAM_CFG_BAR_ID] + UBOOT_FW_OFFSET;
 
-	return hl_fw_push_fw_to_device(hdev, fw_name, dst);
+	return hl_fw_push_fw_to_device(hdev, GOYA_UBOOT_FW_FILE, dst);
 }
 
 /*
@@ -2160,13 +2183,11 @@ static int goya_push_uboot_to_device(struct hl_device *hdev)
  */
 static int goya_push_linux_to_device(struct hl_device *hdev)
 {
-	char fw_name[200];
 	void __iomem *dst;
 
-	snprintf(fw_name, sizeof(fw_name), "habanalabs/goya/goya-fit.itb");
 	dst = hdev->pcie_bar[DDR_BAR_ID] + LINUX_FW_OFFSET;
 
-	return hl_fw_push_fw_to_device(hdev, fw_name, dst);
+	return hl_fw_push_fw_to_device(hdev, GOYA_LINUX_FW_FILE, dst);
 }
 
 static int goya_pldm_init_cpu(struct hl_device *hdev)
@@ -2291,6 +2312,10 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout)
 		10000,
 		cpu_timeout);
 
+	/* Read U-Boot version now in case we will later fail */
+	goya_read_device_fw_version(hdev, FW_COMP_UBOOT);
+	goya_read_device_fw_version(hdev, FW_COMP_PREBOOT);
+
 	if (rc) {
 		dev_err(hdev->dev, "Error in ARM u-boot!");
 		switch (status) {
@@ -2328,6 +2353,11 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout)
 				"ARM status %d - u-boot stopped by user\n",
 				status);
 			break;
+		case CPU_BOOT_STATUS_TS_INIT_FAIL:
+			dev_err(hdev->dev,
+				"ARM status %d - Thermal Sensor initialization failed\n",
+				status);
+			break;
 		default:
 			dev_err(hdev->dev,
 				"ARM status %d - Invalid status code\n",
@@ -2337,10 +2367,6 @@ static int goya_init_cpu(struct hl_device *hdev, u32 cpu_timeout)
 		return -EIO;
 	}
 
-	/* Read U-Boot version now in case we will later fail */
-	goya_read_device_fw_version(hdev, FW_COMP_UBOOT);
-	goya_read_device_fw_version(hdev, FW_COMP_PREBOOT);
-
 	if (!hdev->fw_loading) {
 		dev_info(hdev->dev, "Skip loading FW\n");
 		goto out;
@@ -2453,7 +2479,8 @@ int goya_mmu_init(struct hl_device *hdev)
 	WREG32_AND(mmSTLB_STLB_FEATURE_EN,
 			(~STLB_STLB_FEATURE_EN_FOLLOWER_EN_MASK));
 
-	hdev->asic_funcs->mmu_invalidate_cache(hdev, true);
+	hdev->asic_funcs->mmu_invalidate_cache(hdev, true,
+					VM_TYPE_USERPTR | VM_TYPE_PHYS_PACK);
 
 	WREG32(mmMMU_MMU_ENABLE, 1);
 	WREG32(mmMMU_SPI_MASK, 0xF);
@@ -2978,9 +3005,6 @@ int goya_test_queue(struct hl_device *hdev, u32 hw_queue_id)
 			"H/W queue %d test failed (scratch(0x%08llX) == 0x%08X)\n",
 			hw_queue_id, (unsigned long long) fence_dma_addr, tmp);
 		rc = -EIO;
-	} else {
-		dev_info(hdev->dev, "queue test on H/W queue %d succeeded\n",
-			hw_queue_id);
 	}
 
 free_pkt:
@@ -3925,7 +3949,7 @@ static int goya_parse_cb_no_ext_queue(struct hl_device *hdev,
 		return 0;
 
 	dev_err(hdev->dev,
-		"Internal CB address %px + 0x%x is not in SRAM nor in DRAM\n",
+		"Internal CB address 0x%px + 0x%x is not in SRAM nor in DRAM\n",
 		parser->user_cb, parser->user_cb_size);
 
 	return -EFAULT;
@@ -3935,7 +3959,7 @@ int goya_cs_parser(struct hl_device *hdev, struct hl_cs_parser *parser)
 {
 	struct goya_device *goya = hdev->asic_specific;
 
-	if (!parser->ext_queue)
+	if (parser->queue_type == QUEUE_TYPE_INT)
 		return goya_parse_cb_no_ext_queue(hdev, parser);
 
 	if (goya->hw_cap_initialized & HW_CAP_MMU)
@@ -4606,7 +4630,7 @@ static int goya_memset_device_memory(struct hl_device *hdev, u64 addr, u64 size,
 		lin_dma_pkt++;
 	} while (--lin_dma_pkts_cnt);
 
-	job = hl_cs_allocate_job(hdev, true);
+	job = hl_cs_allocate_job(hdev, QUEUE_TYPE_EXT, true);
 	if (!job) {
 		dev_err(hdev->dev, "Failed to allocate a new job\n");
 		rc = -ENOMEM;
@@ -4835,13 +4859,15 @@ static void goya_mmu_prepare(struct hl_device *hdev, u32 asid)
 		goya_mmu_prepare_reg(hdev, goya_mmu_regs[i], asid);
 }
 
-static void goya_mmu_invalidate_cache(struct hl_device *hdev, bool is_hard)
+static void goya_mmu_invalidate_cache(struct hl_device *hdev, bool is_hard,
+					u32 flags)
 {
 	struct goya_device *goya = hdev->asic_specific;
 	u32 status, timeout_usec;
 	int rc;
 
-	if (!(goya->hw_cap_initialized & HW_CAP_MMU))
+	if (!(goya->hw_cap_initialized & HW_CAP_MMU) ||
+		hdev->hard_reset_pending)
 		return;
 
 	/* no need in L1 only invalidation in Goya */
@@ -4880,7 +4906,8 @@ static void goya_mmu_invalidate_cache_range(struct hl_device *hdev,
 	u32 status, timeout_usec, inv_data, pi;
 	int rc;
 
-	if (!(goya->hw_cap_initialized & HW_CAP_MMU))
+	if (!(goya->hw_cap_initialized & HW_CAP_MMU) ||
+		hdev->hard_reset_pending)
 		return;
 
 	/* no need in L1 only invalidation in Goya */
@@ -5137,7 +5164,8 @@ static const struct hl_asic_funcs goya_funcs = {
 	.init_iatu = goya_init_iatu,
 	.rreg = hl_rreg,
 	.wreg = hl_wreg,
-	.halt_coresight = goya_halt_coresight
+	.halt_coresight = goya_halt_coresight,
+	.get_clk_rate = goya_get_clk_rate
 };
 
 /*

diff --git a/drivers/misc/habanalabs/goya/goyaP.h b/drivers/misc/habanalabs/goya/goyaP.h
@@ -233,4 +233,6 @@ void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size,
 					void *vaddr);
 void goya_mmu_remove_device_cpu_mappings(struct hl_device *hdev);
 
+int goya_get_clk_rate(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk);
+
 #endif /* GOYAP_H_ */

diff --git a/drivers/misc/habanalabs/goya/goya_coresight.c b/drivers/misc/habanalabs/goya/goya_coresight.c
@@ -8,6 +8,7 @@
 #include "goyaP.h"
 #include "include/goya/goya_coresight.h"
 #include "include/goya/asic_reg/goya_regs.h"
+#include "include/goya/asic_reg/goya_masks.h"
 
 #include <uapi/misc/habanalabs.h>
 
@@ -377,33 +378,32 @@ static int goya_config_etr(struct hl_device *hdev,
 		struct hl_debug_params *params)
 {
 	struct hl_debug_params_etr *input;
-	u64 base_reg = mmPSOC_ETR_BASE - CFG_BASE;
 	u32 val;
 	int rc;
 
-	WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);
+	WREG32(mmPSOC_ETR_LAR, CORESIGHT_UNLOCK);
 
-	val = RREG32(base_reg + 0x304);
+	val = RREG32(mmPSOC_ETR_FFCR);
 	val |= 0x1000;
-	WREG32(base_reg + 0x304, val);
+	WREG32(mmPSOC_ETR_FFCR, val);
 	val |= 0x40;
-	WREG32(base_reg + 0x304, val);
+	WREG32(mmPSOC_ETR_FFCR, val);
 
-	rc = goya_coresight_timeout(hdev, base_reg + 0x304, 6, false);
+	rc = goya_coresight_timeout(hdev, mmPSOC_ETR_FFCR, 6, false);
 	if (rc) {
 		dev_err(hdev->dev, "Failed to %s ETR on timeout, error %d\n",
 				params->enable ? "enable" : "disable", rc);
 		return rc;
 	}
 
-	rc = goya_coresight_timeout(hdev, base_reg + 0xC, 2, true);
+	rc = goya_coresight_timeout(hdev, mmPSOC_ETR_STS, 2, true);
 	if (rc) {
 		dev_err(hdev->dev, "Failed to %s ETR on timeout, error %d\n",
 				params->enable ? "enable" : "disable", rc);
 		return rc;
 	}
 
-	WREG32(base_reg + 0x20, 0);
+	WREG32(mmPSOC_ETR_CTL, 0);
 
 	if (params->enable) {
 		input = params->input;
@@ -423,25 +423,26 @@ static int goya_config_etr(struct hl_device *hdev,
 			return -EINVAL;
 		}
 
-		WREG32(base_reg + 0x34, 0x3FFC);
-		WREG32(base_reg + 0x4, input->buffer_size);
-		WREG32(base_reg + 0x28, input->sink_mode);
-		WREG32(base_reg + 0x110, 0x700);
-		WREG32(base_reg + 0x118,
+		WREG32(mmPSOC_ETR_BUFWM, 0x3FFC);
+		WREG32(mmPSOC_ETR_RSZ, input->buffer_size);
+		WREG32(mmPSOC_ETR_MODE, input->sink_mode);
+		WREG32(mmPSOC_ETR_AXICTL,
+				0x700 | PSOC_ETR_AXICTL_PROTCTRLBIT1_SHIFT);
+		WREG32(mmPSOC_ETR_DBALO,
 				lower_32_bits(input->buffer_address));
-		WREG32(base_reg + 0x11C,
+		WREG32(mmPSOC_ETR_DBAHI,
 				upper_32_bits(input->buffer_address));
-		WREG32(base_reg + 0x304, 3);
-		WREG32(base_reg + 0x308, 0xA);
-		WREG32(base_reg + 0x20, 1);
+		WREG32(mmPSOC_ETR_FFCR, 3);
+		WREG32(mmPSOC_ETR_PSCR, 0xA);
+		WREG32(mmPSOC_ETR_CTL, 1);
 	} else {
-		WREG32(base_reg + 0x34, 0);
-		WREG32(base_reg + 0x4, 0x400);
-		WREG32(base_reg + 0x118, 0);
-		WREG32(base_reg + 0x11C, 0);
-		WREG32(base_reg + 0x308, 0);
-		WREG32(base_reg + 0x28, 0);
-		WREG32(base_reg + 0x304, 0);
+		WREG32(mmPSOC_ETR_BUFWM, 0);
+		WREG32(mmPSOC_ETR_RSZ, 0x400);
+		WREG32(mmPSOC_ETR_DBALO, 0);
+		WREG32(mmPSOC_ETR_DBAHI, 0);
+		WREG32(mmPSOC_ETR_PSCR, 0);
+		WREG32(mmPSOC_ETR_MODE, 0);
+		WREG32(mmPSOC_ETR_FFCR, 0);
 
 		if (params->output_size >= sizeof(u64)) {
 			u32 rwp, rwphi;
@@ -451,8 +452,8 @@ static int goya_config_etr(struct hl_device *hdev,
 			 * the buffer is set in the RWP register (lower 32
 			 * bits), and in the RWPHI register (upper 8 bits).
 			 */
-			rwp = RREG32(base_reg + 0x18);
-			rwphi = RREG32(base_reg + 0x3c) & 0xff;
+			rwp = RREG32(mmPSOC_ETR_RWP);
+			rwphi = RREG32(mmPSOC_ETR_RWPHI) & 0xff;
 			*(u64 *) params->output = ((u64) rwphi << 32) | rwp;
 		}
 	}

diff --git a/drivers/misc/habanalabs/goya/goya_hwmgr.c b/drivers/misc/habanalabs/goya/goya_hwmgr.c
@@ -32,6 +32,37 @@ void goya_set_pll_profile(struct hl_device *hdev, enum hl_pll_frequency freq)
 	}
 }
 
+int goya_get_clk_rate(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk)
+{
+	long value;
+
+	if (hl_device_disabled_or_in_reset(hdev))
+		return -ENODEV;
+
+	value = hl_get_frequency(hdev, MME_PLL, false);
+
+	if (value < 0) {
+		dev_err(hdev->dev, "Failed to retrieve device max clock %ld\n",
+			value);
+		return value;
+	}
+
+	*max_clk = (value / 1000 / 1000);
+
+	value = hl_get_frequency(hdev, MME_PLL, true);
+
+	if (value < 0) {
+		dev_err(hdev->dev,
+			"Failed to retrieve device current clock %ld\n",
+			value);
+		return value;
+	}
+
+	*cur_clk = (value / 1000 / 1000);
+
+	return 0;
+}
+
 static ssize_t mme_clk_show(struct device *dev, struct device_attribute *attr,
 			    char *buf)
 {

diff --git a/drivers/misc/habanalabs/habanalabs.h b/drivers/misc/habanalabs/habanalabs.h
|
|||
|
||||
#define HL_MAX_QUEUES 128
|
||||
|
||||
#define HL_MAX_JOBS_PER_CS 64
|
||||
|
||||
/* MUST BE POWER OF 2 and larger than 1 */
|
||||
#define HL_MAX_PENDING_CS 64
|
||||
|
||||
|
@ -85,12 +83,15 @@ struct hl_fpriv;
|
|||
* @QUEUE_TYPE_INT: internal queue that performs DMA inside the device's
|
||||
* memories and/or operates the compute engines.
|
||||
* @QUEUE_TYPE_CPU: S/W queue for communication with the device's CPU.
|
||||
* @QUEUE_TYPE_HW: queue of DMA and compute engines jobs, for which completion
|
||||
* notifications are sent by H/W.
|
||||
*/
|
||||
enum hl_queue_type {
|
||||
QUEUE_TYPE_NA,
|
||||
QUEUE_TYPE_EXT,
|
||||
QUEUE_TYPE_INT,
|
||||
QUEUE_TYPE_CPU
|
||||
QUEUE_TYPE_CPU,
|
||||
QUEUE_TYPE_HW
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -98,10 +99,13 @@ enum hl_queue_type {
|
|||
* @type: queue type.
|
||||
* @driver_only: true if only the driver is allowed to send a job to this queue,
|
||||
* false otherwise.
|
||||
* @requires_kernel_cb: true if a CB handle must be provided for jobs on this
|
||||
* queue, false otherwise (a CB address must be provided).
|
||||
*/
|
||||
struct hw_queue_properties {
|
||||
enum hl_queue_type type;
|
||||
u8 driver_only;
|
||||
u8 requires_kernel_cb;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -110,8 +114,8 @@ struct hw_queue_properties {
|
|||
* @VM_TYPE_PHYS_PACK: mapping of DRAM memory to device virtual address.
|
||||
*/
|
||||
enum vm_type_t {
|
||||
VM_TYPE_USERPTR,
|
||||
VM_TYPE_PHYS_PACK
|
||||
VM_TYPE_USERPTR = 0x1,
|
||||
VM_TYPE_PHYS_PACK = 0x2
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -126,6 +130,36 @@ enum hl_device_hw_state {
|
|||
HL_DEVICE_HW_STATE_DIRTY
|
||||
};
|
||||
|
||||
/**
|
||||
* struct hl_mmu_properties - ASIC specific MMU address translation properties.
|
||||
* @hop0_shift: shift of hop 0 mask.
|
||||
* @hop1_shift: shift of hop 1 mask.
|
||||
* @hop2_shift: shift of hop 2 mask.
|
||||
* @hop3_shift: shift of hop 3 mask.
|
||||
* @hop4_shift: shift of hop 4 mask.
|
||||
* @hop0_mask: mask to get the PTE address in hop 0.
|
||||
* @hop1_mask: mask to get the PTE address in hop 1.
|
||||
* @hop2_mask: mask to get the PTE address in hop 2.
|
||||
* @hop3_mask: mask to get the PTE address in hop 3.
|
||||
* @hop4_mask: mask to get the PTE address in hop 4.
|
||||
* @page_size: default page size used to allocate memory.
|
||||
* @huge_page_size: page size used to allocate memory with huge pages.
|
||||
*/
|
||||
struct hl_mmu_properties {
|
||||
u64 hop0_shift;
|
||||
u64 hop1_shift;
|
||||
u64 hop2_shift;
|
||||
u64 hop3_shift;
|
||||
u64 hop4_shift;
|
||||
u64 hop0_mask;
|
||||
u64 hop1_mask;
|
||||
u64 hop2_mask;
|
||||
u64 hop3_mask;
|
||||
u64 hop4_mask;
|
||||
u32 page_size;
|
||||
u32 huge_page_size;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct asic_fixed_properties - ASIC specific immutable properties.
|
||||
* @hw_queues_props: H/W queues properties.
|
||||
|
@ -133,6 +167,8 @@ enum hl_device_hw_state {
|
|||
* available sensors.
|
||||
* @uboot_ver: F/W U-boot version.
|
||||
* @preboot_ver: F/W Preboot version.
|
||||
* @dmmu: DRAM MMU address translation properties.
|
||||
* @pmmu: PCI (host) MMU address translation properties.
|
||||
* @sram_base_address: SRAM physical start address.
|
||||
* @sram_end_address: SRAM physical end address.
|
||||
* @sram_user_base_address - SRAM physical start address for user access.
|
||||
|
@ -169,53 +205,55 @@ enum hl_device_hw_state {
|
|||
* @psoc_pci_pll_nf: PCI PLL NF value.
|
||||
* @psoc_pci_pll_od: PCI PLL OD value.
|
||||
* @psoc_pci_pll_div_factor: PCI PLL DIV FACTOR 1 value.
|
||||
* @completion_queues_count: number of completion queues.
|
||||
* @high_pll: high PLL frequency used by the device.
|
||||
* @cb_pool_cb_cnt: number of CBs in the CB pool.
|
||||
* @cb_pool_cb_size: size of each CB in the CB pool.
|
||||
* @tpc_enabled_mask: which TPCs are enabled.
|
||||
* @completion_queues_count: number of completion queues.
|
||||
*/
|
||||
struct asic_fixed_properties {
|
||||
struct hw_queue_properties hw_queues_props[HL_MAX_QUEUES];
|
||||
struct armcp_info armcp_info;
|
||||
char uboot_ver[VERSION_MAX_LEN];
|
||||
char preboot_ver[VERSION_MAX_LEN];
|
||||
u64 sram_base_address;
|
||||
u64 sram_end_address;
|
||||
u64 sram_user_base_address;
|
||||
u64 dram_base_address;
|
||||
u64 dram_end_address;
|
||||
u64 dram_user_base_address;
|
||||
u64 dram_size;
|
||||
u64 dram_pci_bar_size;
|
||||
u64 max_power_default;
|
||||
u64 va_space_host_start_address;
|
||||
u64 va_space_host_end_address;
|
||||
u64 va_space_dram_start_address;
|
||||
u64 va_space_dram_end_address;
|
||||
u64 dram_size_for_default_page_mapping;
|
||||
u64 pcie_dbi_base_address;
|
||||
u64 pcie_aux_dbi_reg_addr;
|
||||
u64 mmu_pgt_addr;
|
||||
u64 mmu_dram_default_page_addr;
|
||||
u32 mmu_pgt_size;
|
||||
u32 mmu_pte_size;
|
||||
u32 mmu_hop_table_size;
|
||||
u32 mmu_hop0_tables_total_size;
|
||||
u32 dram_page_size;
|
||||
u32 cfg_size;
|
||||
u32 sram_size;
|
||||
u32 max_asid;
|
||||
u32 num_of_events;
|
||||
u32 psoc_pci_pll_nr;
|
||||
u32 psoc_pci_pll_nf;
|
||||
u32 psoc_pci_pll_od;
|
||||
u32 psoc_pci_pll_div_factor;
|
||||
u32 high_pll;
|
||||
u32 cb_pool_cb_cnt;
|
||||
u32 cb_pool_cb_size;
|
||||
u8 completion_queues_count;
|
||||
u8 tpc_enabled_mask;
|
||||
struct armcp_info armcp_info;
|
||||
char uboot_ver[VERSION_MAX_LEN];
|
||||
char preboot_ver[VERSION_MAX_LEN];
|
||||
struct hl_mmu_properties dmmu;
|
||||
struct hl_mmu_properties pmmu;
|
||||
u64 sram_base_address;
|
||||
u64 sram_end_address;
|
||||
u64 sram_user_base_address;
|
||||
u64 dram_base_address;
|
||||
u64 dram_end_address;
|
||||
u64 dram_user_base_address;
|
||||
u64 dram_size;
|
||||
u64 dram_pci_bar_size;
|
||||
u64 max_power_default;
|
||||
u64 va_space_host_start_address;
|
||||
u64 va_space_host_end_address;
|
||||
u64 va_space_dram_start_address;
|
||||
u64 va_space_dram_end_address;
|
||||
u64 dram_size_for_default_page_mapping;
|
||||
u64 pcie_dbi_base_address;
|
||||
u64 pcie_aux_dbi_reg_addr;
|
||||
u64 mmu_pgt_addr;
|
||||
u64 mmu_dram_default_page_addr;
|
||||
u32 mmu_pgt_size;
|
||||
u32 mmu_pte_size;
|
||||
u32 mmu_hop_table_size;
|
||||
u32 mmu_hop0_tables_total_size;
|
||||
u32 dram_page_size;
|
||||
u32 cfg_size;
|
||||
u32 sram_size;
|
||||
u32 max_asid;
|
||||
u32 num_of_events;
|
||||
u32 psoc_pci_pll_nr;
|
||||
u32 psoc_pci_pll_nf;
|
||||
u32 psoc_pci_pll_od;
|
||||
u32 psoc_pci_pll_div_factor;
|
||||
u32 high_pll;
|
||||
u32 cb_pool_cb_cnt;
|
||||
u32 cb_pool_cb_size;
|
||||
u8 tpc_enabled_mask;
|
||||
u8 completion_queues_count;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -236,8 +274,6 @@ struct hl_dma_fence {
|
|||
* Command Buffers
|
||||
*/
|
||||
|
||||
#define HL_MAX_CB_SIZE 0x200000 /* 2MB */
|
||||
|
||||
/**
|
||||
* struct hl_cb_mgr - describes a Command Buffer Manager.
|
||||
* @cb_lock: protects cb_handles.
|
||||
|
@ -481,8 +517,8 @@ enum hl_pll_frequency {
|
|||
* @get_events_stat: retrieve event queue entries histogram.
|
||||
* @read_pte: read MMU page table entry from DRAM.
|
||||
* @write_pte: write MMU page table entry to DRAM.
|
||||
* @mmu_invalidate_cache: flush MMU STLB cache, either with soft (L1 only) or
|
||||
* hard (L0 & L1) flush.
|
||||
* @mmu_invalidate_cache: flush MMU STLB host/DRAM cache, either with soft
|
||||
* (L1 only) or hard (L0 & L1) flush.
|
||||
* @mmu_invalidate_cache_range: flush specific MMU STLB cache lines with
|
||||
* ASID-VA-size mask.
|
||||
* @send_heartbeat: send is-alive packet to ArmCP and verify response.
|
||||
|
@ -502,6 +538,7 @@ enum hl_pll_frequency {
|
|||
* @rreg: Read a register. Needed for simulator support.
|
||||
* @wreg: Write a register. Needed for simulator support.
|
||||
* @halt_coresight: stop the ETF and ETR traces.
|
||||
* @get_clk_rate: Retrieve the ASIC current and maximum clock rate in MHz
|
||||
*/
|
||||
struct hl_asic_funcs {
|
||||
int (*early_init)(struct hl_device *hdev);
|
||||
|
@ -562,7 +599,8 @@ struct hl_asic_funcs {
|
|||
u32 *size);
|
||||
u64 (*read_pte)(struct hl_device *hdev, u64 addr);
|
||||
void (*write_pte)(struct hl_device *hdev, u64 addr, u64 val);
|
||||
void (*mmu_invalidate_cache)(struct hl_device *hdev, bool is_hard);
|
||||
void (*mmu_invalidate_cache)(struct hl_device *hdev, bool is_hard,
|
||||
u32 flags);
|
||||
void (*mmu_invalidate_cache_range)(struct hl_device *hdev, bool is_hard,
|
||||
u32 asid, u64 va, u64 size);
|
||||
int (*send_heartbeat)(struct hl_device *hdev);
|
||||
|
@ -584,6 +622,7 @@ struct hl_asic_funcs {
|
|||
u32 (*rreg)(struct hl_device *hdev, u32 reg);
|
||||
void (*wreg)(struct hl_device *hdev, u32 reg, u32 val);
|
||||
void (*halt_coresight)(struct hl_device *hdev);
|
||||
int (*get_clk_rate)(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk);
|
||||
};
|
||||
|
||||
|
||||
|
@ -688,7 +727,7 @@ struct hl_ctx_mgr {
|
|||
* @sgt: pointer to the scatter-gather table that holds the pages.
|
||||
* @dir: for DMA unmapping, the direction must be supplied, so save it.
|
||||
* @debugfs_list: node in debugfs list of command submissions.
|
||||
* @addr: user-space virtual pointer to the start of the memory area.
|
||||
* @addr: user-space virtual address of the start of the memory area.
|
||||
* @size: size of the memory area to pin & map.
|
||||
* @dma_mapped: true if the SG was mapped to DMA addresses, false otherwise.
|
||||
*/
|
||||
|
@ -752,11 +791,14 @@ struct hl_cs {
|
|||
* @userptr_list: linked-list of userptr mappings that belong to this job and
|
||||
* wait for completion.
|
||||
* @debugfs_list: node in debugfs list of command submission jobs.
|
||||
* @queue_type: the type of the H/W queue this job is submitted to.
|
||||
* @id: the id of this job inside a CS.
|
||||
* @hw_queue_id: the id of the H/W queue this job is submitted to.
|
||||
* @user_cb_size: the actual size of the CB we got from the user.
|
||||
* @job_cb_size: the actual size of the CB that we put on the queue.
|
||||
* @ext_queue: whether the job is for external queue or internal queue.
|
||||
* @is_kernel_allocated_cb: true if the CB handle we got from the user holds a
|
||||
* handle to a kernel-allocated CB object, false
|
||||
* otherwise (SRAM/DRAM/host address).
|
||||
*/
|
||||
struct hl_cs_job {
|
||||
struct list_head cs_node;
|
||||
|
@ -766,39 +808,44 @@ struct hl_cs_job {
|
|||
struct work_struct finish_work;
|
||||
struct list_head userptr_list;
|
||||
struct list_head debugfs_list;
|
||||
enum hl_queue_type queue_type;
|
||||
u32 id;
|
||||
u32 hw_queue_id;
|
||||
u32 user_cb_size;
|
||||
u32 job_cb_size;
|
||||
u8 ext_queue;
|
||||
u8 is_kernel_allocated_cb;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct hl_cs_parser - command submission paerser properties.
|
||||
* struct hl_cs_parser - command submission parser properties.
|
||||
* @user_cb: the CB we got from the user.
|
||||
* @patched_cb: in case of patching, this is internal CB which is submitted on
|
||||
* the queue instead of the CB we got from the IOCTL.
|
||||
* @job_userptr_list: linked-list of userptr mappings that belong to the related
|
||||
* job and wait for completion.
|
||||
* @cs_sequence: the sequence number of the related CS.
|
||||
* @queue_type: the type of the H/W queue this job is submitted to.
|
||||
* @ctx_id: the ID of the context the related CS belongs to.
|
||||
* @hw_queue_id: the id of the H/W queue this job is submitted to.
|
||||
* @user_cb_size: the actual size of the CB we got from the user.
|
||||
* @patched_cb_size: the size of the CB after parsing.
|
||||
* @ext_queue: whether the job is for external queue or internal queue.
|
||||
* @job_id: the id of the related job inside the related CS.
|
||||
* @is_kernel_allocated_cb: true if the CB handle we got from the user holds a
|
||||
* handle to a kernel-allocated CB object, false
|
||||
* otherwise (SRAM/DRAM/host address).
|
||||
*/
|
||||
struct hl_cs_parser {
|
||||
struct hl_cb *user_cb;
|
||||
struct hl_cb *patched_cb;
|
||||
struct list_head *job_userptr_list;
|
||||
u64 cs_sequence;
|
||||
enum hl_queue_type queue_type;
|
||||
u32 ctx_id;
|
||||
u32 hw_queue_id;
|
||||
u32 user_cb_size;
|
||||
u32 patched_cb_size;
|
||||
u8 ext_queue;
|
||||
u8 job_id;
|
||||
u8 is_kernel_allocated_cb;
|
||||
};
|
||||
|
||||
|
||||
|
@ -1048,9 +1095,10 @@ void hl_wreg(struct hl_device *hdev, u32 reg, u32 val);
|
|||
|
||||
#define REG_FIELD_SHIFT(reg, field) reg##_##field##_SHIFT
|
||||
#define REG_FIELD_MASK(reg, field) reg##_##field##_MASK
|
||||
#define WREG32_FIELD(reg, field, val) \
|
||||
WREG32(mm##reg, (RREG32(mm##reg) & ~REG_FIELD_MASK(reg, field)) | \
|
||||
(val) << REG_FIELD_SHIFT(reg, field))
|
||||
#define WREG32_FIELD(reg, offset, field, val) \
|
||||
WREG32(mm##reg + offset, (RREG32(mm##reg + offset) & \
|
||||
~REG_FIELD_MASK(reg, field)) | \
|
||||
(val) << REG_FIELD_SHIFT(reg, field))
|
||||
|
||||
/* Timeout should be longer when working with simulator but cap the
|
||||
* increased timeout to some maximum
|
||||
|
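
The added offset parameter lets WREG32_FIELD() address per-engine instances
of one register block; the TPC ICACHE hunk in goya.c above relies on exactly
that. A sketch (the block count and stride constant here are hypothetical,
not driver defines):

    u32 offset;

    for (offset = 0; offset < 8 * TPC_SKETCH_STRIDE;
    					offset += TPC_SKETCH_STRIDE)
    	WREG32_FIELD(TPC0_CFG_MSS_CONFIG, offset,
    			ICACHE_FETCH_LINE_NUM, 2);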
@@ -1501,7 +1549,8 @@ int hl_cb_pool_init(struct hl_device *hdev);
 int hl_cb_pool_fini(struct hl_device *hdev);
 
 void hl_cs_rollback_all(struct hl_device *hdev);
-struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev, bool ext_queue);
+struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev,
+		enum hl_queue_type queue_type, bool is_kernel_allocated_cb);
 
 void goya_set_asic_funcs(struct hl_device *hdev);
 
@@ -1513,7 +1562,7 @@ void hl_vm_fini(struct hl_device *hdev);
 
 int hl_pin_host_memory(struct hl_device *hdev, u64 addr, u64 size,
 			struct hl_userptr *userptr);
-int hl_unpin_host_memory(struct hl_device *hdev, struct hl_userptr *userptr);
+void hl_unpin_host_memory(struct hl_device *hdev, struct hl_userptr *userptr);
 void hl_userptr_delete_list(struct hl_device *hdev,
 			struct list_head *userptr_list);
 bool hl_userptr_is_pinned(struct hl_device *hdev, u64 addr, u32 size,

diff --git a/drivers/misc/habanalabs/habanalabs_ioctl.c b/drivers/misc/habanalabs/habanalabs_ioctl.c
@ -60,11 +60,16 @@ static int hw_ip_info(struct hl_device *hdev, struct hl_info_args *args)
|
|||
hw_ip.tpc_enabled_mask = prop->tpc_enabled_mask;
|
||||
hw_ip.sram_size = prop->sram_size - sram_kmd_size;
|
||||
hw_ip.dram_size = prop->dram_size - dram_kmd_size;
|
||||
if (hw_ip.dram_size > 0)
|
||||
if (hw_ip.dram_size > PAGE_SIZE)
|
||||
hw_ip.dram_enabled = 1;
|
||||
hw_ip.num_of_events = prop->num_of_events;
|
||||
memcpy(hw_ip.armcp_version,
|
||||
prop->armcp_info.armcp_version, VERSION_MAX_LEN);
|
||||
|
||||
memcpy(hw_ip.armcp_version, prop->armcp_info.armcp_version,
|
||||
min(VERSION_MAX_LEN, HL_INFO_VERSION_MAX_LEN));
|
||||
|
||||
memcpy(hw_ip.card_name, prop->armcp_info.card_name,
|
||||
min(CARD_NAME_MAX_LEN, HL_INFO_CARD_NAME_MAX_LEN));
|
||||
|
||||
hw_ip.armcp_cpld_version = le32_to_cpu(prop->armcp_info.cpld_version);
|
||||
hw_ip.psoc_pci_pll_nr = prop->psoc_pci_pll_nr;
|
||||
hw_ip.psoc_pci_pll_nf = prop->psoc_pci_pll_nf;
|
||||
|
@@ -179,17 +184,14 @@ static int debug_coresight(struct hl_device *hdev, struct hl_debug_args *args)
		goto out;
	}

	if (output) {
		if (copy_to_user((void __user *) (uintptr_t) args->output_ptr,
					output,
					args->output_size)) {
			dev_err(hdev->dev,
				"copy to user failed in debug ioctl\n");
			rc = -EFAULT;
			goto out;
		}
	if (output && copy_to_user((void __user *) (uintptr_t) args->output_ptr,
					output, args->output_size)) {
		dev_err(hdev->dev, "copy to user failed in debug ioctl\n");
		rc = -EFAULT;
		goto out;
	}

out:
	kfree(params);
	kfree(output);
@@ -221,6 +223,41 @@ static int device_utilization(struct hl_device *hdev, struct hl_info_args *args)
			min((size_t) max_size, sizeof(device_util))) ? -EFAULT : 0;
}

static int get_clk_rate(struct hl_device *hdev, struct hl_info_args *args)
{
	struct hl_info_clk_rate clk_rate = {0};
	u32 max_size = args->return_size;
	void __user *out = (void __user *) (uintptr_t) args->return_pointer;
	int rc;

	if ((!max_size) || (!out))
		return -EINVAL;

	rc = hdev->asic_funcs->get_clk_rate(hdev, &clk_rate.cur_clk_rate_mhz,
					&clk_rate.max_clk_rate_mhz);
	if (rc)
		return rc;

	return copy_to_user(out, &clk_rate,
		min((size_t) max_size, sizeof(clk_rate))) ? -EFAULT : 0;
}

static int get_reset_count(struct hl_device *hdev, struct hl_info_args *args)
{
	struct hl_info_reset_count reset_count = {0};
	u32 max_size = args->return_size;
	void __user *out = (void __user *) (uintptr_t) args->return_pointer;

	if ((!max_size) || (!out))
		return -EINVAL;

	reset_count.hard_reset_cnt = hdev->hard_reset_cnt;
	reset_count.soft_reset_cnt = hdev->soft_reset_cnt;

	return copy_to_user(out, &reset_count,
		min((size_t) max_size, sizeof(reset_count))) ? -EFAULT : 0;
}

static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data,
				struct device *dev)
{
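For reference, the new opcodes are reachable from user space through the
existing INFO ioctl path. A minimal sketch, assuming the uapi definitions this
series exports (struct hl_info_args, struct hl_info_reset_count,
HL_INFO_RESET_COUNT and HL_IOCTL_INFO from the habanalabs uapi header); the
/dev/hl0 node name is an assumption here:

/* hedged user-space sketch; header path and device node are assumptions */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <misc/habanalabs.h>

int main(void)
{
	struct hl_info_reset_count reset_count;
	struct hl_info_args args;
	int fd = open("/dev/hl0", O_RDWR);	/* assumed node name */

	if (fd < 0)
		return 1;

	memset(&args, 0, sizeof(args));
	args.op = HL_INFO_RESET_COUNT;
	args.return_pointer = (uint64_t) (uintptr_t) &reset_count;
	args.return_size = sizeof(reset_count);	/* driver copies min(return_size, sizeof) */

	if (ioctl(fd, HL_IOCTL_INFO, &args) == 0)
		printf("hard resets: %u, soft resets: %u\n",
			reset_count.hard_reset_cnt, reset_count.soft_reset_cnt);

	close(fd);
	return 0;
}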
@@ -239,6 +276,9 @@ static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data,
	case HL_INFO_DEVICE_STATUS:
		return device_status_info(hdev, args);

	case HL_INFO_RESET_COUNT:
		return get_reset_count(hdev, args);

	default:
		break;
	}

@@ -271,6 +311,10 @@ static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data,
		rc = hw_events_info(hdev, true, args);
		break;

	case HL_INFO_CLK_RATE:
		rc = get_clk_rate(hdev, args);
		break;

	default:
		dev_err(dev, "Invalid request %d\n", args->op);
		rc = -ENOTTY;
@@ -406,9 +450,8 @@ static long _hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg,

	retcode = func(hpriv, kdata);

	if (cmd & IOC_OUT)
		if (copy_to_user((void __user *)arg, kdata, usize))
			retcode = -EFAULT;
	if ((cmd & IOC_OUT) && copy_to_user((void __user *)arg, kdata, usize))
		retcode = -EFAULT;

out_err:
	if (retcode)

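The folded condition works because the _IOC encoding stores the transfer
direction in the top bits of the command number, so `cmd & IOC_OUT` selects
exactly the ioctls that must copy results back to user space. A small
stand-alone illustration; the command below is a stand-in built with _IOWR,
not the driver's real definition:

/* illustrative only: direction bits of an _IOWR-encoded ioctl number */
#include <stdio.h>
#include <linux/ioctl.h>

struct dummy_args { unsigned long long ptr; unsigned int size, op; };

#define EXAMPLE_IOCTL _IOWR('H', 0x01, struct dummy_args)	/* stand-in */

int main(void)
{
	unsigned int cmd = EXAMPLE_IOCTL;

	/* _IOWR sets both direction bits, so a copy back to user is needed */
	printf("IOC_OUT set: %d, IOC_IN set: %d, arg size: %u\n",
		!!(cmd & IOC_OUT), !!(cmd & IOC_IN),
		(cmd >> _IOC_SIZESHIFT) & _IOC_SIZEMASK);
	return 0;
}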
@@ -58,8 +58,8 @@ out:
}

/*
 * ext_queue_submit_bd - Submit a buffer descriptor to an external queue
 *
 * ext_and_hw_queue_submit_bd() - Submit a buffer descriptor to an external or a
 *                                H/W queue.
 * @hdev: pointer to habanalabs device structure
 * @q: pointer to habanalabs queue structure
 * @ctl: BD's control word

@@ -73,8 +73,8 @@ out:
 * This function must be called when the scheduler mutex is taken
 *
 */
static void ext_queue_submit_bd(struct hl_device *hdev, struct hl_hw_queue *q,
			u32 ctl, u32 len, u64 ptr)
static void ext_and_hw_queue_submit_bd(struct hl_device *hdev,
			struct hl_hw_queue *q, u32 ctl, u32 len, u64 ptr)
{
	struct hl_bd *bd;

@@ -173,6 +173,45 @@ static int int_queue_sanity_checks(struct hl_device *hdev,
	return 0;
}

/*
 * hw_queue_sanity_checks() - Perform some sanity checks on a H/W queue.
 * @hdev: Pointer to hl_device structure.
 * @q: Pointer to hl_hw_queue structure.
 * @num_of_entries: How many entries to check for space.
 *
 * Perform the following:
 * - Make sure we have enough space in the completion queue.
 *   This check also ensures that there is enough space in the h/w queue, as
 *   both queues are of the same size.
 * - Reserve space in the completion queue (needs to be reversed if there
 *   is a failure down the road before the actual submission of work).
 *
 * Both operations are done using the "free_slots_cnt" field of the completion
 * queue. The CI counters of the queue and the completion queue are not
 * needed/used for the H/W queue type.
 */
static int hw_queue_sanity_checks(struct hl_device *hdev, struct hl_hw_queue *q,
					int num_of_entries)
{
	atomic_t *free_slots =
			&hdev->completion_queue[q->hw_queue_id].free_slots_cnt;

	/*
	 * Check we have enough space in the completion queue.
	 * Add -1 to counter (decrement) unless counter was already 0.
	 * In that case, CQ is full so we can't submit a new CB.
	 * atomic_add_unless will return 0 if counter was already 0.
	 */
	if (atomic_add_negative(num_of_entries * -1, free_slots)) {
		dev_dbg(hdev->dev, "No space for %d entries on CQ %d\n",
			num_of_entries, q->hw_queue_id);
		atomic_add(num_of_entries, free_slots);
		return -EAGAIN;
	}

	return 0;
}
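The reserve-then-roll-back idea is the heart of this helper. A stand-alone
re-creation in portable C11 atomics, mirroring atomic_add_negative() plus the
compensating atomic_add(); this is illustrative only, not driver code:

/* illustrative CQ slot reservation with C11 atomics */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int free_slots = 4;	/* pretend the CQ has 4 free entries */

static int reserve_slots(int n)
{
	/* atomic_fetch_sub returns the old value; old - n < 0 means no room */
	if (atomic_fetch_sub(&free_slots, n) - n < 0) {
		atomic_fetch_add(&free_slots, n);	/* undo the reservation */
		return -1;				/* -EAGAIN in the driver */
	}
	return 0;
}

int main(void)
{
	printf("reserve 3: %d\n", reserve_slots(3));	/* ok, 1 slot left */
	printf("reserve 2: %d\n", reserve_slots(2));	/* fails, rolled back */
	printf("free slots: %d\n", atomic_load(&free_slots));	/* 1 */
	return 0;
}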

/*
 * hl_hw_queue_send_cb_no_cmpl - send a single CB (not a JOB) without completion
 *
@@ -188,7 +227,7 @@ int hl_hw_queue_send_cb_no_cmpl(struct hl_device *hdev, u32 hw_queue_id,
				u32 cb_size, u64 cb_ptr)
{
	struct hl_hw_queue *q = &hdev->kernel_queues[hw_queue_id];
	int rc;
	int rc = 0;

	/*
	 * The CPU queue is a synchronous queue with an effective depth of

@@ -206,11 +245,18 @@ int hl_hw_queue_send_cb_no_cmpl(struct hl_device *hdev, u32 hw_queue_id,
		goto out;
	}

	rc = ext_queue_sanity_checks(hdev, q, 1, false);
	if (rc)
		goto out;
	/*
	 * hl_hw_queue_send_cb_no_cmpl() is called for queues of a H/W queue
	 * type only on init phase, when the queues are empty and being tested,
	 * so there is no need for sanity checks.
	 */
	if (q->queue_type != QUEUE_TYPE_HW) {
		rc = ext_queue_sanity_checks(hdev, q, 1, false);
		if (rc)
			goto out;
	}

	ext_queue_submit_bd(hdev, q, 0, cb_size, cb_ptr);
	ext_and_hw_queue_submit_bd(hdev, q, 0, cb_size, cb_ptr);

out:
	if (q->queue_type != QUEUE_TYPE_CPU)
@@ -220,14 +266,14 @@ out:
}

/*
 * ext_hw_queue_schedule_job - submit an JOB to an external queue
 * ext_queue_schedule_job - submit a JOB to an external queue
 *
 * @job: pointer to the job that needs to be submitted to the queue
 *
 * This function must be called when the scheduler mutex is taken
 *
 */
static void ext_hw_queue_schedule_job(struct hl_cs_job *job)
static void ext_queue_schedule_job(struct hl_cs_job *job)
{
	struct hl_device *hdev = job->cs->ctx->hdev;
	struct hl_hw_queue *q = &hdev->kernel_queues[job->hw_queue_id];

@@ -260,7 +306,7 @@ static void ext_hw_queue_schedule_job(struct hl_cs_job *job)
	 * H/W queues is done under the scheduler mutex
	 *
	 * No need to check if CQ is full because it was already
	 * checked in hl_queue_sanity_checks
	 * checked in ext_queue_sanity_checks
	 */
	cq = &hdev->completion_queue[q->hw_queue_id];
	cq_addr = cq->bus_address + cq->pi * sizeof(struct hl_cq_entry);
@@ -274,18 +320,18 @@ static void ext_hw_queue_schedule_job(struct hl_cs_job *job)

	cq->pi = hl_cq_inc_ptr(cq->pi);

	ext_queue_submit_bd(hdev, q, ctl, len, ptr);
	ext_and_hw_queue_submit_bd(hdev, q, ctl, len, ptr);
}

/*
 * int_hw_queue_schedule_job - submit an JOB to an internal queue
 * int_queue_schedule_job - submit a JOB to an internal queue
 *
 * @job: pointer to the job that needs to be submitted to the queue
 *
 * This function must be called when the scheduler mutex is taken
 *
 */
static void int_hw_queue_schedule_job(struct hl_cs_job *job)
static void int_queue_schedule_job(struct hl_cs_job *job)
{
	struct hl_device *hdev = job->cs->ctx->hdev;
	struct hl_hw_queue *q = &hdev->kernel_queues[job->hw_queue_id];
@@ -307,6 +353,60 @@ static void int_hw_queue_schedule_job(struct hl_cs_job *job)
	hdev->asic_funcs->ring_doorbell(hdev, q->hw_queue_id, q->pi);
}

/*
 * hw_queue_schedule_job - submit a JOB to a H/W queue
 *
 * @job: pointer to the job that needs to be submitted to the queue
 *
 * This function must be called when the scheduler mutex is taken
 *
 */
static void hw_queue_schedule_job(struct hl_cs_job *job)
{
	struct hl_device *hdev = job->cs->ctx->hdev;
	struct hl_hw_queue *q = &hdev->kernel_queues[job->hw_queue_id];
	struct hl_cq *cq;
	u64 ptr;
	u32 offset, ctl, len;

	/*
	 * Upon PQE completion, COMP_DATA is used as the write data to the
	 * completion queue (QMAN HBW message), and COMP_OFFSET is used as the
	 * write address offset in the SM block (QMAN LBW message).
	 * The write address offset is calculated as "COMP_OFFSET << 2".
	 */
	offset = job->cs->sequence & (HL_MAX_PENDING_CS - 1);
	ctl = ((offset << BD_CTL_COMP_OFFSET_SHIFT) & BD_CTL_COMP_OFFSET_MASK) |
		((q->pi << BD_CTL_COMP_DATA_SHIFT) & BD_CTL_COMP_DATA_MASK);

	len = job->job_cb_size;

	/*
	 * A patched CB is created only if a user CB was allocated by driver and
	 * MMU is disabled. If MMU is enabled, the user CB should be used
	 * instead. If the user CB wasn't allocated by driver, assume that it
	 * holds an address.
	 */
	if (job->patched_cb)
		ptr = job->patched_cb->bus_address;
	else if (job->is_kernel_allocated_cb)
		ptr = job->user_cb->bus_address;
	else
		ptr = (u64) (uintptr_t) job->user_cb;

	/*
	 * No need to protect pi_offset because scheduling to the
	 * H/W queues is done under the scheduler mutex
	 *
	 * No need to check if CQ is full because it was already
	 * checked in hw_queue_sanity_checks
	 */
	cq = &hdev->completion_queue[q->hw_queue_id];
	cq->pi = hl_cq_inc_ptr(cq->pi);

	ext_and_hw_queue_submit_bd(hdev, q, ctl, len, ptr);
}

/*
 * hl_hw_queue_schedule_cs - schedule a command submission
 *
@@ -330,23 +430,34 @@ int hl_hw_queue_schedule_cs(struct hl_cs *cs)
	}

	q = &hdev->kernel_queues[0];
	/* This loop assumes all external queues are consecutive */
	for (i = 0, cq_cnt = 0 ; i < HL_MAX_QUEUES ; i++, q++) {
		if (q->queue_type == QUEUE_TYPE_EXT) {
			if (cs->jobs_in_queue_cnt[i]) {
		if (cs->jobs_in_queue_cnt[i]) {
			switch (q->queue_type) {
			case QUEUE_TYPE_EXT:
				rc = ext_queue_sanity_checks(hdev, q,
						cs->jobs_in_queue_cnt[i], true);
				if (rc)
					goto unroll_cq_resv;
				cq_cnt++;
			}
		} else if (q->queue_type == QUEUE_TYPE_INT) {
			if (cs->jobs_in_queue_cnt[i]) {
					cs->jobs_in_queue_cnt[i], true);
				break;
			case QUEUE_TYPE_INT:
				rc = int_queue_sanity_checks(hdev, q,
						cs->jobs_in_queue_cnt[i]);
				if (rc)
					goto unroll_cq_resv;
					cs->jobs_in_queue_cnt[i]);
				break;
			case QUEUE_TYPE_HW:
				rc = hw_queue_sanity_checks(hdev, q,
					cs->jobs_in_queue_cnt[i]);
				break;
			default:
				dev_err(hdev->dev, "Queue type %d is invalid\n",
					q->queue_type);
				rc = -EINVAL;
				break;
			}

			if (rc)
				goto unroll_cq_resv;

			if (q->queue_type == QUEUE_TYPE_EXT ||
					q->queue_type == QUEUE_TYPE_HW)
				cq_cnt++;
		}
	}

@@ -373,21 +484,30 @@ int hl_hw_queue_schedule_cs(struct hl_cs *cs)
	}

	list_for_each_entry_safe(job, tmp, &cs->job_list, cs_node)
		if (job->ext_queue)
			ext_hw_queue_schedule_job(job);
		else
			int_hw_queue_schedule_job(job);
		switch (job->queue_type) {
		case QUEUE_TYPE_EXT:
			ext_queue_schedule_job(job);
			break;
		case QUEUE_TYPE_INT:
			int_queue_schedule_job(job);
			break;
		case QUEUE_TYPE_HW:
			hw_queue_schedule_job(job);
			break;
		default:
			break;
		}

	cs->submitted = true;

	goto out;

unroll_cq_resv:
	/* This loop assumes all external queues are consecutive */
	q = &hdev->kernel_queues[0];
	for (i = 0 ; (i < HL_MAX_QUEUES) && (cq_cnt > 0) ; i++, q++) {
		if ((q->queue_type == QUEUE_TYPE_EXT) &&
				(cs->jobs_in_queue_cnt[i])) {
		if ((q->queue_type == QUEUE_TYPE_EXT ||
				q->queue_type == QUEUE_TYPE_HW) &&
				cs->jobs_in_queue_cnt[i]) {
			atomic_t *free_slots =
				&hdev->completion_queue[i].free_slots_cnt;
			atomic_add(cs->jobs_in_queue_cnt[i], free_slots);
@@ -414,8 +534,8 @@ void hl_hw_queue_inc_ci_kernel(struct hl_device *hdev, u32 hw_queue_id)
	q->ci = hl_queue_inc_ptr(q->ci);
}

static int ext_and_cpu_hw_queue_init(struct hl_device *hdev,
			struct hl_hw_queue *q, bool is_cpu_queue)
static int ext_and_cpu_queue_init(struct hl_device *hdev, struct hl_hw_queue *q,
					bool is_cpu_queue)
{
	void *p;
	int rc;

@@ -465,7 +585,7 @@ free_queue:
	return rc;
}

static int int_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
static int int_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
{
	void *p;

@@ -485,18 +605,38 @@ static int int_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
	return 0;
}

static int cpu_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
static int cpu_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
{
	return ext_and_cpu_hw_queue_init(hdev, q, true);
	return ext_and_cpu_queue_init(hdev, q, true);
}

static int ext_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
static int ext_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
{
	return ext_and_cpu_hw_queue_init(hdev, q, false);
	return ext_and_cpu_queue_init(hdev, q, false);
}

static int hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
{
	void *p;

	p = hdev->asic_funcs->asic_dma_alloc_coherent(hdev,
						HL_QUEUE_SIZE_IN_BYTES,
						&q->bus_address,
						GFP_KERNEL | __GFP_ZERO);
	if (!p)
		return -ENOMEM;

	q->kernel_address = (u64) (uintptr_t) p;

	/* Make sure read/write pointers are initialized to start of queue */
	q->ci = 0;
	q->pi = 0;

	return 0;
}

/*
 * hw_queue_init - main initialization function for H/W queue object
 * queue_init - main initialization function for H/W queue object
 *
 * @hdev: pointer to hl_device device structure
 * @q: pointer to hl_hw_queue queue structure
@@ -505,7 +645,7 @@ static int ext_hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q)
 * Allocate dma-able memory for the queue and initialize fields
 * Returns 0 on success
 */
static int hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q,
static int queue_init(struct hl_device *hdev, struct hl_hw_queue *q,
			u32 hw_queue_id)
{
	int rc;

@@ -516,21 +656,20 @@ static int hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q,

	switch (q->queue_type) {
	case QUEUE_TYPE_EXT:
		rc = ext_hw_queue_init(hdev, q);
		rc = ext_queue_init(hdev, q);
		break;

	case QUEUE_TYPE_INT:
		rc = int_hw_queue_init(hdev, q);
		rc = int_queue_init(hdev, q);
		break;

	case QUEUE_TYPE_CPU:
		rc = cpu_hw_queue_init(hdev, q);
		rc = cpu_queue_init(hdev, q);
		break;
	case QUEUE_TYPE_HW:
		rc = hw_queue_init(hdev, q);
		break;

	case QUEUE_TYPE_NA:
		q->valid = 0;
		return 0;

	default:
		dev_crit(hdev->dev, "wrong queue type %d during init\n",
			q->queue_type);

@@ -554,7 +693,7 @@ static int hw_queue_init(struct hl_device *hdev, struct hl_hw_queue *q,
 *
 * Free the queue memory
 */
static void hw_queue_fini(struct hl_device *hdev, struct hl_hw_queue *q)
static void queue_fini(struct hl_device *hdev, struct hl_hw_queue *q)
{
	if (!q->valid)
		return;

@@ -612,7 +751,7 @@ int hl_hw_queues_create(struct hl_device *hdev)
			i < HL_MAX_QUEUES ; i++, q_ready_cnt++, q++) {

		q->queue_type = asic->hw_queues_props[i].type;
		rc = hw_queue_init(hdev, q, i);
		rc = queue_init(hdev, q, i);
		if (rc) {
			dev_err(hdev->dev,
				"failed to initialize queue %d\n", i);

@@ -624,7 +763,7 @@ int hl_hw_queues_create(struct hl_device *hdev)

release_queues:
	for (i = 0, q = hdev->kernel_queues ; i < q_ready_cnt ; i++, q++)
		hw_queue_fini(hdev, q);
		queue_fini(hdev, q);

	kfree(hdev->kernel_queues);

@@ -637,7 +776,7 @@ void hl_hw_queues_destroy(struct hl_device *hdev)
	int i;

	for (i = 0, q = hdev->kernel_queues ; i < HL_MAX_QUEUES ; i++, q++)
		hw_queue_fini(hdev, q);
		queue_fini(hdev, q);

	kfree(hdev->kernel_queues);
}

@@ -260,4 +260,6 @@
#define DMA_QM_3_GLBL_CFG1_DMA_STOP_SHIFT DMA_QM_0_GLBL_CFG1_DMA_STOP_SHIFT
#define DMA_QM_4_GLBL_CFG1_DMA_STOP_SHIFT DMA_QM_0_GLBL_CFG1_DMA_STOP_SHIFT

#define PSOC_ETR_AXICTL_PROTCTRLBIT1_SHIFT 1

#endif /* ASIC_REG_GOYA_MASKS_H_ */

@@ -84,6 +84,7 @@
#include "tpc6_rtr_regs.h"
#include "tpc7_nrtr_regs.h"
#include "tpc0_eml_cfg_regs.h"
#include "psoc_etr_regs.h"

#include "psoc_global_conf_masks.h"
#include "dma_macro_masks.h"

@@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0
 *
 * Copyright 2016-2018 HabanaLabs, Ltd.
 * All Rights Reserved.
 *
 */

/************************************
 ** This is an auto-generated file **
 **       DO NOT EDIT BELOW        **
 ************************************/

#ifndef ASIC_REG_PSOC_ETR_REGS_H_
#define ASIC_REG_PSOC_ETR_REGS_H_

/*
 *****************************************
 *   PSOC_ETR (Prototype: ETR)
 *****************************************
 */

#define mmPSOC_ETR_RSZ 0x2C43004

#define mmPSOC_ETR_STS 0x2C4300C

#define mmPSOC_ETR_RRD 0x2C43010

#define mmPSOC_ETR_RRP 0x2C43014

#define mmPSOC_ETR_RWP 0x2C43018

#define mmPSOC_ETR_TRG 0x2C4301C

#define mmPSOC_ETR_CTL 0x2C43020

#define mmPSOC_ETR_RWD 0x2C43024

#define mmPSOC_ETR_MODE 0x2C43028

#define mmPSOC_ETR_LBUFLEVEL 0x2C4302C

#define mmPSOC_ETR_CBUFLEVEL 0x2C43030

#define mmPSOC_ETR_BUFWM 0x2C43034

#define mmPSOC_ETR_RRPHI 0x2C43038

#define mmPSOC_ETR_RWPHI 0x2C4303C

#define mmPSOC_ETR_AXICTL 0x2C43110

#define mmPSOC_ETR_DBALO 0x2C43118

#define mmPSOC_ETR_DBAHI 0x2C4311C

#define mmPSOC_ETR_FFSR 0x2C43300

#define mmPSOC_ETR_FFCR 0x2C43304

#define mmPSOC_ETR_PSCR 0x2C43308

#define mmPSOC_ETR_ITMISCOP0 0x2C43EE0

#define mmPSOC_ETR_ITTRFLIN 0x2C43EE8

#define mmPSOC_ETR_ITATBDATA0 0x2C43EEC

#define mmPSOC_ETR_ITATBCTR2 0x2C43EF0

#define mmPSOC_ETR_ITATBCTR1 0x2C43EF4

#define mmPSOC_ETR_ITATBCTR0 0x2C43EF8

#define mmPSOC_ETR_ITCTRL 0x2C43F00

#define mmPSOC_ETR_CLAIMSET 0x2C43FA0

#define mmPSOC_ETR_CLAIMCLR 0x2C43FA4

#define mmPSOC_ETR_LAR 0x2C43FB0

#define mmPSOC_ETR_LSR 0x2C43FB4

#define mmPSOC_ETR_AUTHSTATUS 0x2C43FB8

#define mmPSOC_ETR_DEVID 0x2C43FC8

#define mmPSOC_ETR_DEVTYPE 0x2C43FCC

#define mmPSOC_ETR_PERIPHID4 0x2C43FD0

#define mmPSOC_ETR_PERIPHID5 0x2C43FD4

#define mmPSOC_ETR_PERIPHID6 0x2C43FD8

#define mmPSOC_ETR_PERIPHID7 0x2C43FDC

#define mmPSOC_ETR_PERIPHID0 0x2C43FE0

#define mmPSOC_ETR_PERIPHID1 0x2C43FE4

#define mmPSOC_ETR_PERIPHID2 0x2C43FE8

#define mmPSOC_ETR_PERIPHID3 0x2C43FEC

#define mmPSOC_ETR_COMPID0 0x2C43FF0

#define mmPSOC_ETR_COMPID1 0x2C43FF4

#define mmPSOC_ETR_COMPID2 0x2C43FF8

#define mmPSOC_ETR_COMPID3 0x2C43FFC

#endif /* ASIC_REG_PSOC_ETR_REGS_H_ */
@@ -20,6 +20,8 @@ enum cpu_boot_status {
	CPU_BOOT_STATUS_DRAM_INIT_FAIL,
	CPU_BOOT_STATUS_FIT_CORRUPTED,
	CPU_BOOT_STATUS_UBOOT_NOT_READY,
	CPU_BOOT_STATUS_RESERVED,
	CPU_BOOT_STATUS_TS_INIT_FAIL,
};

enum kmd_msg {

@@ -12,18 +12,16 @@
#define PAGE_SHIFT_2MB			21
#define PAGE_SIZE_2MB			(_AC(1, UL) << PAGE_SHIFT_2MB)
#define PAGE_SIZE_4KB			(_AC(1, UL) << PAGE_SHIFT_4KB)
#define PAGE_MASK_2MB			(~(PAGE_SIZE_2MB - 1))

#define PAGE_PRESENT_MASK		0x0000000000001ull
#define SWAP_OUT_MASK			0x0000000000004ull
#define LAST_MASK			0x0000000000800ull
#define PHYS_ADDR_MASK			0xFFFFFFFFFFFFF000ull
#define HOP0_MASK			0x3000000000000ull
#define HOP1_MASK			0x0FF8000000000ull
#define HOP2_MASK			0x0007FC0000000ull
#define HOP3_MASK			0x000003FE00000ull
#define HOP4_MASK			0x00000001FF000ull
#define OFFSET_MASK			0x0000000000FFFull
#define FLAGS_MASK			0x0000000000FFFull

#define HOP0_SHIFT			48
#define HOP1_SHIFT			39

@@ -31,8 +29,7 @@
#define HOP3_SHIFT			21
#define HOP4_SHIFT			12

#define PTE_PHYS_ADDR_SHIFT		12
#define PTE_PHYS_ADDR_MASK		~OFFSET_MASK
#define HOP_PHYS_ADDR_MASK		(~FLAGS_MASK)

#define HL_PTE_SIZE			sizeof(u64)
#define HOP_TABLE_SIZE			PAGE_SIZE_4KB

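With these masks in place, a device virtual address decomposes into five
hop-table indices plus a page offset. A stand-alone sketch using the Goya
values above (HOP2_SHIFT is inferred from HOP2_MASK, and the sample address
is arbitrary):

/* illustrative hop decomposition of a device VA */
#include <stdint.h>
#include <stdio.h>

#define HOP0_MASK	0x3000000000000ull
#define HOP1_MASK	0x0FF8000000000ull
#define HOP2_MASK	0x0007FC0000000ull
#define HOP3_MASK	0x000003FE00000ull
#define HOP4_MASK	0x00000001FF000ull
#define OFFSET_MASK	0x0000000000FFFull

#define HOP0_SHIFT	48
#define HOP1_SHIFT	39
#define HOP2_SHIFT	30	/* inferred from HOP2_MASK */
#define HOP3_SHIFT	21
#define HOP4_SHIFT	12

int main(void)
{
	uint64_t va = 0x0001234567ABCDEFull;	/* arbitrary sample address */

	printf("hop0 %llu hop1 %llu hop2 %llu hop3 %llu hop4 %llu off 0x%llx\n",
		(unsigned long long)((va & HOP0_MASK) >> HOP0_SHIFT),
		(unsigned long long)((va & HOP1_MASK) >> HOP1_SHIFT),
		(unsigned long long)((va & HOP2_MASK) >> HOP2_SHIFT),
		(unsigned long long)((va & HOP3_MASK) >> HOP3_SHIFT),
		(unsigned long long)((va & HOP4_MASK) >> HOP4_SHIFT),
		(unsigned long long)(va & OFFSET_MASK));
	return 0;
}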
@@ -23,6 +23,8 @@ struct hl_bd {
#define HL_BD_SIZE			sizeof(struct hl_bd)

/*
 * S/W CTL FIELDS.
 *
 * BD_CTL_REPEAT_VALID tells the CP whether the repeat field in the BD CTL is
 * valid. 1 means the repeat field is valid, 0 means not-valid,
 * i.e. repeat == 1

@@ -33,6 +35,16 @@ struct hl_bd {
#define BD_CTL_SHADOW_INDEX_SHIFT	0
#define BD_CTL_SHADOW_INDEX_MASK	0x00000FFF

/*
 * H/W CTL FIELDS
 */

#define BD_CTL_COMP_OFFSET_SHIFT	16
#define BD_CTL_COMP_OFFSET_MASK		0x00FF0000

#define BD_CTL_COMP_DATA_SHIFT		0
#define BD_CTL_COMP_DATA_MASK		0x0000FFFF

/*
 * COMPLETION QUEUE
 */

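These two fields are what hw_queue_schedule_job() packs into the BD control
word. A stand-alone sketch of the packing with arbitrary sample values:

/* illustrative BD control word packing */
#include <stdint.h>
#include <stdio.h>

#define BD_CTL_COMP_OFFSET_SHIFT	16
#define BD_CTL_COMP_OFFSET_MASK		0x00FF0000
#define BD_CTL_COMP_DATA_SHIFT		0
#define BD_CTL_COMP_DATA_MASK		0x0000FFFF

int main(void)
{
	uint32_t offset = 0x2A;	/* cs->sequence & (HL_MAX_PENDING_CS - 1) */
	uint32_t pi = 0x1234;	/* queue producer index */
	uint32_t ctl;

	ctl = ((offset << BD_CTL_COMP_OFFSET_SHIFT) & BD_CTL_COMP_OFFSET_MASK) |
		((pi << BD_CTL_COMP_DATA_SHIFT) & BD_CTL_COMP_DATA_MASK);

	printf("ctl = 0x%08x\n", ctl);	/* prints 0x002a1234 */
	return 0;
}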
@@ -13,7 +13,6 @@
#include <linux/slab.h>
#include <linux/genalloc.h>

#define PGS_IN_2MB_PAGE	(PAGE_SIZE_2MB >> PAGE_SHIFT)
#define HL_MMU_DEBUG	0

/*

@@ -159,20 +158,19 @@ pages_pack_err:
}

/*
 * get_userptr_from_host_va - initialize userptr structure from given host
 *                            virtual address
 *
 * @hdev	: habanalabs device structure
 * @args	: parameters containing the virtual address and size
 * @p_userptr	: pointer to result userptr structure
 * dma_map_host_va - DMA mapping of the given host virtual address.
 * @hdev: habanalabs device structure
 * @addr: the host virtual address of the memory area
 * @size: the size of the memory area
 * @p_userptr: pointer to result userptr structure
 *
 * This function does the following:
 * - Allocate userptr structure
 * - Pin the given host memory using the userptr structure
 * - Perform DMA mapping to have the DMA addresses of the pages
 */
static int get_userptr_from_host_va(struct hl_device *hdev,
		struct hl_mem_in *args, struct hl_userptr **p_userptr)
static int dma_map_host_va(struct hl_device *hdev, u64 addr, u64 size,
				struct hl_userptr **p_userptr)
{
	struct hl_userptr *userptr;
	int rc;

@@ -183,8 +181,7 @@ static int get_userptr_from_host_va(struct hl_device *hdev,
		goto userptr_err;
	}

	rc = hl_pin_host_memory(hdev, args->map_host.host_virt_addr,
			args->map_host.mem_size, userptr);
	rc = hl_pin_host_memory(hdev, addr, size, userptr);
	if (rc) {
		dev_err(hdev->dev, "Failed to pin host memory\n");
		goto pin_err;

@@ -215,16 +212,16 @@ userptr_err:
}

/*
 * free_userptr - free userptr structure
 *
 * @hdev	: habanalabs device structure
 * @userptr	: userptr to free
 * dma_unmap_host_va - DMA unmapping of the given host virtual address.
 * @hdev: habanalabs device structure
 * @userptr: userptr to free
 *
 * This function does the following:
 * - Unpins the physical pages
 * - Frees the userptr structure
 */
static void free_userptr(struct hl_device *hdev, struct hl_userptr *userptr)
static void dma_unmap_host_va(struct hl_device *hdev,
				struct hl_userptr *userptr)
{
	hl_unpin_host_memory(hdev, userptr);
	kfree(userptr);
@@ -253,10 +250,9 @@ static void dram_pg_pool_do_release(struct kref *ref)
}

/*
 * free_phys_pg_pack - free physical page pack
 *
 * @hdev	: habanalabs device structure
 * @phys_pg_pack	: physical page pack to free
 * free_phys_pg_pack - free physical page pack
 * @hdev: habanalabs device structure
 * @phys_pg_pack: physical page pack to free
 *
 * This function does the following:
 * - For DRAM memory only, iterate over the pack and free each physical block

@@ -264,7 +260,7 @@ static void dram_pg_pool_do_release(struct kref *ref)
 * - Free the hl_vm_phys_pg_pack structure
 */
static void free_phys_pg_pack(struct hl_device *hdev,
		struct hl_vm_phys_pg_pack *phys_pg_pack)
				struct hl_vm_phys_pg_pack *phys_pg_pack)
{
	struct hl_vm *vm = &hdev->vm;
	u64 i;

@@ -519,8 +515,8 @@ static inline int add_va_block(struct hl_device *hdev,
 * - Return the start address of the virtual block
 */
static u64 get_va_block(struct hl_device *hdev,
		struct hl_va_range *va_range, u64 size, u64 hint_addr,
		bool is_userptr)
			struct hl_va_range *va_range, u64 size, u64 hint_addr,
			bool is_userptr)
{
	struct hl_vm_va_block *va_block, *new_va_block = NULL;
	u64 valid_start, valid_size, prev_start, prev_end, page_mask,

@@ -528,18 +524,17 @@ static u64 get_va_block(struct hl_device *hdev,
	u32 page_size;
	bool add_prev = false;

	if (is_userptr) {
	if (is_userptr)
		/*
		 * We cannot know if the user allocated memory with huge pages
		 * or not, hence we continue with the biggest possible
		 * granularity.
		 */
		page_size = PAGE_SIZE_2MB;
		page_mask = PAGE_MASK_2MB;
	} else {
		page_size = hdev->asic_prop.dram_page_size;
		page_mask = ~((u64)page_size - 1);
	}
		page_size = hdev->asic_prop.pmmu.huge_page_size;
	else
		page_size = hdev->asic_prop.dmmu.page_size;

	page_mask = ~((u64)page_size - 1);

	mutex_lock(&va_range->lock);

@@ -549,7 +544,6 @@ static u64 get_va_block(struct hl_device *hdev,
	/* calc the first possible aligned addr */
	valid_start = va_block->start;

	if (valid_start & (page_size - 1)) {
		valid_start &= page_mask;
		valid_start += page_size;

@@ -561,7 +555,6 @@ static u64 get_va_block(struct hl_device *hdev,

	if (valid_size >= size &&
		(!new_va_block || valid_size < res_valid_size)) {
		new_va_block = va_block;
		res_valid_start = valid_start;
		res_valid_size = valid_size;
@@ -631,11 +624,10 @@ static u32 get_sg_info(struct scatterlist *sg, dma_addr_t *dma_addr)

/*
 * init_phys_pg_pack_from_userptr - initialize physical page pack from host
 *                                  memory
 *
 * @ctx		: current context
 * @userptr	: userptr to initialize from
 * @pphys_pg_pack	: res pointer
 *                                  memory
 * @ctx: current context
 * @userptr: userptr to initialize from
 * @pphys_pg_pack: result pointer
 *
 * This function does the following:
 * - Pin the physical pages related to the given virtual block

@@ -643,16 +635,19 @@ static u32 get_sg_info(struct scatterlist *sg, dma_addr_t *dma_addr)
 *   virtual block
 */
static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
		struct hl_userptr *userptr,
		struct hl_vm_phys_pg_pack **pphys_pg_pack)
				struct hl_userptr *userptr,
				struct hl_vm_phys_pg_pack **pphys_pg_pack)
{
	struct hl_mmu_properties *mmu_prop = &ctx->hdev->asic_prop.pmmu;
	struct hl_vm_phys_pg_pack *phys_pg_pack;
	struct scatterlist *sg;
	dma_addr_t dma_addr;
	u64 page_mask, total_npages;
	u32 npages, page_size = PAGE_SIZE;
	u32 npages, page_size = PAGE_SIZE,
		huge_page_size = mmu_prop->huge_page_size;
	bool first = true, is_huge_page_opt = true;
	int rc, i, j;
	u32 pgs_in_huge_page = huge_page_size >> __ffs(page_size);

	phys_pg_pack = kzalloc(sizeof(*phys_pg_pack), GFP_KERNEL);
	if (!phys_pg_pack)

@@ -675,14 +670,14 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,

		total_npages += npages;

		if ((npages % PGS_IN_2MB_PAGE) ||
					(dma_addr & (PAGE_SIZE_2MB - 1)))
		if ((npages % pgs_in_huge_page) ||
					(dma_addr & (huge_page_size - 1)))
			is_huge_page_opt = false;
	}

	if (is_huge_page_opt) {
		page_size = PAGE_SIZE_2MB;
		total_npages /= PGS_IN_2MB_PAGE;
		page_size = huge_page_size;
		do_div(total_npages, pgs_in_huge_page);
	}

	page_mask = ~(((u64) page_size) - 1);

@@ -714,7 +709,7 @@ static int init_phys_pg_pack_from_userptr(struct hl_ctx *ctx,
		dma_addr += page_size;

		if (is_huge_page_opt)
			npages -= PGS_IN_2MB_PAGE;
			npages -= pgs_in_huge_page;
		else
			npages--;
	}
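The eligibility test above decides whether a whole page pack can use huge
pages: every scatter-gather chunk must hold a whole number of huge pages and
start huge-page aligned. A stand-alone sketch under the usual 4 KB base page
and 2 MB huge page assumption (values are illustrative):

/* illustrative huge-page eligibility check for one SG chunk */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE		4096u
#define HUGE_PAGE_SIZE		(2u * 1024 * 1024)
#define PGS_IN_HUGE_PAGE	(HUGE_PAGE_SIZE / PAGE_SIZE)	/* 512 */

static bool chunk_is_huge_page_ok(uint64_t dma_addr, uint32_t npages)
{
	return !(npages % PGS_IN_HUGE_PAGE) &&
	       !(dma_addr & (HUGE_PAGE_SIZE - 1));
}

int main(void)
{
	/* 2 MB-aligned chunk of exactly 512 pages: eligible */
	printf("%d\n", chunk_is_huge_page_ok(0x40000000ull, 512));
	/* a misaligned start disables the optimization for the whole pack */
	printf("%d\n", chunk_is_huge_page_ok(0x40001000ull, 512));
	return 0;
}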
@@ -731,19 +726,18 @@ page_pack_arr_mem_err:
}

/*
 * map_phys_page_pack - maps the physical page pack
 *
 * @ctx		: current context
 * @vaddr	: start address of the virtual area to map from
 * @phys_pg_pack	: the pack of physical pages to map to
 * map_phys_pg_pack - maps the physical page pack.
 * @ctx: current context
 * @vaddr: start address of the virtual area to map from
 * @phys_pg_pack: the pack of physical pages to map to
 *
 * This function does the following:
 * - Maps each chunk of virtual memory to matching physical chunk
 * - Stores number of successful mappings in the given argument
 * - Returns 0 on success, error code otherwise.
 * - Returns 0 on success, error code otherwise
 */
static int map_phys_page_pack(struct hl_ctx *ctx, u64 vaddr,
		struct hl_vm_phys_pg_pack *phys_pg_pack)
static int map_phys_pg_pack(struct hl_ctx *ctx, u64 vaddr,
				struct hl_vm_phys_pg_pack *phys_pg_pack)
{
	struct hl_device *hdev = ctx->hdev;
	u64 next_vaddr = vaddr, paddr, mapped_pg_cnt = 0, i;

@@ -783,6 +777,36 @@ err:
	return rc;
}

/*
 * unmap_phys_pg_pack - unmaps the physical page pack
 * @ctx: current context
 * @vaddr: start address of the virtual area to unmap
 * @phys_pg_pack: the pack of physical pages to unmap
 */
static void unmap_phys_pg_pack(struct hl_ctx *ctx, u64 vaddr,
				struct hl_vm_phys_pg_pack *phys_pg_pack)
{
	struct hl_device *hdev = ctx->hdev;
	u64 next_vaddr, i;
	u32 page_size;

	page_size = phys_pg_pack->page_size;
	next_vaddr = vaddr;

	for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size) {
		if (hl_mmu_unmap(ctx, next_vaddr, page_size))
			dev_warn_ratelimited(hdev->dev,
				"unmap failed for vaddr: 0x%llx\n", next_vaddr);

		/*
		 * unmapping on Palladium can be really long, so avoid a CPU
		 * soft lockup bug by sleeping a little between unmapping pages
		 */
		if (hdev->pldm)
			usleep_range(500, 1000);
	}
}

static int get_paddr_from_handle(struct hl_ctx *ctx, struct hl_mem_in *args,
					u64 *paddr)
{
@@ -839,7 +863,10 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args,
	*device_addr = 0;

	if (is_userptr) {
		rc = get_userptr_from_host_va(hdev, args, &userptr);
		u64 addr = args->map_host.host_virt_addr,
			size = args->map_host.mem_size;

		rc = dma_map_host_va(hdev, addr, size, &userptr);
		if (rc) {
			dev_err(hdev->dev, "failed to get userptr from va\n");
			return rc;

@@ -850,7 +877,7 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args,
		if (rc) {
			dev_err(hdev->dev,
				"unable to init page pack for vaddr 0x%llx\n",
				args->map_host.host_virt_addr);
				addr);
			goto init_page_pack_err;
		}

@@ -909,7 +936,7 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args,

	mutex_lock(&ctx->mmu_lock);

	rc = map_phys_page_pack(ctx, ret_vaddr, phys_pg_pack);
	rc = map_phys_pg_pack(ctx, ret_vaddr, phys_pg_pack);
	if (rc) {
		mutex_unlock(&ctx->mmu_lock);
		dev_err(hdev->dev, "mapping page pack failed for handle %u\n",

@@ -917,7 +944,7 @@ static int map_device_va(struct hl_ctx *ctx, struct hl_mem_in *args,
		goto map_err;
	}

	hdev->asic_funcs->mmu_invalidate_cache(hdev, false);
	hdev->asic_funcs->mmu_invalidate_cache(hdev, false, *vm_type);

	mutex_unlock(&ctx->mmu_lock);

@@ -955,7 +982,7 @@ shared_err:
	free_phys_pg_pack(hdev, phys_pg_pack);
init_page_pack_err:
	if (is_userptr)
		free_userptr(hdev, userptr);
		dma_unmap_host_va(hdev, userptr);

	return rc;
}
@@ -965,20 +992,20 @@ init_page_pack_err:
 *
 * @ctx		: current context
 * @vaddr	: device virtual address to unmap
 * @ctx_free	: true if in context free flow, false otherwise.
 *
 * This function does the following:
 * - Unmap the physical pages related to the given virtual address
 * - return the device virtual block to the virtual block list
 */
static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr)
static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr, bool ctx_free)
{
	struct hl_device *hdev = ctx->hdev;
	struct hl_vm_phys_pg_pack *phys_pg_pack = NULL;
	struct hl_vm_hash_node *hnode = NULL;
	struct hl_userptr *userptr = NULL;
	struct hl_va_range *va_range;
	enum vm_type_t *vm_type;
	u64 next_vaddr, i;
	u32 page_size;
	bool is_userptr;
	int rc;

@@ -1003,9 +1030,10 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr)

	if (*vm_type == VM_TYPE_USERPTR) {
		is_userptr = true;
		va_range = &ctx->host_va_range;
		userptr = hnode->ptr;
		rc = init_phys_pg_pack_from_userptr(ctx, userptr,
				&phys_pg_pack);
						&phys_pg_pack);
		if (rc) {
			dev_err(hdev->dev,
				"unable to init page pack for vaddr 0x%llx\n",

@@ -1014,6 +1042,7 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr)
		}
	} else if (*vm_type == VM_TYPE_PHYS_PACK) {
		is_userptr = false;
		va_range = &ctx->dram_va_range;
		phys_pg_pack = hnode->ptr;
	} else {
		dev_warn(hdev->dev,

@@ -1029,42 +1058,41 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr)
		goto mapping_cnt_err;
	}

	page_size = phys_pg_pack->page_size;
	vaddr &= ~(((u64) page_size) - 1);

	next_vaddr = vaddr;
	vaddr &= ~(((u64) phys_pg_pack->page_size) - 1);

	mutex_lock(&ctx->mmu_lock);

	for (i = 0 ; i < phys_pg_pack->npages ; i++, next_vaddr += page_size) {
		if (hl_mmu_unmap(ctx, next_vaddr, page_size))
			dev_warn_ratelimited(hdev->dev,
			"unmap failed for vaddr: 0x%llx\n", next_vaddr);
	unmap_phys_pg_pack(ctx, vaddr, phys_pg_pack);

		/* unmapping on Palladium can be really long, so avoid a CPU
		 * soft lockup bug by sleeping a little between unmapping pages
		 */
		if (hdev->pldm)
			usleep_range(500, 1000);
	}

	hdev->asic_funcs->mmu_invalidate_cache(hdev, true);
	/*
	 * During context free this function is called in a loop to clean all
	 * the context mappings. Hence the cache invalidation can be called once
	 * at the loop end rather than for each iteration
	 */
	if (!ctx_free)
		hdev->asic_funcs->mmu_invalidate_cache(hdev, true, *vm_type);

	mutex_unlock(&ctx->mmu_lock);

	if (add_va_block(hdev,
			is_userptr ? &ctx->host_va_range : &ctx->dram_va_range,
			vaddr,
			vaddr + phys_pg_pack->total_size - 1))
		dev_warn(hdev->dev, "add va block failed for vaddr: 0x%llx\n",
				vaddr);
	/*
	 * No point in maintaining the free VA block list if the context is
	 * closing as the list will be freed anyway
	 */
	if (!ctx_free) {
		rc = add_va_block(hdev, va_range, vaddr,
					vaddr + phys_pg_pack->total_size - 1);
		if (rc)
			dev_warn(hdev->dev,
				"add va block failed for vaddr: 0x%llx\n",
				vaddr);
	}

	atomic_dec(&phys_pg_pack->mapping_cnt);
	kfree(hnode);

	if (is_userptr) {
		free_phys_pg_pack(hdev, phys_pg_pack);
		free_userptr(hdev, userptr);
		dma_unmap_host_va(hdev, userptr);
	}

	return 0;
@@ -1189,8 +1217,8 @@ int hl_mem_ioctl(struct hl_fpriv *hpriv, void *data)
		break;

	case HL_MEM_OP_UNMAP:
		rc = unmap_device_va(ctx,
				args->in.unmap.device_virt_addr);
		rc = unmap_device_va(ctx, args->in.unmap.device_virt_addr,
					false);
		break;

	default:
@@ -1203,57 +1231,17 @@ out:
	return rc;
}

/*
 * hl_pin_host_memory - pins a chunk of host memory
 *
 * @hdev	: pointer to the habanalabs device structure
 * @addr	: the user-space virtual address of the memory area
 * @size	: the size of the memory area
 * @userptr	: pointer to hl_userptr structure
 *
 * This function does the following:
 * - Pins the physical pages
 * - Create a SG list from those pages
 */
int hl_pin_host_memory(struct hl_device *hdev, u64 addr, u64 size,
			struct hl_userptr *userptr)
static int get_user_memory(struct hl_device *hdev, u64 addr, u64 size,
				u32 npages, u64 start, u32 offset,
				struct hl_userptr *userptr)
{
	u64 start, end;
	u32 npages, offset;
	int rc;

	if (!size) {
		dev_err(hdev->dev, "size to pin is invalid - %llu\n", size);
		return -EINVAL;
	}

	if (!access_ok((void __user *) (uintptr_t) addr, size)) {
		dev_err(hdev->dev, "user pointer is invalid - 0x%llx\n", addr);
		return -EFAULT;
	}

	/*
	 * If the combination of the address and size requested for this memory
	 * region causes an integer overflow, return error.
	 */
	if (((addr + size) < addr) ||
			PAGE_ALIGN(addr + size) < (addr + size)) {
		dev_err(hdev->dev,
			"user pointer 0x%llx + %llu causes integer overflow\n",
			addr, size);
		return -EINVAL;
	}

	start = addr & PAGE_MASK;
	offset = addr & ~PAGE_MASK;
	end = PAGE_ALIGN(addr + size);
	npages = (end - start) >> PAGE_SHIFT;

	userptr->size = size;
	userptr->addr = addr;
	userptr->dma_mapped = false;
	INIT_LIST_HEAD(&userptr->job_node);

	userptr->vec = frame_vector_create(npages);
	if (!userptr->vec) {
		dev_err(hdev->dev, "Failed to create frame vector\n");

@@ -1279,17 +1267,82 @@ int hl_pin_host_memory(struct hl_device *hdev, u64 addr, u64 size,
		goto put_framevec;
	}

	userptr->sgt = kzalloc(sizeof(*userptr->sgt), GFP_ATOMIC);
	if (!userptr->sgt) {
		rc = -ENOMEM;
		goto put_framevec;
	}

	rc = sg_alloc_table_from_pages(userptr->sgt,
					frame_vector_pages(userptr->vec),
					npages, offset, size, GFP_ATOMIC);
	if (rc < 0) {
		dev_err(hdev->dev, "failed to create SG table from pages\n");
		goto put_framevec;
	}

	return 0;

put_framevec:
	put_vaddr_frames(userptr->vec);
destroy_framevec:
	frame_vector_destroy(userptr->vec);
	return rc;
}

/*
 * hl_pin_host_memory - pins a chunk of host memory.
 * @hdev: pointer to the habanalabs device structure
 * @addr: the host virtual address of the memory area
 * @size: the size of the memory area
 * @userptr: pointer to hl_userptr structure
 *
 * This function does the following:
 * - Pins the physical pages
 * - Create an SG list from those pages
 */
int hl_pin_host_memory(struct hl_device *hdev, u64 addr, u64 size,
			struct hl_userptr *userptr)
{
	u64 start, end;
	u32 npages, offset;
	int rc;

	if (!size) {
		dev_err(hdev->dev, "size to pin is invalid - %llu\n", size);
		return -EINVAL;
	}

	/*
	 * If the combination of the address and size requested for this memory
	 * region causes an integer overflow, return error.
	 */
	if (((addr + size) < addr) ||
			PAGE_ALIGN(addr + size) < (addr + size)) {
		dev_err(hdev->dev,
			"user pointer 0x%llx + %llu causes integer overflow\n",
			addr, size);
		return -EINVAL;
	}

	/*
	 * This function can be called also from data path, hence use atomic
	 * always as it is not a big allocation.
	 */
	userptr->sgt = kzalloc(sizeof(*userptr->sgt), GFP_ATOMIC);
	if (!userptr->sgt)
		return -ENOMEM;

	start = addr & PAGE_MASK;
	offset = addr & ~PAGE_MASK;
	end = PAGE_ALIGN(addr + size);
	npages = (end - start) >> PAGE_SHIFT;

	userptr->size = size;
	userptr->addr = addr;
	userptr->dma_mapped = false;
	INIT_LIST_HEAD(&userptr->job_node);

	rc = get_user_memory(hdev, addr, size, npages, start, offset,
				userptr);
	if (rc) {
		dev_err(hdev->dev,
			"failed to get user memory for address 0x%llx\n",
			addr);
		goto free_sgt;
	}

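The start/offset/npages triple handed to get_user_memory() comes from
straightforward page arithmetic. A worked stand-alone example assuming 4 KB
pages (the sample address and size are arbitrary):

/* illustrative re-creation of the pinning page arithmetic in user space */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ull << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	uint64_t addr = 0x7f0000001800ull;	/* unaligned user address */
	uint64_t size = 0x2100;			/* spans three 4 KB pages */

	uint64_t start = addr & PAGE_MASK;	/* 0x7f0000001000 */
	uint64_t offset = addr & ~PAGE_MASK;	/* 0x800 */
	uint64_t end = PAGE_ALIGN(addr + size);	/* 0x7f0000004000 */
	uint64_t npages = (end - start) >> PAGE_SHIFT;

	printf("start 0x%llx offset 0x%llx npages %llu\n",
		(unsigned long long)start, (unsigned long long)offset,
		(unsigned long long)npages);	/* npages == 3 */
	return 0;
}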
@@ -1299,34 +1352,28 @@ int hl_pin_host_memory(struct hl_device *hdev, u64 addr, u64 size,

free_sgt:
	kfree(userptr->sgt);
put_framevec:
	put_vaddr_frames(userptr->vec);
destroy_framevec:
	frame_vector_destroy(userptr->vec);
	return rc;
}

/*
 * hl_unpin_host_memory - unpins a chunk of host memory
 *
 * @hdev	: pointer to the habanalabs device structure
 * @userptr	: pointer to hl_userptr structure
 * hl_unpin_host_memory - unpins a chunk of host memory.
 * @hdev: pointer to the habanalabs device structure
 * @userptr: pointer to hl_userptr structure
 *
 * This function does the following:
 * - Unpins the physical pages related to the host memory
 * - Free the SG list
 */
int hl_unpin_host_memory(struct hl_device *hdev, struct hl_userptr *userptr)
void hl_unpin_host_memory(struct hl_device *hdev, struct hl_userptr *userptr)
{
	struct page **pages;

	hl_debugfs_remove_userptr(hdev, userptr);

	if (userptr->dma_mapped)
		hdev->asic_funcs->hl_dma_unmap_sg(hdev,
				userptr->sgt->sgl,
				userptr->sgt->nents,
				userptr->dir);
		hdev->asic_funcs->hl_dma_unmap_sg(hdev, userptr->sgt->sgl,
						userptr->sgt->nents,
						userptr->dir);

	pages = frame_vector_pages(userptr->vec);
	if (!IS_ERR(pages)) {

@@ -1342,8 +1389,6 @@ int hl_unpin_host_memory(struct hl_device *hdev, struct hl_userptr *userptr)

	sg_free_table(userptr->sgt);
	kfree(userptr->sgt);

	return 0;
}

/*
|
@ -1542,43 +1587,16 @@ int hl_vm_ctx_init(struct hl_ctx *ctx)
|
|||
* @hdev : pointer to the habanalabs structure
|
||||
* va_range : pointer to virtual addresses range
|
||||
*
|
||||
* This function initializes the following:
|
||||
* - Checks that the given range contains the whole initial range
|
||||
* This function does the following:
|
||||
* - Frees the virtual addresses block list and its lock
|
||||
*/
|
||||
static void hl_va_range_fini(struct hl_device *hdev,
|
||||
struct hl_va_range *va_range)
|
||||
{
|
||||
struct hl_vm_va_block *va_block;
|
||||
|
||||
if (list_empty(&va_range->list)) {
|
||||
dev_warn(hdev->dev,
|
||||
"va list should not be empty on cleanup!\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!list_is_singular(&va_range->list)) {
|
||||
dev_warn(hdev->dev,
|
||||
"va list should not contain multiple blocks on cleanup!\n");
|
||||
goto free_va_list;
|
||||
}
|
||||
|
||||
va_block = list_first_entry(&va_range->list, typeof(*va_block), node);
|
||||
|
||||
if (va_block->start != va_range->start_addr ||
|
||||
va_block->end != va_range->end_addr) {
|
||||
dev_warn(hdev->dev,
|
||||
"wrong va block on cleanup, from 0x%llx to 0x%llx\n",
|
||||
va_block->start, va_block->end);
|
||||
goto free_va_list;
|
||||
}
|
||||
|
||||
free_va_list:
|
||||
mutex_lock(&va_range->lock);
|
||||
clear_va_list_locked(hdev, &va_range->list);
|
||||
mutex_unlock(&va_range->lock);
|
||||
|
||||
out:
|
||||
mutex_destroy(&va_range->lock);
|
||||
}
|
||||
|
||||
|
@ -1613,21 +1631,31 @@ void hl_vm_ctx_fini(struct hl_ctx *ctx)
|
|||
|
||||
hl_debugfs_remove_ctx_mem_hash(hdev, ctx);
|
||||
|
||||
if (!hash_empty(ctx->mem_hash))
|
||||
dev_notice(hdev->dev, "ctx is freed while it has va in use\n");
|
||||
/*
|
||||
* Clearly something went wrong on hard reset so no point in printing
|
||||
* another side effect error
|
||||
*/
|
||||
if (!hdev->hard_reset_pending && !hash_empty(ctx->mem_hash))
|
||||
dev_notice(hdev->dev,
|
||||
"ctx %d is freed while it has va in use\n",
|
||||
ctx->asid);
|
||||
|
||||
hash_for_each_safe(ctx->mem_hash, i, tmp_node, hnode, node) {
|
||||
dev_dbg(hdev->dev,
|
||||
"hl_mem_hash_node of vaddr 0x%llx of asid %d is still alive\n",
|
||||
hnode->vaddr, ctx->asid);
|
||||
unmap_device_va(ctx, hnode->vaddr);
|
||||
unmap_device_va(ctx, hnode->vaddr, true);
|
||||
}
|
||||
|
||||
/* invalidate the cache once after the unmapping loop */
|
||||
hdev->asic_funcs->mmu_invalidate_cache(hdev, true, VM_TYPE_USERPTR);
|
||||
hdev->asic_funcs->mmu_invalidate_cache(hdev, true, VM_TYPE_PHYS_PACK);
|
||||
|
||||
spin_lock(&vm->idr_lock);
|
||||
idr_for_each_entry(&vm->phys_pg_pack_handles, phys_pg_list, i)
|
||||
if (phys_pg_list->asid == ctx->asid) {
|
||||
dev_dbg(hdev->dev,
|
||||
"page list 0x%p of asid %d is still alive\n",
|
||||
"page list 0x%px of asid %d is still alive\n",
|
||||
phys_pg_list, ctx->asid);
|
||||
atomic64_sub(phys_pg_list->total_size,
|
||||
&hdev->dram_used_mem);
|
||||
|
|
|
@@ -25,10 +25,9 @@ static struct pgt_info *get_pgt_info(struct hl_ctx *ctx, u64 hop_addr)
	return pgt_info;
}

static void free_hop(struct hl_ctx *ctx, u64 hop_addr)
static void _free_hop(struct hl_ctx *ctx, struct pgt_info *pgt_info)
{
	struct hl_device *hdev = ctx->hdev;
	struct pgt_info *pgt_info = get_pgt_info(ctx, hop_addr);

	gen_pool_free(hdev->mmu_pgt_pool, pgt_info->phys_addr,
			hdev->asic_prop.mmu_hop_table_size);

@@ -37,6 +36,13 @@ static void free_hop(struct hl_ctx *ctx, u64 hop_addr)
	kfree(pgt_info);
}

static void free_hop(struct hl_ctx *ctx, u64 hop_addr)
{
	struct pgt_info *pgt_info = get_pgt_info(ctx, hop_addr);

	_free_hop(ctx, pgt_info);
}

static u64 alloc_hop(struct hl_ctx *ctx)
{
	struct hl_device *hdev = ctx->hdev;

@@ -105,8 +111,8 @@ static inline void write_pte(struct hl_ctx *ctx, u64 shadow_pte_addr, u64 val)
	 * clear the 12 LSBs and translate the shadow hop to its associated
	 * physical hop, and add back the original 12 LSBs.
	 */
	u64 phys_val = get_phys_addr(ctx, val & PTE_PHYS_ADDR_MASK) |
				(val & OFFSET_MASK);
	u64 phys_val = get_phys_addr(ctx, val & HOP_PHYS_ADDR_MASK) |
				(val & FLAGS_MASK);

	ctx->hdev->asic_funcs->write_pte(ctx->hdev,
					get_phys_addr(ctx, shadow_pte_addr),

@@ -159,7 +165,7 @@ static inline int put_pte(struct hl_ctx *ctx, u64 hop_addr)
	 */
	num_of_ptes_left = pgt_info->num_of_ptes;
	if (!num_of_ptes_left)
		free_hop(ctx, hop_addr);
		_free_hop(ctx, pgt_info);

	return num_of_ptes_left;
}

@@ -171,35 +177,50 @@ static inline u64 get_hopN_pte_addr(struct hl_ctx *ctx, u64 hop_addr,
			((virt_addr & mask) >> shift);
}

static inline u64 get_hop0_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr)
static inline u64 get_hop0_pte_addr(struct hl_ctx *ctx,
					struct hl_mmu_properties *mmu_prop,
					u64 hop_addr, u64 vaddr)
{
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP0_MASK, HOP0_SHIFT);
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_prop->hop0_mask,
					mmu_prop->hop0_shift);
}

static inline u64 get_hop1_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr)
static inline u64 get_hop1_pte_addr(struct hl_ctx *ctx,
					struct hl_mmu_properties *mmu_prop,
					u64 hop_addr, u64 vaddr)
{
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP1_MASK, HOP1_SHIFT);
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_prop->hop1_mask,
					mmu_prop->hop1_shift);
}

static inline u64 get_hop2_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr)
static inline u64 get_hop2_pte_addr(struct hl_ctx *ctx,
					struct hl_mmu_properties *mmu_prop,
					u64 hop_addr, u64 vaddr)
{
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP2_MASK, HOP2_SHIFT);
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_prop->hop2_mask,
					mmu_prop->hop2_shift);
}

static inline u64 get_hop3_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr)
static inline u64 get_hop3_pte_addr(struct hl_ctx *ctx,
					struct hl_mmu_properties *mmu_prop,
					u64 hop_addr, u64 vaddr)
{
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP3_MASK, HOP3_SHIFT);
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_prop->hop3_mask,
					mmu_prop->hop3_shift);
}

static inline u64 get_hop4_pte_addr(struct hl_ctx *ctx, u64 hop_addr, u64 vaddr)
static inline u64 get_hop4_pte_addr(struct hl_ctx *ctx,
					struct hl_mmu_properties *mmu_prop,
					u64 hop_addr, u64 vaddr)
{
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, HOP4_MASK, HOP4_SHIFT);
	return get_hopN_pte_addr(ctx, hop_addr, vaddr, mmu_prop->hop4_mask,
					mmu_prop->hop4_shift);
}

static inline u64 get_next_hop_addr(struct hl_ctx *ctx, u64 curr_pte)
{
	if (curr_pte & PAGE_PRESENT_MASK)
		return curr_pte & PHYS_ADDR_MASK;
		return curr_pte & HOP_PHYS_ADDR_MASK;
	else
		return ULLONG_MAX;
}
@@ -288,23 +309,23 @@ static int dram_default_mapping_init(struct hl_ctx *ctx)
	}

	/* need only pte 0 in hops 0 and 1 */
	pte_val = (hop1_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
	pte_val = (hop1_addr & HOP_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
	write_pte(ctx, hop0_addr, pte_val);

	pte_val = (hop2_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
	pte_val = (hop2_addr & HOP_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
	write_pte(ctx, hop1_addr, pte_val);
	get_pte(ctx, hop1_addr);

	hop2_pte_addr = hop2_addr;
	for (i = 0 ; i < num_of_hop3 ; i++) {
		pte_val = (ctx->dram_default_hops[i] & PTE_PHYS_ADDR_MASK) |
		pte_val = (ctx->dram_default_hops[i] & HOP_PHYS_ADDR_MASK) |
				PAGE_PRESENT_MASK;
		write_pte(ctx, hop2_pte_addr, pte_val);
		get_pte(ctx, hop2_addr);
		hop2_pte_addr += HL_PTE_SIZE;
	}

	pte_val = (prop->mmu_dram_default_page_addr & PTE_PHYS_ADDR_MASK) |
	pte_val = (prop->mmu_dram_default_page_addr & HOP_PHYS_ADDR_MASK) |
			LAST_MASK | PAGE_PRESENT_MASK;

	for (i = 0 ; i < num_of_hop3 ; i++) {

@@ -400,8 +421,6 @@ int hl_mmu_init(struct hl_device *hdev)
	if (!hdev->mmu_enable)
		return 0;

	/* MMU H/W init was already done in device hw_init() */

	hdev->mmu_pgt_pool =
			gen_pool_create(__ffs(prop->mmu_hop_table_size), -1);

@@ -427,6 +446,8 @@ int hl_mmu_init(struct hl_device *hdev)
		goto err_pool_add;
	}

	/* MMU H/W init will be done in device hw_init() */

	return 0;

err_pool_add:

@@ -450,10 +471,10 @@ void hl_mmu_fini(struct hl_device *hdev)
	if (!hdev->mmu_enable)
		return;

	/* MMU H/W fini was already done in device hw_fini() */

	kvfree(hdev->mmu_shadow_hop0);
	gen_pool_destroy(hdev->mmu_pgt_pool);

	/* MMU H/W fini will be done in device hw_fini() */
}

/**
@@ -501,36 +522,36 @@ void hl_mmu_ctx_fini(struct hl_ctx *ctx)
 	dram_default_mapping_fini(ctx);
 
 	if (!hash_empty(ctx->mmu_shadow_hash))
-		dev_err(hdev->dev, "ctx is freed while it has pgts in use\n");
+		dev_err(hdev->dev, "ctx %d is freed while it has pgts in use\n",
+			ctx->asid);
 
 	hash_for_each_safe(ctx->mmu_shadow_hash, i, tmp, pgt_info, node) {
-		dev_err(hdev->dev,
+		dev_err_ratelimited(hdev->dev,
 			"pgt_info of addr 0x%llx of asid %d was not destroyed, num_ptes: %d\n",
 			pgt_info->phys_addr, ctx->asid, pgt_info->num_of_ptes);
-		free_hop(ctx, pgt_info->shadow_addr);
+		_free_hop(ctx, pgt_info);
 	}
 
 	mutex_destroy(&ctx->mmu_lock);
 }
 
-static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
+static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, bool is_dram_addr)
 {
 	struct hl_device *hdev = ctx->hdev;
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	u64 hop0_addr = 0, hop0_pte_addr = 0,
 		hop1_addr = 0, hop1_pte_addr = 0,
 		hop2_addr = 0, hop2_pte_addr = 0,
 		hop3_addr = 0, hop3_pte_addr = 0,
 		hop4_addr = 0, hop4_pte_addr = 0,
 		curr_pte;
-	bool is_dram_addr, is_huge, clear_hop3 = true;
+	bool is_huge, clear_hop3 = true;
 
-	is_dram_addr = hl_mem_area_inside_range(virt_addr, PAGE_SIZE_2MB,
-				prop->va_space_dram_start_address,
-				prop->va_space_dram_end_address);
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
 
 	hop0_addr = get_hop0_addr(ctx);
-	hop0_pte_addr = get_hop0_pte_addr(ctx, hop0_addr, virt_addr);
+	hop0_pte_addr = get_hop0_pte_addr(ctx, mmu_prop, hop0_addr, virt_addr);
 
 	curr_pte = *(u64 *) (uintptr_t) hop0_pte_addr;

@@ -539,7 +560,7 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
 	if (hop1_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop1_pte_addr = get_hop1_pte_addr(ctx, hop1_addr, virt_addr);
+	hop1_pte_addr = get_hop1_pte_addr(ctx, mmu_prop, hop1_addr, virt_addr);
 
 	curr_pte = *(u64 *) (uintptr_t) hop1_pte_addr;

@@ -548,7 +569,7 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
 	if (hop2_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop2_pte_addr = get_hop2_pte_addr(ctx, hop2_addr, virt_addr);
+	hop2_pte_addr = get_hop2_pte_addr(ctx, mmu_prop, hop2_addr, virt_addr);
 
 	curr_pte = *(u64 *) (uintptr_t) hop2_pte_addr;

@@ -557,7 +578,7 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
 	if (hop3_addr == ULLONG_MAX)
 		goto not_mapped;
 
-	hop3_pte_addr = get_hop3_pte_addr(ctx, hop3_addr, virt_addr);
+	hop3_pte_addr = get_hop3_pte_addr(ctx, mmu_prop, hop3_addr, virt_addr);
 
 	curr_pte = *(u64 *) (uintptr_t) hop3_pte_addr;

@@ -575,7 +596,8 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
 		if (hop4_addr == ULLONG_MAX)
 			goto not_mapped;
 
-		hop4_pte_addr = get_hop4_pte_addr(ctx, hop4_addr, virt_addr);
+		hop4_pte_addr = get_hop4_pte_addr(ctx, mmu_prop, hop4_addr,
+							virt_addr);
 
 		curr_pte = *(u64 *) (uintptr_t) hop4_pte_addr;

@@ -584,7 +606,7 @@ static int _hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr)
 
 	if (hdev->dram_default_page_mapping && is_dram_addr) {
 		u64 default_pte = (prop->mmu_dram_default_page_addr &
-				PTE_PHYS_ADDR_MASK) | LAST_MASK |
+				HOP_PHYS_ADDR_MASK) | LAST_MASK |
 					PAGE_PRESENT_MASK;
 		if (curr_pte == default_pte) {
 			dev_err(hdev->dev,

@@ -667,25 +689,36 @@ not_mapped:
 int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size)
 {
 	struct hl_device *hdev = ctx->hdev;
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	u64 real_virt_addr;
 	u32 real_page_size, npages;
 	int i, rc;
+	bool is_dram_addr;
 
 	if (!hdev->mmu_enable)
 		return 0;
 
+	is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
+				prop->va_space_dram_start_address,
+				prop->va_space_dram_end_address);
+
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
+
 	/*
-	 * The H/W handles mapping of 4KB/2MB page. Hence if the host page size
-	 * is bigger, we break it to sub-pages and unmap them separately.
+	 * The H/W handles mapping of specific page sizes. Hence if the page
+	 * size is bigger, we break it to sub-pages and unmap them separately.
 	 */
-	if ((page_size % PAGE_SIZE_2MB) == 0) {
-		real_page_size = PAGE_SIZE_2MB;
-	} else if ((page_size % PAGE_SIZE_4KB) == 0) {
-		real_page_size = PAGE_SIZE_4KB;
+	if ((page_size % mmu_prop->huge_page_size) == 0) {
+		real_page_size = mmu_prop->huge_page_size;
+	} else if ((page_size % mmu_prop->page_size) == 0) {
+		real_page_size = mmu_prop->page_size;
 	} else {
 		dev_err(hdev->dev,
-			"page size of %u is not 4KB nor 2MB aligned, can't unmap\n",
-			page_size);
+			"page size of %u is not %uKB nor %uMB aligned, can't unmap\n",
+			page_size,
+			mmu_prop->page_size >> 10,
+			mmu_prop->huge_page_size >> 20);
 
 		return -EFAULT;
 	}

@@ -694,7 +727,7 @@ int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size)
 	real_virt_addr = virt_addr;
 
 	for (i = 0 ; i < npages ; i++) {
-		rc = _hl_mmu_unmap(ctx, real_virt_addr);
+		rc = _hl_mmu_unmap(ctx, real_virt_addr, is_dram_addr);
 		if (rc)
 			return rc;
 

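The two hunks above implement the sub-page split: page_size must be a multiple of either the regular or the huge page size of the region's MMU, and the range is then walked in real_page_size steps. A minimal sketch of that walk, assuming page_size has already passed the alignment check above (the wrapper name is hypothetical, not part of the patch):

static int hl_mmu_unmap_range_sketch(struct hl_ctx *ctx, u64 virt_addr,
				     u32 page_size, u32 real_page_size,
				     bool is_dram_addr)
{
	u32 npages = page_size / real_page_size;
	u64 va = virt_addr;
	u32 i;
	int rc;

	for (i = 0 ; i < npages ; i++) {
		/* unmap one H/W-sized sub-page at a time */
		rc = _hl_mmu_unmap(ctx, va, is_dram_addr);
		if (rc)
			return rc;
		va += real_page_size;
	}

	return 0;
}
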
@@ -705,10 +738,11 @@ int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size)
 }
 
 static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
-		u32 page_size)
+		u32 page_size, bool is_dram_addr)
 {
 	struct hl_device *hdev = ctx->hdev;
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	u64 hop0_addr = 0, hop0_pte_addr = 0,
 		hop1_addr = 0, hop1_pte_addr = 0,
 		hop2_addr = 0, hop2_pte_addr = 0,

@@ -716,21 +750,19 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
 		hop4_addr = 0, hop4_pte_addr = 0,
 		curr_pte = 0;
 	bool hop1_new = false, hop2_new = false, hop3_new = false,
-		hop4_new = false, is_huge, is_dram_addr;
+		hop4_new = false, is_huge;
 	int rc = -ENOMEM;
 
-	/*
-	 * This mapping function can map a 4KB/2MB page. For 2MB page there are
-	 * only 3 hops rather than 4. Currently the DRAM allocation uses 2MB
-	 * pages only but user memory could have been allocated with one of the
-	 * two page sizes. Since this is a common code for all the three cases,
-	 * we need this huge page check.
-	 */
-	is_huge = page_size == PAGE_SIZE_2MB;
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
 
-	is_dram_addr = hl_mem_area_inside_range(virt_addr, page_size,
-				prop->va_space_dram_start_address,
-				prop->va_space_dram_end_address);
+	/*
+	 * This mapping function can map a page or a huge page. For huge page
+	 * there are only 3 hops rather than 4. Currently the DRAM allocation
+	 * uses huge pages only but user memory could have been allocated with
+	 * one of the two page sizes. Since this is a common code for all the
+	 * three cases, we need this huge page check.
+	 */
+	is_huge = page_size == mmu_prop->huge_page_size;
 
 	if (is_dram_addr && !is_huge) {
 		dev_err(hdev->dev, "DRAM mapping should use huge pages only\n");

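For reference, the hop structure the comment above describes: a huge page terminates the table walk one level early, so its leaf PTE is written at hop3 instead of hop4. Either way the leaf PTE is composed from the masks this patch renames; an illustrative helper (not part of the patch) showing that composition:

static u64 make_leaf_pte(u64 phys_addr)
{
	/* physical address bits, plus "last hop" and "present" flags */
	return (phys_addr & HOP_PHYS_ADDR_MASK) | LAST_MASK | PAGE_PRESENT_MASK;
}
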
@@ -738,28 +770,28 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
 	}
 
 	hop0_addr = get_hop0_addr(ctx);
-	hop0_pte_addr = get_hop0_pte_addr(ctx, hop0_addr, virt_addr);
+	hop0_pte_addr = get_hop0_pte_addr(ctx, mmu_prop, hop0_addr, virt_addr);
 	curr_pte = *(u64 *) (uintptr_t) hop0_pte_addr;
 
 	hop1_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop1_new);
 	if (hop1_addr == ULLONG_MAX)
 		goto err;
 
-	hop1_pte_addr = get_hop1_pte_addr(ctx, hop1_addr, virt_addr);
+	hop1_pte_addr = get_hop1_pte_addr(ctx, mmu_prop, hop1_addr, virt_addr);
 	curr_pte = *(u64 *) (uintptr_t) hop1_pte_addr;
 
 	hop2_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop2_new);
 	if (hop2_addr == ULLONG_MAX)
 		goto err;
 
-	hop2_pte_addr = get_hop2_pte_addr(ctx, hop2_addr, virt_addr);
+	hop2_pte_addr = get_hop2_pte_addr(ctx, mmu_prop, hop2_addr, virt_addr);
 	curr_pte = *(u64 *) (uintptr_t) hop2_pte_addr;
 
 	hop3_addr = get_alloc_next_hop_addr(ctx, curr_pte, &hop3_new);
 	if (hop3_addr == ULLONG_MAX)
 		goto err;
 
-	hop3_pte_addr = get_hop3_pte_addr(ctx, hop3_addr, virt_addr);
+	hop3_pte_addr = get_hop3_pte_addr(ctx, mmu_prop, hop3_addr, virt_addr);
 	curr_pte = *(u64 *) (uintptr_t) hop3_pte_addr;
 
 	if (!is_huge) {

@@ -767,13 +799,14 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
 		if (hop4_addr == ULLONG_MAX)
 			goto err;
 
-		hop4_pte_addr = get_hop4_pte_addr(ctx, hop4_addr, virt_addr);
+		hop4_pte_addr = get_hop4_pte_addr(ctx, mmu_prop, hop4_addr,
+							virt_addr);
 		curr_pte = *(u64 *) (uintptr_t) hop4_pte_addr;
 	}
 
 	if (hdev->dram_default_page_mapping && is_dram_addr) {
 		u64 default_pte = (prop->mmu_dram_default_page_addr &
-				PTE_PHYS_ADDR_MASK) | LAST_MASK |
+				HOP_PHYS_ADDR_MASK) | LAST_MASK |
 					PAGE_PRESENT_MASK;
 
 		if (curr_pte != default_pte) {

@@ -813,7 +846,7 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
 			goto err;
 	}
 
-	curr_pte = (phys_addr & PTE_PHYS_ADDR_MASK) | LAST_MASK
+	curr_pte = (phys_addr & HOP_PHYS_ADDR_MASK) | LAST_MASK
 			| PAGE_PRESENT_MASK;
 
 	if (is_huge)

@@ -823,25 +856,25 @@ static int _hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
 
 	if (hop1_new) {
 		curr_pte =
-			(hop1_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
+			(hop1_addr & HOP_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
 		write_pte(ctx, hop0_pte_addr, curr_pte);
 	}
 	if (hop2_new) {
 		curr_pte =
-			(hop2_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
+			(hop2_addr & HOP_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
 		write_pte(ctx, hop1_pte_addr, curr_pte);
 		get_pte(ctx, hop1_addr);
 	}
 	if (hop3_new) {
 		curr_pte =
-			(hop3_addr & PTE_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
+			(hop3_addr & HOP_PHYS_ADDR_MASK) | PAGE_PRESENT_MASK;
 		write_pte(ctx, hop2_pte_addr, curr_pte);
 		get_pte(ctx, hop2_addr);
 	}
 
 	if (!is_huge) {
 		if (hop4_new) {
-			curr_pte = (hop4_addr & PTE_PHYS_ADDR_MASK) |
+			curr_pte = (hop4_addr & HOP_PHYS_ADDR_MASK) |
 					PAGE_PRESENT_MASK;
 			write_pte(ctx, hop3_pte_addr, curr_pte);
 			get_pte(ctx, hop3_addr);

@@ -890,25 +923,36 @@ err:
 int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size)
 {
 	struct hl_device *hdev = ctx->hdev;
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct hl_mmu_properties *mmu_prop;
 	u64 real_virt_addr, real_phys_addr;
 	u32 real_page_size, npages;
 	int i, rc, mapped_cnt = 0;
+	bool is_dram_addr;
 
 	if (!hdev->mmu_enable)
 		return 0;
 
+	is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
+				prop->va_space_dram_start_address,
+				prop->va_space_dram_end_address);
+
+	mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
+
 	/*
-	 * The H/W handles mapping of 4KB/2MB page. Hence if the host page size
-	 * is bigger, we break it to sub-pages and map them separately.
+	 * The H/W handles mapping of specific page sizes. Hence if the page
+	 * size is bigger, we break it to sub-pages and map them separately.
 	 */
-	if ((page_size % PAGE_SIZE_2MB) == 0) {
-		real_page_size = PAGE_SIZE_2MB;
-	} else if ((page_size % PAGE_SIZE_4KB) == 0) {
-		real_page_size = PAGE_SIZE_4KB;
+	if ((page_size % mmu_prop->huge_page_size) == 0) {
+		real_page_size = mmu_prop->huge_page_size;
+	} else if ((page_size % mmu_prop->page_size) == 0) {
+		real_page_size = mmu_prop->page_size;
 	} else {
 		dev_err(hdev->dev,
-			"page size of %u is not 4KB nor 2MB aligned, can't map\n",
-			page_size);
+			"page size of %u is not %uKB nor %uMB aligned, can't map\n",
+			page_size,
+			mmu_prop->page_size >> 10,
+			mmu_prop->huge_page_size >> 20);
 
 		return -EFAULT;
 	}

@@ -923,7 +967,7 @@ int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size)
 
 	for (i = 0 ; i < npages ; i++) {
 		rc = _hl_mmu_map(ctx, real_virt_addr, real_phys_addr,
-				real_page_size);
+				real_page_size, is_dram_addr);
 		if (rc)
 			goto err;
 

@@ -937,7 +981,7 @@ int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size)
 err:
 	real_virt_addr = virt_addr;
 	for (i = 0 ; i < mapped_cnt ; i++) {
-		if (_hl_mmu_unmap(ctx, real_virt_addr))
+		if (_hl_mmu_unmap(ctx, real_virt_addr, is_dram_addr))
 			dev_warn_ratelimited(hdev->dev,
 				"failed to unmap va: 0x%llx\n", real_virt_addr);

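Taken together, the mmu.c changes replace the hard-coded 4KB/2MB assumptions with per-region MMU properties. The selection itself is one line, repeated in the map, unmap and PTE-address paths; a self-contained sketch of the idea, with a hypothetical helper name:

static struct hl_mmu_properties *
hl_mmu_get_prop(struct asic_fixed_properties *prop, bool is_dram_addr)
{
	/* DRAM addresses walk the DRAM MMU, host addresses the PCI MMU */
	return is_dram_addr ? &prop->dmmu : &prop->pmmu;
}
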
@@ -1,4 +1,4 @@
-// SPDX-License-Identifier: GPL-2.0
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * linux/drivers/char/hpilo.h
  *

@@ -1,5 +1,5 @@
-/* SPDX-License-Identifier: GPL-2.0+
- *
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
  * linux/drivers/misc/ibmvmc.h
  *
  * IBM Power Systems Virtual Management Channel Support.

@@ -16,7 +16,7 @@
 #include <linux/types.h>
 #include <linux/platform_device.h>
 #include <linux/interrupt.h>
-#include <linux/input-polldev.h>
+#include <linux/input.h>
 #include <linux/delay.h>
 #include <linux/wait.h>
 #include <linux/poll.h>

@@ -434,23 +434,23 @@ int lis3lv02d_poweron(struct lis3lv02d *lis3)
 EXPORT_SYMBOL_GPL(lis3lv02d_poweron);
 
 
-static void lis3lv02d_joystick_poll(struct input_polled_dev *pidev)
+static void lis3lv02d_joystick_poll(struct input_dev *input)
 {
-	struct lis3lv02d *lis3 = pidev->private;
+	struct lis3lv02d *lis3 = input_get_drvdata(input);
 	int x, y, z;
 
 	mutex_lock(&lis3->mutex);
 	lis3lv02d_get_xyz(lis3, &x, &y, &z);
-	input_report_abs(pidev->input, ABS_X, x);
-	input_report_abs(pidev->input, ABS_Y, y);
-	input_report_abs(pidev->input, ABS_Z, z);
-	input_sync(pidev->input);
+	input_report_abs(input, ABS_X, x);
+	input_report_abs(input, ABS_Y, y);
+	input_report_abs(input, ABS_Z, z);
+	input_sync(input);
 	mutex_unlock(&lis3->mutex);
 }
 
-static void lis3lv02d_joystick_open(struct input_polled_dev *pidev)
+static int lis3lv02d_joystick_open(struct input_dev *input)
 {
-	struct lis3lv02d *lis3 = pidev->private;
+	struct lis3lv02d *lis3 = input_get_drvdata(input);
 
 	if (lis3->pm_dev)
 		pm_runtime_get_sync(lis3->pm_dev);

@@ -461,12 +461,14 @@ static void lis3lv02d_joystick_open(struct input_polled_dev *pidev)
 	 * Update coordinates for the case where poll interval is 0 and
 	 * the chip is running purely under interrupt control
 	 */
-	lis3lv02d_joystick_poll(pidev);
+	lis3lv02d_joystick_poll(input);
+
+	return 0;
 }
 
-static void lis3lv02d_joystick_close(struct input_polled_dev *pidev)
+static void lis3lv02d_joystick_close(struct input_dev *input)
 {
-	struct lis3lv02d *lis3 = pidev->private;
+	struct lis3lv02d *lis3 = input_get_drvdata(input);
 
 	atomic_set(&lis3->wake_thread, 0);
 	if (lis3->pm_dev)

@@ -497,7 +499,7 @@ out:
 
 static void lis302dl_interrupt_handle_click(struct lis3lv02d *lis3)
 {
-	struct input_dev *dev = lis3->idev->input;
+	struct input_dev *dev = lis3->idev;
 	u8 click_src;
 
 	mutex_lock(&lis3->mutex);

@@ -677,26 +679,19 @@ int lis3lv02d_joystick_enable(struct lis3lv02d *lis3)
 	if (lis3->idev)
 		return -EINVAL;
 
-	lis3->idev = input_allocate_polled_device();
-	if (!lis3->idev)
+	input_dev = input_allocate_device();
+	if (!input_dev)
 		return -ENOMEM;
 
-	lis3->idev->poll = lis3lv02d_joystick_poll;
-	lis3->idev->open = lis3lv02d_joystick_open;
-	lis3->idev->close = lis3lv02d_joystick_close;
-	lis3->idev->poll_interval = MDPS_POLL_INTERVAL;
-	lis3->idev->poll_interval_min = MDPS_POLL_MIN;
-	lis3->idev->poll_interval_max = MDPS_POLL_MAX;
-	lis3->idev->private = lis3;
-	input_dev = lis3->idev->input;
-
 	input_dev->name = "ST LIS3LV02DL Accelerometer";
 	input_dev->phys = DRIVER_NAME "/input0";
 	input_dev->id.bustype = BUS_HOST;
 	input_dev->id.vendor = 0;
 	input_dev->dev.parent = &lis3->pdev->dev;
 
 	set_bit(EV_ABS, input_dev->evbit);
+	input_dev->open = lis3lv02d_joystick_open;
+	input_dev->close = lis3lv02d_joystick_close;
 
 	max_val = (lis3->mdps_max_val * lis3->scale) / LIS3_ACCURACY;
 	if (lis3->whoami == WAI_12B) {
 		fuzz = LIS3_DEFAULT_FUZZ_12B;

@@ -712,17 +707,32 @@ int lis3lv02d_joystick_enable(struct lis3lv02d *lis3)
 	input_set_abs_params(input_dev, ABS_Y, -max_val, max_val, fuzz, flat);
 	input_set_abs_params(input_dev, ABS_Z, -max_val, max_val, fuzz, flat);
 
+	input_set_drvdata(input_dev, lis3);
+	lis3->idev = input_dev;
+
+	err = input_setup_polling(input_dev, lis3lv02d_joystick_poll);
+	if (err)
+		goto err_free_input;
+
+	input_set_poll_interval(input_dev, MDPS_POLL_INTERVAL);
+	input_set_min_poll_interval(input_dev, MDPS_POLL_MIN);
+	input_set_max_poll_interval(input_dev, MDPS_POLL_MAX);
+
 	lis3->mapped_btns[0] = lis3lv02d_get_axis(abs(lis3->ac.x), btns);
 	lis3->mapped_btns[1] = lis3lv02d_get_axis(abs(lis3->ac.y), btns);
 	lis3->mapped_btns[2] = lis3lv02d_get_axis(abs(lis3->ac.z), btns);
 
-	err = input_register_polled_device(lis3->idev);
-	if (err) {
-		input_free_polled_device(lis3->idev);
-		lis3->idev = NULL;
-	}
+	err = input_register_device(lis3->idev);
+	if (err)
+		goto err_free_input;
 
+	return 0;
+
+err_free_input:
+	input_free_device(input_dev);
+	lis3->idev = NULL;
 	return err;
 }
 EXPORT_SYMBOL_GPL(lis3lv02d_joystick_enable);

@@ -738,8 +748,7 @@ void lis3lv02d_joystick_disable(struct lis3lv02d *lis3)
 
 	if (lis3->irq)
 		misc_deregister(&lis3->miscdev);
-	input_unregister_polled_device(lis3->idev);
-	input_free_polled_device(lis3->idev);
+	input_unregister_device(lis3->idev);
 	lis3->idev = NULL;
 }
 EXPORT_SYMBOL_GPL(lis3lv02d_joystick_disable);

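The conversion above follows the generic pattern for moving a driver off input_polled_dev onto the polling support built into the input core. A condensed, hedged sketch of that pattern (device name and interval are placeholders, not taken from this driver):

#include <linux/input.h>

static void example_poll(struct input_dev *input)
{
	/* read the hardware and report events here */
}

static int example_register_polled_input(struct device *parent)
{
	struct input_dev *input;
	int err;

	input = input_allocate_device();
	if (!input)
		return -ENOMEM;

	input->name = "example polled device";
	input->dev.parent = parent;

	/* attach the poll handler to a regular input device */
	err = input_setup_polling(input, example_poll);
	if (err)
		goto err_free;

	input_set_poll_interval(input, 100);	/* ms */

	err = input_register_device(input);
	if (err)
		goto err_free;

	return 0;

err_free:
	input_free_device(input);
	return err;
}
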
@@ -895,10 +904,9 @@ static void lis3lv02d_8b_configure(struct lis3lv02d *lis3,
 			(p->click_thresh_y << 4));
 
 		if (lis3->idev) {
-			struct input_dev *input_dev = lis3->idev->input;
-			input_set_capability(input_dev, EV_KEY, BTN_X);
-			input_set_capability(input_dev, EV_KEY, BTN_Y);
-			input_set_capability(input_dev, EV_KEY, BTN_Z);
+			input_set_capability(lis3->idev, EV_KEY, BTN_X);
+			input_set_capability(lis3->idev, EV_KEY, BTN_Y);
+			input_set_capability(lis3->idev, EV_KEY, BTN_Z);
 		}
 	}

@@ -6,7 +6,7 @@
  * Copyright (C) 2008-2009 Eric Piel
  */
 #include <linux/platform_device.h>
-#include <linux/input-polldev.h>
+#include <linux/input.h>
 #include <linux/regulator/consumer.h>
 #include <linux/miscdevice.h>

@@ -281,7 +281,7 @@ struct lis3lv02d {
 	 * (1/1000th of earth gravity)
 	 */
 
-	struct input_polled_dev	*idev;		/* input device */
+	struct input_dev	*idev;		/* input device */
 	struct platform_device	*pdev;		/* platform device */
 	struct regulator_bulk_data regulators[2];
 	atomic_t		count;		/* interrupt count after last read */

@@ -46,8 +46,6 @@ static const uuid_le mei_nfc_info_guid = MEI_UUID_NFC_INFO;
  */
 static void number_of_connections(struct mei_cl_device *cldev)
 {
-	dev_dbg(&cldev->dev, "running hook %s\n", __func__);
-
 	if (cldev->me_cl->props.max_number_of_connections > 1)
 		cldev->do_match = 0;
 }

@@ -59,8 +57,6 @@ static void number_of_connections(struct mei_cl_device *cldev)
  */
 static void blacklist(struct mei_cl_device *cldev)
 {
-	dev_dbg(&cldev->dev, "running hook %s\n", __func__);
-
 	cldev->do_match = 0;
 }
 
@@ -71,8 +67,6 @@ static void blacklist(struct mei_cl_device *cldev)
  */
 static void whitelist(struct mei_cl_device *cldev)
 {
-	dev_dbg(&cldev->dev, "running hook %s\n", __func__);
-
 	cldev->do_match = 1;
 }
 
@@ -256,7 +250,6 @@ static void mei_wd(struct mei_cl_device *cldev)
 {
 	struct pci_dev *pdev = to_pci_dev(cldev->dev.parent);
 
-	dev_dbg(&cldev->dev, "running hook %s\n", __func__);
 	if (pdev->device == MEI_DEV_ID_WPT_LP ||
 	    pdev->device == MEI_DEV_ID_SPT ||
 	    pdev->device == MEI_DEV_ID_SPT_H)

@@ -410,8 +403,6 @@ static void mei_nfc(struct mei_cl_device *cldev)
 
 	bus = cldev->bus;
 
-	dev_dbg(&cldev->dev, "running hook %s\n", __func__);
-
 	mutex_lock(&bus->device_lock);
 	/* we need to connect to INFO GUID */
 	cl = mei_cl_alloc_linked(bus);

@@ -791,11 +791,44 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
 }
 static DEVICE_ATTR_RO(modalias);
 
+static ssize_t max_conn_show(struct device *dev, struct device_attribute *a,
+			     char *buf)
+{
+	struct mei_cl_device *cldev = to_mei_cl_device(dev);
+	u8 maxconn = mei_me_cl_max_conn(cldev->me_cl);
+
+	return scnprintf(buf, PAGE_SIZE, "%d", maxconn);
+}
+static DEVICE_ATTR_RO(max_conn);
+
+static ssize_t fixed_show(struct device *dev, struct device_attribute *a,
+			  char *buf)
+{
+	struct mei_cl_device *cldev = to_mei_cl_device(dev);
+	u8 fixed = mei_me_cl_fixed(cldev->me_cl);
+
+	return scnprintf(buf, PAGE_SIZE, "%d", fixed);
+}
+static DEVICE_ATTR_RO(fixed);
+
+static ssize_t max_len_show(struct device *dev, struct device_attribute *a,
+			    char *buf)
+{
+	struct mei_cl_device *cldev = to_mei_cl_device(dev);
+	u32 maxlen = mei_me_cl_max_len(cldev->me_cl);
+
+	return scnprintf(buf, PAGE_SIZE, "%u", maxlen);
+}
+static DEVICE_ATTR_RO(max_len);
+
 static struct attribute *mei_cldev_attrs[] = {
 	&dev_attr_name.attr,
 	&dev_attr_uuid.attr,
 	&dev_attr_version.attr,
 	&dev_attr_modalias.attr,
+	&dev_attr_max_conn.attr,
+	&dev_attr_fixed.attr,
+	&dev_attr_max_len.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(mei_cldev);

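The three new attributes surface client properties that were previously only reachable through the connect ioctl. Assuming the standard mei bus sysfs layout (the device name below is illustrative, following the <controller>-<uuid> scheme introduced later in this series), they can be read from user space; a small C sketch:

#include <stdio.h>

int main(void)
{
	/* hypothetical path; substitute a real device from /sys/bus/mei/devices */
	const char *path =
		"/sys/bus/mei/devices/0000:00:16.0-55213584-9a29-4916-badf-0fb7ed682aeb/max_conn";
	char buf[16];
	FILE *f = fopen(path, "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("max_conn: %s\n", buf);
	fclose(f);
	return 0;
}
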
@@ -873,15 +906,16 @@ static const struct device_type mei_cl_device_type = {
 
 /**
  * mei_cl_bus_set_name - set device name for me client device
+ *  <controller>-<client device>
+ *  Example: 0000:00:16.0-55213584-9a29-4916-badf-0fb7ed682aeb
  *
  * @cldev: me client device
  */
 static inline void mei_cl_bus_set_name(struct mei_cl_device *cldev)
 {
-	dev_set_name(&cldev->dev, "mei:%s:%pUl:%02X",
-		     cldev->name,
-		     mei_me_cl_uuid(cldev->me_cl),
-		     mei_me_cl_ver(cldev->me_cl));
+	dev_set_name(&cldev->dev, "%s-%pUl",
+		     dev_name(cldev->bus->dev),
+		     mei_me_cl_uuid(cldev->me_cl));
 }
 
 /**

@@ -69,6 +69,42 @@ static inline u8 mei_me_cl_ver(const struct mei_me_client *me_cl)
 	return me_cl->props.protocol_version;
 }
 
+/**
+ * mei_me_cl_max_conn - return me client max number of connections
+ *
+ * @me_cl: me client
+ *
+ * Return: me client max number of connections
+ */
+static inline u8 mei_me_cl_max_conn(const struct mei_me_client *me_cl)
+{
+	return me_cl->props.max_number_of_connections;
+}
+
+/**
+ * mei_me_cl_fixed - return me client fixed address, if any
+ *
+ * @me_cl: me client
+ *
+ * Return: me client fixed address
+ */
+static inline u8 mei_me_cl_fixed(const struct mei_me_client *me_cl)
+{
+	return me_cl->props.fixed_address;
+}
+
+/**
+ * mei_me_cl_max_len - return me client max msg length
+ *
+ * @me_cl: me client
+ *
+ * Return: me client max msg length
+ */
+static inline u32 mei_me_cl_max_len(const struct mei_me_client *me_cl)
+{
+	return me_cl->props.max_msg_length;
+}
+
 /*
  * MEI IO Functions
  */

@@ -81,6 +81,7 @@
 
 #define MEI_DEV_ID_CMP_LP     0x02e0  /* Comet Point LP */
 #define MEI_DEV_ID_CMP_LP_3   0x02e4  /* Comet Point LP 3 (iTouch) */
+#define MEI_DEV_ID_CMP_V      0xA3BA  /* Comet Point Lake V */
 
 #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
 
@@ -162,7 +163,8 @@ access to ME_CBD */
 #define ME_IS_HRA 0x00000002
 /* ME Interrupt Enable HRA - host read only access to ME_IE */
 #define ME_IE_HRA 0x00000001
-
+/* TRC control shadow register */
+#define ME_TRC 0x00000030
 
 /* H_HPG_CSR register bits */
 #define H_HPG_CSR_PGIHEXR 0x00000001

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Copyright (c) 2003-2018, Intel Corporation. All rights reserved.
+ * Copyright (c) 2003-2019, Intel Corporation. All rights reserved.
  * Intel Management Engine Interface (Intel MEI) Linux driver
 */

@@ -172,6 +172,27 @@ static inline void mei_me_d0i3c_write(struct mei_device *dev, u32 reg)
 	mei_me_reg_write(to_me_hw(dev), H_D0I3C, reg);
 }
 
+/**
+ * mei_me_trc_status - read trc status register
+ *
+ * @dev: mei device
+ * @trc: trc status register value
+ *
+ * Return: 0 on success, error otherwise
+ */
+static int mei_me_trc_status(struct mei_device *dev, u32 *trc)
+{
+	struct mei_me_hw *hw = to_me_hw(dev);
+
+	if (!hw->cfg->hw_trc_supported)
+		return -EOPNOTSUPP;
+
+	*trc = mei_me_reg_read(hw, ME_TRC);
+	trace_mei_reg_read(dev->dev, "ME_TRC", ME_TRC, *trc);
+
+	return 0;
+}
+
 /**
  * mei_me_fw_status - read fw status register from pci config space
  *

@@ -183,20 +204,19 @@ static inline void mei_me_d0i3c_write(struct mei_device *dev, u32 reg)
 static int mei_me_fw_status(struct mei_device *dev,
 			    struct mei_fw_status *fw_status)
 {
-	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	struct mei_me_hw *hw = to_me_hw(dev);
 	const struct mei_fw_status *fw_src = &hw->cfg->fw_status;
 	int ret;
 	int i;
 
-	if (!fw_status)
+	if (!fw_status || !hw->read_fws)
 		return -EINVAL;
 
 	fw_status->count = fw_src->count;
 	for (i = 0; i < fw_src->count && i < MEI_FW_STATUS_MAX; i++) {
-		ret = pci_read_config_dword(pdev, fw_src->status[i],
-					    &fw_status->status[i]);
-		trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HSF_X",
+		ret = hw->read_fws(dev, fw_src->status[i],
+				   &fw_status->status[i]);
+		trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HFS_X",
 				       fw_src->status[i],
 				       fw_status->status[i]);
 		if (ret)

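The direct pci_read_config_dword() call is replaced by an indirection through hw->read_fws so the same core code can serve PCI and non-PCI transports. The hook itself is installed by the probe path, which is not shown in this diff; a sketch of what a PCI-backed implementation would look like (the function name here is an assumption):

static int mei_me_read_fws_sketch(const struct mei_device *dev, int where,
				  u32 *val)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);

	/* FW status registers live in PCI config space on ME devices */
	return pci_read_config_dword(pdev, where, val);
}
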
@@ -210,19 +230,26 @@ static int mei_me_fw_status(struct mei_device *dev,
  * mei_me_hw_config - configure hw dependent settings
  *
  * @dev: mei device
+ *
+ * Return:
+ *  * -EINVAL when read_fws is not set
+ *  * 0 on success
+ *
  */
-static void mei_me_hw_config(struct mei_device *dev)
+static int mei_me_hw_config(struct mei_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	struct mei_me_hw *hw = to_me_hw(dev);
 	u32 hcsr, reg;
 
+	if (WARN_ON(!hw->read_fws))
+		return -EINVAL;
+
 	/* Doesn't change in runtime */
 	hcsr = mei_hcsr_read(dev);
 	hw->hbuf_depth = (hcsr & H_CBD) >> 24;
 
 	reg = 0;
-	pci_read_config_dword(pdev, PCI_CFG_HFS_1, &reg);
+	hw->read_fws(dev, PCI_CFG_HFS_1, &reg);
 	trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HFS_1", PCI_CFG_HFS_1, reg);
 	hw->d0i3_supported =
 		((reg & PCI_CFG_HFS_1_D0I3_MSK) == PCI_CFG_HFS_1_D0I3_MSK);

@@ -233,6 +260,8 @@ static void mei_me_hw_config(struct mei_device *dev)
 		if (reg & H_D0I3C_I3)
 			hw->pg_state = MEI_PG_ON;
 	}
+
+	return 0;
 }
 
 /**

@@ -269,7 +298,7 @@ static inline void me_intr_disable(struct mei_device *dev, u32 hcsr)
 }
 
 /**
- * mei_me_intr_clear - clear and stop interrupts
+ * me_intr_clear - clear and stop interrupts
  *
  * @dev: the device structure
+ * @hcsr: supplied hcsr register value

@@ -323,9 +352,9 @@ static void mei_me_intr_disable(struct mei_device *dev)
  */
 static void mei_me_synchronize_irq(struct mei_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev->dev);
+	struct mei_me_hw *hw = to_me_hw(dev);
 
-	synchronize_irq(pdev->irq);
+	synchronize_irq(hw->irq);
 }
 
 /**

@@ -1294,6 +1323,7 @@ end:
 
 static const struct mei_hw_ops mei_me_hw_ops = {
 
+	.trc_status = mei_me_trc_status,
 	.fw_status = mei_me_fw_status,
 	.pg_state = mei_me_pg_state,
 

@@ -1384,6 +1414,9 @@ static bool mei_me_fw_type_sps(struct pci_dev *pdev)
 	.dma_size[DMA_DSCR_DEVICE] = SZ_128K, \
 	.dma_size[DMA_DSCR_CTRL] = PAGE_SIZE
 
+#define MEI_CFG_TRC \
+	.hw_trc_supported = 1
+
 /* ICH Legacy devices */
 static const struct mei_cfg mei_me_ich_cfg = {
 	MEI_CFG_ICH_HFS,

@@ -1432,6 +1465,14 @@ static const struct mei_cfg mei_me_pch12_cfg = {
 	MEI_CFG_DMA_128,
 };
 
+/* Tiger Lake and newer devices */
+static const struct mei_cfg mei_me_pch15_cfg = {
+	MEI_CFG_PCH8_HFS,
+	MEI_CFG_FW_VER_SUPP,
+	MEI_CFG_DMA_128,
+	MEI_CFG_TRC,
+};
+
 /*
  * mei_cfg_list - A list of platform specific configurations.
  * Note: has to be synchronized with enum mei_cfg_idx.

@@ -1446,6 +1487,7 @@ static const struct mei_cfg *const mei_cfg_list[] = {
 	[MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
 	[MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
 	[MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg,
+	[MEI_ME_PCH15_CFG] = &mei_me_pch15_cfg,
 };
 
 const struct mei_cfg *mei_me_get_cfg(kernel_ulong_t idx)

@@ -1461,19 +1503,19 @@ const struct mei_cfg *mei_me_get_cfg(kernel_ulong_t idx)
 /**
  * mei_me_dev_init - allocates and initializes the mei device structure
  *
- * @pdev: The pci device structure
+ * @parent: device associated with physical device (pci/platform)
  * @cfg: per device generation config
  *
 * Return: The mei_device pointer on success, NULL on failure.
 */
-struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
+struct mei_device *mei_me_dev_init(struct device *parent,
 				   const struct mei_cfg *cfg)
 {
 	struct mei_device *dev;
 	struct mei_me_hw *hw;
 	int i;
 
-	dev = devm_kzalloc(&pdev->dev, sizeof(struct mei_device) +
+	dev = devm_kzalloc(parent, sizeof(struct mei_device) +
 			   sizeof(struct mei_me_hw), GFP_KERNEL);
 	if (!dev)
 		return NULL;

@@ -1483,7 +1525,7 @@ struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
 	for (i = 0; i < DMA_DSCR_NUM; i++)
 		dev->dr_dscr[i].size = cfg->dma_size[i];
 
-	mei_device_init(dev, &pdev->dev, &mei_me_hw_ops);
+	mei_device_init(dev, parent, &mei_me_hw_ops);
 	hw->cfg = cfg;
 
 	dev->fw_f_fw_ver_supported = cfg->fw_ver_supported;

@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /*
- * Copyright (c) 2012-2018, Intel Corporation. All rights reserved.
+ * Copyright (c) 2012-2019, Intel Corporation. All rights reserved.
  * Intel Management Engine Interface (Intel MEI) Linux driver
 */

@@ -21,12 +21,14 @@
  * @quirk_probe: device exclusion quirk
  * @dma_size: device DMA buffers size
  * @fw_ver_supported: is fw version retrievable from FW
+ * @hw_trc_supported: does the hw support trc register
  */
 struct mei_cfg {
 	const struct mei_fw_status fw_status;
 	bool (*quirk_probe)(struct pci_dev *pdev);
 	size_t dma_size[DMA_DSCR_NUM];
 	u32 fw_ver_supported:1;
+	u32 hw_trc_supported:1;
 };
 

@@ -42,16 +44,20 @@ struct mei_cfg {
 *
 * @cfg: per device generation config and ops
 * @mem_addr: io memory address
+ * @irq: irq number
 * @pg_state: power gating state
 * @d0i3_supported: d0i3 support
 * @hbuf_depth: depth of hardware host/write buffer in slots
+ * @read_fws: read FW status register handler
 */
 struct mei_me_hw {
 	const struct mei_cfg *cfg;
 	void __iomem *mem_addr;
+	int irq;
 	enum mei_pg_state pg_state;
 	bool d0i3_supported;
 	u8 hbuf_depth;
+	int (*read_fws)(const struct mei_device *dev, int where, u32 *val);
 };
 
 #define to_me_hw(dev) (struct mei_me_hw *)((dev)->hw)

@@ -74,6 +80,7 @@ struct mei_me_hw {
 *                         servers platforms with quirk for
 *                         SPS firmware exclusion.
 * @MEI_ME_PCH12_CFG:      Platform Controller Hub Gen12 and newer
+ * @MEI_ME_PCH15_CFG:      Platform Controller Hub Gen15 and newer
 * @MEI_ME_NUM_CFG:        Upper Sentinel.
 */
 enum mei_cfg_idx {

@@ -86,12 +93,13 @@ enum mei_cfg_idx {
 	MEI_ME_PCH8_CFG,
 	MEI_ME_PCH8_SPS_CFG,
 	MEI_ME_PCH12_CFG,
+	MEI_ME_PCH15_CFG,
 	MEI_ME_NUM_CFG,
 };
 
 const struct mei_cfg *mei_me_get_cfg(kernel_ulong_t idx);
 
-struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
+struct mei_device *mei_me_dev_init(struct device *parent,
 				   const struct mei_cfg *cfg);
 
 int mei_me_pg_enter_sync(struct mei_device *dev);

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Copyright (c) 2013-2014, Intel Corporation. All rights reserved.
+ * Copyright (c) 2013-2019, Intel Corporation. All rights reserved.
  * Intel Management Engine Interface (Intel MEI) Linux driver
 */

@@ -660,14 +660,16 @@ static int mei_txe_fw_status(struct mei_device *dev,
 }
 
 /**
- *  mei_txe_hw_config - configure hardware at the start of the devices
+ * mei_txe_hw_config - configure hardware at the start of the devices
  *
  * @dev: the device structure
 *
 * Configure hardware at the start of the device should be done only
 * once at the device probe time
+ *
+ * Return: always 0
 */
-static void mei_txe_hw_config(struct mei_device *dev)
+static int mei_txe_hw_config(struct mei_device *dev)
 {
 
 	struct mei_txe_hw *hw = to_txe_hw(dev);

@@ -677,6 +679,8 @@ static void mei_txe_hw_config(struct mei_device *dev)
 
 	dev_dbg(dev->dev, "aliveness_resp = 0x%08x, readiness = 0x%08x.\n",
 		hw->aliveness, hw->readiness);
+
+	return 0;
 }
 
 /**

Some files were not shown because too many files have changed in this diff.