Char/Misc driver patches for 5.10-rc1
Here is the big set of char, misc, and other assorted driver subsystem
patches for 5.10-rc1.

There's a lot of different things in here, all over the drivers/
directory. Some summaries:

 - soundwire driver updates
 - habanalabs driver updates
 - extcon driver updates
 - nitro_enclaves new driver
 - fsl-mc driver and core updates
 - mhi core and bus updates
 - nvmem driver updates
 - eeprom driver updates
 - binder driver updates and fixes
 - vbox minor bugfixes
 - fsi driver updates
 - w1 driver updates
 - coresight driver updates
 - interconnect driver updates
 - misc driver updates
 - other minor driver updates

All of these have been in linux-next for a while with no reported issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCX4g8YQ8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+yngKgCeNpArCP/9vQJRK9upnDm8ZLunSCUAn1wUT/2A
/bTQ42c/WRQ+LU828GSM
=6sO2
-----END PGP SIGNATURE-----

Merge tag 'char-misc-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char, misc, and other assorted driver subsystem
  patches for 5.10-rc1.

  There's a lot of different things in here, all over the drivers/
  directory. Some summaries:

   - soundwire driver updates
   - habanalabs driver updates
   - extcon driver updates
   - nitro_enclaves new driver
   - fsl-mc driver and core updates
   - mhi core and bus updates
   - nvmem driver updates
   - eeprom driver updates
   - binder driver updates and fixes
   - vbox minor bugfixes
   - fsi driver updates
   - w1 driver updates
   - coresight driver updates
   - interconnect driver updates
   - misc driver updates
   - other minor driver updates

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (396 commits)
  binder: fix UAF when releasing todo list
  docs: w1: w1_therm: Fix broken xref, mistakes, clarify text
  misc: Kconfig: fix a HISI_HIKEY_USB dependency
  LSM: Fix type of id parameter in kernel_post_load_data prototype
  misc: Kconfig: add a new dependency for HISI_HIKEY_USB
  firmware_loader: fix a kernel-doc markup
  w1: w1_therm: make w1_poll_completion static
  binder: simplify the return expression of binder_mmap
  test_firmware: Test partial read support
  firmware: Add request_partial_firmware_into_buf()
  firmware: Store opt_flags in fw_priv
  fs/kernel_file_read: Add "offset" arg for partial reads
  IMA: Add support for file reads without contents
  LSM: Add "contents" flag to kernel_read_file hook
  module: Call security_kernel_post_load_data()
  firmware_loader: Use security_post_load_data()
  LSM: Introduce kernel_post_load_data() hook
  fs/kernel_read_file: Add file_size output argument
  fs/kernel_read_file: Switch buffer size arg to size_t
  fs/kernel_read_file: Remove redundant size argument
  ...
commit 726eb70e0d
@@ -0,0 +1,21 @@
What: /sys/bus/mhi/devices/.../serialnumber
Date: Sept 2020
KernelVersion: 5.10
Contact: Bhaumik Bhatt <bbhatt@codeaurora.org>
Description: The file holds the serial number of the client device obtained
        using a BHI (Boot Host Interface) register read after at least
        one attempt to power up the device has been done. If read
        without having the device power on at least once, the file will
        read all 0's.
Users: Any userspace application or clients interested in device info.

What: /sys/bus/mhi/devices/.../oem_pk_hash
Date: Sept 2020
KernelVersion: 5.10
Contact: Bhaumik Bhatt <bbhatt@codeaurora.org>
Description: The file holds the OEM PK Hash value of the endpoint device
        obtained using a BHI (Boot Host Interface) register read after
        at least one attempt to power up the device has been done. If
        read without having the device power on at least once, the file
        will read all 0's.
Users: Any userspace application or clients interested in device info.
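As an aside, a minimal userspace sketch (not part of the commit) that reads the two attributes above; the "mhi0" directory name is an assumption, real names under /sys/bus/mhi/devices/ depend on the enumerated controller and client devices:

/*
 * Hedged sketch: dump the serialnumber and oem_pk_hash attributes of one
 * MHI device. "mhi0" is a placeholder; list /sys/bus/mhi/devices/ to find
 * the real directory. Both files read all 0's until the device has been
 * powered up at least once, as described above.
 */
#include <stdio.h>

static void dump_attr(const char *path)
{
    char line[256];
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return;
    }
    while (fgets(line, sizeof(line), f))
        printf("%s: %s", path, line);
    fclose(f);
}

int main(void)
{
    dump_attr("/sys/bus/mhi/devices/mhi0/serialnumber");
    dump_attr("/sys/bus/mhi/devices/mhi0/oem_pk_hash");
    return 0;
}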
@@ -0,0 +1,15 @@
What: /sys/bus/dfl/devices/dfl_dev.X/type
Date: Aug 2020
KernelVersion: 5.10
Contact: Xu Yilun <yilun.xu@intel.com>
Description: Read-only. It returns type of DFL FIU of the device. Now DFL
        supports 2 FIU types, 0 for FME, 1 for PORT.
        Format: 0x%x

What: /sys/bus/dfl/devices/dfl_dev.X/feature_id
Date: Aug 2020
KernelVersion: 5.10
Contact: Xu Yilun <yilun.xu@intel.com>
Description: Read-only. It returns feature identifier local to its DFL FIU
        type.
        Format: 0x%x
@@ -36,3 +36,11 @@ Contact: linux-fsi@lists.ozlabs.org
Description:
        Provides a means of reading/writing a 32 bit value from/to a
        specified FSI bus address.

What: /sys/bus/platform/devices/../cfam_reset
Date: Sept 2020
KernelVersion: 5.10
Contact: linux-fsi@lists.ozlabs.org
Description:
        Provides a means of resetting the cfam that is attached to the
        FSI device.
@@ -41,6 +41,13 @@ Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Stores mei client fixed address, if any
        Format: %d

What: /sys/bus/mei/devices/.../vtag
Date: Nov 2020
KernelVersion: 5.9
Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Stores mei client vtag support status
        Format: %d

What: /sys/bus/mei/devices/.../max_len
Date: Nov 2019
KernelVersion: 5.5
@@ -1,3 +1,21 @@
What: /sys/bus/soundwire/devices/sdw:.../status
      /sys/bus/soundwire/devices/sdw:.../device_number

Date: September 2020

Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
         Bard Liao <yung-chuan.liao@linux.intel.com>
         Vinod Koul <vkoul@kernel.org>

Description: SoundWire Slave status

        These properties report the Slave status, e.g. if it
        is UNATTACHED or not, and in the latter case show the
        device_number. This status information is useful to
        detect devices exposed by platform firmware but not
        physically present on the bus, and conversely devices
        not exposed in platform firmware but enumerated.

What: /sys/bus/soundwire/devices/sdw:.../dev-properties/mipi_revision
      /sys/bus/soundwire/devices/sdw:.../dev-properties/wake_capable
      /sys/bus/soundwire/devices/sdw:.../dev-properties/test_mode_capable
@@ -2,13 +2,17 @@ What: /sys/class/habanalabs/hl<n>/armcp_kernel_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Linux kernel running on the device's CPU
Description: Version of the Linux kernel running on the device's CPU.
        Will be DEPRECATED in Linux kernel version 5.10, and be
        replaced with cpucp_kernel_ver

What: /sys/class/habanalabs/hl<n>/armcp_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU
        Will be DEPRECATED in Linux kernel version 5.10, and be
        replaced with cpucp_ver

What: /sys/class/habanalabs/hl<n>/clk_max_freq_mhz
Date: Jun 2019
@@ -33,6 +37,18 @@ KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Device's CPLD F/W

What: /sys/class/habanalabs/hl<n>/cpucp_kernel_ver
Date: Oct 2020
KernelVersion: 5.10
Contact: oded.gabbay@gmail.com
Description: Version of the Linux kernel running on the device's CPU

What: /sys/class/habanalabs/hl<n>/cpucp_ver
Date: Oct 2020
KernelVersion: 5.10
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU

What: /sys/class/habanalabs/hl<n>/device_type
Date: Jan 2019
KernelVersion: 5.1
@@ -49,10 +49,13 @@ Description:
        will be changed only in device RAM, so it will be cleared when
        power is lost. Trigger a 'save' to EEPROM command to keep
        values after power-on. Read or write are :
        * '9..12': device resolution in bit
        * '9..14': device resolution in bit
          or resolution to set in bit
        * '-xx': xx is kernel error when reading the resolution
        * Anything else: do nothing
        Some DS18B20 clones are fixed in 12-bit resolution, so the
        actual resolution is read back from the chip and verified. Error
        is reported if the results differ.
Users: any user space application which wants to communicate with
        w1_term device

@@ -86,7 +89,7 @@ Description:
        *write* :
        * '0' : save the 2 or 3 bytes to the device EEPROM
          (i.e. TH, TL and config register)
        * '9..12' : set the device resolution in RAM
        * '9..14' : set the device resolution in RAM
          (if supported)
        * Anything else: do nothing
        refer to Documentation/w1/slaves/w1_therm.rst for detailed
@@ -114,3 +117,47 @@ Description:
        of the bulk read command (not the current temperature).
Users: any user space application which wants to communicate with
        w1_term device


What: /sys/bus/w1/devices/.../conv_time
Date: July 2020
Contact: Ivan Zaentsev <ivan.zaentsev@wirenboard.ru>
Description:
        (RW) Get, set, or measure a temperature conversion time. The
        setting remains active until a resolution change. Then it is
        reset to default (datasheet) conversion time for a new
        resolution.

        *read*: Actual conversion time in milliseconds. *write*:
        '0': Set the default conversion time from the datasheet.
        '1': Measure and set the conversion time. Make a single
             temperature conversion, measure an actual value.
             Increase it by 20% for temperature range. A new
             conversion time can be obtained by reading this
             same attribute.
        other positive value:
             Set the conversion time in milliseconds.

Users: An application using the w1_term device


What: /sys/bus/w1/devices/.../features
Date: July 2020
Contact: Ivan Zaentsev <ivan.zaentsev@wirenboard.ru>
Description:
        (RW) Control optional driver settings.
        Bit masks to read/write (bitwise OR):

        1: Enable check for conversion success. If byte 6 of
           scratchpad memory is 0xC after conversion, and
           temperature reads 85.00 (powerup value) or 127.94
           (insufficient power) - return a conversion error.

        2: Enable poll for conversion completion. Generate read cycles
           after the conversion start and wait for 1's. In parasite
           power mode this feature is not available.

        *read*: Currently selected features.
        *write*: Select features.

Users: An application using the w1_term device
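For illustration, a small userspace sketch that exercises the features and temperature entries described above; the device id in the path is a placeholder (real sensors appear as /sys/bus/w1/devices/28-*), and enabling feature bit 2 assumes the device is not parasite powered:

/*
 * Hedged sketch: enable the conversion-poll feature (bit mask 2) and read
 * the temperature of one w1_therm slave, per the ABI entries above.
 */
#include <stdio.h>

#define DEV "/sys/bus/w1/devices/28-0000075b0000/"   /* placeholder id */

int main(void)
{
    FILE *f = fopen(DEV "features", "w");
    long milli_c;

    if (f) {
        fputs("2", f);      /* poll for conversion completion */
        fclose(f);
    }

    f = fopen(DEV "temperature", "r");
    if (!f || fscanf(f, "%ld", &milli_c) != 1) {
        perror(DEV "temperature");
        return 1;
    }
    fclose(f);
    printf("%.3f C\n", milli_c / 1000.0);   /* value is in millidegrees */
    return 0;
}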
@@ -1,27 +0,0 @@
* PTN5150 CC (Configuration Channel) Logic device

PTN5150 is a small thin low power CC logic chip supporting the USB Type-C
connector application with CC control logic detection and indication functions.
It is interfaced to the host controller using an I2C interface.

Required properties:
- compatible: should be "nxp,ptn5150"
- reg: specifies the I2C slave address of the device
- int-gpio: should contain a phandle and GPIO specifier for the GPIO pin
  connected to the PTN5150's INTB pin.
- vbus-gpio: should contain a phandle and GPIO specifier for the GPIO pin which
  is used to control VBUS.
- pinctrl-names : a pinctrl state named "default" must be defined.
- pinctrl-0 : phandle referencing pin configuration of interrupt and vbus
  control.

Example:
    ptn5150@1d {
        compatible = "nxp,ptn5150";
        reg = <0x1d>;
        int-gpio = <&msmgpio 78 GPIO_ACTIVE_HIGH>;
        vbus-gpio = <&msmgpio 148 GPIO_ACTIVE_HIGH>;
        pinctrl-names = "default";
        pinctrl-0 = <&ptn5150_default>;
        status = "okay";
    };
@@ -0,0 +1,60 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/extcon/extcon-ptn5150.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: PTN5150 CC (Configuration Channel) Logic device

maintainers:
  - Krzysztof Kozlowski <krzk@kernel.org>

description: |
  PTN5150 is a small thin low power CC logic chip supporting the USB Type-C
  connector application with CC control logic detection and indication
  functions. It is interfaced to the host controller using an I2C interface.

properties:
  compatible:
    const: nxp,ptn5150

  int-gpios:
    deprecated: true
    description:
      GPIO pin (input) connected to the PTN5150's INTB pin.
      Use "interrupts" instead.

  interrupts:
    maxItems: 1

  reg:
    maxItems: 1

  vbus-gpios:
    description:
      GPIO pin (output) used to control VBUS. If skipped, no such control
      takes place.

required:
  - compatible
  - interrupts
  - reg

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>
    #include <dt-bindings/interrupt-controller/irq.h>
    i2c {
        #address-cells = <1>;
        #size-cells = <0>;

        ptn5150@1d {
            compatible = "nxp,ptn5150";
            reg = <0x1d>;
            interrupt-parent = <&msmgpio>;
            interrupts = <78 IRQ_TYPE_LEVEL_HIGH>;
            vbus-gpios = <&msmgpio 148 GPIO_ACTIVE_HIGH>;
        };
    };
@@ -12,6 +12,13 @@ Required properties:
- pinctrl-0: phandle to pinctrl node
- pinctrl-names: pinctrl state

Optional properties:
- cfam-reset-gpios: GPIO for CFAM reset

- fsi-routing-gpios: GPIO for setting the FSI mux (internal or cabled)
- fsi-mux-gpios: GPIO for detecting the desired FSI mux state


Examples:

    fsi-master {
@@ -21,4 +28,9 @@ Examples:
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_fsi1_default>;
        clocks = <&syscon ASPEED_CLK_GATE_FSICLK>;

        fsi-routing-gpios = <&gpio0 ASPEED_GPIO(Q, 7) GPIO_ACTIVE_HIGH>;
        fsi-mux-gpios = <&gpio0 ASPEED_GPIO(B, 0) GPIO_ACTIVE_HIGH>;

        cfam-reset-gpios = <&gpio0 ASPEED_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
    };
@@ -19,7 +19,8 @@ directly.
Required properties:
- compatible : contains the interconnect provider compatible string
- #interconnect-cells : number of cells in a interconnect specifier needed to
                        encode the interconnect node id
                        encode the interconnect node id and optionally add a
                        path tag

Example:

@@ -44,6 +45,10 @@ components it has to interact with.
Required properties:
interconnects : Pairs of phandles and interconnect provider specifier to denote
                the edge source and destination ports of the interconnect path.
                An optional path tag value could specified as additional argument
                to both endpoints and in such cases, this information will be passed
                to the interconnect framework to do aggregation based on the attached
                tag.

Optional properties:
interconnect-names : List of interconnect path name strings sorted in the same
@@ -62,3 +67,20 @@ Example:
        interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
        interconnect-names = "sdhc-mem";
    };

Example with path tags:

    gnoc: interconnect@17900000 {
        ...
        interconnect-cells = <2>;
    };

    mnoc: interconnect@1380000 {
        ...
        interconnect-cells = <2>;
    };

    cpu@0 {
        ...
        interconnects = <&gnoc MASTER_APPSS_PROC 3 &mnoc SLAVE_EBI1 3>;
    }
@@ -21,6 +21,23 @@ properties:
    enum:
      - qcom,bcm-voter

  qcom,tcs-wait:
    description: |
      Optional mask of which TCSs (Triggered Command Sets) wait for completion
      upon triggering. If not specified, then the AMC and WAKE sets wait for
      completion. The mask bits are available in the QCOM_ICC_TAG_* defines.

      The AMC TCS is triggered immediately when icc_set_bw() is called. The
      WAKE/SLEEP TCSs are triggered when the RSC transitions between active and
      sleep modes.

      In most cases, it's necessary to wait in both the AMC and WAKE sets to
      ensure resources are available before use. If a specific RSC and its use
      cases can ensure sufficient delay by other means, then this can be
      overridden to reduce latencies.

    $ref: /schemas/types.yaml#/definitions/uint32

required:
  - compatible
@@ -39,7 +56,10 @@ examples:
  # as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
  - |

    #include <dt-bindings/interconnect/qcom,icc.h>

    disp_bcm_voter: bcm_voter {
        compatible = "qcom,bcm-voter";
        qcom,tcs-wait = <QCOM_ICC_TAG_AMC>;
    };
...
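The qcom,tcs-wait description above refers to icc_set_bw(); as a hedged sketch of the consumer side, this is roughly how a kernel driver requests bandwidth on an interconnect path (the path name "sdhc-mem" and the bandwidth values are illustrative, not taken from this commit):

/*
 * Hedged sketch of the in-kernel interconnect consumer API that the
 * qcom,tcs-wait text refers to. The path name "sdhc-mem" and the
 * bandwidth values (in kBps) are illustrative only.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

static int example_request_bandwidth(struct device *dev)
{
    struct icc_path *path;
    int ret;

    /* resolves the "interconnects"/"interconnect-names" DT entries */
    path = of_icc_get(dev, "sdhc-mem");
    if (IS_ERR(path))
        return PTR_ERR(path);

    /* average / peak bandwidth request; this is what the BCM aggregates */
    ret = icc_set_bw(path, 100000, 200000);
    if (ret)
        dev_err(dev, "icc_set_bw failed: %d\n", ret);

    icc_put(path);
    return ret;
}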
@@ -19,6 +19,8 @@ properties:
    enum:
      - qcom,sc7180-osm-l3
      - qcom,sdm845-osm-l3
      - qcom,sm8150-osm-l3
      - qcom,sm8250-epss-l3

  reg:
    maxItems: 1
@@ -1,16 +1,17 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sdm845.yaml#
$id: http://devicetree.org/schemas/interconnect/qcom,rpmh.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm SDM845 Network-On-Chip Interconnect
title: Qualcomm RPMh Network-On-Chip Interconnect

maintainers:
  - Georgi Djakov <georgi.djakov@linaro.org>
  - Odelu Kukatla <okukatla@codeaurora.org>

description: |
  SDM845 interconnect providers support system bandwidth requirements through
  RPMh interconnect providers support system bandwidth requirements through
  RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
  able to communicate with the BCM through the Resource State Coordinator (RSC)
  associated with each execution environment. Provider nodes must point to at
@@ -23,6 +24,19 @@ properties:

  compatible:
    enum:
      - qcom,sc7180-aggre1-noc
      - qcom,sc7180-aggre2-noc
      - qcom,sc7180-camnoc-virt
      - qcom,sc7180-compute-noc
      - qcom,sc7180-config-noc
      - qcom,sc7180-dc-noc
      - qcom,sc7180-gem-noc
      - qcom,sc7180-ipa-virt
      - qcom,sc7180-mc-virt
      - qcom,sc7180-mmss-noc
      - qcom,sc7180-npu-noc
      - qcom,sc7180-qup-virt
      - qcom,sc7180-system-noc
      - qcom,sdm845-aggre1-noc
      - qcom,sdm845-aggre2-noc
      - qcom,sdm845-config-noc
@@ -31,6 +45,28 @@ properties:
      - qcom,sdm845-mem-noc
      - qcom,sdm845-mmss-noc
      - qcom,sdm845-system-noc
      - qcom,sm8150-aggre1-noc
      - qcom,sm8150-aggre2-noc
      - qcom,sm8150-camnoc-noc
      - qcom,sm8150-compute-noc
      - qcom,sm8150-config-noc
      - qcom,sm8150-dc-noc
      - qcom,sm8150-gem-noc
      - qcom,sm8150-ipa-virt
      - qcom,sm8150-mc-virt
      - qcom,sm8150-mmss-noc
      - qcom,sm8150-system-noc
      - qcom,sm8250-aggre1-noc
      - qcom,sm8250-aggre2-noc
      - qcom,sm8250-compute-noc
      - qcom,sm8250-config-noc
      - qcom,sm8250-dc-noc
      - qcom,sm8250-gem-noc
      - qcom,sm8250-ipa-virt
      - qcom,sm8250-mc-virt
      - qcom,sm8250-mmss-noc
      - qcom,sm8250-npu-noc
      - qcom,sm8250-system-noc

  '#interconnect-cells':
    const: 1
@@ -1,85 +0,0 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sc7180.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm SC7180 Network-On-Chip Interconnect

maintainers:
  - Odelu Kukatla <okukatla@codeaurora.org>

description: |
  SC7180 interconnect providers support system bandwidth requirements through
  RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
  able to communicate with the BCM through the Resource State Coordinator (RSC)
  associated with each execution environment. Provider nodes must point to at
  least one RPMh device child node pertaining to their RSC and each provider
  can map to multiple RPMh resources.

properties:
  reg:
    maxItems: 1

  compatible:
    enum:
      - qcom,sc7180-aggre1-noc
      - qcom,sc7180-aggre2-noc
      - qcom,sc7180-camnoc-virt
      - qcom,sc7180-compute-noc
      - qcom,sc7180-config-noc
      - qcom,sc7180-dc-noc
      - qcom,sc7180-gem-noc
      - qcom,sc7180-ipa-virt
      - qcom,sc7180-mc-virt
      - qcom,sc7180-mmss-noc
      - qcom,sc7180-npu-noc
      - qcom,sc7180-qup-virt
      - qcom,sc7180-system-noc

  '#interconnect-cells':
    const: 1

  qcom,bcm-voters:
    $ref: /schemas/types.yaml#/definitions/phandle-array
    description: |
      List of phandles to qcom,bcm-voter nodes that are required by
      this interconnect to send RPMh commands.

  qcom,bcm-voter-names:
    $ref: /schemas/types.yaml#/definitions/string-array
    description: |
      Names for each of the qcom,bcm-voters specified.

required:
  - compatible
  - reg
  - '#interconnect-cells'
  - qcom,bcm-voters

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interconnect/qcom,sc7180.h>

    config_noc: interconnect@1500000 {
        compatible = "qcom,sc7180-config-noc";
        reg = <0x01500000 0x28000>;
        #interconnect-cells = <1>;
        qcom,bcm-voters = <&apps_bcm_voter>;
    };

    system_noc: interconnect@1620000 {
        compatible = "qcom,sc7180-system-noc";
        reg = <0x01620000 0x17080>;
        #interconnect-cells = <1>;
        qcom,bcm-voters = <&apps_bcm_voter>;
    };

    mmss_noc: interconnect@1740000 {
        compatible = "qcom,sc7180-mmss-noc";
        reg = <0x01740000 0x1c100>;
        #interconnect-cells = <1>;
        qcom,bcm-voters = <&apps_bcm_voter>;
    };
@@ -11,6 +11,7 @@ board specific bus parameters.
    Example:
        "qcom,soundwire-v1.3.0"
        "qcom,soundwire-v1.5.0"
        "qcom,soundwire-v1.5.1"
        "qcom,soundwire-v1.6.0"
- reg:
    Usage: required
@@ -42,6 +42,11 @@ The session is terminated calling :c:func:`close(int fd)`.

A code snippet for an application communicating with Intel AMTHI client:

In order to support virtualization or sandboxing a trusted supervisor
can use :c:macro:`MEI_CONNECT_CLIENT_IOCTL_VTAG` to create
virtual channels with an Intel ME feature. Not all features support
virtual channels such client with answer EOPNOTSUPP.

.. code-block:: C

    struct mei_connect_client_data data;
@@ -110,6 +115,38 @@ Connect to firmware Feature/Client.
    data that can be sent or received. (e.g. if MTU=2K, can send
    requests up to bytes 2k and received responses up to 2k bytes).

IOCTL_MEI_CONNECT_CLIENT_VTAG:
------------------------------

.. code-block:: none

    Usage:

    struct mei_connect_client_data_vtag client_data_vtag;

    ioctl(fd, IOCTL_MEI_CONNECT_CLIENT_VTAG, &client_data_vtag);

    Inputs:

    struct mei_connect_client_data_vtag - contain the following
    Input field:

        in_client_uuid - GUID of the FW Feature that needs
                         to connect to.
        vtag - virtual tag [1, 255]

    Outputs:
        out_client_properties - Client Properties: MTU and Protocol Version.

    Error returns:

        ENOTTY No such client (i.e. wrong GUID) or connection is not allowed.
        EINVAL Wrong IOCTL Number or tag == 0
        ENODEV Device or Connection is not initialized or ready.
        ENOMEM Unable to allocate memory to client internal data.
        EFAULT Fatal Error (e.g. Unable to access user input data)
        EBUSY Connection Already Open
        EOPNOTSUPP Vtag is not supported

IOCTL_MEI_NOTIFY_SET
---------------------
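A hedged userspace sketch of the IOCTL_MEI_CONNECT_CLIENT_VTAG flow documented above; the GUID is a placeholder and the nested field layout of struct mei_connect_client_data_vtag is assumed from include/uapi/linux/mei.h, so it should be checked against the header actually in use:

/*
 * Hedged sketch: connect to an ME feature with a vtag, as documented above.
 * The GUID below is a placeholder; the "connect"/"out_client_properties"
 * members are assumed from the uapi header.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/mei.h>

int main(void)
{
    struct mei_connect_client_data_vtag data;
    const uuid_le my_feature_uuid = UUID_LE(0x12345678, 0x9abc, 0xdef0,
            0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0);
    int fd = open("/dev/mei0", O_RDWR);

    if (fd < 0) {
        perror("/dev/mei0");
        return 1;
    }

    memset(&data, 0, sizeof(data));
    data.connect.in_client_uuid = my_feature_uuid;  /* FW feature GUID */
    data.connect.vtag = 1;                          /* virtual tag, 1..255 */

    if (ioctl(fd, IOCTL_MEI_CONNECT_CLIENT_VTAG, &data) < 0) {
        perror("IOCTL_MEI_CONNECT_CLIENT_VTAG");    /* EOPNOTSUPP: no vtag support */
        close(fd);
        return 1;
    }

    printf("connected, MTU %u\n", data.out_client_properties.max_msg_length);
    close(fd);
    return 0;
}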
@@ -328,8 +328,11 @@ Code Seq# Include File Comments
0xAC  00-1F  linux/raw.h
0xAD  00     Netfilter device                 in development:
                                              <mailto:rusty@rustcorp.com.au>
0xAE  all    linux/kvm.h                      Kernel-based Virtual Machine
0xAE  00-1F  linux/kvm.h                      Kernel-based Virtual Machine
                                              <mailto:kvm@vger.kernel.org>
0xAE  40-FF  linux/kvm.h                      Kernel-based Virtual Machine
                                              <mailto:kvm@vger.kernel.org>
0xAE  20-3F  linux/nitro_enclaves.h           Nitro Enclaves
0xAF  00-1F  linux/fsl_hypervisor.h           Freescale hypervisor
0xB0  all    RATIO devices                    in development:
                                              <mailto:vgo@ratio.de>
@@ -11,6 +11,7 @@ Linux Virtualization Support
   uml/user_mode_linux_howto_v2
   paravirt_ops
   guest-halt-polling
   ne_overview

.. only:: html and subproject
@@ -0,0 +1,95 @@
.. SPDX-License-Identifier: GPL-2.0

==============
Nitro Enclaves
==============

Overview
========

Nitro Enclaves (NE) is a new Amazon Elastic Compute Cloud (EC2) capability
that allows customers to carve out isolated compute environments within EC2
instances [1].

For example, an application that processes sensitive data and runs in a VM,
can be separated from other applications running in the same VM. This
application then runs in a separate VM than the primary VM, namely an enclave.

An enclave runs alongside the VM that spawned it. This setup matches low latency
applications needs. The resources that are allocated for the enclave, such as
memory and CPUs, are carved out of the primary VM. Each enclave is mapped to a
process running in the primary VM, that communicates with the NE driver via an
ioctl interface.

In this sense, there are two components:

1. An enclave abstraction process - a user space process running in the primary
VM guest that uses the provided ioctl interface of the NE driver to spawn an
enclave VM (that's 2 below).

There is a NE emulated PCI device exposed to the primary VM. The driver for this
new PCI device is included in the NE driver.

The ioctl logic is mapped to PCI device commands e.g. the NE_START_ENCLAVE ioctl
maps to an enclave start PCI command. The PCI device commands are then
translated into actions taken on the hypervisor side; that's the Nitro
hypervisor running on the host where the primary VM is running. The Nitro
hypervisor is based on core KVM technology.

2. The enclave itself - a VM running on the same host as the primary VM that
spawned it. Memory and CPUs are carved out of the primary VM and are dedicated
for the enclave VM. An enclave does not have persistent storage attached.

The memory regions carved out of the primary VM and given to an enclave need to
be aligned 2 MiB / 1 GiB physically contiguous memory regions (or multiple of
this size e.g. 8 MiB). The memory can be allocated e.g. by using hugetlbfs from
user space [2][3]. The memory size for an enclave needs to be at least 64 MiB.
The enclave memory and CPUs need to be from the same NUMA node.

An enclave runs on dedicated cores. CPU 0 and its CPU siblings need to remain
available for the primary VM. A CPU pool has to be set for NE purposes by an
user with admin capability. See the cpu list section from the kernel
documentation [4] for how a CPU pool format looks.

An enclave communicates with the primary VM via a local communication channel,
using virtio-vsock [5]. The primary VM has virtio-pci vsock emulated device,
while the enclave VM has a virtio-mmio vsock emulated device. The vsock device
uses eventfd for signaling. The enclave VM sees the usual interfaces - local
APIC and IOAPIC - to get interrupts from virtio-vsock device. The virtio-mmio
device is placed in memory below the typical 4 GiB.

The application that runs in the enclave needs to be packaged in an enclave
image together with the OS ( e.g. kernel, ramdisk, init ) that will run in the
enclave VM. The enclave VM has its own kernel and follows the standard Linux
boot protocol [6].

The kernel bzImage, the kernel command line, the ramdisk(s) are part of the
Enclave Image Format (EIF); plus an EIF header including metadata such as magic
number, eif version, image size and CRC.

Hash values are computed for the entire enclave image (EIF), the kernel and
ramdisk(s). That's used, for example, to check that the enclave image that is
loaded in the enclave VM is the one that was intended to be run.

These crypto measurements are included in a signed attestation document
generated by the Nitro Hypervisor and further used to prove the identity of the
enclave; KMS is an example of service that NE is integrated with and that checks
the attestation doc.

The enclave image (EIF) is loaded in the enclave memory at offset 8 MiB. The
init process in the enclave connects to the vsock CID of the primary VM and a
predefined port - 9000 - to send a heartbeat value - 0xb7. This mechanism is
used to check in the primary VM that the enclave has booted. The CID of the
primary VM is 3.

If the enclave VM crashes or gracefully exits, an interrupt event is received by
the NE driver. This event is sent further to the user space enclave process
running in the primary VM via a poll notification mechanism. Then the user space
enclave process can exit.

[1] https://aws.amazon.com/ec2/nitro/nitro-enclaves/
[2] https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html
[3] https://lwn.net/Articles/807108/
[4] https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
[5] https://man7.org/linux/man-pages/man7/vsock.7.html
[6] https://www.kernel.org/doc/html/latest/x86/boot.html
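A rough sketch of the primary-VM side of the heartbeat handshake described above: listen on vsock port 9000 and check for the 0xb7 byte sent by the enclave init process (illustrative only, error handling kept minimal):

/*
 * Hedged sketch: primary-VM side of the boot heartbeat described above.
 * Listens on AF_VSOCK port 9000 and checks for the 0xb7 heartbeat byte
 * sent by the enclave init process.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid = VMADDR_CID_ANY,
        .svm_port = 9000,
    };
    unsigned char hb = 0;
    int fd, conn;

    fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 1) < 0) {
        perror("vsock listen setup");
        return 1;
    }

    conn = accept(fd, NULL, NULL);      /* enclave init connects here */
    if (conn >= 0 && read(conn, &hb, 1) == 1 && hb == 0xb7)
        printf("enclave booted (heartbeat 0x%x)\n", hb);

    if (conn >= 0)
        close(conn);
    close(fd);
    return 0;
}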
@@ -6,6 +6,7 @@ Supported chips:

  * Maxim ds18*20 based temperature sensors.
  * Maxim ds1825 based temperature sensors.
  * GXCAS GC20MH01 temperature sensor.

Author: Evgeniy Polyakov <johnpol@2ka.mipt.ru>

@@ -13,8 +14,8 @@ Author: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Description
-----------

w1_therm provides basic temperature conversion for ds18*20 devices, and the
ds28ea00 device.
w1_therm provides basic temperature conversion for ds18*20, ds28ea00, GX20MH01
devices.

Supported family codes:

@@ -26,59 +27,72 @@ W1_THERM_DS1825 0x3B
W1_THERM_DS28EA00 0x42
==================== ====

Support is provided through the sysfs w1_slave file. Each open and
read sequence will initiate a temperature conversion then provide two
Support is provided through the sysfs entry ``w1_slave``. Each open and
read sequence will initiate a temperature conversion, then provide two
lines of ASCII output. The first line contains the nine hex bytes
read along with a calculated crc value and YES or NO if it matched.
If the crc matched the returned values are retained. The second line
displays the retained values along with a temperature in millidegrees
Centigrade after t=.

Alternatively, temperature can be read using temperature sysfs, it
return only temperature in millidegrees Centigrade.
Alternatively, temperature can be read using ``temperature`` sysfs, it
returns only the temperature in millidegrees Centigrade.

A bulk read of all devices on the bus could be done writing 'trigger'
in the therm_bulk_read sysfs entry at w1_bus_master level. This will
sent the convert command on all devices on the bus, and if parasite
powered devices are detected on the bus (and strong pullup is enable
A bulk read of all devices on the bus could be done writing ``trigger``
to ``therm_bulk_read`` entry at w1_bus_master level. This will
send the convert command to all devices on the bus, and if parasite
powered devices are detected on the bus (and strong pullup is enabled
in the module), it will drive the line high during the longer conversion
time required by parasited powered device on the line. Reading
therm_bulk_read will return 0 if no bulk conversion pending,
``therm_bulk_read`` will return 0 if no bulk conversion pending,
-1 if at least one sensor still in conversion, 1 if conversion is complete
but at least one sensor value has not been read yet. Result temperature is
then accessed by reading the temperature sysfs entry of each device, which
then accessed by reading the ``temperature`` entry of each device, which
may return empty if conversion is still in progress. Note that if a bulk
read is sent but one sensor is not read immediately, the next access to
temperature on this device will return the temperature measured at the
``temperature`` on this device will return the temperature measured at the
time of issue of the bulk read command (not the current temperature).

Writing a value between 9 and 12 to the sysfs w1_slave file will change the
precision of the sensor for the next readings. This value is in (volatile)
SRAM, so it is reset when the sensor gets power-cycled.
A strong pullup will be applied during the conversion if required.

To store the current precision configuration into EEPROM, the value 0
has to be written to the sysfs w1_slave file. Since the EEPROM has a limited
amount of writes (>50k), this command should be used wisely.
``conv_time`` is used to get current conversion time (read), and
adjust it (write). A temperature conversion time depends on the device type and
it's current resolution. Default conversion time is set by the driver according
to the device datasheet. A conversion time for many original device clones
deviate from datasheet specs. There are three options: 1) manually set the
correct conversion time by writing a value in milliseconds to ``conv_time``; 2)
auto measure and set a conversion time by writing ``1`` to
``conv_time``; 3) use ``features`` to enable poll for conversion
completion. Options 2, 3 can't be used in parasite power mode. To get back to
the default conversion time write ``0`` to ``conv_time``.

Alternatively, resolution can be set or read (value from 9 to 12) using the
dedicated resolution sysfs entry on each device. This sysfs entry is not
present for devices not supporting this feature. Driver will adjust the
correct conversion time for each device regarding to its resolution setting.
In particular, strong pullup will be applied if required during the conversion
duration.
Writing a resolution value (in bits) to ``w1_slave`` will change the
precision of the sensor for the next readings. Allowed resolutions are defined by
the sensor. Resolution is reset when the sensor gets power-cycled.

The write-only sysfs entry eeprom is an alternative for EEPROM operations:
* 'save': will save device RAM to EEPROM
* 'restore': will restore EEPROM data in device RAM.
To store the current resolution in EEPROM, write ``0`` to ``w1_slave``.
Since the EEPROM has a limited amount of writes (>50k), this command should be
used wisely.

ext_power syfs entry allow tho check the power status of each device.
* '0': device parasite powered
* '1': device externally powered
Alternatively, resolution can be read or written using the dedicated
``resolution`` entry on each device, if supported by the sensor.

sysfs alarms allow read or write TH and TL (Temperature High an Low) alarms.
Some non-genuine DS18B20 chips are fixed in 12-bit mode only, so the actual
resolution is read back from the chip and verified.

Note: Changing the resolution reverts the conversion time to default.

The write-only sysfs entry ``eeprom`` is an alternative for EEPROM operations.
Write ``save`` to save device RAM to EEPROM. Write ``restore`` to restore EEPROM
data in device RAM.

``ext_power`` entry allows checking the power state of each device. Reads
``0`` if the device is parasite powered, ``1`` if the device is externally powered.

Sysfs ``alarms`` allow read or write TH and TL (Temperature High an Low) alarms.
Values shall be space separated and in the device range (typical -55 degC
to 125 degC). Values are integer as they are store in a 8bit register in
the device. Lowest value is automatically put to TL.Once set, alarms could
the device. Lowest value is automatically put to TL. Once set, alarms could
be search at master level.

The module parameter strong_pullup can be set to 0 to disable the
@@ -102,5 +116,24 @@ The DS28EA00 provides an additional two pins for implementing a sequence
detection algorithm. This feature allows you to determine the physical
location of the chip in the 1-wire bus without needing pre-existing
knowledge of the bus ordering. Support is provided through the sysfs
w1_seq file. The file will contain a single line with an integer value
``w1_seq``. The file will contain a single line with an integer value
representing the device index in the bus starting at 0.

``features`` sysfs entry controls optional driver settings per device.
Insufficient power in parasite mode, line noise and insufficient conversion
time may lead to conversion failure. Original DS18B20 and some clones allow for
detection of invalid conversion. Write bit mask ``1`` to ``features`` to enable
checking the conversion success. If byte 6 of scratchpad memory is 0xC after
conversion and temperature reads 85.00 (powerup value) or 127.94 (insufficient
power), the driver returns a conversion error. Bit mask ``2`` enables poll for
conversion completion (normal power only) by generating read cycles on the bus
after conversion starts. In parasite power mode this feature is not available.
Feature bit masks may be combined (OR). More details in
Documentation/ABI/testing/sysfs-driver-w1_therm

GX20MH01 device shares family number 0x28 with DS18*20. The device is generally
compatible with DS18B20. Added are lowest 2\ :sup:`-5`, 2\ :sup:`-6` temperature
bits in Config register; R2 bit in Config register enabling 13 and 14 bit
resolutions. The device is powered up in 14-bit resolution mode. The conversion
times specified in the datasheet are too low and have to be increased. The
device supports driver features ``1`` and ``2``.
MAINTAINERS
@@ -1725,6 +1725,7 @@ ARM/CORESIGHT FRAMEWORK AND DRIVERS
M: Mathieu Poirier <mathieu.poirier@linaro.org>
R: Suzuki K Poulose <suzuki.poulose@arm.com>
R: Mike Leach <mike.leach@linaro.org>
L: coresight@lists.linaro.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
@@ -4101,6 +4102,11 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: drivers/char/
F: drivers/misc/
F: include/linux/miscdevice.h
X: drivers/char/agp/
X: drivers/char/hw_random/
X: drivers/char/ipmi/
X: drivers/char/random.c
X: drivers/char/tpm/

CHECKPATCH
M: Andy Whitcroft <apw@canonical.com>
@@ -6828,14 +6834,17 @@ F: drivers/net/ethernet/nvidia/*

FPGA DFL DRIVERS
M: Wu Hao <hao.wu@intel.com>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-bus-dfl
F: Documentation/fpga/dfl.rst
F: drivers/fpga/dfl*
F: include/uapi/linux/fpga-dfl.h

FPGA MANAGER FRAMEWORK
M: Moritz Fischer <mdf@kernel.org>
R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
W: http://www.rocketboards.org
@@ -7885,6 +7894,13 @@ W: http://www.hisilicon.com
F: Documentation/devicetree/bindings/net/hisilicon*.txt
F: drivers/net/ethernet/hisilicon/

HIKEY960 ONBOARD USB GPIO HUB DRIVER
M: John Stultz <john.stultz@linaro.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/misc/hisi_hikey_usb.c
F: Documentation/devicetree/bindings/misc/hisilicon-hikey-usb.yaml

HISILICON PMU DRIVER
M: Shaokun Zhang <zhangshaokun@hisilicon.com>
S: Supported
@@ -11353,6 +11369,7 @@ M: Hemant Kumar <hemantk@codeaurora.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
F: Documentation/ABI/stable/sysfs-bus-mhi
F: Documentation/mhi/
F: drivers/bus/mhi/
F: include/linux/mhi.h
@@ -12307,6 +12324,19 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2.git
F: arch/nios2/

NITRO ENCLAVES (NE)
M: Andra Paraschiv <andraprs@amazon.com>
M: Alexandru Vasile <lexnv@amazon.com>
M: Alexandru Ciobotaru <alcioa@amazon.com>
L: linux-kernel@vger.kernel.org
S: Supported
W: https://aws.amazon.com/ec2/nitro/nitro-enclaves/
F: Documentation/virt/ne_overview.rst
F: drivers/virt/nitro_enclaves/
F: include/linux/nitro_enclaves.h
F: include/uapi/linux/nitro_enclaves.h
F: samples/nitro_enclaves/

NOHZ, DYNTICKS SUPPORT
M: Frederic Weisbecker <fweisbec@gmail.com>
M: Thomas Gleixner <tglx@linutronix.de>
@@ -12469,6 +12499,13 @@ F: drivers/iio/gyro/fxas21002c_core.c
F: drivers/iio/gyro/fxas21002c_i2c.c
F: drivers/iio/gyro/fxas21002c_spi.c

NXP PTN5150A CC LOGIC AND EXTCON DRIVER
M: Krzysztof Kozlowski <krzk@kernel.org>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/extcon/extcon-ptn5150.yaml
F: drivers/extcon/extcon-ptn5150.c

NXP SGTL5000 DRIVER
M: Fabio Estevam <festevam@gmail.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
@@ -223,7 +223,7 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
struct binder_work {
    struct list_head entry;

    enum {
    enum binder_work_type {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        BINDER_WORK_RETURN_ERROR,
@@ -885,27 +885,6 @@ static struct binder_work *binder_dequeue_work_head_ilocked(
    return w;
}

/**
 * binder_dequeue_work_head() - Dequeues the item at head of list
 * @proc: binder_proc associated with list
 * @list: list to dequeue head
 *
 * Removes the head of the list if there are items on the list
 *
 * Return: pointer dequeued binder_work, NULL if list was empty
 */
static struct binder_work *binder_dequeue_work_head(
        struct binder_proc *proc,
        struct list_head *list)
{
    struct binder_work *w;

    binder_inner_proc_lock(proc);
    w = binder_dequeue_work_head_ilocked(list);
    binder_inner_proc_unlock(proc);
    return w;
}

static void
binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer);
static void binder_free_thread(struct binder_thread *thread);
@@ -2344,8 +2323,6 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
             * file is done when the transaction is torn
             * down.
             */
            WARN_ON(failed_at &&
                proc->tsk == current->group_leader);
        } break;
        case BINDER_TYPE_PTR:
            /*
@@ -3136,7 +3113,7 @@ static void binder_transaction(struct binder_proc *proc,

    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));
        !reply && (t->flags & TF_ONE_WAY), current->tgid);
    if (IS_ERR(t->buffer)) {
        /*
         * -ESRCH indicates VMA cleared. The target is dying.
@@ -4587,13 +4564,17 @@ static void binder_release_work(struct binder_proc *proc,
        struct list_head *list)
{
    struct binder_work *w;
    enum binder_work_type wtype;

    while (1) {
        w = binder_dequeue_work_head(proc, list);
        binder_inner_proc_lock(proc);
        w = binder_dequeue_work_head_ilocked(list);
        wtype = w ? w->type : 0;
        binder_inner_proc_unlock(proc);
        if (!w)
            return;

        switch (w->type) {
        switch (wtype) {
        case BINDER_WORK_TRANSACTION: {
            struct binder_transaction *t;

@@ -4627,9 +4608,11 @@ static void binder_release_work(struct binder_proc *proc,
            kfree(death);
            binder_stats_deleted(BINDER_STAT_DEATH);
        } break;
        case BINDER_WORK_NODE:
            break;
        default:
            pr_err("unexpected work type, %d, not freed\n",
                   w->type);
                   wtype);
            break;
        }
    }
@@ -5182,9 +5165,7 @@ static const struct vm_operations_struct binder_vm_ops = {

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;

    if (proc->tsk != current->group_leader)
        return -EINVAL;
@@ -5196,9 +5177,9 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
             (unsigned long)pgprot_val(vma->vm_page_prot));

    if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
        ret = -EPERM;
        failure_string = "bad vm_flags";
        goto err_bad_arg;
        pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
               proc->pid, vma->vm_start, vma->vm_end, "bad vm_flags", -EPERM);
        return -EPERM;
    }
    vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
    vma->vm_flags &= ~VM_MAYWRITE;
@@ -5206,15 +5187,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
    vma->vm_ops = &binder_vm_ops;
    vma->vm_private_data = proc;

    ret = binder_alloc_mmap_handler(&proc->alloc, vma);
    if (ret)
        return ret;
    return 0;

err_bad_arg:
    pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
           proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
    return ret;
    return binder_alloc_mmap_handler(&proc->alloc, vma);
}

static int binder_open(struct inode *nodp, struct file *filp)
|
|||
return vma;
|
||||
}
|
||||
|
||||
static void debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
|
||||
{
|
||||
/*
|
||||
* Find the amount and size of buffers allocated by the current caller;
|
||||
* The idea is that once we cross the threshold, whoever is responsible
|
||||
* for the low async space is likely to try to send another async txn,
|
||||
* and at some point we'll catch them in the act. This is more efficient
|
||||
* than keeping a map per pid.
|
||||
*/
|
||||
struct rb_node *n;
|
||||
struct binder_buffer *buffer;
|
||||
size_t total_alloc_size = 0;
|
||||
size_t num_buffers = 0;
|
||||
|
||||
for (n = rb_first(&alloc->allocated_buffers); n != NULL;
|
||||
n = rb_next(n)) {
|
||||
buffer = rb_entry(n, struct binder_buffer, rb_node);
|
||||
if (buffer->pid != pid)
|
||||
continue;
|
||||
if (!buffer->async_transaction)
|
||||
continue;
|
||||
total_alloc_size += binder_alloc_buffer_size(alloc, buffer)
|
||||
+ sizeof(struct binder_buffer);
|
||||
num_buffers++;
|
||||
}
|
||||
|
||||
/*
|
||||
* Warn if this pid has more than 50 transactions, or more than 50% of
|
||||
* async space (which is 25% of total buffer size).
|
||||
*/
|
||||
if (num_buffers > 50 || total_alloc_size > alloc->buffer_size / 4) {
|
||||
binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
|
||||
"%d: pid %d spamming oneway? %zd buffers allocated for a total size of %zd\n",
|
||||
alloc->pid, pid, num_buffers, total_alloc_size);
|
||||
}
|
||||
}
|
||||
|
||||
static struct binder_buffer *binder_alloc_new_buf_locked(
|
||||
struct binder_alloc *alloc,
|
||||
size_t data_size,
|
||||
size_t offsets_size,
|
||||
size_t extra_buffers_size,
|
||||
int is_async)
|
||||
int is_async,
|
||||
int pid)
|
||||
{
|
||||
struct rb_node *n = alloc->free_buffers.rb_node;
|
||||
struct binder_buffer *buffer;
|
||||
|
@ -486,11 +524,20 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
|
|||
buffer->offsets_size = offsets_size;
|
||||
buffer->async_transaction = is_async;
|
||||
buffer->extra_buffers_size = extra_buffers_size;
|
||||
buffer->pid = pid;
|
||||
if (is_async) {
|
||||
alloc->free_async_space -= size + sizeof(struct binder_buffer);
|
||||
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
|
||||
"%d: binder_alloc_buf size %zd async free %zd\n",
|
||||
alloc->pid, size, alloc->free_async_space);
|
||||
if (alloc->free_async_space < alloc->buffer_size / 10) {
|
||||
/*
|
||||
* Start detecting spammers once we have less than 20%
|
||||
* of async space left (which is less than 10% of total
|
||||
* buffer size).
|
||||
*/
|
||||
debug_low_async_space_locked(alloc, pid);
|
||||
}
|
||||
}
|
||||
return buffer;
|
||||
|
||||
|
@ -508,6 +555,7 @@ err_alloc_buf_struct_failed:
|
|||
* @offsets_size: user specified buffer offset
|
||||
* @extra_buffers_size: size of extra space for meta-data (eg, security context)
|
||||
* @is_async: buffer for async transaction
|
||||
* @pid: pid to attribute allocation to (used for debugging)
|
||||
*
|
||||
* Allocate a new buffer given the requested sizes. Returns
|
||||
* the kernel version of the buffer pointer. The size allocated
|
||||
|
@ -520,13 +568,14 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
|
|||
size_t data_size,
|
||||
size_t offsets_size,
|
||||
size_t extra_buffers_size,
|
||||
int is_async)
|
||||
int is_async,
|
||||
int pid)
|
||||
{
|
||||
struct binder_buffer *buffer;
|
||||
|
||||
mutex_lock(&alloc->mutex);
|
||||
buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
|
||||
extra_buffers_size, is_async);
|
||||
extra_buffers_size, is_async, pid);
|
||||
mutex_unlock(&alloc->mutex);
|
||||
return buffer;
|
||||
}
|
||||
|
@ -652,7 +701,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
|
|||
* @alloc: binder_alloc for this proc
|
||||
* @buffer: kernel pointer to buffer
|
||||
*
|
||||
* Free the buffer allocated via binder_alloc_new_buffer()
|
||||
* Free the buffer allocated via binder_alloc_new_buf()
|
||||
*/
|
||||
void binder_alloc_free_buf(struct binder_alloc *alloc,
|
||||
struct binder_buffer *buffer)
|
||||
|
|
|
@@ -32,6 +32,7 @@ struct binder_transaction;
 * @offsets_size: size of array of offsets
 * @extra_buffers_size: size of space for other objects (like sg lists)
 * @user_data: user pointer to base of buffer space
 * @pid: pid to attribute the buffer to (caller)
 *
 * Bookkeeping structure for binder transaction buffers
 */
@@ -51,6 +52,7 @@ struct binder_buffer {
    size_t offsets_size;
    size_t extra_buffers_size;
    void __user *user_data;
    int pid;
};

/**
@@ -117,7 +119,8 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
        size_t data_size,
        size_t offsets_size,
        size_t extra_buffers_size,
        int is_async);
        int is_async,
        int pid);
extern void binder_alloc_init(struct binder_alloc *alloc);
extern int binder_alloc_shrinker_init(void);
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
@@ -119,7 +119,7 @@ static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
    int i;

    for (i = 0; i < BUFFER_NUM; i++) {
        buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
        buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0, 0);
        if (IS_ERR(buffers[i]) ||
            !check_buffer_pages_allocated(alloc, buffers[i],
                                          sizes[i])) {
@@ -63,7 +63,7 @@ static const struct constant_table binderfs_param_stats[] = {
    {}
};

const struct fs_parameter_spec binderfs_fs_parameters[] = {
static const struct fs_parameter_spec binderfs_fs_parameters[] = {
    fsparam_u32("max", Opt_max),
    fsparam_enum("stats", Opt_stats_mode, binderfs_param_stats),
    {}
@ -272,9 +272,9 @@ static ssize_t firmware_loading_store(struct device *dev,
|
|||
dev_err(dev, "%s: map pages failed\n",
|
||||
__func__);
|
||||
else
|
||||
rc = security_kernel_post_read_file(NULL,
|
||||
fw_priv->data, fw_priv->size,
|
||||
READING_FIRMWARE);
|
||||
rc = security_kernel_post_load_data(fw_priv->data,
|
||||
fw_priv->size,
|
||||
LOADING_FIRMWARE, "blob");
|
||||
|
||||
/*
|
||||
* Same logic as fw_load_abort, only the DONE bit
|
||||
|
@ -490,13 +490,11 @@ exit:
|
|||
/**
|
||||
* fw_load_sysfs_fallback() - load a firmware via the sysfs fallback mechanism
|
||||
* @fw_sysfs: firmware sysfs information for the firmware to load
|
||||
* @opt_flags: flags of options, FW_OPT_*
|
||||
* @timeout: timeout to wait for the load
|
||||
*
|
||||
* In charge of constructing a sysfs fallback interface for firmware loading.
|
||||
**/
|
||||
static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
|
||||
u32 opt_flags, long timeout)
|
||||
static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
|
||||
{
|
||||
int retval = 0;
|
||||
struct device *f_dev = &fw_sysfs->dev;
|
||||
|
@ -518,7 +516,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
|
|||
list_add(&fw_priv->pending_list, &pending_fw_head);
|
||||
mutex_unlock(&fw_lock);
|
||||
|
||||
if (opt_flags & FW_OPT_UEVENT) {
|
||||
if (fw_priv->opt_flags & FW_OPT_UEVENT) {
|
||||
fw_priv->need_uevent = true;
|
||||
dev_set_uevent_suppress(f_dev, false);
|
||||
dev_dbg(f_dev, "firmware: requesting %s\n", fw_priv->fw_name);
|
||||
|
@ -580,10 +578,10 @@ static int fw_load_from_user_helper(struct firmware *firmware,
|
|||
}
|
||||
|
||||
fw_sysfs->fw_priv = firmware->priv;
|
||||
ret = fw_load_sysfs_fallback(fw_sysfs, opt_flags, timeout);
|
||||
ret = fw_load_sysfs_fallback(fw_sysfs, timeout);
|
||||
|
||||
if (!ret)
|
||||
ret = assign_fw(firmware, device, opt_flags);
|
||||
ret = assign_fw(firmware, device);
|
||||
|
||||
out_unlock:
|
||||
usermodehelper_read_unlock();
|
||||
|
@@ -613,7 +611,7 @@ static bool fw_run_sysfs_fallback(u32 opt_flags)
 return false;

 /* Also permit LSMs and IMA to fail firmware sysfs fallback */
-ret = security_kernel_load_data(LOADING_FIRMWARE);
+ret = security_kernel_load_data(LOADING_FIRMWARE, true);
 if (ret < 0)
 return false;

@@ -625,7 +623,8 @@ static bool fw_run_sysfs_fallback(u32 opt_flags)
 * @fw: pointer to firmware image
 * @name: name of firmware file to look for
 * @device: device for which firmware is being loaded
-* @opt_flags: options to control firmware loading behaviour
+* @opt_flags: options to control firmware loading behaviour, as defined by
+* &enum fw_opt
 * @ret: return value from direct lookup which triggered the fallback mechanism
 *
 * This function is called if direct lookup for the firmware failed, it enables

@@ -67,10 +67,9 @@ static inline void unregister_sysfs_loader(void)
 #endif /* CONFIG_FW_LOADER_USER_HELPER */

 #ifdef CONFIG_EFI_EMBEDDED_FIRMWARE
-int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags);
+int firmware_fallback_platform(struct fw_priv *fw_priv);
 #else
-static inline int firmware_fallback_platform(struct fw_priv *fw_priv,
-u32 opt_flags)
+static inline int firmware_fallback_platform(struct fw_priv *fw_priv)
 {
 return -ENOENT;
 }

@@ -8,16 +8,16 @@
 #include "fallback.h"
 #include "firmware.h"

-int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)
+int firmware_fallback_platform(struct fw_priv *fw_priv)
 {
 const u8 *data;
 size_t size;
 int rc;

-if (!(opt_flags & FW_OPT_FALLBACK_PLATFORM))
+if (!(fw_priv->opt_flags & FW_OPT_FALLBACK_PLATFORM))
 return -ENOENT;

-rc = security_kernel_load_data(LOADING_FIRMWARE_EFI_EMBEDDED);
+rc = security_kernel_load_data(LOADING_FIRMWARE, true);
 if (rc)
 return rc;

@@ -27,6 +27,12 @@ int firmware_fallback_platform(struct fw_priv *fw_priv, u32 opt_flags)

 if (fw_priv->data && size > fw_priv->allocated_size)
 return -ENOMEM;

+rc = security_kernel_post_load_data((u8 *)data, size, LOADING_FIRMWARE,
+"platform");
+if (rc)
+return rc;

 if (!fw_priv->data)
 fw_priv->data = vmalloc(size);
 if (!fw_priv->data)

@@ -32,6 +32,8 @@
 * @FW_OPT_FALLBACK_PLATFORM: Enable fallback to device fw copy embedded in
 * the platform's main firmware. If both this fallback and the sysfs
 * fallback are enabled, then this fallback will be tried first.
+* @FW_OPT_PARTIAL: Allow partial read of firmware instead of needing to read
+* entire file.
 */
 enum fw_opt {
 FW_OPT_UEVENT = BIT(0),
@@ -41,6 +43,7 @@ enum fw_opt {
 FW_OPT_NOCACHE = BIT(4),
 FW_OPT_NOFALLBACK_SYSFS = BIT(5),
 FW_OPT_FALLBACK_PLATFORM = BIT(6),
+FW_OPT_PARTIAL = BIT(7),
 };

 enum fw_status {
@@ -68,6 +71,8 @@ struct fw_priv {
 void *data;
 size_t size;
 size_t allocated_size;
+size_t offset;
+u32 opt_flags;
 #ifdef CONFIG_FW_LOADER_PAGED_BUF
 bool is_paged_buf;
 struct page **pages;
@@ -136,7 +141,7 @@ static inline void fw_state_done(struct fw_priv *fw_priv)
 __fw_state_set(fw_priv, FW_STATUS_DONE);
 }

-int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags);
+int assign_fw(struct firmware *fw, struct device *device);

 #ifdef CONFIG_FW_LOADER_PAGED_BUF
 void fw_free_paged_buf(struct fw_priv *fw_priv);

@ -12,6 +12,7 @@
|
|||
|
||||
#include <linux/capability.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/kernel_read_file.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/timer.h>
|
||||
|
@ -167,10 +168,21 @@ static int fw_cache_piggyback_on_request(const char *name);
|
|||
|
||||
static struct fw_priv *__allocate_fw_priv(const char *fw_name,
|
||||
struct firmware_cache *fwc,
|
||||
void *dbuf, size_t size)
|
||||
void *dbuf,
|
||||
size_t size,
|
||||
size_t offset,
|
||||
u32 opt_flags)
|
||||
{
|
||||
struct fw_priv *fw_priv;
|
||||
|
||||
/* For a partial read, the buffer must be preallocated. */
|
||||
if ((opt_flags & FW_OPT_PARTIAL) && !dbuf)
|
||||
return NULL;
|
||||
|
||||
/* Only partial reads are allowed to use an offset. */
|
||||
if (offset != 0 && !(opt_flags & FW_OPT_PARTIAL))
|
||||
return NULL;
|
||||
|
||||
fw_priv = kzalloc(sizeof(*fw_priv), GFP_ATOMIC);
|
||||
if (!fw_priv)
|
||||
return NULL;
|
||||
|
@ -185,6 +197,8 @@ static struct fw_priv *__allocate_fw_priv(const char *fw_name,
|
|||
fw_priv->fwc = fwc;
|
||||
fw_priv->data = dbuf;
|
||||
fw_priv->allocated_size = size;
|
||||
fw_priv->offset = offset;
|
||||
fw_priv->opt_flags = opt_flags;
|
||||
fw_state_init(fw_priv);
|
||||
#ifdef CONFIG_FW_LOADER_USER_HELPER
|
||||
INIT_LIST_HEAD(&fw_priv->pending_list);
|
||||
|
@ -209,13 +223,20 @@ static struct fw_priv *__lookup_fw_priv(const char *fw_name)
|
|||
/* Returns 1 for batching firmware requests with the same name */
|
||||
static int alloc_lookup_fw_priv(const char *fw_name,
|
||||
struct firmware_cache *fwc,
|
||||
struct fw_priv **fw_priv, void *dbuf,
|
||||
size_t size, u32 opt_flags)
|
||||
struct fw_priv **fw_priv,
|
||||
void *dbuf,
|
||||
size_t size,
|
||||
size_t offset,
|
||||
u32 opt_flags)
|
||||
{
|
||||
struct fw_priv *tmp;
|
||||
|
||||
spin_lock(&fwc->lock);
|
||||
if (!(opt_flags & FW_OPT_NOCACHE)) {
|
||||
/*
|
||||
* Do not merge requests that are marked to be non-cached or
|
||||
* are performing partial reads.
|
||||
*/
|
||||
if (!(opt_flags & (FW_OPT_NOCACHE | FW_OPT_PARTIAL))) {
|
||||
tmp = __lookup_fw_priv(fw_name);
|
||||
if (tmp) {
|
||||
kref_get(&tmp->ref);
|
||||
|
@ -226,7 +247,7 @@ static int alloc_lookup_fw_priv(const char *fw_name,
|
|||
}
|
||||
}
|
||||
|
||||
tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size);
|
||||
tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size, offset, opt_flags);
|
||||
if (tmp) {
|
||||
INIT_LIST_HEAD(&tmp->list);
|
||||
if (!(opt_flags & FW_OPT_NOCACHE))
|
||||
|
@ -466,18 +487,16 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
|
|||
size_t in_size,
|
||||
const void *in_buffer))
|
||||
{
|
||||
loff_t size;
|
||||
size_t size;
|
||||
int i, len;
|
||||
int rc = -ENOENT;
|
||||
char *path;
|
||||
enum kernel_read_file_id id = READING_FIRMWARE;
|
||||
size_t msize = INT_MAX;
|
||||
void *buffer = NULL;
|
||||
|
||||
/* Already populated data member means we're loading into a buffer */
|
||||
if (!decompress && fw_priv->data) {
|
||||
buffer = fw_priv->data;
|
||||
id = READING_FIRMWARE_PREALLOC_BUFFER;
|
||||
msize = fw_priv->allocated_size;
|
||||
}
|
||||
|
||||
|
@ -486,6 +505,9 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
|
|||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(fw_path); i++) {
|
||||
size_t file_size = 0;
|
||||
size_t *file_size_ptr = NULL;
|
||||
|
||||
/* skip the unset customized path */
|
||||
if (!fw_path[i][0])
|
||||
continue;
|
||||
|
@ -499,10 +521,20 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
|
|||
|
||||
fw_priv->size = 0;
|
||||
|
||||
/*
|
||||
* The total file size is only examined when doing a partial
|
||||
* read; the "full read" case needs to fail if the whole
|
||||
* firmware was not completely loaded.
|
||||
*/
|
||||
if ((fw_priv->opt_flags & FW_OPT_PARTIAL) && buffer)
|
||||
file_size_ptr = &file_size;
|
||||
|
||||
/* load firmware files from the mount namespace of init */
|
||||
rc = kernel_read_file_from_path_initns(path, &buffer,
|
||||
&size, msize, id);
|
||||
if (rc) {
|
||||
rc = kernel_read_file_from_path_initns(path, fw_priv->offset,
|
||||
&buffer, msize,
|
||||
file_size_ptr,
|
||||
READING_FIRMWARE);
|
||||
if (rc < 0) {
|
||||
if (rc != -ENOENT)
|
||||
dev_warn(device, "loading %s failed with error %d\n",
|
||||
path, rc);
|
||||
|
@ -511,6 +543,9 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
|
|||
path);
|
||||
continue;
|
||||
}
|
||||
size = rc;
|
||||
rc = 0;
|
||||
|
||||
dev_dbg(device, "Loading firmware from %s\n", path);
|
||||
if (decompress) {
|
||||
dev_dbg(device, "f/w decompressing %s\n",
|
||||
|
@ -637,7 +672,7 @@ static int fw_add_devm_name(struct device *dev, const char *name)
|
|||
}
|
||||
#endif
|
||||
|
||||
int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags)
|
||||
int assign_fw(struct firmware *fw, struct device *device)
|
||||
{
|
||||
struct fw_priv *fw_priv = fw->priv;
|
||||
int ret;
|
||||
|
@ -656,8 +691,8 @@ int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags)
|
|||
* should be fixed in devres or driver core.
|
||||
*/
|
||||
/* don't cache firmware handled without uevent */
|
||||
if (device && (opt_flags & FW_OPT_UEVENT) &&
|
||||
!(opt_flags & FW_OPT_NOCACHE)) {
|
||||
if (device && (fw_priv->opt_flags & FW_OPT_UEVENT) &&
|
||||
!(fw_priv->opt_flags & FW_OPT_NOCACHE)) {
|
||||
ret = fw_add_devm_name(device, fw_priv->fw_name);
|
||||
if (ret) {
|
||||
mutex_unlock(&fw_lock);
|
||||
|
@ -669,7 +704,7 @@ int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags)
|
|||
* After caching firmware image is started, let it piggyback
|
||||
* on request firmware.
|
||||
*/
|
||||
if (!(opt_flags & FW_OPT_NOCACHE) &&
|
||||
if (!(fw_priv->opt_flags & FW_OPT_NOCACHE) &&
|
||||
fw_priv->fwc->state == FW_LOADER_START_CACHE) {
|
||||
if (fw_cache_piggyback_on_request(fw_priv->fw_name))
|
||||
kref_get(&fw_priv->ref);
|
||||
|
@ -688,7 +723,7 @@ int assign_fw(struct firmware *fw, struct device *device, u32 opt_flags)
|
|||
static int
|
||||
_request_firmware_prepare(struct firmware **firmware_p, const char *name,
|
||||
struct device *device, void *dbuf, size_t size,
|
||||
u32 opt_flags)
|
||||
size_t offset, u32 opt_flags)
|
||||
{
|
||||
struct firmware *firmware;
|
||||
struct fw_priv *fw_priv;
|
||||
|
@ -707,7 +742,7 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
|
|||
}
|
||||
|
||||
ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size,
|
||||
opt_flags);
|
||||
offset, opt_flags);
|
||||
|
||||
/*
|
||||
* bind with 'priv' now to avoid warning in failure path
|
||||
|
@ -754,9 +789,10 @@ static void fw_abort_batch_reqs(struct firmware *fw)
|
|||
static int
|
||||
_request_firmware(const struct firmware **firmware_p, const char *name,
|
||||
struct device *device, void *buf, size_t size,
|
||||
u32 opt_flags)
|
||||
size_t offset, u32 opt_flags)
|
||||
{
|
||||
struct firmware *fw = NULL;
|
||||
bool nondirect = false;
|
||||
int ret;
|
||||
|
||||
if (!firmware_p)
|
||||
|
@ -768,28 +804,34 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
|
|||
}
|
||||
|
||||
ret = _request_firmware_prepare(&fw, name, device, buf, size,
|
||||
opt_flags);
|
||||
offset, opt_flags);
|
||||
if (ret <= 0) /* error or already assigned */
|
||||
goto out;
|
||||
|
||||
ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL);
|
||||
|
||||
/* Only full reads can support decompression, platform, and sysfs. */
|
||||
if (!(opt_flags & FW_OPT_PARTIAL))
|
||||
nondirect = true;
|
||||
|
||||
#ifdef CONFIG_FW_LOADER_COMPRESS
|
||||
if (ret == -ENOENT)
|
||||
if (ret == -ENOENT && nondirect)
|
||||
ret = fw_get_filesystem_firmware(device, fw->priv, ".xz",
|
||||
fw_decompress_xz);
|
||||
#endif
|
||||
|
||||
if (ret == -ENOENT)
|
||||
ret = firmware_fallback_platform(fw->priv, opt_flags);
|
||||
if (ret == -ENOENT && nondirect)
|
||||
ret = firmware_fallback_platform(fw->priv);
|
||||
|
||||
if (ret) {
|
||||
if (!(opt_flags & FW_OPT_NO_WARN))
|
||||
dev_warn(device,
|
||||
"Direct firmware load for %s failed with error %d\n",
|
||||
name, ret);
|
||||
ret = firmware_fallback_sysfs(fw, name, device, opt_flags, ret);
|
||||
if (nondirect)
|
||||
ret = firmware_fallback_sysfs(fw, name, device,
|
||||
opt_flags, ret);
|
||||
} else
|
||||
ret = assign_fw(fw, device, opt_flags);
|
||||
ret = assign_fw(fw, device);
|
||||
|
||||
out:
|
||||
if (ret < 0) {
|
||||
|
@ -830,7 +872,7 @@ request_firmware(const struct firmware **firmware_p, const char *name,
|
|||
|
||||
/* Need to pin this module until return */
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware_p, name, device, NULL, 0,
|
||||
ret = _request_firmware(firmware_p, name, device, NULL, 0, 0,
|
||||
FW_OPT_UEVENT);
|
||||
module_put(THIS_MODULE);
|
||||
return ret;
|
||||
|
@ -857,7 +899,7 @@ int firmware_request_nowarn(const struct firmware **firmware, const char *name,
|
|||
|
||||
/* Need to pin this module until return */
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware, name, device, NULL, 0,
|
||||
ret = _request_firmware(firmware, name, device, NULL, 0, 0,
|
||||
FW_OPT_UEVENT | FW_OPT_NO_WARN);
|
||||
module_put(THIS_MODULE);
|
||||
return ret;
|
||||
|
@ -881,7 +923,7 @@ int request_firmware_direct(const struct firmware **firmware_p,
|
|||
int ret;
|
||||
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware_p, name, device, NULL, 0,
|
||||
ret = _request_firmware(firmware_p, name, device, NULL, 0, 0,
|
||||
FW_OPT_UEVENT | FW_OPT_NO_WARN |
|
||||
FW_OPT_NOFALLBACK_SYSFS);
|
||||
module_put(THIS_MODULE);
|
||||
|
@ -906,7 +948,7 @@ int firmware_request_platform(const struct firmware **firmware,
|
|||
|
||||
/* Need to pin this module until return */
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware, name, device, NULL, 0,
|
||||
ret = _request_firmware(firmware, name, device, NULL, 0, 0,
|
||||
FW_OPT_UEVENT | FW_OPT_FALLBACK_PLATFORM);
|
||||
module_put(THIS_MODULE);
|
||||
return ret;
|
||||
|
@ -962,13 +1004,44 @@ request_firmware_into_buf(const struct firmware **firmware_p, const char *name,
|
|||
return -EOPNOTSUPP;
|
||||
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware_p, name, device, buf, size,
|
||||
ret = _request_firmware(firmware_p, name, device, buf, size, 0,
|
||||
FW_OPT_UEVENT | FW_OPT_NOCACHE);
|
||||
module_put(THIS_MODULE);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(request_firmware_into_buf);
|
||||
|
||||
/**
|
||||
* request_partial_firmware_into_buf() - load partial firmware into a previously allocated buffer
|
||||
* @firmware_p: pointer to firmware image
|
||||
* @name: name of firmware file
|
||||
* @device: device for which firmware is being loaded and DMA region allocated
|
||||
* @buf: address of buffer to load firmware into
|
||||
* @size: size of buffer
|
||||
* @offset: offset into file to read
|
||||
*
|
||||
* This function works pretty much like request_firmware_into_buf except
|
||||
* it allows a partial read of the file.
|
||||
*/
|
||||
int
|
||||
request_partial_firmware_into_buf(const struct firmware **firmware_p,
|
||||
const char *name, struct device *device,
|
||||
void *buf, size_t size, size_t offset)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (fw_cache_is_setup(device, name))
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
__module_get(THIS_MODULE);
|
||||
ret = _request_firmware(firmware_p, name, device, buf, size, offset,
|
||||
FW_OPT_UEVENT | FW_OPT_NOCACHE |
|
||||
FW_OPT_PARTIAL);
|
||||
module_put(THIS_MODULE);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(request_partial_firmware_into_buf);
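The partial-read entry point above can be exercised from a driver roughly as in the following sketch. This is an illustrative example only, not code from this series: the firmware name "example-fw.bin", the 4 KiB window and the 8192-byte offset are made-up values, and the buffer is preallocated by the caller because partial reads refuse to allocate one themselves.

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/sizes.h>
#include <linux/slab.h>

/* Hypothetical helper: read 4 KiB of "example-fw.bin" starting at byte 8192. */
static int example_load_fw_window(struct device *dev)
{
	const struct firmware *fw;
	void *buf;
	int ret;

	buf = kmalloc(SZ_4K, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	ret = request_partial_firmware_into_buf(&fw, "example-fw.bin", dev,
						buf, SZ_4K, 8192);
	if (ret)
		goto out_free;

	/* fw->size reports how many bytes actually landed in buf. */
	dev_info(dev, "read %zu bytes of example-fw.bin\n", fw->size);
	release_firmware(fw);
out_free:
	kfree(buf);
	return ret;
}

Because the request is issued internally with FW_OPT_NOCACHE, the loaded window is not cached across suspend/resume; a caller that needs the data again has to repeat the request.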
|
||||
|
||||
/**
|
||||
* release_firmware() - release the resource associated with a firmware image
|
||||
* @fw: firmware resource to release
|
||||
|
@ -1001,7 +1074,7 @@ static void request_firmware_work_func(struct work_struct *work)
|
|||
|
||||
fw_work = container_of(work, struct firmware_work, work);
|
||||
|
||||
_request_firmware(&fw, fw_work->name, fw_work->device, NULL, 0,
|
||||
_request_firmware(&fw, fw_work->name, fw_work->device, NULL, 0, 0,
|
||||
fw_work->opt_flags);
|
||||
fw_work->cont(fw, fw_work->context);
|
||||
put_device(fw_work->device); /* taken in request_firmware_nowait() */
|
||||
|
|
|
@ -3,6 +3,7 @@
|
|||
* Freescale data path resource container (DPRC) driver
|
||||
*
|
||||
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
|
||||
* Copyright 2019-2020 NXP
|
||||
* Author: German Rivera <German.Rivera@freescale.com>
|
||||
*
|
||||
*/
|
||||
|
@ -80,9 +81,9 @@ static int __fsl_mc_device_remove(struct device *dev, void *data)
|
|||
* the MC by removing devices that represent MC objects that have
|
||||
* been dynamically removed in the physical DPRC.
|
||||
*/
|
||||
static void dprc_remove_devices(struct fsl_mc_device *mc_bus_dev,
|
||||
struct fsl_mc_obj_desc *obj_desc_array,
|
||||
int num_child_objects_in_mc)
|
||||
void dprc_remove_devices(struct fsl_mc_device *mc_bus_dev,
|
||||
struct fsl_mc_obj_desc *obj_desc_array,
|
||||
int num_child_objects_in_mc)
|
||||
{
|
||||
if (num_child_objects_in_mc != 0) {
|
||||
/*
|
||||
|
@ -104,6 +105,7 @@ static void dprc_remove_devices(struct fsl_mc_device *mc_bus_dev,
|
|||
__fsl_mc_device_remove);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dprc_remove_devices);
|
||||
|
||||
static int __fsl_mc_device_match(struct device *dev, void *data)
|
||||
{
|
||||
|
@ -220,8 +222,8 @@ static void dprc_add_new_devices(struct fsl_mc_device *mc_bus_dev,
|
|||
* dprc_scan_objects - Discover objects in a DPRC
|
||||
*
|
||||
* @mc_bus_dev: pointer to the fsl-mc device that represents a DPRC object
|
||||
* @total_irq_count: If argument is provided the function populates the
|
||||
* total number of IRQs created by objects in the DPRC.
|
||||
* @alloc_interrupts: if true the function allocates the interrupt pool,
|
||||
* otherwise the interrupt allocation is delayed
|
||||
*
|
||||
* Detects objects added and removed from a DPRC and synchronizes the
|
||||
* state of the Linux bus driver, MC by adding and removing
|
||||
|
@ -236,7 +238,7 @@ static void dprc_add_new_devices(struct fsl_mc_device *mc_bus_dev,
|
|||
* of the device drivers for the non-allocatable devices.
|
||||
*/
|
||||
static int dprc_scan_objects(struct fsl_mc_device *mc_bus_dev,
|
||||
unsigned int *total_irq_count)
|
||||
bool alloc_interrupts)
|
||||
{
|
||||
int num_child_objects;
|
||||
int dprc_get_obj_failures;
|
||||
|
@ -317,22 +319,21 @@ static int dprc_scan_objects(struct fsl_mc_device *mc_bus_dev,
|
|||
* Allocate IRQ's before binding the scanned devices with their
|
||||
* respective drivers.
|
||||
*/
|
||||
if (dev_get_msi_domain(&mc_bus_dev->dev) && !mc_bus->irq_resources) {
|
||||
if (dev_get_msi_domain(&mc_bus_dev->dev)) {
|
||||
if (irq_count > FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS) {
|
||||
dev_warn(&mc_bus_dev->dev,
|
||||
"IRQs needed (%u) exceed IRQs preallocated (%u)\n",
|
||||
irq_count, FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS);
|
||||
}
|
||||
|
||||
error = fsl_mc_populate_irq_pool(mc_bus,
|
||||
FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS);
|
||||
if (error < 0)
|
||||
return error;
|
||||
if (alloc_interrupts && !mc_bus->irq_resources) {
|
||||
error = fsl_mc_populate_irq_pool(mc_bus_dev,
|
||||
FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS);
|
||||
if (error < 0)
|
||||
return error;
|
||||
}
|
||||
}
|
||||
|
||||
if (total_irq_count)
|
||||
*total_irq_count = irq_count;
|
||||
|
||||
dprc_remove_devices(mc_bus_dev, child_obj_desc_array,
|
||||
num_child_objects);
|
||||
|
||||
|
@ -354,9 +355,10 @@ static int dprc_scan_objects(struct fsl_mc_device *mc_bus_dev,
|
|||
* bus driver with the actual state of the MC by adding and removing
|
||||
* devices as appropriate.
|
||||
*/
|
||||
static int dprc_scan_container(struct fsl_mc_device *mc_bus_dev)
|
||||
int dprc_scan_container(struct fsl_mc_device *mc_bus_dev,
|
||||
bool alloc_interrupts)
|
||||
{
|
||||
int error;
|
||||
int error = 0;
|
||||
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
|
||||
|
||||
fsl_mc_init_all_resource_pools(mc_bus_dev);
|
||||
|
@ -365,16 +367,12 @@ static int dprc_scan_container(struct fsl_mc_device *mc_bus_dev)
|
|||
* Discover objects in the DPRC:
|
||||
*/
|
||||
mutex_lock(&mc_bus->scan_mutex);
|
||||
error = dprc_scan_objects(mc_bus_dev, NULL);
|
||||
error = dprc_scan_objects(mc_bus_dev, alloc_interrupts);
|
||||
mutex_unlock(&mc_bus->scan_mutex);
|
||||
if (error < 0) {
|
||||
fsl_mc_cleanup_all_resource_pools(mc_bus_dev);
|
||||
return error;
|
||||
}
|
||||
|
||||
return 0;
|
||||
return error;
|
||||
}
|
||||
|
||||
EXPORT_SYMBOL_GPL(dprc_scan_container);
|
||||
/**
|
||||
* dprc_irq0_handler - Regular ISR for DPRC interrupt 0
|
||||
*
|
||||
|
@ -434,9 +432,8 @@ static irqreturn_t dprc_irq0_handler_thread(int irq_num, void *arg)
|
|||
DPRC_IRQ_EVENT_CONTAINER_DESTROYED |
|
||||
DPRC_IRQ_EVENT_OBJ_DESTROYED |
|
||||
DPRC_IRQ_EVENT_OBJ_CREATED)) {
|
||||
unsigned int irq_count;
|
||||
|
||||
error = dprc_scan_objects(mc_dev, &irq_count);
|
||||
error = dprc_scan_objects(mc_dev, true);
|
||||
if (error < 0) {
|
||||
/*
|
||||
* If the error is -ENXIO, we ignore it, as it indicates
|
||||
|
@ -451,12 +448,6 @@ static irqreturn_t dprc_irq0_handler_thread(int irq_num, void *arg)
|
|||
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (irq_count > FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS) {
|
||||
dev_warn(dev,
|
||||
"IRQs needed (%u) exceed IRQs preallocated (%u)\n",
|
||||
irq_count, FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS);
|
||||
}
|
||||
}
|
||||
|
||||
out:
|
||||
|
@ -597,25 +588,24 @@ error_free_irqs:
|
|||
}
|
||||
|
||||
/**
|
||||
* dprc_probe - callback invoked when a DPRC is being bound to this driver
|
||||
* dprc_setup - opens and creates a mc_io for DPRC
|
||||
*
|
||||
* @mc_dev: Pointer to fsl-mc device representing a DPRC
|
||||
*
|
||||
* It opens the physical DPRC in the MC.
|
||||
* It scans the DPRC to discover the MC objects contained in it.
|
||||
* It creates the interrupt pool for the MC bus associated with the DPRC.
|
||||
* It configures the interrupts for the DPRC device itself.
|
||||
* It configures the DPRC portal used to communicate with MC
|
||||
*/
|
||||
static int dprc_probe(struct fsl_mc_device *mc_dev)
|
||||
|
||||
int dprc_setup(struct fsl_mc_device *mc_dev)
|
||||
{
|
||||
int error;
|
||||
size_t region_size;
|
||||
struct device *parent_dev = mc_dev->dev.parent;
|
||||
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_dev);
|
||||
struct irq_domain *mc_msi_domain;
|
||||
bool mc_io_created = false;
|
||||
bool msi_domain_set = false;
|
||||
u16 major_ver, minor_ver;
|
||||
struct irq_domain *mc_msi_domain;
|
||||
size_t region_size;
|
||||
int error;
|
||||
|
||||
if (!is_fsl_mc_bus_dprc(mc_dev))
|
||||
return -EINVAL;
|
||||
|
@ -690,23 +680,6 @@ static int dprc_probe(struct fsl_mc_device *mc_dev)
|
|||
goto error_cleanup_open;
|
||||
}
|
||||
|
||||
mutex_init(&mc_bus->scan_mutex);
|
||||
|
||||
/*
|
||||
* Discover MC objects in DPRC object:
|
||||
*/
|
||||
error = dprc_scan_container(mc_dev);
|
||||
if (error < 0)
|
||||
goto error_cleanup_open;
|
||||
|
||||
/*
|
||||
* Configure interrupt for the DPRC object associated with this MC bus:
|
||||
*/
|
||||
error = dprc_setup_irq(mc_dev);
|
||||
if (error < 0)
|
||||
goto error_cleanup_open;
|
||||
|
||||
dev_info(&mc_dev->dev, "DPRC device bound to driver");
|
||||
return 0;
|
||||
|
||||
error_cleanup_open:
|
||||
|
@ -723,6 +696,49 @@ error_cleanup_msi_domain:
|
|||
|
||||
return error;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dprc_setup);
|
||||
|
||||
/**
|
||||
* dprc_probe - callback invoked when a DPRC is being bound to this driver
|
||||
*
|
||||
* @mc_dev: Pointer to fsl-mc device representing a DPRC
|
||||
*
|
||||
* It opens the physical DPRC in the MC.
|
||||
* It scans the DPRC to discover the MC objects contained in it.
|
||||
* It creates the interrupt pool for the MC bus associated with the DPRC.
|
||||
* It configures the interrupts for the DPRC device itself.
|
||||
*/
|
||||
static int dprc_probe(struct fsl_mc_device *mc_dev)
|
||||
{
|
||||
int error;
|
||||
|
||||
error = dprc_setup(mc_dev);
|
||||
if (error < 0)
|
||||
return error;
|
||||
|
||||
/*
|
||||
* Discover MC objects in DPRC object:
|
||||
*/
|
||||
error = dprc_scan_container(mc_dev, true);
|
||||
if (error < 0)
|
||||
goto dprc_cleanup;
|
||||
|
||||
/*
|
||||
* Configure interrupt for the DPRC object associated with this MC bus:
|
||||
*/
|
||||
error = dprc_setup_irq(mc_dev);
|
||||
if (error < 0)
|
||||
goto scan_cleanup;
|
||||
|
||||
dev_info(&mc_dev->dev, "DPRC device bound to driver");
|
||||
return 0;
|
||||
|
||||
scan_cleanup:
|
||||
device_for_each_child(&mc_dev->dev, NULL, __fsl_mc_device_remove);
|
||||
dprc_cleanup:
|
||||
dprc_cleanup(mc_dev);
|
||||
return error;
|
||||
}
|
||||
|
||||
/*
|
||||
* Tear down interrupt for a given DPRC object
|
||||
|
@ -738,6 +754,53 @@ static void dprc_teardown_irq(struct fsl_mc_device *mc_dev)
|
|||
fsl_mc_free_irqs(mc_dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* dprc_cleanup - function that cleanups a DPRC
|
||||
*
|
||||
* @mc_dev: Pointer to fsl-mc device representing the DPRC
|
||||
*
|
||||
* It closes the DPRC device in the MC.
|
||||
* It destroys the interrupt pool associated with this MC bus.
|
||||
*/
|
||||
|
||||
int dprc_cleanup(struct fsl_mc_device *mc_dev)
|
||||
{
|
||||
int error;
|
||||
|
||||
/* this function should be called only for DPRCs, it
|
||||
* is an error to call it for regular objects
|
||||
*/
|
||||
if (!is_fsl_mc_bus_dprc(mc_dev))
|
||||
return -EINVAL;
|
||||
|
||||
if (dev_get_msi_domain(&mc_dev->dev)) {
|
||||
fsl_mc_cleanup_irq_pool(mc_dev);
|
||||
dev_set_msi_domain(&mc_dev->dev, NULL);
|
||||
}
|
||||
|
||||
fsl_mc_cleanup_all_resource_pools(mc_dev);
|
||||
|
||||
/* if this step fails we cannot go further with cleanup as there is no way of
|
||||
* communicating with the firmware
|
||||
*/
|
||||
if (!mc_dev->mc_io) {
|
||||
dev_err(&mc_dev->dev, "mc_io is NULL, tear down cannot be performed in firmware\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
error = dprc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
|
||||
if (error < 0)
|
||||
dev_err(&mc_dev->dev, "dprc_close() failed: %d\n", error);
|
||||
|
||||
if (!fsl_mc_is_root_dprc(&mc_dev->dev)) {
|
||||
fsl_destroy_mc_io(mc_dev->mc_io);
|
||||
mc_dev->mc_io = NULL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dprc_cleanup);
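With dprc_setup(), dprc_scan_container(), dprc_remove_devices() and dprc_cleanup() exported, a caller other than dprc_probe() can bring up and tear down a DPRC itself. The sketch below is purely illustrative and not part of this series; it assumes the prototypes are visible to the caller (for instance through a shared fsl-mc header) and uses alloc_interrupts = false only to demonstrate the new argument.

/* Hypothetical user of the newly exported DPRC helpers. */
static int example_takeover_dprc(struct fsl_mc_device *mc_dev)
{
	int err;

	/* Open the DPRC in the MC and set up the portal used to talk to it. */
	err = dprc_setup(mc_dev);
	if (err)
		return err;

	/* Enumerate child objects; defer the IRQ pool (alloc_interrupts = false). */
	err = dprc_scan_container(mc_dev, false);
	if (err) {
		dprc_cleanup(mc_dev);
		return err;
	}

	return 0;
}

static void example_release_dprc(struct fsl_mc_device *mc_dev)
{
	/* Passing a zero object count removes every child device added above. */
	dprc_remove_devices(mc_dev, NULL, 0);
	dprc_cleanup(mc_dev);
}

The split mirrors what the reworked dprc_probe()/dprc_remove() pair does internally.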
|
||||
|
||||
/**
|
||||
* dprc_remove - callback invoked when a DPRC is being unbound from this driver
|
||||
*
|
||||
|
@ -750,13 +813,10 @@ static void dprc_teardown_irq(struct fsl_mc_device *mc_dev)
|
|||
*/
|
||||
static int dprc_remove(struct fsl_mc_device *mc_dev)
|
||||
{
|
||||
int error;
|
||||
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_dev);
|
||||
|
||||
if (!is_fsl_mc_bus_dprc(mc_dev))
|
||||
return -EINVAL;
|
||||
if (!mc_dev->mc_io)
|
||||
return -EINVAL;
|
||||
|
||||
if (!mc_bus->irq_resources)
|
||||
return -EINVAL;
|
||||
|
@ -766,21 +826,7 @@ static int dprc_remove(struct fsl_mc_device *mc_dev)
|
|||
|
||||
device_for_each_child(&mc_dev->dev, NULL, __fsl_mc_device_remove);
|
||||
|
||||
if (dev_get_msi_domain(&mc_dev->dev)) {
|
||||
fsl_mc_cleanup_irq_pool(mc_bus);
|
||||
dev_set_msi_domain(&mc_dev->dev, NULL);
|
||||
}
|
||||
|
||||
fsl_mc_cleanup_all_resource_pools(mc_dev);
|
||||
|
||||
error = dprc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
|
||||
if (error < 0)
|
||||
dev_err(&mc_dev->dev, "dprc_close() failed: %d\n", error);
|
||||
|
||||
if (!fsl_mc_is_root_dprc(&mc_dev->dev)) {
|
||||
fsl_destroy_mc_io(mc_dev->mc_io);
|
||||
mc_dev->mc_io = NULL;
|
||||
}
|
||||
dprc_cleanup(mc_dev);
|
||||
|
||||
dev_info(&mc_dev->dev, "DPRC device unbound from driver");
|
||||
return 0;
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
|
||||
/*
|
||||
* Copyright 2013-2016 Freescale Semiconductor Inc.
|
||||
* Copyright 2020 NXP
|
||||
*
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
|
@ -8,6 +9,13 @@
|
|||
|
||||
#include "fsl-mc-private.h"
|
||||
|
||||
/*
|
||||
* cache the DPRC version to reduce the number of commands
|
||||
* towards the mc firmware
|
||||
*/
|
||||
static u16 dprc_major_ver;
|
||||
static u16 dprc_minor_ver;
|
||||
|
||||
/**
|
||||
* dprc_open() - Open DPRC object for use
|
||||
* @mc_io: Pointer to MC portal's I/O object
|
||||
|
@ -72,6 +80,77 @@ int dprc_close(struct fsl_mc_io *mc_io,
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(dprc_close);
|
||||
|
||||
/**
|
||||
* dprc_reset_container - Reset child container.
|
||||
* @mc_io: Pointer to MC portal's I/O object
|
||||
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
|
||||
* @token: Token of DPRC object
|
||||
* @child_container_id: ID of the container to reset
|
||||
* @options: 32 bit options:
|
||||
* - 0 (no bits set) - all the objects inside the container are
|
||||
* reset. The child containers are entered recursively and the
|
||||
* objects reset. All the objects (including the child containers)
|
||||
* are closed.
|
||||
* - bit 0 set - all the objects inside the container are reset.
|
||||
* However the child containers are not entered recursively.
|
||||
* This option is supported for API versions >= 6.5
|
||||
* In case a software context crashes or becomes non-responsive, the parent
|
||||
* may wish to reset its resources container before the software context is
|
||||
* restarted.
|
||||
*
|
||||
* This routine informs all objects assigned to the child container that the
|
||||
* container is being reset, so they may perform any cleanup operations that are
|
||||
* needed. All objects handles that were owned by the child container shall be
|
||||
* closed.
|
||||
*
|
||||
* Note that such request may be submitted even if the child software context
|
||||
* has not crashed, but the resulting object cleanup operations will not be
|
||||
* aware of that.
|
||||
*
|
||||
* Return: '0' on Success; Error code otherwise.
|
||||
*/
|
||||
int dprc_reset_container(struct fsl_mc_io *mc_io,
|
||||
u32 cmd_flags,
|
||||
u16 token,
|
||||
int child_container_id,
|
||||
u32 options)
|
||||
{
|
||||
struct fsl_mc_command cmd = { 0 };
|
||||
struct dprc_cmd_reset_container *cmd_params;
|
||||
u32 cmdid = DPRC_CMDID_RESET_CONT;
|
||||
int err;
|
||||
|
||||
/*
|
||||
* If the DPRC object version was not yet cached, cache it now.
|
||||
* Otherwise use the already cached value.
|
||||
*/
|
||||
if (!dprc_major_ver && !dprc_minor_ver) {
|
||||
err = dprc_get_api_version(mc_io, 0,
|
||||
&dprc_major_ver,
|
||||
&dprc_minor_ver);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
/*
|
||||
* MC API 6.5 introduced a new field in the command used to pass
|
||||
* some flags.
|
||||
* Bit 0 indicates that the child containers are not recursively reset.
|
||||
*/
|
||||
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 5))
|
||||
cmdid = DPRC_CMDID_RESET_CONT_V2;
|
||||
|
||||
/* prepare command */
|
||||
cmd.header = mc_encode_cmd_header(cmdid, cmd_flags, token);
|
||||
cmd_params = (struct dprc_cmd_reset_container *)cmd.params;
|
||||
cmd_params->child_container_id = cpu_to_le32(child_container_id);
|
||||
cmd_params->options = cpu_to_le32(options);
|
||||
|
||||
/* send command to mc*/
|
||||
return mc_send_command(mc_io, &cmd);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dprc_reset_container);
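To make the new options argument concrete, here is a hypothetical caller (not from this series) that performs a non-recursive reset of one child container; passing 0 instead of BIT(0) keeps the recursive behaviour described in the kernel-doc above.

#include <linux/bits.h>

/* Hypothetical: reset only the child container itself, not its descendants. */
static int example_reset_child(struct fsl_mc_io *mc_io, u16 token, int child_id)
{
	/*
	 * BIT(0) takes effect with MC API >= 6.5; with an older MC the driver
	 * falls back to the plain reset command and the option is not sent.
	 */
	return dprc_reset_container(mc_io, 0, token, child_id, BIT(0));
}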
|
||||
|
||||
/**
|
||||
* dprc_set_irq() - Set IRQ information for the DPRC to trigger an interrupt.
|
||||
* @mc_io: Pointer to MC portal's I/O object
|
||||
|
@ -281,7 +360,7 @@ int dprc_get_attributes(struct fsl_mc_io *mc_io,
|
|||
/* retrieve response parameters */
|
||||
rsp_params = (struct dprc_rsp_get_attributes *)cmd.params;
|
||||
attr->container_id = le32_to_cpu(rsp_params->container_id);
|
||||
attr->icid = le16_to_cpu(rsp_params->icid);
|
||||
attr->icid = le32_to_cpu(rsp_params->icid);
|
||||
attr->options = le32_to_cpu(rsp_params->options);
|
||||
attr->portal_id = le32_to_cpu(rsp_params->portal_id);
|
||||
|
||||
|
@ -443,30 +522,44 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
|
|||
struct fsl_mc_command cmd = { 0 };
|
||||
struct dprc_cmd_get_obj_region *cmd_params;
|
||||
struct dprc_rsp_get_obj_region *rsp_params;
|
||||
u16 major_ver, minor_ver;
|
||||
int err;
|
||||
|
||||
/* prepare command */
|
||||
err = dprc_get_api_version(mc_io, 0,
|
||||
&major_ver,
|
||||
&minor_ver);
|
||||
if (err)
|
||||
return err;
|
||||
/*
|
||||
* If the DPRC object version was not yet cached, cache it now.
|
||||
* Otherwise use the already cached value.
|
||||
*/
|
||||
if (!dprc_major_ver && !dprc_minor_ver) {
|
||||
err = dprc_get_api_version(mc_io, 0,
|
||||
&dprc_major_ver,
|
||||
&dprc_minor_ver);
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
/**
|
||||
* MC API version 6.3 introduced a new field to the region
|
||||
* descriptor: base_address. If the older API is in use then the base
|
||||
* address is set to zero to indicate it needs to be obtained elsewhere
|
||||
* (typically the device tree).
|
||||
*/
|
||||
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
|
||||
cmd.header =
|
||||
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2,
|
||||
cmd_flags, token);
|
||||
else
|
||||
cmd.header =
|
||||
mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
|
||||
cmd_flags, token);
|
||||
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 6)) {
|
||||
/*
|
||||
* MC API version 6.6 changed the size of the MC portals and software
|
||||
* portals to 64K (as implemented by hardware). If older API is in use the
|
||||
* size reported is less (64 bytes for mc portals and 4K for software
|
||||
* portals).
|
||||
*/
|
||||
|
||||
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V3,
|
||||
cmd_flags, token);
|
||||
|
||||
} else if (dprc_major_ver == 6 && dprc_minor_ver >= 3) {
|
||||
/*
|
||||
* MC API version 6.3 introduced a new field to the region
|
||||
* descriptor: base_address. If the older API is in use then the base
|
||||
* address is set to zero to indicate it needs to be obtained elsewhere
|
||||
* (typically the device tree).
|
||||
*/
|
||||
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG_V2,
|
||||
cmd_flags, token);
|
||||
} else {
|
||||
cmd.header = mc_encode_cmd_header(DPRC_CMDID_GET_OBJ_REG,
|
||||
cmd_flags, token);
|
||||
}
|
||||
|
||||
cmd_params = (struct dprc_cmd_get_obj_region *)cmd.params;
|
||||
cmd_params->obj_id = cpu_to_le32(obj_id);
|
||||
|
@ -483,7 +576,7 @@ int dprc_get_obj_region(struct fsl_mc_io *mc_io,
|
|||
rsp_params = (struct dprc_rsp_get_obj_region *)cmd.params;
|
||||
region_desc->base_offset = le64_to_cpu(rsp_params->base_offset);
|
||||
region_desc->size = le32_to_cpu(rsp_params->size);
|
||||
if (major_ver > 6 || (major_ver == 6 && minor_ver >= 3))
|
||||
if (dprc_major_ver > 6 || (dprc_major_ver == 6 && dprc_minor_ver >= 3))
|
||||
region_desc->base_address = le64_to_cpu(rsp_params->base_addr);
|
||||
else
|
||||
region_desc->base_address = 0;
|
||||
|
|
|
@ -344,7 +344,7 @@ EXPORT_SYMBOL_GPL(fsl_mc_object_free);
|
|||
* Initialize the interrupt pool associated with an fsl-mc bus.
|
||||
* It allocates a block of IRQs from the GIC-ITS.
|
||||
*/
|
||||
int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
|
||||
int fsl_mc_populate_irq_pool(struct fsl_mc_device *mc_bus_dev,
|
||||
unsigned int irq_count)
|
||||
{
|
||||
unsigned int i;
|
||||
|
@ -352,10 +352,14 @@ int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
|
|||
struct fsl_mc_device_irq *irq_resources;
|
||||
struct fsl_mc_device_irq *mc_dev_irq;
|
||||
int error;
|
||||
struct fsl_mc_device *mc_bus_dev = &mc_bus->mc_dev;
|
||||
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
|
||||
struct fsl_mc_resource_pool *res_pool =
|
||||
&mc_bus->resource_pools[FSL_MC_POOL_IRQ];
|
||||
|
||||
/* do nothing if the IRQ pool is already populated */
|
||||
if (mc_bus->irq_resources)
|
||||
return 0;
|
||||
|
||||
if (irq_count == 0 ||
|
||||
irq_count > FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS)
|
||||
return -EINVAL;
|
||||
|
@ -407,9 +411,9 @@ EXPORT_SYMBOL_GPL(fsl_mc_populate_irq_pool);
|
|||
* Teardown the interrupt pool associated with an fsl-mc bus.
|
||||
* It frees the IRQs that were allocated to the pool, back to the GIC-ITS.
|
||||
*/
|
||||
void fsl_mc_cleanup_irq_pool(struct fsl_mc_bus *mc_bus)
|
||||
void fsl_mc_cleanup_irq_pool(struct fsl_mc_device *mc_bus_dev)
|
||||
{
|
||||
struct fsl_mc_device *mc_bus_dev = &mc_bus->mc_dev;
|
||||
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
|
||||
struct fsl_mc_resource_pool *res_pool =
|
||||
&mc_bus->resource_pools[FSL_MC_POOL_IRQ];
|
||||
|
||||
|
|
|
@ -3,6 +3,7 @@
|
|||
* Freescale Management Complex (MC) bus driver
|
||||
*
|
||||
* Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
|
||||
* Copyright 2019-2020 NXP
|
||||
* Author: German Rivera <German.Rivera@freescale.com>
|
||||
*
|
||||
*/
|
||||
|
@ -78,6 +79,12 @@ static int fsl_mc_bus_match(struct device *dev, struct device_driver *drv)
|
|||
struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(drv);
|
||||
bool found = false;
|
||||
|
||||
/* When driver_override is set, only bind to the matching driver */
|
||||
if (mc_dev->driver_override) {
|
||||
found = !strcmp(mc_dev->driver_override, mc_drv->driver.name);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!mc_drv->match_id_table)
|
||||
goto out;
|
||||
|
||||
|
@ -147,8 +154,52 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
|
|||
}
|
||||
static DEVICE_ATTR_RO(modalias);
|
||||
|
||||
static ssize_t driver_override_store(struct device *dev,
|
||||
struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
|
||||
char *driver_override, *old = mc_dev->driver_override;
|
||||
char *cp;
|
||||
|
||||
if (WARN_ON(dev->bus != &fsl_mc_bus_type))
|
||||
return -EINVAL;
|
||||
|
||||
if (count >= (PAGE_SIZE - 1))
|
||||
return -EINVAL;
|
||||
|
||||
driver_override = kstrndup(buf, count, GFP_KERNEL);
|
||||
if (!driver_override)
|
||||
return -ENOMEM;
|
||||
|
||||
cp = strchr(driver_override, '\n');
|
||||
if (cp)
|
||||
*cp = '\0';
|
||||
|
||||
if (strlen(driver_override)) {
|
||||
mc_dev->driver_override = driver_override;
|
||||
} else {
|
||||
kfree(driver_override);
|
||||
mc_dev->driver_override = NULL;
|
||||
}
|
||||
|
||||
kfree(old);
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static ssize_t driver_override_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%s\n", mc_dev->driver_override);
|
||||
}
|
||||
static DEVICE_ATTR_RW(driver_override);
|
||||
|
||||
static struct attribute *fsl_mc_dev_attrs[] = {
|
||||
&dev_attr_modalias.attr,
|
||||
&dev_attr_driver_override.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
@ -452,7 +503,7 @@ common_cleanup:
|
|||
}
|
||||
|
||||
static int get_dprc_icid(struct fsl_mc_io *mc_io,
|
||||
int container_id, u16 *icid)
|
||||
int container_id, u32 *icid)
|
||||
{
|
||||
struct dprc_attributes attr;
|
||||
int error;
|
||||
|
@ -564,11 +615,8 @@ static int fsl_mc_device_get_mmio_regions(struct fsl_mc_device *mc_dev,
|
|||
|
||||
regions[i].end = regions[i].start + region_desc.size - 1;
|
||||
regions[i].name = "fsl-mc object MMIO region";
|
||||
regions[i].flags = IORESOURCE_IO;
|
||||
if (region_desc.flags & DPRC_REGION_CACHEABLE)
|
||||
regions[i].flags |= IORESOURCE_CACHEABLE;
|
||||
if (region_desc.flags & DPRC_REGION_SHAREABLE)
|
||||
regions[i].flags |= IORESOURCE_MEM;
|
||||
regions[i].flags = region_desc.flags & IORESOURCE_BITS;
|
||||
regions[i].flags |= IORESOURCE_MEM;
|
||||
}
|
||||
|
||||
mc_dev->regions = regions;
|
||||
|
@ -630,6 +678,7 @@ int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc,
|
|||
if (!mc_bus)
|
||||
return -ENOMEM;
|
||||
|
||||
mutex_init(&mc_bus->scan_mutex);
|
||||
mc_dev = &mc_bus->mc_dev;
|
||||
} else {
|
||||
/*
|
||||
|
@ -748,6 +797,9 @@ EXPORT_SYMBOL_GPL(fsl_mc_device_add);
|
|||
*/
|
||||
void fsl_mc_device_remove(struct fsl_mc_device *mc_dev)
|
||||
{
|
||||
kfree(mc_dev->driver_override);
|
||||
mc_dev->driver_override = NULL;
|
||||
|
||||
/*
|
||||
* The device-specific remove callback will get invoked by device_del()
|
||||
*/
|
||||
|
@ -908,9 +960,6 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
|
|||
u32 mc_portal_size, mc_stream_id;
|
||||
struct resource *plat_res;
|
||||
|
||||
if (!iommu_present(&fsl_mc_bus_type))
|
||||
return -EPROBE_DEFER;
|
||||
|
||||
mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
|
||||
if (!mc)
|
||||
return -ENOMEM;
|
||||
|
@ -918,11 +967,11 @@ static int fsl_mc_bus_probe(struct platform_device *pdev)
|
|||
platform_set_drvdata(pdev, mc);
|
||||
|
||||
plat_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
|
||||
mc->fsl_mc_regs = devm_ioremap_resource(&pdev->dev, plat_res);
|
||||
if (IS_ERR(mc->fsl_mc_regs))
|
||||
return PTR_ERR(mc->fsl_mc_regs);
|
||||
if (plat_res)
|
||||
mc->fsl_mc_regs = devm_ioremap_resource(&pdev->dev, plat_res);
|
||||
|
||||
if (IS_ENABLED(CONFIG_ACPI) && !dev_of_node(&pdev->dev)) {
|
||||
if (mc->fsl_mc_regs && IS_ENABLED(CONFIG_ACPI) &&
|
||||
!dev_of_node(&pdev->dev)) {
|
||||
mc_stream_id = readl(mc->fsl_mc_regs + FSL_MC_FAPR);
|
||||
/*
|
||||
* HW ORs the PL and BMT bit, places the result in bit 15 of
|
||||
|
|
|
@ -80,10 +80,12 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
|
|||
/* DPRC command versioning */
|
||||
#define DPRC_CMD_BASE_VERSION 1
|
||||
#define DPRC_CMD_2ND_VERSION 2
|
||||
#define DPRC_CMD_3RD_VERSION 3
|
||||
#define DPRC_CMD_ID_OFFSET 4
|
||||
|
||||
#define DPRC_CMD(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_BASE_VERSION)
|
||||
#define DPRC_CMD_V2(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_2ND_VERSION)
|
||||
#define DPRC_CMD_V3(id) (((id) << DPRC_CMD_ID_OFFSET) | DPRC_CMD_3RD_VERSION)
|
||||
|
||||
/* DPRC command IDs */
|
||||
#define DPRC_CMDID_CLOSE DPRC_CMD(0x800)
|
||||
|
@ -91,6 +93,8 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
|
|||
#define DPRC_CMDID_GET_API_VERSION DPRC_CMD(0xa05)
|
||||
|
||||
#define DPRC_CMDID_GET_ATTR DPRC_CMD(0x004)
|
||||
#define DPRC_CMDID_RESET_CONT DPRC_CMD(0x005)
|
||||
#define DPRC_CMDID_RESET_CONT_V2 DPRC_CMD_V2(0x005)
|
||||
|
||||
#define DPRC_CMDID_SET_IRQ DPRC_CMD(0x010)
|
||||
#define DPRC_CMDID_SET_IRQ_ENABLE DPRC_CMD(0x012)
|
||||
|
@ -103,6 +107,7 @@ int dpmcp_reset(struct fsl_mc_io *mc_io,
|
|||
#define DPRC_CMDID_GET_OBJ DPRC_CMD(0x15A)
|
||||
#define DPRC_CMDID_GET_OBJ_REG DPRC_CMD(0x15E)
|
||||
#define DPRC_CMDID_GET_OBJ_REG_V2 DPRC_CMD_V2(0x15E)
|
||||
#define DPRC_CMDID_GET_OBJ_REG_V3 DPRC_CMD_V3(0x15E)
|
||||
#define DPRC_CMDID_SET_OBJ_IRQ DPRC_CMD(0x15F)
|
||||
|
||||
#define DPRC_CMDID_GET_CONNECTION DPRC_CMD(0x16C)
|
||||
|
@ -111,6 +116,11 @@ struct dprc_cmd_open {
|
|||
__le32 container_id;
|
||||
};
|
||||
|
||||
struct dprc_cmd_reset_container {
|
||||
__le32 child_container_id;
|
||||
__le32 options;
|
||||
};
|
||||
|
||||
struct dprc_cmd_set_irq {
|
||||
/* cmd word 0 */
|
||||
__le32 irq_val;
|
||||
|
@ -152,8 +162,7 @@ struct dprc_cmd_clear_irq_status {
|
|||
struct dprc_rsp_get_attributes {
|
||||
/* response word 0 */
|
||||
__le32 container_id;
|
||||
__le16 icid;
|
||||
__le16 pad;
|
||||
__le32 icid;
|
||||
/* response word 1 */
|
||||
__le32 options;
|
||||
__le32 portal_id;
|
||||
|
@ -330,7 +339,7 @@ int dprc_clear_irq_status(struct fsl_mc_io *mc_io,
|
|||
*/
|
||||
struct dprc_attributes {
|
||||
int container_id;
|
||||
u16 icid;
|
||||
u32 icid;
|
||||
int portal_id;
|
||||
u64 options;
|
||||
};
|
||||
|
@ -358,12 +367,6 @@ int dprc_set_obj_irq(struct fsl_mc_io *mc_io,
|
|||
int obj_id,
|
||||
u8 irq_index,
|
||||
struct dprc_irq_cfg *irq_cfg);
|
||||
|
||||
/* Region flags */
|
||||
/* Cacheable - Indicates that region should be mapped as cacheable */
|
||||
#define DPRC_REGION_CACHEABLE 0x00000001
|
||||
#define DPRC_REGION_SHAREABLE 0x00000002
|
||||
|
||||
/**
|
||||
* enum dprc_region_type - Region type
|
||||
* @DPRC_REGION_TYPE_MC_PORTAL: MC portal region
|
||||
|
@ -518,11 +521,6 @@ struct dpcon_cmd_set_notification {
|
|||
__le64 user_ctx;
|
||||
};
|
||||
|
||||
/**
|
||||
* Maximum number of total IRQs that can be pre-allocated for an MC bus'
|
||||
* IRQ pool
|
||||
*/
|
||||
#define FSL_MC_IRQ_POOL_MAX_TOTAL_IRQS 256
|
||||
|
||||
/**
|
||||
* struct fsl_mc_resource_pool - Pool of MC resources of a given
|
||||
|
@ -597,11 +595,6 @@ void fsl_mc_msi_domain_free_irqs(struct device *dev);
|
|||
|
||||
struct irq_domain *fsl_mc_find_msi_domain(struct device *dev);
|
||||
|
||||
int fsl_mc_populate_irq_pool(struct fsl_mc_bus *mc_bus,
|
||||
unsigned int irq_count);
|
||||
|
||||
void fsl_mc_cleanup_irq_pool(struct fsl_mc_bus *mc_bus);
|
||||
|
||||
int __must_check fsl_create_mc_io(struct device *dev,
|
||||
phys_addr_t mc_portal_phys_addr,
|
||||
u32 mc_portal_size,
|
||||
|
|
|
@ -129,7 +129,12 @@ error_destroy_mc_io:
|
|||
*/
|
||||
void fsl_destroy_mc_io(struct fsl_mc_io *mc_io)
|
||||
{
|
||||
struct fsl_mc_device *dpmcp_dev = mc_io->dpmcp_dev;
|
||||
struct fsl_mc_device *dpmcp_dev;
|
||||
|
||||
if (!mc_io)
|
||||
return;
|
||||
|
||||
dpmcp_dev = mc_io->dpmcp_dev;
|
||||
|
||||
if (dpmcp_dev)
|
||||
fsl_mc_io_unset_dpmcp(mc_io);
|
||||
|
|
|
@ -6,9 +6,17 @@
|
|||
#
|
||||
|
||||
config MHI_BUS
|
||||
tristate "Modem Host Interface (MHI) bus"
|
||||
help
|
||||
Bus driver for MHI protocol. Modem Host Interface (MHI) is a
|
||||
communication protocol used by the host processors to control
|
||||
and communicate with modem devices over a high speed peripheral
|
||||
bus or shared memory.
|
||||
tristate "Modem Host Interface (MHI) bus"
|
||||
help
|
||||
Bus driver for MHI protocol. Modem Host Interface (MHI) is a
|
||||
communication protocol used by the host processors to control
|
||||
and communicate with modem devices over a high speed peripheral
|
||||
bus or shared memory.
|
||||
|
||||
config MHI_BUS_DEBUG
|
||||
bool "Debugfs support for the MHI bus"
|
||||
depends on MHI_BUS && DEBUG_FS
|
||||
help
|
||||
Enable debugfs support for use with the MHI transport. Allows
|
||||
reading and/or modifying some values within the MHI controller
|
||||
for debug and test purposes.
|
||||
|
|
|
@ -1,3 +1,4 @@
|
|||
obj-$(CONFIG_MHI_BUS) := mhi.o
|
||||
obj-$(CONFIG_MHI_BUS) += mhi.o
|
||||
|
||||
mhi-y := init.o main.o pm.o boot.o
|
||||
mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
|
||||
|
|
|
@ -392,13 +392,28 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
|
|||
void *buf;
|
||||
dma_addr_t dma_addr;
|
||||
size_t size;
|
||||
int ret;
|
||||
int i, ret;
|
||||
|
||||
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
|
||||
dev_err(dev, "Device MHI is not in valid state\n");
|
||||
return;
|
||||
}
|
||||
|
||||
/* save hardware info from BHI */
|
||||
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_SERIALNU,
|
||||
&mhi_cntrl->serial_number);
|
||||
if (ret)
|
||||
dev_err(dev, "Could not capture serial number via BHI\n");
|
||||
|
||||
for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++) {
|
||||
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_OEMPKHASH(i),
|
||||
&mhi_cntrl->oem_pk_hash[i]);
|
||||
if (ret) {
|
||||
dev_err(dev, "Could not capture OEM PK HASH via BHI\n");
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/* If device is in pass through, do reset to ready state transition */
|
||||
if (mhi_cntrl->ee == MHI_EE_PTHRU)
|
||||
goto fw_load_ee_pthru;
|
||||
|
|
|
@ -0,0 +1,411 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
|
||||
*
|
||||
*/
|
||||
|
||||
#include <linux/debugfs.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/mhi.h>
|
||||
#include <linux/module.h>
|
||||
#include "internal.h"
|
||||
|
||||
static int mhi_debugfs_states_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
|
||||
/* states */
|
||||
seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
|
||||
to_mhi_pm_state_str(mhi_cntrl->pm_state),
|
||||
mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
|
||||
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
|
||||
TO_MHI_EXEC_STR(mhi_cntrl->ee),
|
||||
mhi_cntrl->wake_set ? "true" : "false");
|
||||
|
||||
/* counters */
|
||||
seq_printf(m, "M0: %u M2: %u M3: %u", mhi_cntrl->M0, mhi_cntrl->M2,
|
||||
mhi_cntrl->M3);
|
||||
|
||||
seq_printf(m, " device wake: %u pending packets: %u\n",
|
||||
atomic_read(&mhi_cntrl->dev_wake),
|
||||
atomic_read(&mhi_cntrl->pending_pkts));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_events_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
struct mhi_event *mhi_event;
|
||||
struct mhi_event_ctxt *er_ctxt;
|
||||
int i;
|
||||
|
||||
if (!mhi_is_active(mhi_cntrl)) {
|
||||
seq_puts(m, "Device not ready\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
|
||||
mhi_event = mhi_cntrl->mhi_event;
|
||||
for (i = 0; i < mhi_cntrl->total_ev_rings;
|
||||
i++, er_ctxt++, mhi_event++) {
|
||||
struct mhi_ring *ring = &mhi_event->ring;
|
||||
|
||||
if (mhi_event->offload_ev) {
|
||||
seq_printf(m, "Index: %d is an offload event ring\n",
|
||||
i);
|
||||
continue;
|
||||
}
|
||||
|
||||
seq_printf(m, "Index: %d intmod count: %lu time: %lu",
|
||||
i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
|
||||
EV_CTX_INTMODC_SHIFT,
|
||||
(er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
|
||||
EV_CTX_INTMODT_SHIFT);
|
||||
|
||||
seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
|
||||
er_ctxt->rlen);
|
||||
|
||||
seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
|
||||
er_ctxt->wp);
|
||||
|
||||
seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
|
||||
&mhi_event->db_cfg.db_val);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
struct mhi_chan *mhi_chan;
|
||||
struct mhi_chan_ctxt *chan_ctxt;
|
||||
int i;
|
||||
|
||||
if (!mhi_is_active(mhi_cntrl)) {
|
||||
seq_puts(m, "Device not ready\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
mhi_chan = mhi_cntrl->mhi_chan;
|
||||
chan_ctxt = mhi_cntrl->mhi_ctxt->chan_ctxt;
|
||||
for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
|
||||
struct mhi_ring *ring = &mhi_chan->tre_ring;
|
||||
|
||||
if (mhi_chan->offload_ch) {
|
||||
seq_printf(m, "%s(%u) is an offload channel\n",
|
||||
mhi_chan->name, mhi_chan->chan);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!mhi_chan->mhi_dev)
|
||||
continue;
|
||||
|
||||
seq_printf(m,
|
||||
"%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
|
||||
mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
|
||||
CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
|
||||
(chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
|
||||
CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
|
||||
CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
|
||||
|
||||
seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
|
||||
chan_ctxt->erindex);
|
||||
|
||||
seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
|
||||
chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
|
||||
chan_ctxt->wp);
|
||||
|
||||
seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
|
||||
ring->rp, ring->wp,
|
||||
&mhi_chan->db_cfg.db_val);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_device_info_show(struct device *dev, void *data)
|
||||
{
|
||||
struct mhi_device *mhi_dev;
|
||||
|
||||
if (dev->bus != &mhi_bus_type)
|
||||
return 0;
|
||||
|
||||
mhi_dev = to_mhi_device(dev);
|
||||
|
||||
seq_printf((struct seq_file *)data, "%s: type: %s dev_wake: %u",
|
||||
mhi_dev->name, mhi_dev->dev_type ? "Controller" : "Transfer",
|
||||
mhi_dev->dev_wake);
|
||||
|
||||
/* for transfer device types only */
|
||||
if (mhi_dev->dev_type == MHI_DEVICE_XFER)
|
||||
seq_printf((struct seq_file *)data, " channels: %u(UL)/%u(DL)",
|
||||
mhi_dev->ul_chan_id, mhi_dev->dl_chan_id);
|
||||
|
||||
seq_puts((struct seq_file *)data, "\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_devices_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
|
||||
if (!mhi_is_active(mhi_cntrl)) {
|
||||
seq_puts(m, "Device not ready\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
device_for_each_child(mhi_cntrl->cntrl_dev, m, mhi_device_info_show);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
enum mhi_state state;
|
||||
enum mhi_ee_type ee;
|
||||
int i, ret = -EIO;
|
||||
u32 val;
|
||||
void __iomem *mhi_base = mhi_cntrl->regs;
|
||||
void __iomem *bhi_base = mhi_cntrl->bhi;
|
||||
void __iomem *bhie_base = mhi_cntrl->bhie;
|
||||
void __iomem *wake_db = mhi_cntrl->wake_db;
|
||||
struct {
|
||||
const char *name;
|
||||
int offset;
|
||||
void __iomem *base;
|
||||
} regs[] = {
|
||||
{ "MHI_REGLEN", MHIREGLEN, mhi_base},
|
||||
{ "MHI_VER", MHIVER, mhi_base},
|
||||
{ "MHI_CFG", MHICFG, mhi_base},
|
||||
{ "MHI_CTRL", MHICTRL, mhi_base},
|
||||
{ "MHI_STATUS", MHISTATUS, mhi_base},
|
||||
{ "MHI_WAKE_DB", 0, wake_db},
|
||||
{ "BHI_EXECENV", BHI_EXECENV, bhi_base},
|
||||
{ "BHI_STATUS", BHI_STATUS, bhi_base},
|
||||
{ "BHI_ERRCODE", BHI_ERRCODE, bhi_base},
|
||||
{ "BHI_ERRDBG1", BHI_ERRDBG1, bhi_base},
|
||||
{ "BHI_ERRDBG2", BHI_ERRDBG2, bhi_base},
|
||||
{ "BHI_ERRDBG3", BHI_ERRDBG3, bhi_base},
|
||||
{ "BHIE_TXVEC_DB", BHIE_TXVECDB_OFFS, bhie_base},
|
||||
{ "BHIE_TXVEC_STATUS", BHIE_TXVECSTATUS_OFFS, bhie_base},
|
||||
{ "BHIE_RXVEC_DB", BHIE_RXVECDB_OFFS, bhie_base},
|
||||
{ "BHIE_RXVEC_STATUS", BHIE_RXVECSTATUS_OFFS, bhie_base},
|
||||
{ NULL },
|
||||
};
|
||||
|
||||
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
|
||||
return ret;
|
||||
|
||||
seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
|
||||
to_mhi_pm_state_str(mhi_cntrl->pm_state),
|
||||
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
|
||||
TO_MHI_EXEC_STR(mhi_cntrl->ee));
|
||||
|
||||
state = mhi_get_mhi_state(mhi_cntrl);
|
||||
ee = mhi_get_exec_env(mhi_cntrl);
|
||||
seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
|
||||
TO_MHI_STATE_STR(state));
|
||||
|
||||
for (i = 0; regs[i].name; i++) {
|
||||
if (!regs[i].base)
|
||||
continue;
|
||||
ret = mhi_read_reg(mhi_cntrl, regs[i].base, regs[i].offset,
|
||||
&val);
|
||||
if (ret)
|
||||
continue;
|
||||
|
||||
seq_printf(m, "%s: 0x%x\n", regs[i].name, val);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_device_wake_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
|
||||
|
||||
if (!mhi_is_active(mhi_cntrl)) {
|
||||
seq_puts(m, "Device not ready\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
seq_printf(m,
|
||||
"Wake count: %d\n%s\n", mhi_dev->dev_wake,
|
||||
"Usage: echo get/put > device_wake to vote/unvote for M0");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t mhi_debugfs_device_wake_write(struct file *file,
|
||||
const char __user *ubuf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct seq_file *m = file->private_data;
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
|
||||
char buf[16];
|
||||
int ret = -EINVAL;
|
||||
|
||||
if (copy_from_user(&buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
|
||||
return -EFAULT;
|
||||
|
||||
if (!strncmp(buf, "get", 3)) {
|
||||
ret = mhi_device_get_sync(mhi_dev);
|
||||
} else if (!strncmp(buf, "put", 3)) {
|
||||
mhi_device_put(mhi_dev);
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
return ret ? ret : count;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_timeout_ms_show(struct seq_file *m, void *d)
|
||||
{
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
|
||||
seq_printf(m, "%u ms\n", mhi_cntrl->timeout_ms);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t mhi_debugfs_timeout_ms_write(struct file *file,
|
||||
const char __user *ubuf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct seq_file *m = file->private_data;
|
||||
struct mhi_controller *mhi_cntrl = m->private;
|
||||
u32 timeout_ms;
|
||||
|
||||
if (kstrtou32_from_user(ubuf, count, 0, &timeout_ms))
|
||||
return -EINVAL;
|
||||
|
||||
mhi_cntrl->timeout_ms = timeout_ms;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static int mhi_debugfs_states_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_states_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_events_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_events_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_channels_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_channels_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_devices_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_devices_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_regdump_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_regdump_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_device_wake_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_device_wake_show, inode->i_private);
|
||||
}
|
||||
|
||||
static int mhi_debugfs_timeout_ms_open(struct inode *inode, struct file *fp)
|
||||
{
|
||||
return single_open(fp, mhi_debugfs_timeout_ms_show, inode->i_private);
|
||||
}
|
||||
|
||||
static const struct file_operations debugfs_states_fops = {
|
||||
.open = mhi_debugfs_states_open,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_events_fops = {
|
||||
.open = mhi_debugfs_events_open,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_channels_fops = {
|
||||
.open = mhi_debugfs_channels_open,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_devices_fops = {
|
||||
.open = mhi_debugfs_devices_open,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_regdump_fops = {
|
||||
.open = mhi_debugfs_regdump_open,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_device_wake_fops = {
|
||||
.open = mhi_debugfs_device_wake_open,
|
||||
.write = mhi_debugfs_device_wake_write,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static const struct file_operations debugfs_timeout_ms_fops = {
|
||||
.open = mhi_debugfs_timeout_ms_open,
|
||||
.write = mhi_debugfs_timeout_ms_write,
|
||||
.release = single_release,
|
||||
.read = seq_read,
|
||||
};
|
||||
|
||||
static struct dentry *mhi_debugfs_root;
|
||||
|
||||
void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
|
||||
{
|
||||
mhi_cntrl->debugfs_dentry =
|
||||
debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
|
||||
mhi_debugfs_root);
|
||||
|
||||
debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_states_fops);
|
||||
debugfs_create_file("events", 0444, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_events_fops);
|
||||
debugfs_create_file("channels", 0444, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_channels_fops);
|
||||
debugfs_create_file("devices", 0444, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_devices_fops);
|
||||
debugfs_create_file("regdump", 0444, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_regdump_fops);
|
||||
debugfs_create_file("device_wake", 0644, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_device_wake_fops);
|
||||
debugfs_create_file("timeout_ms", 0644, mhi_cntrl->debugfs_dentry,
|
||||
mhi_cntrl, &debugfs_timeout_ms_fops);
|
||||
}
|
||||
|
||||
void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
|
||||
{
|
||||
debugfs_remove_recursive(mhi_cntrl->debugfs_dentry);
|
||||
mhi_cntrl->debugfs_dentry = NULL;
|
||||
}
|
||||
|
||||
void mhi_debugfs_init(void)
|
||||
{
|
||||
mhi_debugfs_root = debugfs_create_dir(mhi_bus_type.name, NULL);
|
||||
}
|
||||
|
||||
void mhi_debugfs_exit(void)
|
||||
{
|
||||
debugfs_remove_recursive(mhi_debugfs_root);
|
||||
}
|
|
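Each of the read-only debugfs entries above follows the standard single_open()/seq_file idiom. A minimal sketch of that pattern outside MHI, with purely illustrative "demo" names:

/*
 * Sketch of the single_open()/seq_file pattern used by the MHI debugfs
 * entries; names here are illustrative, not part of the MHI driver.
 */
#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int demo_show(struct seq_file *m, void *unused)
{
	seq_puts(m, "hello from debugfs\n");
	return 0;
}

static int demo_open(struct inode *inode, struct file *fp)
{
	/* inode->i_private carries the pointer passed to debugfs_create_file() */
	return single_open(fp, demo_show, inode->i_private);
}

static const struct file_operations demo_fops = {
	.open = demo_open,
	.read = seq_read,
	.release = single_release,
};
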
@@ -4,6 +4,7 @@
*
*/

#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>

@@ -75,6 +76,42 @@ const char *to_mhi_pm_state_str(enum mhi_pm_state state)
return mhi_pm_state_str[index];
}

static ssize_t serial_number_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;

return snprintf(buf, PAGE_SIZE, "Serial Number: %u\n",
mhi_cntrl->serial_number);
}
static DEVICE_ATTR_RO(serial_number);

static ssize_t oem_pk_hash_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int i, cnt = 0;

for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++)
cnt += snprintf(buf + cnt, PAGE_SIZE - cnt,
"OEMPKHASH[%d]: 0x%x\n", i,
mhi_cntrl->oem_pk_hash[i]);

return cnt;
}
static DEVICE_ATTR_RO(oem_pk_hash);

static struct attribute *mhi_dev_attrs[] = {
&dev_attr_serial_number.attr,
&dev_attr_oem_pk_hash.attr,
NULL,
};
ATTRIBUTE_GROUPS(mhi_dev);

/* MHI protocol requires the transfer ring to be aligned with ring length */
static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring,

@@ -125,6 +162,13 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
if (mhi_event->offload_ev)
continue;

if (mhi_event->irq >= mhi_cntrl->nr_irqs) {
dev_err(dev, "irq %d not available for event ring\n",
mhi_event->irq);
ret = -EINVAL;
goto error_request;
}

ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
mhi_irq_handler,
IRQF_SHARED | IRQF_NO_SUSPEND,

@@ -562,10 +606,10 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
}

static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_event *mhi_event;
struct mhi_event_config *event_cfg;
const struct mhi_event_config *event_cfg;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i, num;

@@ -636,9 +680,6 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
mhi_event++;
}

/* We need IRQ for each event ring + additional one for BHI */
mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;

return 0;

error_ev_cfg:

@@ -648,9 +689,9 @@ error_ev_cfg:
}

static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_channel_config *ch_cfg;
const struct mhi_channel_config *ch_cfg;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i;
u32 chan;

@@ -766,7 +807,7 @@ error_chan_cfg:
}

static int parse_config(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
int ret;

@@ -803,7 +844,7 @@ error_ev_cfg:
}

int mhi_register_controller(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
const struct mhi_controller_config *config)
{
struct mhi_event *mhi_event;
struct mhi_chan *mhi_chan;

@@ -904,6 +945,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
mhi_dev->mhi_cntrl = mhi_cntrl;
dev_set_name(&mhi_dev->dev, "%s", dev_name(mhi_cntrl->cntrl_dev));
mhi_dev->name = dev_name(mhi_cntrl->cntrl_dev);

/* Init wakeup source */
device_init_wakeup(&mhi_dev->dev, true);

@@ -914,6 +956,8 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,

mhi_cntrl->mhi_dev = mhi_dev;

mhi_create_debugfs(mhi_cntrl);

return 0;

error_add_dev:

@@ -936,6 +980,8 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
struct mhi_chan *mhi_chan = mhi_cntrl->mhi_chan;
unsigned int i;

mhi_destroy_debugfs(mhi_cntrl);

kfree(mhi_cntrl->mhi_cmd);
kfree(mhi_cntrl->mhi_event);

@@ -953,6 +999,22 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_unregister_controller);

struct mhi_controller *mhi_alloc_controller(void)
{
struct mhi_controller *mhi_cntrl;

mhi_cntrl = kzalloc(sizeof(*mhi_cntrl), GFP_KERNEL);

return mhi_cntrl;
}
EXPORT_SYMBOL_GPL(mhi_alloc_controller);

void mhi_free_controller(struct mhi_controller *mhi_cntrl)
{
kfree(mhi_cntrl);
}
EXPORT_SYMBOL_GPL(mhi_free_controller);

int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;

@@ -1249,7 +1311,7 @@ static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
struct mhi_device *mhi_dev = to_mhi_device(dev);

return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
mhi_dev->chan_name);
mhi_dev->name);
}

static int mhi_match(struct device *dev, struct device_driver *drv)

@@ -1266,7 +1328,7 @@ static int mhi_match(struct device *dev, struct device_driver *drv)
return 0;

for (id = mhi_drv->id_table; id->chan[0]; id++)
if (!strcmp(mhi_dev->chan_name, id->chan)) {
if (!strcmp(mhi_dev->name, id->chan)) {
mhi_dev->id = id;
return 1;
}

@@ -1279,15 +1341,18 @@ struct bus_type mhi_bus_type = {
.dev_name = "mhi",
.match = mhi_match,
.uevent = mhi_uevent,
.dev_groups = mhi_dev_groups,
};

static int __init mhi_init(void)
{
mhi_debugfs_init();
return bus_register(&mhi_bus_type);
}

static void __exit mhi_exit(void)
{
mhi_debugfs_exit();
bus_unregister(&mhi_bus_type);
}

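With mhi_alloc_controller()/mhi_free_controller() now exported by the core and mhi_register_controller() taking a const config, a controller driver's setup reduces to the flow sketched below. The probe name and the empty config table are illustrative only; a real driver fills in the channel/event configuration and the controller callbacks.

/* Illustrative controller-driver flow against the API as changed above. */
static const struct mhi_controller_config demo_mhi_config; /* placeholder, normally fully populated */

static int demo_mhi_probe(struct device *dev)
{
	struct mhi_controller *mhi_cntrl;
	int ret;

	mhi_cntrl = mhi_alloc_controller();
	if (!mhi_cntrl)
		return -ENOMEM;

	mhi_cntrl->cntrl_dev = dev;

	/* the config can now be const data shared by all controller instances */
	ret = mhi_register_controller(mhi_cntrl, &demo_mhi_config);
	if (ret)
		mhi_free_controller(mhi_cntrl);

	return ret;
}
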
@@ -570,6 +570,30 @@ struct mhi_chan {
/* Default MHI timeout */
#define MHI_TIMEOUT_MS (1000)

/* debugfs related functions */
#ifdef CONFIG_MHI_BUS_DEBUG
void mhi_create_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_debugfs_init(void);
void mhi_debugfs_exit(void);
#else
static inline void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
{
}

static inline void mhi_destroy_debugfs(struct mhi_controller *mhi_cntrl)
{
}

static inline void mhi_debugfs_init(void)
{
}

static inline void mhi_debugfs_exit(void)
{
}
#endif

struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);

int mhi_destroy_device(struct device *dev, void *data);

@@ -592,13 +616,24 @@ void mhi_pm_st_worker(struct work_struct *work);
void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl);
void mhi_fw_load_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum mhi_cmd_type cmd);
static inline bool mhi_is_active(struct mhi_controller *mhi_cntrl)
{
return (mhi_cntrl->dev_state >= MHI_STATE_M0 &&
mhi_cntrl->dev_state <= MHI_STATE_M3_FAST);
}

static inline void mhi_trigger_resume(struct mhi_controller *mhi_cntrl)
{
pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}

/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,

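The debugfs hooks above use the usual "declare or stub" pattern so callers never need their own #ifdefs. A generic sketch of that idiom, with purely illustrative names:

/*
 * Generic form of the stub pattern used above: callers invoke the hook
 * unconditionally and the !CONFIG_FOO_DEBUG build compiles it away.
 */
#ifdef CONFIG_FOO_DEBUG
void foo_create_debugfs(struct foo_dev *fdev);
#else
static inline void foo_create_debugfs(struct foo_dev *fdev)
{
}
#endif

static int foo_register(struct foo_dev *fdev)
{
	/* no ifdef needed at the call site */
	foo_create_debugfs(fdev);
	return 0;
}
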
@@ -249,7 +249,7 @@ int mhi_destroy_device(struct device *dev, void *data)
put_device(&mhi_dev->dl_chan->mhi_dev->dev);

dev_dbg(&mhi_cntrl->mhi_dev->dev, "destroy device for chan:%s\n",
mhi_dev->chan_name);
mhi_dev->name);

/* Notify the client and remove the device from MHI bus */
device_del(dev);

@@ -327,10 +327,10 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
}

/* Channel name is same for both UL and DL */
mhi_dev->chan_name = mhi_chan->name;
mhi_dev->name = mhi_chan->name;
dev_set_name(&mhi_dev->dev, "%s_%s",
dev_name(mhi_cntrl->cntrl_dev),
mhi_dev->chan_name);
mhi_dev->name);

/* Init wakeup source if available */
if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)

@@ -909,8 +909,7 @@ void mhi_ctrl_ev_task(unsigned long data)
* process it since we are probably in a suspended state,
* so trigger a resume.
*/
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
mhi_trigger_resume(mhi_cntrl);

return;
}

@@ -971,10 +970,8 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
}

/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);

/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);

@@ -1032,10 +1029,8 @@ int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
}

/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);

/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);

@@ -1147,10 +1142,8 @@ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
read_lock_irqsave(&mhi_cntrl->pm_lock, flags);

/* we're in M3 or transitioning to M3 */
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);

/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);

@@ -256,6 +256,7 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
dev_err(dev, "Unable to transition to M0 state\n");
return -EIO;
}
mhi_cntrl->M0++;

/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);

@@ -326,6 +327,8 @@ void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
mhi_cntrl->dev_state = MHI_STATE_M2;

write_unlock_irq(&mhi_cntrl->pm_lock);

mhi_cntrl->M2++;
wake_up_all(&mhi_cntrl->state_event);

/* If there are any pending resources, exit M2 immediately */

@@ -362,6 +365,7 @@ int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
return -EIO;
}

mhi_cntrl->M3++;
wake_up_all(&mhi_cntrl->state_event);

return 0;

@@ -686,7 +690,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
return -EIO;

/* Return busy if there are any pending resources */
if (atomic_read(&mhi_cntrl->dev_wake))
if (atomic_read(&mhi_cntrl->dev_wake) ||
atomic_read(&mhi_cntrl->pending_pkts))
return -EBUSY;

/* Take MHI out of M2 state */

@@ -712,7 +717,8 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)

write_lock_irq(&mhi_cntrl->pm_lock);

if (atomic_read(&mhi_cntrl->dev_wake)) {
if (atomic_read(&mhi_cntrl->dev_wake) ||
atomic_read(&mhi_cntrl->pending_pkts)) {
write_unlock_irq(&mhi_cntrl->pm_lock);
return -EBUSY;
}

@@ -822,11 +828,8 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);
read_unlock_bh(&mhi_cntrl->pm_lock);

ret = wait_event_timeout(mhi_cntrl->state_event,

@@ -915,7 +918,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)

dev_info(dev, "Requested to power ON\n");

if (mhi_cntrl->nr_irqs < mhi_cntrl->total_ev_rings)
if (mhi_cntrl->nr_irqs < 1)
return -EINVAL;

/* Supply default wake routines if not provided by controller driver */

@@ -1113,6 +1116,9 @@ void mhi_device_get(struct mhi_device *mhi_dev)

mhi_dev->dev_wake++;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);

mhi_cntrl->wake_get(mhi_cntrl, true);
read_unlock_bh(&mhi_cntrl->pm_lock);
}

@@ -1137,10 +1143,8 @@ void mhi_device_put(struct mhi_device *mhi_dev)

mhi_dev->dev_wake--;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
mhi_trigger_resume(mhi_cntrl);

mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);

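The M0/M3 accounting and suspend checks above are driven by client wake votes. From a client driver's side, the voting uses the same mhi_device_get_sync()/mhi_device_put() pair that the device_wake debugfs file exercises; a minimal sketch:

/* Client-side view of the wake voting the PM code above accounts for. */
static int demo_client_xfer(struct mhi_device *mhi_dev)
{
	int ret;

	ret = mhi_device_get_sync(mhi_dev);	/* vote: keep the device in M0 */
	if (ret)
		return ret;

	/* ... queue and complete transfers here ... */

	mhi_device_put(mhi_dev);		/* drop the vote, allow M3 entry */
	return 0;
}
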
@@ -93,8 +93,9 @@ config PPDEV

config VIRTIO_CONSOLE
tristate "Virtio console"
depends on VIRTIO && TTY
depends on TTY
select HVC_DRIVER
select VIRTIO
help
Virtio console for use with hypervisors.

@@ -853,8 +853,10 @@ static void lp_console_write(struct console *co, const char *s,
count--;
do {
written = parport_write(port, crlf, i);
if (written > 0)
i -= written, crlf += written;
if (written > 0) {
i -= written;
crlf += written;
}
} while (i > 0 && (CONSOLE_LP_STRICT || written > 0));
}
} while (count > 0 && (CONSOLE_LP_STRICT || written > 0));

@@ -726,6 +726,33 @@ static ssize_t read_iter_zero(struct kiocb *iocb, struct iov_iter *iter)
return written;
}

static ssize_t read_zero(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
size_t cleared = 0;

while (count) {
size_t chunk = min_t(size_t, count, PAGE_SIZE);
size_t left;

left = clear_user(buf + cleared, chunk);
if (unlikely(left)) {
cleared += (chunk - left);
if (!cleared)
return -EFAULT;
break;
}
cleared += chunk;
count -= chunk;

if (signal_pending(current))
break;
cond_resched();
}

return cleared;
}

static int mmap_zero(struct file *file, struct vm_area_struct *vma)
{
#ifndef CONFIG_MMU

@@ -921,6 +948,7 @@ static const struct file_operations zero_fops = {
.llseek = zero_lseek,
.write = write_zero,
.read_iter = read_iter_zero,
.read = read_zero,
.write_iter = write_iter_zero,
.mmap = mmap_zero,
.get_unmapped_area = get_unmapped_area_zero,

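The new read_zero() above restores a plain ->read path for /dev/zero alongside ->read_iter. From userspace the difference is invisible; an ordinary read() simply gets serviced by this handler, as in the small sketch below (illustrative userspace program, not part of the patch).

/* Userspace view of the read_zero() path added above (sketch). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/dev/zero", O_RDONLY);

	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf));	/* served by zero_fops .read */
	printf("read %zd zeroed bytes\n", n);
	close(fd);
	return 0;
}
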
@@ -195,10 +195,7 @@ mspec_mmap(struct file *file, struct vm_area_struct *vma,

pages = vma_pages(vma);
vdata_size = sizeof(struct vma_data) + pages * sizeof(long);
if (vdata_size <= PAGE_SIZE)
vdata = kzalloc(vdata_size, GFP_KERNEL);
else
vdata = vzalloc(vdata_size);
vdata = kvzalloc(vdata_size, GFP_KERNEL);
if (!vdata)
return -ENOMEM;

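kvzalloc() folds the size-dependent kzalloc()/vzalloc() choice above into one call; the matching free is kvfree(), which handles either backing allocator. A small hedged sketch of the pairing, with illustrative names:

/* Sketch of the kvzalloc()/kvfree() pairing behind the change above. */
#include <linux/mm.h>
#include <linux/slab.h>

static void *demo_alloc_table(size_t nr_entries)
{
	/* small requests come from the slab, large ones fall back to vmalloc */
	return kvzalloc(nr_entries * sizeof(long), GFP_KERNEL);
}

static void demo_free_table(void *table)
{
	kvfree(table);	/* works for both allocation paths */
}
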
@@ -491,18 +491,7 @@ static struct platform_driver axp288_extcon_driver = {
.pm = &axp288_extcon_pm_ops,
},
};

static int __init axp288_extcon_init(void)
{
return platform_driver_register(&axp288_extcon_driver);
}
module_init(axp288_extcon_init);

static void __exit axp288_extcon_exit(void)
{
platform_driver_unregister(&axp288_extcon_driver);
}
module_exit(axp288_extcon_exit);
module_platform_driver(axp288_extcon_driver);

MODULE_AUTHOR("Ramakrishna Pallala <ramakrishna.pallala@intel.com>");
MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>");

@@ -713,7 +713,7 @@ static int max14577_muic_probe(struct platform_device *pdev)
max14577_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
return -ENOMEM;
return PTR_ERR(info->edev);
}

ret = devm_extcon_dev_register(&pdev->dev, info->edev);

@@ -1157,7 +1157,7 @@ static int max77693_muic_probe(struct platform_device *pdev)
max77693_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
return -ENOMEM;
return PTR_ERR(info->edev);
}

ret = devm_extcon_dev_register(&pdev->dev, info->edev);

@@ -845,7 +845,7 @@ static int max77843_muic_probe(struct platform_device *pdev)
max77843_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "Failed to allocate memory for extcon\n");
ret = -ENODEV;
ret = PTR_ERR(info->edev);
goto err_muic_irq;
}

@@ -674,7 +674,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
info->edev = devm_extcon_dev_allocate(&pdev->dev, max8997_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "failed to allocate memory for extcon\n");
ret = -ENOMEM;
ret = PTR_ERR(info->edev);
goto err_irq;
}

@@ -2,7 +2,7 @@
/*
* Palmas USB transceiver driver
*
* Copyright (C) 2013 Texas Instruments Incorporated - http://www.ti.com
* Copyright (C) 2013 Texas Instruments Incorporated - https://www.ti.com
* Author: Graeme Gregory <gg@slimlogic.co.uk>
* Author: Kishon Vijay Abraham I <kishon@ti.com>
* Based on twl6030_usb.c

@@ -205,21 +205,15 @@ static int palmas_usb_probe(struct platform_device *pdev)

palmas_usb->id_gpiod = devm_gpiod_get_optional(&pdev->dev, "id",
GPIOD_IN);
if (PTR_ERR(palmas_usb->id_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->id_gpiod)) {
dev_err(&pdev->dev, "failed to get id gpio\n");
return PTR_ERR(palmas_usb->id_gpiod);
}
if (IS_ERR(palmas_usb->id_gpiod))
return dev_err_probe(&pdev->dev, PTR_ERR(palmas_usb->id_gpiod),
"failed to get id gpio\n");

palmas_usb->vbus_gpiod = devm_gpiod_get_optional(&pdev->dev, "vbus",
GPIOD_IN);
if (PTR_ERR(palmas_usb->vbus_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->vbus_gpiod)) {
dev_err(&pdev->dev, "failed to get vbus gpio\n");
return PTR_ERR(palmas_usb->vbus_gpiod);
}
if (IS_ERR(palmas_usb->vbus_gpiod))
return dev_err_probe(&pdev->dev, PTR_ERR(palmas_usb->vbus_gpiod),
"failed to get id gpio\n");

if (palmas_usb->enable_id_detection && palmas_usb->id_gpiod) {
palmas_usb->enable_id_detection = false;

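The conversion above relies on dev_err_probe() both logging the failure (silently recording -EPROBE_DEFER instead of printing it) and returning the error code, which is what lets the explicit defer branch disappear. A minimal hedged sketch of the pattern, with an illustrative GPIO name:

/* Sketch of the dev_err_probe() pattern used in the conversion above. */
#include <linux/gpio/consumer.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	struct gpio_desc *gpiod;

	gpiod = devm_gpiod_get_optional(&pdev->dev, "demo", GPIOD_IN);
	if (IS_ERR(gpiod))
		/* logs (except for -EPROBE_DEFER) and returns the error */
		return dev_err_probe(&pdev->dev, PTR_ERR(gpiod),
				     "failed to get demo gpio\n");

	return 0;
}
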
@ -5,7 +5,9 @@
|
|||
// Based on extcon-sm5502.c driver
|
||||
// Copyright (c) 2018-2019 by Vijai Kumar K
|
||||
// Author: Vijai Kumar K <vijaikumar.kanagarajan@gmail.com>
|
||||
// Copyright (c) 2020 Krzysztof Kozlowski <krzk@kernel.org>
|
||||
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/i2c.h>
|
||||
#include <linux/interrupt.h>
|
||||
|
@ -17,46 +19,28 @@
|
|||
#include <linux/gpio/consumer.h>
|
||||
|
||||
/* PTN5150 registers */
|
||||
enum ptn5150_reg {
|
||||
PTN5150_REG_DEVICE_ID = 0x01,
|
||||
PTN5150_REG_CONTROL,
|
||||
PTN5150_REG_INT_STATUS,
|
||||
PTN5150_REG_CC_STATUS,
|
||||
PTN5150_REG_CON_DET = 0x09,
|
||||
PTN5150_REG_VCONN_STATUS,
|
||||
PTN5150_REG_RESET,
|
||||
PTN5150_REG_INT_MASK = 0x18,
|
||||
PTN5150_REG_INT_REG_STATUS,
|
||||
PTN5150_REG_END,
|
||||
};
|
||||
#define PTN5150_REG_DEVICE_ID 0x01
|
||||
#define PTN5150_REG_CONTROL 0x02
|
||||
#define PTN5150_REG_INT_STATUS 0x03
|
||||
#define PTN5150_REG_CC_STATUS 0x04
|
||||
#define PTN5150_REG_CON_DET 0x09
|
||||
#define PTN5150_REG_VCONN_STATUS 0x0a
|
||||
#define PTN5150_REG_RESET 0x0b
|
||||
#define PTN5150_REG_INT_MASK 0x18
|
||||
#define PTN5150_REG_INT_REG_STATUS 0x19
|
||||
#define PTN5150_REG_END PTN5150_REG_INT_REG_STATUS
|
||||
|
||||
#define PTN5150_DFP_ATTACHED 0x1
|
||||
#define PTN5150_UFP_ATTACHED 0x2
|
||||
|
||||
/* Define PTN5150 MASK/SHIFT constant */
|
||||
#define PTN5150_REG_DEVICE_ID_VENDOR_SHIFT 0
|
||||
#define PTN5150_REG_DEVICE_ID_VENDOR_MASK \
|
||||
(0x3 << PTN5150_REG_DEVICE_ID_VENDOR_SHIFT)
|
||||
#define PTN5150_REG_DEVICE_ID_VERSION GENMASK(7, 3)
|
||||
#define PTN5150_REG_DEVICE_ID_VENDOR GENMASK(2, 0)
|
||||
|
||||
#define PTN5150_REG_DEVICE_ID_VERSION_SHIFT 3
|
||||
#define PTN5150_REG_DEVICE_ID_VERSION_MASK \
|
||||
(0x1f << PTN5150_REG_DEVICE_ID_VERSION_SHIFT)
|
||||
|
||||
#define PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT 2
|
||||
#define PTN5150_REG_CC_PORT_ATTACHMENT_MASK \
|
||||
(0x7 << PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT)
|
||||
|
||||
#define PTN5150_REG_CC_VBUS_DETECTION_SHIFT 7
|
||||
#define PTN5150_REG_CC_VBUS_DETECTION_MASK \
|
||||
(0x1 << PTN5150_REG_CC_VBUS_DETECTION_SHIFT)
|
||||
|
||||
#define PTN5150_REG_INT_CABLE_ATTACH_SHIFT 0
|
||||
#define PTN5150_REG_INT_CABLE_ATTACH_MASK \
|
||||
(0x1 << PTN5150_REG_INT_CABLE_ATTACH_SHIFT)
|
||||
|
||||
#define PTN5150_REG_INT_CABLE_DETACH_SHIFT 1
|
||||
#define PTN5150_REG_INT_CABLE_DETACH_MASK \
|
||||
(0x1 << PTN5150_REG_CC_CABLE_DETACH_SHIFT)
|
||||
#define PTN5150_REG_CC_PORT_ATTACHMENT GENMASK(4, 2)
|
||||
#define PTN5150_REG_CC_VBUS_DETECTION BIT(7)
|
||||
#define PTN5150_REG_INT_CABLE_ATTACH_MASK BIT(0)
|
||||
#define PTN5150_REG_INT_CABLE_DETACH_MASK BIT(1)
|
||||
|
||||
struct ptn5150_info {
|
||||
struct device *dev;
|
||||
|
@ -83,12 +67,45 @@ static const struct regmap_config ptn5150_regmap_config = {
|
|||
.max_register = PTN5150_REG_END,
|
||||
};
|
||||
|
||||
static void ptn5150_check_state(struct ptn5150_info *info)
|
||||
{
|
||||
unsigned int port_status, reg_data, vbus;
|
||||
int ret;
|
||||
|
||||
ret = regmap_read(info->regmap, PTN5150_REG_CC_STATUS, ®_data);
|
||||
if (ret) {
|
||||
dev_err(info->dev, "failed to read CC STATUS %d\n", ret);
|
||||
return;
|
||||
}
|
||||
|
||||
port_status = FIELD_GET(PTN5150_REG_CC_PORT_ATTACHMENT, reg_data);
|
||||
|
||||
switch (port_status) {
|
||||
case PTN5150_DFP_ATTACHED:
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB_HOST, false);
|
||||
gpiod_set_value_cansleep(info->vbus_gpiod, 0);
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB, true);
|
||||
break;
|
||||
case PTN5150_UFP_ATTACHED:
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB, false);
|
||||
vbus = FIELD_GET(PTN5150_REG_CC_VBUS_DETECTION, reg_data);
|
||||
if (vbus)
|
||||
gpiod_set_value_cansleep(info->vbus_gpiod, 0);
|
||||
else
|
||||
gpiod_set_value_cansleep(info->vbus_gpiod, 1);
|
||||
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB_HOST, true);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
static void ptn5150_irq_work(struct work_struct *work)
|
||||
{
|
||||
struct ptn5150_info *info = container_of(work,
|
||||
struct ptn5150_info, irq_work);
|
||||
int ret = 0;
|
||||
unsigned int reg_data;
|
||||
unsigned int int_status;
|
||||
|
||||
if (!info->edev)
|
||||
|
@ -96,13 +113,6 @@ static void ptn5150_irq_work(struct work_struct *work)
|
|||
|
||||
mutex_lock(&info->mutex);
|
||||
|
||||
ret = regmap_read(info->regmap, PTN5150_REG_CC_STATUS, ®_data);
|
||||
if (ret) {
|
||||
dev_err(info->dev, "failed to read CC STATUS %d\n", ret);
|
||||
mutex_unlock(&info->mutex);
|
||||
return;
|
||||
}
|
||||
|
||||
/* Clear interrupt. Read would clear the register */
|
||||
ret = regmap_read(info->regmap, PTN5150_REG_INT_STATUS, &int_status);
|
||||
if (ret) {
|
||||
|
@ -116,47 +126,13 @@ static void ptn5150_irq_work(struct work_struct *work)
|
|||
|
||||
cable_attach = int_status & PTN5150_REG_INT_CABLE_ATTACH_MASK;
|
||||
if (cable_attach) {
|
||||
unsigned int port_status;
|
||||
unsigned int vbus;
|
||||
|
||||
port_status = ((reg_data &
|
||||
PTN5150_REG_CC_PORT_ATTACHMENT_MASK) >>
|
||||
PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT);
|
||||
|
||||
switch (port_status) {
|
||||
case PTN5150_DFP_ATTACHED:
|
||||
extcon_set_state_sync(info->edev,
|
||||
EXTCON_USB_HOST, false);
|
||||
gpiod_set_value(info->vbus_gpiod, 0);
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB,
|
||||
true);
|
||||
break;
|
||||
case PTN5150_UFP_ATTACHED:
|
||||
extcon_set_state_sync(info->edev, EXTCON_USB,
|
||||
false);
|
||||
vbus = ((reg_data &
|
||||
PTN5150_REG_CC_VBUS_DETECTION_MASK) >>
|
||||
PTN5150_REG_CC_VBUS_DETECTION_SHIFT);
|
||||
if (vbus)
|
||||
gpiod_set_value(info->vbus_gpiod, 0);
|
||||
else
|
||||
gpiod_set_value(info->vbus_gpiod, 1);
|
||||
|
||||
extcon_set_state_sync(info->edev,
|
||||
EXTCON_USB_HOST, true);
|
||||
break;
|
||||
default:
|
||||
dev_err(info->dev,
|
||||
"Unknown Port status : %x\n",
|
||||
port_status);
|
||||
break;
|
||||
}
|
||||
ptn5150_check_state(info);
|
||||
} else {
|
||||
extcon_set_state_sync(info->edev,
|
||||
EXTCON_USB_HOST, false);
|
||||
extcon_set_state_sync(info->edev,
|
||||
EXTCON_USB, false);
|
||||
gpiod_set_value(info->vbus_gpiod, 0);
|
||||
gpiod_set_value_cansleep(info->vbus_gpiod, 0);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -194,13 +170,10 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info)
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
vendor_id = ((reg_data & PTN5150_REG_DEVICE_ID_VENDOR_MASK) >>
|
||||
PTN5150_REG_DEVICE_ID_VENDOR_SHIFT);
|
||||
version_id = ((reg_data & PTN5150_REG_DEVICE_ID_VERSION_MASK) >>
|
||||
PTN5150_REG_DEVICE_ID_VERSION_SHIFT);
|
||||
|
||||
dev_info(info->dev, "Device type: version: 0x%x, vendor: 0x%x\n",
|
||||
version_id, vendor_id);
|
||||
vendor_id = FIELD_GET(PTN5150_REG_DEVICE_ID_VENDOR, reg_data);
|
||||
version_id = FIELD_GET(PTN5150_REG_DEVICE_ID_VERSION, reg_data);
|
||||
dev_dbg(info->dev, "Device type: version: 0x%x, vendor: 0x%x\n",
|
||||
version_id, vendor_id);
|
||||
|
||||
/* Clear any existing interrupts */
|
||||
ret = regmap_read(info->regmap, PTN5150_REG_INT_STATUS, ®_data);
|
||||
|
@ -221,8 +194,7 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int ptn5150_i2c_probe(struct i2c_client *i2c,
|
||||
const struct i2c_device_id *id)
|
||||
static int ptn5150_i2c_probe(struct i2c_client *i2c)
|
||||
{
|
||||
struct device *dev = &i2c->dev;
|
||||
struct device_node *np = i2c->dev.of_node;
|
||||
|
@ -239,20 +211,15 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c,
|
|||
|
||||
info->dev = &i2c->dev;
|
||||
info->i2c = i2c;
|
||||
info->int_gpiod = devm_gpiod_get(&i2c->dev, "int", GPIOD_IN);
|
||||
if (IS_ERR(info->int_gpiod)) {
|
||||
dev_err(dev, "failed to get INT GPIO\n");
|
||||
return PTR_ERR(info->int_gpiod);
|
||||
}
|
||||
info->vbus_gpiod = devm_gpiod_get(&i2c->dev, "vbus", GPIOD_IN);
|
||||
info->vbus_gpiod = devm_gpiod_get(&i2c->dev, "vbus", GPIOD_OUT_LOW);
|
||||
if (IS_ERR(info->vbus_gpiod)) {
|
||||
dev_err(dev, "failed to get VBUS GPIO\n");
|
||||
return PTR_ERR(info->vbus_gpiod);
|
||||
}
|
||||
ret = gpiod_direction_output(info->vbus_gpiod, 0);
|
||||
if (ret) {
|
||||
dev_err(dev, "failed to set VBUS GPIO direction\n");
|
||||
return -EINVAL;
|
||||
ret = PTR_ERR(info->vbus_gpiod);
|
||||
if (ret == -ENOENT) {
|
||||
dev_info(dev, "No VBUS GPIO, ignoring VBUS control\n");
|
||||
info->vbus_gpiod = NULL;
|
||||
} else {
|
||||
return dev_err_probe(dev, ret, "failed to get VBUS GPIO\n");
|
||||
}
|
||||
}
|
||||
|
||||
mutex_init(&info->mutex);
|
||||
|
@ -261,28 +228,34 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c,
|
|||
|
||||
info->regmap = devm_regmap_init_i2c(i2c, &ptn5150_regmap_config);
|
||||
if (IS_ERR(info->regmap)) {
|
||||
ret = PTR_ERR(info->regmap);
|
||||
dev_err(info->dev, "failed to allocate register map: %d\n",
|
||||
ret);
|
||||
return ret;
|
||||
return dev_err_probe(info->dev, PTR_ERR(info->regmap),
|
||||
"failed to allocate register map\n");
|
||||
}
|
||||
|
||||
if (info->int_gpiod) {
|
||||
if (i2c->irq > 0) {
|
||||
info->irq = i2c->irq;
|
||||
} else {
|
||||
info->int_gpiod = devm_gpiod_get(&i2c->dev, "int", GPIOD_IN);
|
||||
if (IS_ERR(info->int_gpiod)) {
|
||||
return dev_err_probe(dev, PTR_ERR(info->int_gpiod),
|
||||
"failed to get INT GPIO\n");
|
||||
}
|
||||
|
||||
info->irq = gpiod_to_irq(info->int_gpiod);
|
||||
if (info->irq < 0) {
|
||||
dev_err(dev, "failed to get INTB IRQ\n");
|
||||
return info->irq;
|
||||
}
|
||||
}
|
||||
|
||||
ret = devm_request_threaded_irq(dev, info->irq, NULL,
|
||||
ptn5150_irq_handler,
|
||||
IRQF_TRIGGER_FALLING |
|
||||
IRQF_ONESHOT,
|
||||
i2c->name, info);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to request handler for INTB IRQ\n");
|
||||
return ret;
|
||||
}
|
||||
ret = devm_request_threaded_irq(dev, info->irq, NULL,
|
||||
ptn5150_irq_handler,
|
||||
IRQF_TRIGGER_FALLING |
|
||||
IRQF_ONESHOT,
|
||||
i2c->name, info);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to request handler for INTB IRQ\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* Allocate extcon device */
|
||||
|
@ -299,11 +272,26 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c,
|
|||
return ret;
|
||||
}
|
||||
|
||||
extcon_set_property_capability(info->edev, EXTCON_USB,
|
||||
EXTCON_PROP_USB_VBUS);
|
||||
extcon_set_property_capability(info->edev, EXTCON_USB_HOST,
|
||||
EXTCON_PROP_USB_VBUS);
|
||||
extcon_set_property_capability(info->edev, EXTCON_USB_HOST,
|
||||
EXTCON_PROP_USB_TYPEC_POLARITY);
|
||||
|
||||
/* Initialize PTN5150 device and print vendor id and version id */
|
||||
ret = ptn5150_init_dev_type(info);
|
||||
if (ret)
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* Update current extcon state if for example OTG connection was there
|
||||
* before the probe
|
||||
*/
|
||||
mutex_lock(&info->mutex);
|
||||
ptn5150_check_state(info);
|
||||
mutex_unlock(&info->mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -324,16 +312,12 @@ static struct i2c_driver ptn5150_i2c_driver = {
|
|||
.name = "ptn5150",
|
||||
.of_match_table = ptn5150_dt_match,
|
||||
},
|
||||
.probe = ptn5150_i2c_probe,
|
||||
.probe_new = ptn5150_i2c_probe,
|
||||
.id_table = ptn5150_i2c_id,
|
||||
};
|
||||
|
||||
static int __init ptn5150_i2c_init(void)
|
||||
{
|
||||
return i2c_add_driver(&ptn5150_i2c_driver);
|
||||
}
|
||||
subsys_initcall(ptn5150_i2c_init);
|
||||
module_i2c_driver(ptn5150_i2c_driver);
|
||||
|
||||
MODULE_DESCRIPTION("NXP PTN5150 CC logic Extcon driver");
|
||||
MODULE_AUTHOR("Vijai Kumar K <vijaikumar.kanagarajan@gmail.com>");
|
||||
MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@@ -2,7 +2,7 @@
/**
* drivers/extcon/extcon-usb-gpio.c - USB GPIO extcon driver
*
* Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
* Copyright (C) 2015 Texas Instruments Incorporated - https://www.ti.com
* Author: Roger Quadros <rogerq@ti.com>
*/

@@ -148,7 +148,7 @@ struct fme_perf_priv {
struct device *dev;
void __iomem *ioaddr;
struct pmu pmu;
u64 id;
u16 id;

u32 fab_users;
u32 fab_port_id;

@ -31,12 +31,12 @@ struct cci_drvdata {
|
|||
struct dfl_fpga_cdev *cdev; /* container device */
|
||||
};
|
||||
|
||||
static void __iomem *cci_pci_ioremap_bar(struct pci_dev *pcidev, int bar)
|
||||
static void __iomem *cci_pci_ioremap_bar0(struct pci_dev *pcidev)
|
||||
{
|
||||
if (pcim_iomap_regions(pcidev, BIT(bar), DRV_NAME))
|
||||
if (pcim_iomap_regions(pcidev, BIT(0), DRV_NAME))
|
||||
return NULL;
|
||||
|
||||
return pcim_iomap_table(pcidev)[bar];
|
||||
return pcim_iomap_table(pcidev)[0];
|
||||
}
|
||||
|
||||
static int cci_pci_alloc_irq(struct pci_dev *pcidev)
|
||||
|
@ -156,8 +156,8 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
|
|||
goto irq_free_exit;
|
||||
}
|
||||
|
||||
/* start to find Device Feature List from Bar 0 */
|
||||
base = cci_pci_ioremap_bar(pcidev, 0);
|
||||
/* start to find Device Feature List in Bar 0 */
|
||||
base = cci_pci_ioremap_bar0(pcidev);
|
||||
if (!base) {
|
||||
ret = -ENOMEM;
|
||||
goto irq_free_exit;
|
||||
|
@ -172,7 +172,7 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
|
|||
start = pci_resource_start(pcidev, 0);
|
||||
len = pci_resource_len(pcidev, 0);
|
||||
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len, base);
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len);
|
||||
|
||||
/*
|
||||
* find more Device Feature Lists (e.g. Ports) per information
|
||||
|
@ -196,26 +196,24 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
|
|||
*/
|
||||
bar = FIELD_GET(FME_PORT_OFST_BAR_ID, v);
|
||||
offset = FIELD_GET(FME_PORT_OFST_DFH_OFST, v);
|
||||
base = cci_pci_ioremap_bar(pcidev, bar);
|
||||
if (!base)
|
||||
continue;
|
||||
|
||||
start = pci_resource_start(pcidev, bar) + offset;
|
||||
len = pci_resource_len(pcidev, bar) - offset;
|
||||
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len,
|
||||
base + offset);
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len);
|
||||
}
|
||||
} else if (dfl_feature_is_port(base)) {
|
||||
start = pci_resource_start(pcidev, 0);
|
||||
len = pci_resource_len(pcidev, 0);
|
||||
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len, base);
|
||||
dfl_fpga_enum_info_add_dfl(info, start, len);
|
||||
} else {
|
||||
ret = -ENODEV;
|
||||
goto irq_free_exit;
|
||||
}
|
||||
|
||||
/* release I/O mappings for next step enumeration */
|
||||
pcim_iounmap_regions(pcidev, BIT(0));
|
||||
|
||||
/* start enumeration with prepared enumeration information */
|
||||
cdev = dfl_fpga_feature_devs_enumerate(info);
|
||||
if (IS_ERR(cdev)) {
|
||||
|
|
|
@ -30,12 +30,6 @@ static DEFINE_MUTEX(dfl_id_mutex);
|
|||
* index to dfl_chardevs table. If no chardev support just set devt_type
|
||||
* as one invalid index (DFL_FPGA_DEVT_MAX).
|
||||
*/
|
||||
enum dfl_id_type {
|
||||
FME_ID, /* fme id allocation and mapping */
|
||||
PORT_ID, /* port id allocation and mapping */
|
||||
DFL_ID_MAX,
|
||||
};
|
||||
|
||||
enum dfl_fpga_devt_type {
|
||||
DFL_FPGA_DEVT_FME,
|
||||
DFL_FPGA_DEVT_PORT,
|
||||
|
@ -58,7 +52,7 @@ static const char *dfl_pdata_key_strings[DFL_ID_MAX] = {
|
|||
*/
|
||||
struct dfl_dev_info {
|
||||
const char *name;
|
||||
u32 dfh_id;
|
||||
u16 dfh_id;
|
||||
struct idr id;
|
||||
enum dfl_fpga_devt_type devt_type;
|
||||
};
|
||||
|
@ -134,7 +128,7 @@ static enum dfl_id_type feature_dev_id_type(struct platform_device *pdev)
|
|||
return DFL_ID_MAX;
|
||||
}
|
||||
|
||||
static enum dfl_id_type dfh_id_to_type(u32 id)
|
||||
static enum dfl_id_type dfh_id_to_type(u16 id)
|
||||
{
|
||||
int i;
|
||||
|
||||
|
@ -250,6 +244,249 @@ int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(dfl_fpga_check_port_id);
|
||||
|
||||
static DEFINE_IDA(dfl_device_ida);
|
||||
|
||||
static const struct dfl_device_id *
|
||||
dfl_match_one_device(const struct dfl_device_id *id, struct dfl_device *ddev)
|
||||
{
|
||||
if (id->type == ddev->type && id->feature_id == ddev->feature_id)
|
||||
return id;
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int dfl_bus_match(struct device *dev, struct device_driver *drv)
|
||||
{
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
struct dfl_driver *ddrv = to_dfl_drv(drv);
|
||||
const struct dfl_device_id *id_entry;
|
||||
|
||||
id_entry = ddrv->id_table;
|
||||
if (id_entry) {
|
||||
while (id_entry->feature_id) {
|
||||
if (dfl_match_one_device(id_entry, ddev)) {
|
||||
ddev->id_entry = id_entry;
|
||||
return 1;
|
||||
}
|
||||
id_entry++;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dfl_bus_probe(struct device *dev)
|
||||
{
|
||||
struct dfl_driver *ddrv = to_dfl_drv(dev->driver);
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
return ddrv->probe(ddev);
|
||||
}
|
||||
|
||||
static int dfl_bus_remove(struct device *dev)
|
||||
{
|
||||
struct dfl_driver *ddrv = to_dfl_drv(dev->driver);
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
if (ddrv->remove)
|
||||
ddrv->remove(ddev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dfl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
|
||||
{
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
/* The type has 4 valid bits and feature_id has 12 valid bits */
|
||||
return add_uevent_var(env, "MODALIAS=dfl:t%01Xf%03X",
|
||||
ddev->type, ddev->feature_id);
|
||||
}
|
||||
|
||||
static ssize_t
|
||||
type_show(struct device *dev, struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
return sprintf(buf, "0x%x\n", ddev->type);
|
||||
}
|
||||
static DEVICE_ATTR_RO(type);
|
||||
|
||||
static ssize_t
|
||||
feature_id_show(struct device *dev, struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
return sprintf(buf, "0x%x\n", ddev->feature_id);
|
||||
}
|
||||
static DEVICE_ATTR_RO(feature_id);
|
||||
|
||||
static struct attribute *dfl_dev_attrs[] = {
|
||||
&dev_attr_type.attr,
|
||||
&dev_attr_feature_id.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(dfl_dev);
|
||||
|
||||
static struct bus_type dfl_bus_type = {
|
||||
.name = "dfl",
|
||||
.match = dfl_bus_match,
|
||||
.probe = dfl_bus_probe,
|
||||
.remove = dfl_bus_remove,
|
||||
.uevent = dfl_bus_uevent,
|
||||
.dev_groups = dfl_dev_groups,
|
||||
};
|
||||
|
||||
static void release_dfl_dev(struct device *dev)
|
||||
{
|
||||
struct dfl_device *ddev = to_dfl_dev(dev);
|
||||
|
||||
if (ddev->mmio_res.parent)
|
||||
release_resource(&ddev->mmio_res);
|
||||
|
||||
ida_simple_remove(&dfl_device_ida, ddev->id);
|
||||
kfree(ddev->irqs);
|
||||
kfree(ddev);
|
||||
}
|
||||
|
||||
static struct dfl_device *
|
||||
dfl_dev_add(struct dfl_feature_platform_data *pdata,
|
||||
struct dfl_feature *feature)
|
||||
{
|
||||
struct platform_device *pdev = pdata->dev;
|
||||
struct resource *parent_res;
|
||||
struct dfl_device *ddev;
|
||||
int id, i, ret;
|
||||
|
||||
ddev = kzalloc(sizeof(*ddev), GFP_KERNEL);
|
||||
if (!ddev)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
id = ida_simple_get(&dfl_device_ida, 0, 0, GFP_KERNEL);
|
||||
if (id < 0) {
|
||||
dev_err(&pdev->dev, "unable to get id\n");
|
||||
kfree(ddev);
|
||||
return ERR_PTR(id);
|
||||
}
|
||||
|
||||
/* freeing resources by put_device() after device_initialize() */
|
||||
device_initialize(&ddev->dev);
|
||||
ddev->dev.parent = &pdev->dev;
|
||||
ddev->dev.bus = &dfl_bus_type;
|
||||
ddev->dev.release = release_dfl_dev;
|
||||
ddev->id = id;
|
||||
ret = dev_set_name(&ddev->dev, "dfl_dev.%d", id);
|
||||
if (ret)
|
||||
goto put_dev;
|
||||
|
||||
ddev->type = feature_dev_id_type(pdev);
|
||||
ddev->feature_id = feature->id;
|
||||
ddev->cdev = pdata->dfl_cdev;
|
||||
|
||||
/* add mmio resource */
|
||||
parent_res = &pdev->resource[feature->resource_index];
|
||||
ddev->mmio_res.flags = IORESOURCE_MEM;
|
||||
ddev->mmio_res.start = parent_res->start;
|
||||
ddev->mmio_res.end = parent_res->end;
|
||||
ddev->mmio_res.name = dev_name(&ddev->dev);
|
||||
ret = insert_resource(parent_res, &ddev->mmio_res);
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "%s failed to claim resource: %pR\n",
|
||||
dev_name(&ddev->dev), &ddev->mmio_res);
|
||||
goto put_dev;
|
||||
}
|
||||
|
||||
/* then add irq resource */
|
||||
if (feature->nr_irqs) {
|
||||
ddev->irqs = kcalloc(feature->nr_irqs,
|
||||
sizeof(*ddev->irqs), GFP_KERNEL);
|
||||
if (!ddev->irqs) {
|
||||
ret = -ENOMEM;
|
||||
goto put_dev;
|
||||
}
|
||||
|
||||
for (i = 0; i < feature->nr_irqs; i++)
|
||||
ddev->irqs[i] = feature->irq_ctx[i].irq;
|
||||
|
||||
ddev->num_irqs = feature->nr_irqs;
|
||||
}
|
||||
|
||||
ret = device_add(&ddev->dev);
|
||||
if (ret)
|
||||
goto put_dev;
|
||||
|
||||
dev_dbg(&pdev->dev, "add dfl_dev: %s\n", dev_name(&ddev->dev));
|
||||
return ddev;
|
||||
|
||||
put_dev:
|
||||
/* calls release_dfl_dev() which does the clean up */
|
||||
put_device(&ddev->dev);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
static void dfl_devs_remove(struct dfl_feature_platform_data *pdata)
|
||||
{
|
||||
struct dfl_feature *feature;
|
||||
|
||||
dfl_fpga_dev_for_each_feature(pdata, feature) {
|
||||
if (feature->ddev) {
|
||||
device_unregister(&feature->ddev->dev);
|
||||
feature->ddev = NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static int dfl_devs_add(struct dfl_feature_platform_data *pdata)
|
||||
{
|
||||
struct dfl_feature *feature;
|
||||
struct dfl_device *ddev;
|
||||
int ret;
|
||||
|
||||
dfl_fpga_dev_for_each_feature(pdata, feature) {
|
||||
if (feature->ioaddr)
|
||||
continue;
|
||||
|
||||
if (feature->ddev) {
|
||||
ret = -EEXIST;
|
||||
goto err;
|
||||
}
|
||||
|
||||
ddev = dfl_dev_add(pdata, feature);
|
||||
if (IS_ERR(ddev)) {
|
||||
ret = PTR_ERR(ddev);
|
||||
goto err;
|
||||
}
|
||||
|
||||
feature->ddev = ddev;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err:
|
||||
dfl_devs_remove(pdata);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int __dfl_driver_register(struct dfl_driver *dfl_drv, struct module *owner)
|
||||
{
|
||||
if (!dfl_drv || !dfl_drv->probe || !dfl_drv->id_table)
|
||||
return -EINVAL;
|
||||
|
||||
dfl_drv->drv.owner = owner;
|
||||
dfl_drv->drv.bus = &dfl_bus_type;
|
||||
|
||||
return driver_register(&dfl_drv->drv);
|
||||
}
|
||||
EXPORT_SYMBOL(__dfl_driver_register);
|
||||
|
||||
void dfl_driver_unregister(struct dfl_driver *dfl_drv)
|
||||
{
|
||||
driver_unregister(&dfl_drv->drv);
|
||||
}
|
||||
EXPORT_SYMBOL(dfl_driver_unregister);
|
||||
|
||||
#define is_header_feature(feature) ((feature)->id == FEATURE_ID_FIU_HEADER)
|
||||
|
||||
/**
|
||||
* dfl_fpga_dev_feature_uinit - uinit for sub features of dfl feature device
|
||||
* @pdev: feature device.
|
||||
|
@ -259,12 +496,15 @@ void dfl_fpga_dev_feature_uinit(struct platform_device *pdev)
|
|||
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
|
||||
struct dfl_feature *feature;
|
||||
|
||||
dfl_fpga_dev_for_each_feature(pdata, feature)
|
||||
dfl_devs_remove(pdata);
|
||||
|
||||
dfl_fpga_dev_for_each_feature(pdata, feature) {
|
||||
if (feature->ops) {
|
||||
if (feature->ops->uinit)
|
||||
feature->ops->uinit(pdev, feature);
|
||||
feature->ops = NULL;
|
||||
}
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dfl_fpga_dev_feature_uinit);
|
||||
|
||||
|
@ -273,8 +513,22 @@ static int dfl_feature_instance_init(struct platform_device *pdev,
|
|||
struct dfl_feature *feature,
|
||||
struct dfl_feature_driver *drv)
|
||||
{
|
||||
void __iomem *base;
|
||||
int ret = 0;
|
||||
|
||||
if (!is_header_feature(feature)) {
|
||||
base = devm_platform_ioremap_resource(pdev,
|
||||
feature->resource_index);
|
||||
if (IS_ERR(base)) {
|
||||
dev_err(&pdev->dev,
|
||||
"ioremap failed for feature 0x%x!\n",
|
||||
feature->id);
|
||||
return PTR_ERR(base);
|
||||
}
|
||||
|
||||
feature->ioaddr = base;
|
||||
}
|
||||
|
||||
if (drv->ops->init) {
|
||||
ret = drv->ops->init(pdev, feature);
|
||||
if (ret)
|
||||
|
@ -331,6 +585,10 @@ int dfl_fpga_dev_feature_init(struct platform_device *pdev,
|
|||
drv++;
|
||||
}
|
||||
|
||||
ret = dfl_devs_add(pdata);
|
||||
if (ret)
|
||||
goto exit;
|
||||
|
||||
return 0;
|
||||
exit:
|
||||
dfl_fpga_dev_feature_uinit(pdev);
|
||||
|
@ -427,7 +685,9 @@ EXPORT_SYMBOL_GPL(dfl_fpga_dev_ops_unregister);
|
|||
* @irq_table: Linux IRQ numbers for all irqs, indexed by local irq index of
|
||||
* this device.
|
||||
* @feature_dev: current feature device.
|
||||
* @ioaddr: header register region address of feature device in enumeration.
|
||||
* @ioaddr: header register region address of current FIU in enumeration.
|
||||
* @start: register resource start of current FIU.
|
||||
* @len: max register resource length of current FIU.
|
||||
* @sub_features: a sub features linked list for feature device in enumeration.
|
||||
* @feature_num: number of sub features for feature device in enumeration.
|
||||
*/
|
||||
|
@ -439,6 +699,8 @@ struct build_feature_devs_info {
|
|||
|
||||
struct platform_device *feature_dev;
|
||||
void __iomem *ioaddr;
|
||||
resource_size_t start;
|
||||
resource_size_t len;
|
||||
struct list_head sub_features;
|
||||
int feature_num;
|
||||
};
|
||||
|
@ -454,7 +716,7 @@ struct build_feature_devs_info {
|
|||
* @nr_irqs: number of irqs of this sub feature.
|
||||
*/
|
||||
struct dfl_feature_info {
|
||||
u64 fid;
|
||||
u16 fid;
|
||||
struct resource mmio_res;
|
||||
void __iomem *ioaddr;
|
||||
struct list_head node;
|
||||
|
@ -484,10 +746,7 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
|
|||
struct dfl_feature_platform_data *pdata;
|
||||
struct dfl_feature_info *finfo, *p;
|
||||
enum dfl_id_type type;
|
||||
int ret, index = 0;
|
||||
|
||||
if (!fdev)
|
||||
return 0;
|
||||
int ret, index = 0, res_idx = 0;
|
||||
|
||||
type = feature_dev_id_type(fdev);
|
||||
if (WARN_ON_ONCE(type >= DFL_ID_MAX))
|
||||
|
@ -530,16 +789,32 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
|
|||
|
||||
/* fill features and resource information for feature dev */
|
||||
list_for_each_entry_safe(finfo, p, &binfo->sub_features, node) {
|
||||
struct dfl_feature *feature = &pdata->features[index];
|
||||
struct dfl_feature *feature = &pdata->features[index++];
|
||||
struct dfl_feature_irq_ctx *ctx;
|
||||
unsigned int i;
|
||||
|
||||
/* save resource information for each feature */
|
||||
feature->dev = fdev;
|
||||
feature->id = finfo->fid;
|
||||
feature->resource_index = index;
|
||||
feature->ioaddr = finfo->ioaddr;
|
||||
fdev->resource[index++] = finfo->mmio_res;
|
||||
|
||||
/*
|
||||
* the FIU header feature has some fundamental functions (sriov
|
||||
* set, port enable/disable) needed for the dfl bus device and
|
||||
* other sub features. So its mmio resource should be mapped by
|
||||
* DFL bus device. And we should not assign it to feature
|
||||
* devices (dfl-fme/afu) again.
|
||||
*/
|
||||
if (is_header_feature(feature)) {
|
||||
feature->resource_index = -1;
|
||||
feature->ioaddr =
|
||||
devm_ioremap_resource(binfo->dev,
|
||||
&finfo->mmio_res);
|
||||
if (IS_ERR(feature->ioaddr))
|
||||
return PTR_ERR(feature->ioaddr);
|
||||
} else {
|
||||
feature->resource_index = res_idx;
|
||||
fdev->resource[res_idx++] = finfo->mmio_res;
|
||||
}
|
||||
|
||||
if (finfo->nr_irqs) {
|
||||
ctx = devm_kcalloc(binfo->dev, finfo->nr_irqs,
|
||||
|
@ -582,19 +857,13 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
|
|||
|
||||
static int
|
||||
build_info_create_dev(struct build_feature_devs_info *binfo,
|
||||
enum dfl_id_type type, void __iomem *ioaddr)
|
||||
enum dfl_id_type type)
|
||||
{
|
||||
struct platform_device *fdev;
|
||||
int ret;
|
||||
|
||||
if (type >= DFL_ID_MAX)
|
||||
return -EINVAL;
|
||||
|
||||
/* we will create a new device, commit current device first */
|
||||
ret = build_info_commit_dev(binfo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* we use -ENODEV as the initialization indicator which indicates
|
||||
* whether the id need to be reclaimed
|
||||
|
@ -605,7 +874,7 @@ build_info_create_dev(struct build_feature_devs_info *binfo,
|
|||
|
||||
binfo->feature_dev = fdev;
|
||||
binfo->feature_num = 0;
|
||||
binfo->ioaddr = ioaddr;
|
||||
|
||||
INIT_LIST_HEAD(&binfo->sub_features);
|
||||
|
||||
fdev->id = dfl_id_alloc(type, &fdev->dev);
|
||||
|
@ -649,7 +918,7 @@ static inline u32 feature_size(void __iomem *start)
|
|||
return ofst ? ofst : 4096;
|
||||
}
|
||||
|
||||
static u64 feature_id(void __iomem *start)
|
||||
static u16 feature_id(void __iomem *start)
|
||||
{
|
||||
u64 v = readq(start + DFH);
|
||||
u16 id = FIELD_GET(DFH_ID, v);
|
||||
|
@ -667,7 +936,7 @@ static u64 feature_id(void __iomem *start)
|
|||
}
|
||||
|
||||
static int parse_feature_irqs(struct build_feature_devs_info *binfo,
|
||||
resource_size_t ofst, u64 fid,
|
||||
resource_size_t ofst, u16 fid,
|
||||
unsigned int *irq_base, unsigned int *nr_irqs)
|
||||
{
|
||||
void __iomem *base = binfo->ioaddr + ofst;
|
||||
|
@ -713,12 +982,12 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
|
|||
return 0;
|
||||
}
|
||||
|
||||
dev_dbg(binfo->dev, "feature: 0x%llx, irq_base: %u, nr_irqs: %u\n",
|
||||
dev_dbg(binfo->dev, "feature: 0x%x, irq_base: %u, nr_irqs: %u\n",
|
||||
fid, ibase, inr);
|
||||
|
||||
if (ibase + inr > binfo->nr_irqs) {
|
||||
dev_err(binfo->dev,
|
||||
"Invalid interrupt number in feature 0x%llx\n", fid);
|
||||
"Invalid interrupt number in feature 0x%x\n", fid);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
|
@ -726,7 +995,7 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
|
|||
virq = binfo->irq_table[ibase + i];
|
||||
if (virq < 0 || virq > NR_IRQS) {
|
||||
dev_err(binfo->dev,
|
||||
"Invalid irq table entry for feature 0x%llx\n",
|
||||
"Invalid irq table entry for feature 0x%x\n",
|
||||
fid);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -747,18 +1016,17 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
|
|||
*/
|
||||
static int
|
||||
create_feature_instance(struct build_feature_devs_info *binfo,
|
||||
struct dfl_fpga_enum_dfl *dfl, resource_size_t ofst,
|
||||
resource_size_t size, u64 fid)
|
||||
resource_size_t ofst, resource_size_t size, u16 fid)
|
||||
{
|
||||
unsigned int irq_base, nr_irqs;
|
||||
struct dfl_feature_info *finfo;
|
||||
int ret;
|
||||
|
||||
/* read feature size and id if inputs are invalid */
|
||||
size = size ? size : feature_size(dfl->ioaddr + ofst);
|
||||
fid = fid ? fid : feature_id(dfl->ioaddr + ofst);
|
||||
size = size ? size : feature_size(binfo->ioaddr + ofst);
|
||||
fid = fid ? fid : feature_id(binfo->ioaddr + ofst);
|
||||
|
||||
if (dfl->len - ofst < size)
|
||||
if (binfo->len - ofst < size)
|
||||
return -EINVAL;
|
||||
|
||||
ret = parse_feature_irqs(binfo, ofst, fid, &irq_base, &nr_irqs);
|
||||
|
@ -770,12 +1038,11 @@ create_feature_instance(struct build_feature_devs_info *binfo,
|
|||
return -ENOMEM;
|
||||
|
||||
finfo->fid = fid;
|
||||
finfo->mmio_res.start = dfl->start + ofst;
|
||||
finfo->mmio_res.start = binfo->start + ofst;
|
||||
finfo->mmio_res.end = finfo->mmio_res.start + size - 1;
|
||||
finfo->mmio_res.flags = IORESOURCE_MEM;
|
||||
finfo->irq_base = irq_base;
|
||||
finfo->nr_irqs = nr_irqs;
|
||||
finfo->ioaddr = dfl->ioaddr + ofst;
|
||||
|
||||
list_add_tail(&finfo->node, &binfo->sub_features);
|
||||
binfo->feature_num++;
|
||||
|
@@ -784,7 +1051,6 @@ create_feature_instance(struct build_feature_devs_info *binfo,
 }
 
 static int parse_feature_port_afu(struct build_feature_devs_info *binfo,
-				  struct dfl_fpga_enum_dfl *dfl,
 				  resource_size_t ofst)
 {
 	u64 v = readq(binfo->ioaddr + PORT_HDR_CAP);
@@ -792,21 +1058,22 @@ static int parse_feature_port_afu(struct build_feature_devs_info *binfo,
 
 	WARN_ON(!size);
 
-	return create_feature_instance(binfo, dfl, ofst, size, FEATURE_ID_AFU);
+	return create_feature_instance(binfo, ofst, size, FEATURE_ID_AFU);
 }
 
+#define is_feature_dev_detected(binfo) (!!(binfo)->feature_dev)
+
 static int parse_feature_afu(struct build_feature_devs_info *binfo,
-			     struct dfl_fpga_enum_dfl *dfl,
 			     resource_size_t ofst)
 {
-	if (!binfo->feature_dev) {
+	if (!is_feature_dev_detected(binfo)) {
 		dev_err(binfo->dev, "this AFU does not belong to any FIU.\n");
 		return -EINVAL;
 	}
 
 	switch (feature_dev_id_type(binfo->feature_dev)) {
 	case PORT_ID:
-		return parse_feature_port_afu(binfo, dfl, ofst);
+		return parse_feature_port_afu(binfo, ofst);
 	default:
 		dev_info(binfo->dev, "AFU belonging to FIU %s is not supported yet.\n",
 			 binfo->feature_dev->name);
@ -815,35 +1082,79 @@ static int parse_feature_afu(struct build_feature_devs_info *binfo,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int build_info_prepare(struct build_feature_devs_info *binfo,
|
||||
resource_size_t start, resource_size_t len)
|
||||
{
|
||||
struct device *dev = binfo->dev;
|
||||
void __iomem *ioaddr;
|
||||
|
||||
if (!devm_request_mem_region(dev, start, len, dev_name(dev))) {
|
||||
dev_err(dev, "request region fail, start:%pa, len:%pa\n",
|
||||
&start, &len);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
ioaddr = devm_ioremap(dev, start, len);
|
||||
if (!ioaddr) {
|
||||
dev_err(dev, "ioremap region fail, start:%pa, len:%pa\n",
|
||||
&start, &len);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
binfo->start = start;
|
||||
binfo->len = len;
|
||||
binfo->ioaddr = ioaddr;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void build_info_complete(struct build_feature_devs_info *binfo)
|
||||
{
|
||||
devm_iounmap(binfo->dev, binfo->ioaddr);
|
||||
devm_release_mem_region(binfo->dev, binfo->start, binfo->len);
|
||||
}
|
||||
|
||||
static int parse_feature_fiu(struct build_feature_devs_info *binfo,
|
||||
struct dfl_fpga_enum_dfl *dfl,
|
||||
resource_size_t ofst)
|
||||
{
|
||||
u32 id, offset;
|
||||
u64 v;
|
||||
int ret = 0;
|
||||
u32 offset;
|
||||
u16 id;
|
||||
u64 v;
|
||||
|
||||
v = readq(dfl->ioaddr + ofst + DFH);
|
||||
if (is_feature_dev_detected(binfo)) {
|
||||
build_info_complete(binfo);
|
||||
|
||||
ret = build_info_commit_dev(binfo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = build_info_prepare(binfo, binfo->start + ofst,
|
||||
binfo->len - ofst);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
v = readq(binfo->ioaddr + DFH);
|
||||
id = FIELD_GET(DFH_ID, v);
|
||||
|
||||
/* create platform device for dfl feature dev */
|
||||
ret = build_info_create_dev(binfo, dfh_id_to_type(id),
|
||||
dfl->ioaddr + ofst);
|
||||
ret = build_info_create_dev(binfo, dfh_id_to_type(id));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = create_feature_instance(binfo, dfl, ofst, 0, 0);
|
||||
ret = create_feature_instance(binfo, 0, 0, 0);
|
||||
if (ret)
|
||||
return ret;
|
||||
/*
|
||||
* find and parse FIU's child AFU via its NEXT_AFU register.
|
||||
* please note that only Port has valid NEXT_AFU pointer per spec.
|
||||
*/
|
||||
v = readq(dfl->ioaddr + ofst + NEXT_AFU);
|
||||
v = readq(binfo->ioaddr + NEXT_AFU);
|
||||
|
||||
offset = FIELD_GET(NEXT_AFU_NEXT_DFH_OFST, v);
|
||||
if (offset)
|
||||
return parse_feature_afu(binfo, dfl, ofst + offset);
|
||||
return parse_feature_afu(binfo, offset);
|
||||
|
||||
dev_dbg(binfo->dev, "No AFUs detected on FIU %d\n", id);
|
||||
|
||||
|
@@ -851,41 +1162,39 @@ static int parse_feature_fiu(struct build_feature_devs_info *binfo,
 }
 
 static int parse_feature_private(struct build_feature_devs_info *binfo,
-				 struct dfl_fpga_enum_dfl *dfl,
 				 resource_size_t ofst)
 {
-	if (!binfo->feature_dev) {
-		dev_err(binfo->dev, "the private feature %llx does not belong to any AFU.\n",
-			(unsigned long long)feature_id(dfl->ioaddr + ofst));
+	if (!is_feature_dev_detected(binfo)) {
+		dev_err(binfo->dev, "the private feature 0x%x does not belong to any AFU.\n",
+			feature_id(binfo->ioaddr + ofst));
 		return -EINVAL;
 	}
 
-	return create_feature_instance(binfo, dfl, ofst, 0, 0);
+	return create_feature_instance(binfo, ofst, 0, 0);
 }
 
 /**
  * parse_feature - parse a feature on given device feature list
  *
  * @binfo: build feature devices information.
- * @dfl: device feature list to parse
- * @ofst: offset to feature header on this device feature list
+ * @ofst: offset to current FIU header
  */
 static int parse_feature(struct build_feature_devs_info *binfo,
-			 struct dfl_fpga_enum_dfl *dfl, resource_size_t ofst)
+			 resource_size_t ofst)
 {
 	u64 v;
 	u32 type;
 
-	v = readq(dfl->ioaddr + ofst + DFH);
+	v = readq(binfo->ioaddr + ofst + DFH);
 	type = FIELD_GET(DFH_TYPE, v);
 
 	switch (type) {
 	case DFH_TYPE_AFU:
-		return parse_feature_afu(binfo, dfl, ofst);
+		return parse_feature_afu(binfo, ofst);
 	case DFH_TYPE_PRIVATE:
-		return parse_feature_private(binfo, dfl, ofst);
+		return parse_feature_private(binfo, ofst);
 	case DFH_TYPE_FIU:
-		return parse_feature_fiu(binfo, dfl, ofst);
+		return parse_feature_fiu(binfo, ofst);
 	default:
 		dev_info(binfo->dev,
 			 "Feature Type %x is not supported.\n", type);
@ -895,14 +1204,17 @@ static int parse_feature(struct build_feature_devs_info *binfo,
|
|||
}
|
||||
|
||||
static int parse_feature_list(struct build_feature_devs_info *binfo,
|
||||
struct dfl_fpga_enum_dfl *dfl)
|
||||
resource_size_t start, resource_size_t len)
|
||||
{
|
||||
void __iomem *start = dfl->ioaddr;
|
||||
void __iomem *end = dfl->ioaddr + dfl->len;
|
||||
resource_size_t end = start + len;
|
||||
int ret = 0;
|
||||
u32 ofst = 0;
|
||||
u64 v;
|
||||
|
||||
ret = build_info_prepare(binfo, start, len);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* walk through the device feature list via DFH's next DFH pointer. */
|
||||
for (; start < end; start += ofst) {
|
||||
if (end - start < DFH_SIZE) {
|
||||
|
@ -910,11 +1222,11 @@ static int parse_feature_list(struct build_feature_devs_info *binfo,
|
|||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = parse_feature(binfo, dfl, start - dfl->ioaddr);
|
||||
ret = parse_feature(binfo, start - binfo->start);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
v = readq(start + DFH);
|
||||
v = readq(binfo->ioaddr + start - binfo->start + DFH);
|
||||
ofst = FIELD_GET(DFH_NEXT_HDR_OFST, v);
|
||||
|
||||
/* stop parsing if EOL(End of List) is set or offset is 0 */
|
||||
|
@ -923,7 +1235,12 @@ static int parse_feature_list(struct build_feature_devs_info *binfo,
|
|||
}
|
||||
|
||||
/* commit current feature device when reach the end of list */
|
||||
return build_info_commit_dev(binfo);
|
||||
build_info_complete(binfo);
|
||||
|
||||
if (is_feature_dev_detected(binfo))
|
||||
ret = build_info_commit_dev(binfo);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev)
|
||||
|
@ -976,7 +1293,6 @@ EXPORT_SYMBOL_GPL(dfl_fpga_enum_info_free);
|
|||
* @info: ptr to dfl_fpga_enum_info
|
||||
* @start: mmio resource address of the device feature list.
|
||||
* @len: mmio resource length of the device feature list.
|
||||
* @ioaddr: mapped mmio resource address of the device feature list.
|
||||
*
|
||||
* One FPGA device may have one or more Device Feature Lists (DFLs), use this
|
||||
* function to add information of each DFL to common data structure for next
|
||||
|
@ -985,8 +1301,7 @@ EXPORT_SYMBOL_GPL(dfl_fpga_enum_info_free);
|
|||
* Return: 0 on success, negative error code otherwise.
|
||||
*/
|
||||
int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
|
||||
resource_size_t start, resource_size_t len,
|
||||
void __iomem *ioaddr)
|
||||
resource_size_t start, resource_size_t len)
|
||||
{
|
||||
struct dfl_fpga_enum_dfl *dfl;
|
||||
|
||||
|
@ -996,7 +1311,6 @@ int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
|
|||
|
||||
dfl->start = start;
|
||||
dfl->len = len;
|
||||
dfl->ioaddr = ioaddr;
|
||||
|
||||
list_add_tail(&dfl->node, &info->dfls);
|
||||
|
||||
|
@ -1119,7 +1433,7 @@ dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info)
|
|||
* Lists.
|
||||
*/
|
||||
list_for_each_entry(dfl, &info->dfls, node) {
|
||||
ret = parse_feature_list(binfo, dfl);
|
||||
ret = parse_feature_list(binfo, dfl->start, dfl->len);
|
||||
if (ret) {
|
||||
remove_feature_devs(cdev);
|
||||
build_info_free(binfo);
|
||||
|
@ -1212,11 +1526,17 @@ static int __init dfl_fpga_init(void)
|
|||
{
|
||||
int ret;
|
||||
|
||||
ret = bus_register(&dfl_bus_type);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
dfl_ids_init();
|
||||
|
||||
ret = dfl_chardev_init();
|
||||
if (ret)
|
||||
if (ret) {
|
||||
dfl_ids_destroy();
|
||||
bus_unregister(&dfl_bus_type);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -1424,7 +1744,7 @@ static int do_set_irq_trigger(struct dfl_feature *feature, unsigned int idx,
|
|||
return 0;
|
||||
|
||||
feature->irq_ctx[idx].name =
|
||||
kasprintf(GFP_KERNEL, "fpga-irq[%u](%s-%llx)", idx,
|
||||
kasprintf(GFP_KERNEL, "fpga-irq[%u](%s-%x)", idx,
|
||||
dev_name(&pdev->dev), feature->id);
|
||||
if (!feature->irq_ctx[idx].name)
|
||||
return -ENOMEM;
|
||||
|
@ -1554,6 +1874,7 @@ static void __exit dfl_fpga_exit(void)
|
|||
{
|
||||
dfl_chardev_uinit();
|
||||
dfl_ids_destroy();
|
||||
bus_unregister(&dfl_bus_type);
|
||||
}
|
||||
|
||||
module_init(dfl_fpga_init);
|
||||
|
|
|
@ -197,7 +197,7 @@ int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id);
|
|||
* @id: unique dfl private feature id.
|
||||
*/
|
||||
struct dfl_feature_id {
|
||||
u64 id;
|
||||
u16 id;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -236,16 +236,18 @@ struct dfl_feature_irq_ctx {
|
|||
* @irq_ctx: interrupt context list.
|
||||
* @nr_irqs: number of interrupt contexts.
|
||||
* @ops: ops of this sub feature.
|
||||
* @ddev: ptr to the dfl device of this sub feature.
|
||||
* @priv: priv data of this feature.
|
||||
*/
|
||||
struct dfl_feature {
|
||||
struct platform_device *dev;
|
||||
u64 id;
|
||||
u16 id;
|
||||
int resource_index;
|
||||
void __iomem *ioaddr;
|
||||
struct dfl_feature_irq_ctx *irq_ctx;
|
||||
unsigned int nr_irqs;
|
||||
const struct dfl_feature_ops *ops;
|
||||
struct dfl_device *ddev;
|
||||
void *priv;
|
||||
};
|
||||
|
||||
|
@ -365,7 +367,7 @@ struct platform_device *dfl_fpga_inode_to_feature_dev(struct inode *inode)
|
|||
(feature) < (pdata)->features + (pdata)->num; (feature)++)
|
||||
|
||||
static inline
|
||||
struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
|
||||
struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u16 id)
|
||||
{
|
||||
struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
|
||||
struct dfl_feature *feature;
|
||||
|
@ -378,7 +380,7 @@ struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
|
|||
}
|
||||
|
||||
static inline
|
||||
void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
|
||||
void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u16 id)
|
||||
{
|
||||
struct dfl_feature *feature = dfl_get_feature_by_id(dev, id);
|
||||
|
||||
|
@ -389,7 +391,7 @@ void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static inline bool is_dfl_feature_present(struct device *dev, u64 id)
|
||||
static inline bool is_dfl_feature_present(struct device *dev, u16 id)
|
||||
{
|
||||
return !!dfl_get_feature_ioaddr_by_id(dev, id);
|
||||
}
|
||||
|
@ -441,22 +443,17 @@ struct dfl_fpga_enum_info {
|
|||
*
|
||||
* @start: base address of this device feature list.
|
||||
* @len: size of this device feature list.
|
||||
* @ioaddr: mapped base address of this device feature list.
|
||||
* @node: node in list of device feature lists.
|
||||
*/
|
||||
struct dfl_fpga_enum_dfl {
|
||||
resource_size_t start;
|
||||
resource_size_t len;
|
||||
|
||||
void __iomem *ioaddr;
|
||||
|
||||
struct list_head node;
|
||||
};
|
||||
|
||||
struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev);
|
||||
int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
|
||||
resource_size_t start, resource_size_t len,
|
||||
void __iomem *ioaddr);
|
||||
resource_size_t start, resource_size_t len);
|
||||
int dfl_fpga_enum_info_add_irq(struct dfl_fpga_enum_info *info,
|
||||
unsigned int nr_irqs, int *irq_table);
|
||||
void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info);
|
||||
|
@ -519,4 +516,88 @@ long dfl_feature_ioctl_set_irq(struct platform_device *pdev,
|
|||
struct dfl_feature *feature,
|
||||
unsigned long arg);
|
||||
|
||||
/**
|
||||
* enum dfl_id_type - define the DFL FIU types
|
||||
*/
|
||||
enum dfl_id_type {
|
||||
FME_ID,
|
||||
PORT_ID,
|
||||
DFL_ID_MAX,
|
||||
};
|
||||
|
||||
/**
|
||||
* struct dfl_device_id - dfl device identifier
|
||||
* @type: contains 4 bits DFL FIU type of the device. See enum dfl_id_type.
|
||||
* @feature_id: contains 12 bits feature identifier local to its DFL FIU type.
|
||||
* @driver_data: driver specific data.
|
||||
*/
|
||||
struct dfl_device_id {
|
||||
u8 type;
|
||||
u16 feature_id;
|
||||
unsigned long driver_data;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct dfl_device - represent an dfl device on dfl bus
|
||||
*
|
||||
* @dev: generic device interface.
|
||||
* @id: id of the dfl device.
|
||||
* @type: type of DFL FIU of the device. See enum dfl_id_type.
|
||||
* @feature_id: 16 bits feature identifier local to its DFL FIU type.
|
||||
* @mmio_res: mmio resource of this dfl device.
|
||||
* @irqs: list of Linux IRQ numbers of this dfl device.
|
||||
* @num_irqs: number of IRQs supported by this dfl device.
|
||||
* @cdev: pointer to DFL FPGA container device this dfl device belongs to.
|
||||
* @id_entry: matched id entry in dfl driver's id table.
|
||||
*/
|
||||
struct dfl_device {
|
||||
struct device dev;
|
||||
int id;
|
||||
u8 type;
|
||||
u16 feature_id;
|
||||
struct resource mmio_res;
|
||||
int *irqs;
|
||||
unsigned int num_irqs;
|
||||
struct dfl_fpga_cdev *cdev;
|
||||
const struct dfl_device_id *id_entry;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct dfl_driver - represent an dfl device driver
|
||||
*
|
||||
* @drv: driver model structure.
|
||||
* @id_table: pointer to table of device IDs the driver is interested in.
|
||||
* { } member terminated.
|
||||
* @probe: mandatory callback for device binding.
|
||||
* @remove: callback for device unbinding.
|
||||
*/
|
||||
struct dfl_driver {
|
||||
struct device_driver drv;
|
||||
const struct dfl_device_id *id_table;
|
||||
|
||||
int (*probe)(struct dfl_device *dfl_dev);
|
||||
void (*remove)(struct dfl_device *dfl_dev);
|
||||
};
|
||||
|
||||
#define to_dfl_dev(d) container_of(d, struct dfl_device, dev)
|
||||
#define to_dfl_drv(d) container_of(d, struct dfl_driver, drv)
|
||||
|
||||
/*
|
||||
* use a macro to avoid include chaining to get THIS_MODULE.
|
||||
*/
|
||||
#define dfl_driver_register(drv) \
|
||||
__dfl_driver_register(drv, THIS_MODULE)
|
||||
int __dfl_driver_register(struct dfl_driver *dfl_drv, struct module *owner);
|
||||
void dfl_driver_unregister(struct dfl_driver *dfl_drv);
|
||||
|
||||
/*
|
||||
* module_dfl_driver() - Helper macro for drivers that don't do
|
||||
* anything special in module init/exit. This eliminates a lot of
|
||||
* boilerplate. Each module may only use this macro once, and
|
||||
* calling it replaces module_init() and module_exit().
|
||||
*/
|
||||
#define module_dfl_driver(__dfl_driver) \
|
||||
module_driver(__dfl_driver, dfl_driver_register, \
|
||||
dfl_driver_unregister)
|
||||
|
||||
#endif /* __FPGA_DFL_H */
|
||||
|
|
|
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * FPGA Region - Device Tree support for FPGA programming under Linux
+ * FPGA Region - Support for FPGA programming under Linux
  *
  * Copyright (C) 2013-2016 Altera Corporation
  * Copyright (C) 2017 Intel Corporation
||||
|
|
|
@ -196,17 +196,13 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
|
|||
if (ret < 0)
|
||||
goto init_done;
|
||||
|
||||
ret = wait_for_completion_interruptible_timeout(
|
||||
ret = wait_for_completion_timeout(
|
||||
&priv->status_return_completion, S10_RECONFIG_TIMEOUT);
|
||||
if (!ret) {
|
||||
dev_err(dev, "timeout waiting for RECONFIG_REQUEST\n");
|
||||
ret = -ETIMEDOUT;
|
||||
goto init_done;
|
||||
}
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "error (%d) waiting for RECONFIG_REQUEST\n", ret);
|
||||
goto init_done;
|
||||
}
|
||||
|
||||
ret = 0;
|
||||
if (!test_and_clear_bit(SVC_STATUS_OK, &priv->status)) {
|
||||
|
@ -318,7 +314,7 @@ static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
|
|||
*/
|
||||
wait_status = 1; /* not timed out */
|
||||
if (!priv->status)
|
||||
wait_status = wait_for_completion_interruptible_timeout(
|
||||
wait_status = wait_for_completion_timeout(
|
||||
&priv->status_return_completion,
|
||||
S10_BUFFER_TIMEOUT);
|
||||
|
||||
|
@ -340,13 +336,6 @@ static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
|
|||
ret = -ETIMEDOUT;
|
||||
break;
|
||||
}
|
||||
if (wait_status < 0) {
|
||||
ret = wait_status;
|
||||
dev_err(dev,
|
||||
"error (%d) waiting for svc layer buffers\n",
|
||||
ret);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!s10_free_buffers(mgr))
|
||||
|
@ -372,7 +361,7 @@ static int s10_ops_write_complete(struct fpga_manager *mgr,
|
|||
if (ret < 0)
|
||||
break;
|
||||
|
||||
ret = wait_for_completion_interruptible_timeout(
|
||||
ret = wait_for_completion_timeout(
|
||||
&priv->status_return_completion, timeout);
|
||||
if (!ret) {
|
||||
dev_err(dev,
|
||||
|
@ -380,12 +369,6 @@ static int s10_ops_write_complete(struct fpga_manager *mgr,
|
|||
ret = -ETIMEDOUT;
|
||||
break;
|
||||
}
|
||||
if (ret < 0) {
|
||||
dev_err(dev,
|
||||
"error (%d) waiting for RECONFIG_COMPLETED\n",
|
||||
ret);
|
||||
break;
|
||||
}
|
||||
/* Not error or timeout, so ret is # of jiffies until timeout */
|
||||
timeout = ret;
|
||||
ret = 0;
|
||||
|
|
|
@ -27,11 +27,22 @@ struct xilinx_spi_conf {
|
|||
struct gpio_desc *done;
|
||||
};
|
||||
|
||||
static enum fpga_mgr_states xilinx_spi_state(struct fpga_manager *mgr)
|
||||
static int get_done_gpio(struct fpga_manager *mgr)
|
||||
{
|
||||
struct xilinx_spi_conf *conf = mgr->priv;
|
||||
int ret;
|
||||
|
||||
if (!gpiod_get_value(conf->done))
|
||||
ret = gpiod_get_value(conf->done);
|
||||
|
||||
if (ret < 0)
|
||||
dev_err(&mgr->dev, "Error reading DONE (%d)\n", ret);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static enum fpga_mgr_states xilinx_spi_state(struct fpga_manager *mgr)
|
||||
{
|
||||
if (!get_done_gpio(mgr))
|
||||
return FPGA_MGR_STATE_RESET;
|
||||
|
||||
return FPGA_MGR_STATE_UNKNOWN;
|
||||
|
@ -57,11 +68,21 @@ static int wait_for_init_b(struct fpga_manager *mgr, int value,
|
|||
|
||||
if (conf->init_b) {
|
||||
while (time_before(jiffies, timeout)) {
|
||||
/* dump_state(conf, "wait for init_d .."); */
|
||||
if (gpiod_get_value(conf->init_b) == value)
|
||||
int ret = gpiod_get_value(conf->init_b);
|
||||
|
||||
if (ret == value)
|
||||
return 0;
|
||||
|
||||
if (ret < 0) {
|
||||
dev_err(&mgr->dev, "Error reading INIT_B (%d)\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
usleep_range(100, 400);
|
||||
}
|
||||
|
||||
dev_err(&mgr->dev, "Timeout waiting for INIT_B to %s\n",
|
||||
value ? "assert" : "deassert");
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
|
@ -78,7 +99,7 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
|
|||
int err;
|
||||
|
||||
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
|
||||
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
|
||||
dev_err(&mgr->dev, "Partial reconfiguration not supported\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
|
@ -86,7 +107,6 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
|
|||
|
||||
err = wait_for_init_b(mgr, 1, 1); /* min is 500 ns */
|
||||
if (err) {
|
||||
dev_err(&mgr->dev, "INIT_B pin did not go low\n");
|
||||
gpiod_set_value(conf->prog_b, 0);
|
||||
return err;
|
||||
}
|
||||
|
@ -94,12 +114,10 @@ static int xilinx_spi_write_init(struct fpga_manager *mgr,
|
|||
gpiod_set_value(conf->prog_b, 0);
|
||||
|
||||
err = wait_for_init_b(mgr, 0, 0);
|
||||
if (err) {
|
||||
dev_err(&mgr->dev, "INIT_B pin did not go high\n");
|
||||
if (err)
|
||||
return err;
|
||||
}
|
||||
|
||||
if (gpiod_get_value(conf->done)) {
|
||||
if (get_done_gpio(mgr)) {
|
||||
dev_err(&mgr->dev, "Unexpected DONE pin state...\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
@ -152,25 +170,46 @@ static int xilinx_spi_write_complete(struct fpga_manager *mgr,
|
|||
struct fpga_image_info *info)
|
||||
{
|
||||
struct xilinx_spi_conf *conf = mgr->priv;
|
||||
unsigned long timeout;
|
||||
unsigned long timeout = jiffies + usecs_to_jiffies(info->config_complete_timeout_us);
|
||||
bool expired = false;
|
||||
int done;
|
||||
int ret;
|
||||
|
||||
if (gpiod_get_value(conf->done))
|
||||
return xilinx_spi_apply_cclk_cycles(conf);
|
||||
/*
|
||||
* This loop is carefully written such that if the driver is
|
||||
* scheduled out for more than 'timeout', we still check for DONE
|
||||
* before giving up and we apply 8 extra CCLK cycles in all cases.
|
||||
*/
|
||||
while (!expired) {
|
||||
expired = time_after(jiffies, timeout);
|
||||
|
||||
timeout = jiffies + usecs_to_jiffies(info->config_complete_timeout_us);
|
||||
|
||||
while (time_before(jiffies, timeout)) {
|
||||
done = get_done_gpio(mgr);
|
||||
if (done < 0)
|
||||
return done;
|
||||
|
||||
ret = xilinx_spi_apply_cclk_cycles(conf);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (gpiod_get_value(conf->done))
|
||||
return xilinx_spi_apply_cclk_cycles(conf);
|
||||
if (done)
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (conf->init_b) {
|
||||
ret = gpiod_get_value(conf->init_b);
|
||||
|
||||
if (ret < 0) {
|
||||
dev_err(&mgr->dev, "Error reading INIT_B (%d)\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
dev_err(&mgr->dev,
|
||||
ret ? "CRC error or invalid device\n"
|
||||
: "Missing sync word or incomplete bitstream\n");
|
||||
} else {
|
||||
dev_err(&mgr->dev, "Timeout after config data transfer\n");
|
||||
}
|
||||
|
||||
dev_err(&mgr->dev, "Timeout after config data transfer.\n");
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
|
|
|
@ -50,6 +50,7 @@ static const int engine_page_size = 0x400;
|
|||
#define FSI_SMODE 0x0 /* R/W: Mode register */
|
||||
#define FSI_SISC 0x8 /* R/W: Interrupt condition */
|
||||
#define FSI_SSTAT 0x14 /* R : Slave status */
|
||||
#define FSI_SLBUS 0x30 /* W : LBUS Ownership */
|
||||
#define FSI_LLMODE 0x100 /* R/W: Link layer mode register */
|
||||
|
||||
/*
|
||||
|
@ -66,6 +67,11 @@ static const int engine_page_size = 0x400;
|
|||
#define FSI_SMODE_LBCRR_SHIFT 8 /* Clk ratio shift */
|
||||
#define FSI_SMODE_LBCRR_MASK 0xf /* Clk ratio mask */
|
||||
|
||||
/*
|
||||
* SLBUS fields
|
||||
*/
|
||||
#define FSI_SLBUS_FORCE 0x80000000 /* Force LBUS ownership */
|
||||
|
||||
/*
|
||||
* LLMODE fields
|
||||
*/
|
||||
|
@ -981,7 +987,7 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
|
|||
uint32_t cfam_id;
|
||||
struct fsi_slave *slave;
|
||||
uint8_t crc;
|
||||
__be32 data, llmode;
|
||||
__be32 data, llmode, slbus;
|
||||
int rc;
|
||||
|
||||
/* Currently, we only support single slaves on a link, and use the
|
||||
|
@ -1052,6 +1058,14 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
|
|||
|
||||
}
|
||||
|
||||
slbus = cpu_to_be32(FSI_SLBUS_FORCE);
|
||||
rc = fsi_master_write(master, link, id, FSI_SLAVE_BASE + FSI_SLBUS,
|
||||
&slbus, sizeof(slbus));
|
||||
if (rc)
|
||||
dev_warn(&master->dev,
|
||||
"can't set slbus on slave:%02x:%02x %d\n", link, id,
|
||||
rc);
|
||||
|
||||
rc = fsi_slave_set_smode(slave);
|
||||
if (rc) {
|
||||
dev_warn(&master->dev,
|
||||
|
@ -1154,10 +1168,18 @@ static int fsi_master_write(struct fsi_master *master, int link,
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int fsi_master_link_disable(struct fsi_master *master, int link)
|
||||
{
|
||||
if (master->link_enable)
|
||||
return master->link_enable(master, link, false);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int fsi_master_link_enable(struct fsi_master *master, int link)
|
||||
{
|
||||
if (master->link_enable)
|
||||
return master->link_enable(master, link);
|
||||
return master->link_enable(master, link, true);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -1192,12 +1214,15 @@ static int fsi_master_scan(struct fsi_master *master)
|
|||
}
|
||||
rc = fsi_master_break(master, link);
|
||||
if (rc) {
|
||||
fsi_master_link_disable(master, link);
|
||||
dev_dbg(&master->dev,
|
||||
"break to link %d failed: %d\n", link, rc);
|
||||
continue;
|
||||
}
|
||||
|
||||
fsi_slave_init(master, link, 0);
|
||||
rc = fsi_slave_init(master, link, 0);
|
||||
if (rc)
|
||||
fsi_master_link_disable(master, link);
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
#include <linux/regmap.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/iopoll.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
|
||||
#include "fsi-master.h"
|
||||
|
||||
|
@ -21,6 +22,7 @@ struct fsi_master_aspeed {
|
|||
struct device *dev;
|
||||
void __iomem *base;
|
||||
struct clk *clk;
|
||||
struct gpio_desc *cfam_reset_gpio;
|
||||
};
|
||||
|
||||
#define to_fsi_master_aspeed(m) \
|
||||
|
@ -82,7 +84,12 @@ static const u32 fsi_base = 0xa0000000;
|
|||
|
||||
#define FSI_LINK_ENABLE_SETUP_TIME 10 /* in mS */
|
||||
|
||||
#define DEFAULT_DIVISOR 14
|
||||
/* Run the bus at maximum speed by default */
|
||||
#define FSI_DIVISOR_DEFAULT 1
|
||||
#define FSI_DIVISOR_CABLED 2
|
||||
static u16 aspeed_fsi_divisor = FSI_DIVISOR_DEFAULT;
|
||||
module_param_named(bus_div,aspeed_fsi_divisor, ushort, 0);
|
||||
|
||||
#define OPB_POLL_TIMEOUT 10000
|
||||
|
||||
static int __opb_write(struct fsi_master_aspeed *aspeed, u32 addr,
|
||||
|
@ -241,9 +248,10 @@ static int aspeed_master_read(struct fsi_master *master, int link,
|
|||
struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
|
||||
int ret;
|
||||
|
||||
if (id != 0)
|
||||
if (id > 0x3)
|
||||
return -EINVAL;
|
||||
|
||||
addr |= id << 21;
|
||||
addr += link * FSI_HUB_LINK_SIZE;
|
||||
|
||||
switch (size) {
|
||||
|
@ -273,9 +281,10 @@ static int aspeed_master_write(struct fsi_master *master, int link,
|
|||
struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
|
||||
int ret;
|
||||
|
||||
if (id != 0)
|
||||
if (id > 0x3)
|
||||
return -EINVAL;
|
||||
|
||||
addr |= id << 21;
|
||||
addr += link * FSI_HUB_LINK_SIZE;
|
||||
|
||||
switch (size) {
|
||||
|
@ -299,32 +308,28 @@ static int aspeed_master_write(struct fsi_master *master, int link,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int aspeed_master_link_enable(struct fsi_master *master, int link)
|
||||
static int aspeed_master_link_enable(struct fsi_master *master, int link,
|
||||
bool enable)
|
||||
{
|
||||
struct fsi_master_aspeed *aspeed = to_fsi_master_aspeed(master);
|
||||
int idx, bit, ret;
|
||||
__be32 reg, result;
|
||||
__be32 reg;
|
||||
|
||||
idx = link / 32;
|
||||
bit = link % 32;
|
||||
|
||||
reg = cpu_to_be32(0x80000000 >> bit);
|
||||
|
||||
if (!enable)
|
||||
return opb_writel(aspeed, ctrl_base + FSI_MCENP0 + (4 * idx),
|
||||
reg);
|
||||
|
||||
ret = opb_writel(aspeed, ctrl_base + FSI_MSENP0 + (4 * idx), reg);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
mdelay(FSI_LINK_ENABLE_SETUP_TIME);
|
||||
|
||||
ret = opb_readl(aspeed, ctrl_base + FSI_MENP0 + (4 * idx), &result);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (result != reg) {
|
||||
dev_err(aspeed->dev, "%s failed: %08x\n", __func__, result);
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -386,9 +391,11 @@ static int aspeed_master_init(struct fsi_master_aspeed *aspeed)
|
|||
opb_writel(aspeed, ctrl_base + FSI_MECTRL, reg);
|
||||
|
||||
reg = cpu_to_be32(FSI_MMODE_ECRC | FSI_MMODE_EPC | FSI_MMODE_RELA
|
||||
| fsi_mmode_crs0(DEFAULT_DIVISOR)
|
||||
| fsi_mmode_crs1(DEFAULT_DIVISOR)
|
||||
| fsi_mmode_crs0(aspeed_fsi_divisor)
|
||||
| fsi_mmode_crs1(aspeed_fsi_divisor)
|
||||
| FSI_MMODE_P8_TO_LSB);
|
||||
dev_info(aspeed->dev, "mmode set to %08x (divisor %d)\n",
|
||||
be32_to_cpu(reg), aspeed_fsi_divisor);
|
||||
opb_writel(aspeed, ctrl_base + FSI_MMODE, reg);
|
||||
|
||||
reg = cpu_to_be32(0xffff0000);
|
||||
|
@ -419,6 +426,90 @@ static int aspeed_master_init(struct fsi_master_aspeed *aspeed)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t cfam_reset_store(struct device *dev, struct device_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct fsi_master_aspeed *aspeed = dev_get_drvdata(dev);
|
||||
|
||||
gpiod_set_value(aspeed->cfam_reset_gpio, 1);
|
||||
usleep_range(900, 1000);
|
||||
gpiod_set_value(aspeed->cfam_reset_gpio, 0);
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static DEVICE_ATTR(cfam_reset, 0200, NULL, cfam_reset_store);
|
||||
|
||||
static int setup_cfam_reset(struct fsi_master_aspeed *aspeed)
|
||||
{
|
||||
struct device *dev = aspeed->dev;
|
||||
struct gpio_desc *gpio;
|
||||
int rc;
|
||||
|
||||
gpio = devm_gpiod_get_optional(dev, "cfam-reset", GPIOD_OUT_LOW);
|
||||
if (IS_ERR(gpio))
|
||||
return PTR_ERR(gpio);
|
||||
if (!gpio)
|
||||
return 0;
|
||||
|
||||
aspeed->cfam_reset_gpio = gpio;
|
||||
|
||||
rc = device_create_file(dev, &dev_attr_cfam_reset);
|
||||
if (rc) {
|
||||
devm_gpiod_put(dev, gpio);
|
||||
return rc;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int tacoma_cabled_fsi_fixup(struct device *dev)
|
||||
{
|
||||
struct gpio_desc *routing_gpio, *mux_gpio;
|
||||
int gpio;
|
||||
|
||||
/*
|
||||
* The routing GPIO is a jumper indicating we should mux for the
|
||||
* externally connected FSI cable.
|
||||
*/
|
||||
routing_gpio = devm_gpiod_get_optional(dev, "fsi-routing",
|
||||
GPIOD_IN | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
|
||||
if (IS_ERR(routing_gpio))
|
||||
return PTR_ERR(routing_gpio);
|
||||
if (!routing_gpio)
|
||||
return 0;
|
||||
|
||||
mux_gpio = devm_gpiod_get_optional(dev, "fsi-mux", GPIOD_ASIS);
|
||||
if (IS_ERR(mux_gpio))
|
||||
return PTR_ERR(mux_gpio);
|
||||
if (!mux_gpio)
|
||||
return 0;
|
||||
|
||||
gpio = gpiod_get_value(routing_gpio);
|
||||
if (gpio < 0)
|
||||
return gpio;
|
||||
|
||||
/* If the routing GPIO is high we should set the mux to low. */
|
||||
if (gpio) {
|
||||
/*
|
||||
* Cable signal integrity means we should run the bus
|
||||
* slightly slower. Do not override if a kernel param
|
||||
* has already overridden.
|
||||
*/
|
||||
if (aspeed_fsi_divisor == FSI_DIVISOR_DEFAULT)
|
||||
aspeed_fsi_divisor = FSI_DIVISOR_CABLED;
|
||||
|
||||
gpiod_direction_output(mux_gpio, 0);
|
||||
dev_info(dev, "FSI configured for external cable\n");
|
||||
} else {
|
||||
gpiod_direction_output(mux_gpio, 1);
|
||||
}
|
||||
|
||||
devm_gpiod_put(dev, routing_gpio);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int fsi_master_aspeed_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct fsi_master_aspeed *aspeed;
|
||||
|
@ -426,6 +517,12 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
|
|||
int rc, links, reg;
|
||||
__be32 raw;
|
||||
|
||||
rc = tacoma_cabled_fsi_fixup(&pdev->dev);
|
||||
if (rc) {
|
||||
dev_err(&pdev->dev, "Tacoma FSI cable fixup failed\n");
|
||||
return rc;
|
||||
}
|
||||
|
||||
aspeed = devm_kzalloc(&pdev->dev, sizeof(*aspeed), GFP_KERNEL);
|
||||
if (!aspeed)
|
||||
return -ENOMEM;
|
||||
|
@ -448,6 +545,11 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
|
|||
return rc;
|
||||
}
|
||||
|
||||
rc = setup_cfam_reset(aspeed);
|
||||
if (rc) {
|
||||
dev_err(&pdev->dev, "CFAM reset GPIO setup failed\n");
|
||||
}
|
||||
|
||||
writel(0x1, aspeed->base + OPB_CLK_SYNC);
|
||||
writel(OPB1_XFER_ACK_EN | OPB0_XFER_ACK_EN,
|
||||
aspeed->base + OPB_IRQ_MASK);
|
||||
|
|
|
@@ -838,7 +838,7 @@ static int load_copro_firmware(struct fsi_master_acf *master)
 	rc = request_firmware(&fw, FW_FILE_NAME, master->dev);
 	if (rc) {
 		dev_err(
-			master->dev, "Error %d to load firwmare '%s' !\n",
+			master->dev, "Error %d to load firmware '%s' !\n",
 			rc, FW_FILE_NAME);
 		return rc;
 	}
||||
|
@ -1039,7 +1039,8 @@ static void fsi_master_acf_setup_external(struct fsi_master_acf *master)
|
|||
gpiod_direction_input(master->gpio_data);
|
||||
}
|
||||
|
||||
static int fsi_master_acf_link_enable(struct fsi_master *_master, int link)
|
||||
static int fsi_master_acf_link_enable(struct fsi_master *_master, int link,
|
||||
bool enable)
|
||||
{
|
||||
struct fsi_master_acf *master = to_fsi_master_acf(_master);
|
||||
int rc = -EBUSY;
|
||||
|
@ -1049,7 +1050,7 @@ static int fsi_master_acf_link_enable(struct fsi_master *_master, int link)
|
|||
|
||||
mutex_lock(&master->lock);
|
||||
if (!master->external_mode) {
|
||||
gpiod_set_value(master->gpio_enable, 1);
|
||||
gpiod_set_value(master->gpio_enable, enable ? 1 : 0);
|
||||
rc = 0;
|
||||
}
|
||||
mutex_unlock(&master->lock);
|
||||
|
|
|
@ -678,7 +678,8 @@ static void fsi_master_gpio_init_external(struct fsi_master_gpio *master)
|
|||
gpiod_direction_input(master->gpio_data);
|
||||
}
|
||||
|
||||
static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link)
|
||||
static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link,
|
||||
bool enable)
|
||||
{
|
||||
struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
|
||||
int rc = -EBUSY;
|
||||
|
@ -688,7 +689,7 @@ static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link)
|
|||
|
||||
mutex_lock(&master->cmd_lock);
|
||||
if (!master->external_mode) {
|
||||
gpiod_set_value(master->gpio_enable, 1);
|
||||
gpiod_set_value(master->gpio_enable, enable ? 1 : 0);
|
||||
rc = 0;
|
||||
}
|
||||
mutex_unlock(&master->cmd_lock);
|
||||
|
|
|
@ -77,7 +77,8 @@ static int hub_master_break(struct fsi_master *master, int link)
|
|||
return hub_master_write(master, link, 0, addr, &cmd, sizeof(cmd));
|
||||
}
|
||||
|
||||
static int hub_master_link_enable(struct fsi_master *master, int link)
|
||||
static int hub_master_link_enable(struct fsi_master *master, int link,
|
||||
bool enable)
|
||||
{
|
||||
struct fsi_master_hub *hub = to_fsi_master_hub(master);
|
||||
int idx, bit;
|
||||
|
@ -89,13 +90,17 @@ static int hub_master_link_enable(struct fsi_master *master, int link)
|
|||
|
||||
reg = cpu_to_be32(0x80000000 >> bit);
|
||||
|
||||
if (!enable)
|
||||
return fsi_device_write(hub->upstream, FSI_MCENP0 + (4 * idx),
|
||||
®, 4);
|
||||
|
||||
rc = fsi_device_write(hub->upstream, FSI_MSENP0 + (4 * idx), ®, 4);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
mdelay(FSI_LINK_ENABLE_SETUP_TIME);
|
||||
|
||||
fsi_device_read(hub->upstream, FSI_MENP0 + (4 * idx), ®, 4);
|
||||
|
||||
return rc;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void hub_master_release(struct device *dev)
|
||||
|
@ -271,7 +276,7 @@ static int hub_master_remove(struct device *dev)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static struct fsi_device_id hub_master_ids[] = {
|
||||
static const struct fsi_device_id hub_master_ids[] = {
|
||||
{
|
||||
.engine_type = FSI_ENGID_HUB_MASTER,
|
||||
.version = FSI_VERSION_ANY,
|
||||
|
|
|
@@ -130,7 +130,8 @@ struct fsi_master {
 			     uint32_t addr, const void *val, size_t size);
 	int		(*term)(struct fsi_master *, int link, uint8_t id);
 	int		(*send_break)(struct fsi_master *, int link);
-	int		(*link_enable)(struct fsi_master *, int link);
+	int		(*link_enable)(struct fsi_master *, int link,
+				       bool enable);
 	int		(*link_config)(struct fsi_master *, int link,
 				       u8 t_send_delay, u8 t_echo_delay);
 };
||||
|
|
|
@@ -555,7 +555,7 @@ static int occ_probe(struct platform_device *pdev)
 
 	hwmon_dev_info.id = occ->idx;
 	hwmon_dev = platform_device_register_full(&hwmon_dev_info);
-	if (!hwmon_dev)
+	if (IS_ERR(hwmon_dev))
 		dev_warn(dev, "failed to create hwmon device\n");
 
 	return 0;
||||
|
|
|
@@ -1028,7 +1028,7 @@ static int sbefifo_remove(struct device *dev)
 	return 0;
 }
 
-static struct fsi_device_id sbefifo_ids[] = {
+static const struct fsi_device_id sbefifo_ids[] = {
 	{
 		.engine_type = FSI_ENGID_SBE,
 		.version = FSI_VERSION_ANY,
||||
|
|
|
@@ -627,7 +627,7 @@ static int scom_remove(struct device *dev)
 	return 0;
 }
 
-static struct fsi_device_id scom_ids[] = {
+static const struct fsi_device_id scom_ids[] = {
 	{
 		.engine_type = FSI_ENGID_SCOM,
 		.version = FSI_VERSION_ANY,
||||
|
|
|
@ -620,7 +620,7 @@ static struct attribute *interface_common_attrs[] = {
|
|||
static umode_t interface_unipro_is_visible(struct kobject *kobj,
|
||||
struct attribute *attr, int n)
|
||||
{
|
||||
struct device *dev = container_of(kobj, struct device, kobj);
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
struct gb_interface *intf = to_gb_interface(dev);
|
||||
|
||||
switch (intf->type) {
|
||||
|
@ -635,7 +635,7 @@ static umode_t interface_unipro_is_visible(struct kobject *kobj,
|
|||
static umode_t interface_greybus_is_visible(struct kobject *kobj,
|
||||
struct attribute *attr, int n)
|
||||
{
|
||||
struct device *dev = container_of(kobj, struct device, kobj);
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
struct gb_interface *intf = to_gb_interface(dev);
|
||||
|
||||
switch (intf->type) {
|
||||
|
@ -649,7 +649,7 @@ static umode_t interface_greybus_is_visible(struct kobject *kobj,
|
|||
static umode_t interface_power_is_visible(struct kobject *kobj,
|
||||
struct attribute *attr, int n)
|
||||
{
|
||||
struct device *dev = container_of(kobj, struct device, kobj);
|
||||
struct device *dev = kobj_to_dev(kobj);
|
||||
struct gb_interface *intf = to_gb_interface(dev);
|
||||
|
||||
switch (intf->type) {
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
# Coresight configuration
|
||||
#
|
||||
menuconfig CORESIGHT
|
||||
bool "CoreSight Tracing Support"
|
||||
tristate "CoreSight Tracing Support"
|
||||
depends on ARM || ARM64
|
||||
depends on OF || ACPI
|
||||
select ARM_AMBA
|
||||
|
@ -15,17 +15,24 @@ menuconfig CORESIGHT
|
|||
specification and configure the right series of components when a
|
||||
trace source gets enabled.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight.
|
||||
|
||||
if CORESIGHT
|
||||
config CORESIGHT_LINKS_AND_SINKS
|
||||
bool "CoreSight Link and Sink drivers"
|
||||
tristate "CoreSight Link and Sink drivers"
|
||||
help
|
||||
This enables support for CoreSight link and sink drivers that are
|
||||
responsible for transporting and collecting the trace data
|
||||
respectively. Link and sinks are dynamically aggregated with a trace
|
||||
entity at run time to form a complete trace path.
|
||||
|
||||
To compile these drivers as modules, choose M here: the
|
||||
modules will be called coresight-funnel and coresight-replicator.
|
||||
|
||||
config CORESIGHT_LINK_AND_SINK_TMC
|
||||
bool "Coresight generic TMC driver"
|
||||
tristate "Coresight generic TMC driver"
|
||||
|
||||
depends on CORESIGHT_LINKS_AND_SINKS
|
||||
help
|
||||
This enables support for the Trace Memory Controller driver.
|
||||
|
@ -34,8 +41,11 @@ config CORESIGHT_LINK_AND_SINK_TMC
|
|||
complies with the generic implementation of the component without
|
||||
special enhancement or added features.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-tmc.
|
||||
|
||||
config CORESIGHT_CATU
|
||||
bool "Coresight Address Translation Unit (CATU) driver"
|
||||
tristate "Coresight Address Translation Unit (CATU) driver"
|
||||
depends on CORESIGHT_LINK_AND_SINK_TMC
|
||||
help
|
||||
Enable support for the Coresight Address Translation Unit (CATU).
|
||||
|
@ -45,8 +55,11 @@ config CORESIGHT_CATU
|
|||
by looking up the provided table. CATU can also be used in pass-through
|
||||
mode where the address is not translated.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-catu.
|
||||
|
||||
config CORESIGHT_SINK_TPIU
|
||||
bool "Coresight generic TPIU driver"
|
||||
tristate "Coresight generic TPIU driver"
|
||||
depends on CORESIGHT_LINKS_AND_SINKS
|
||||
help
|
||||
This enables support for the Trace Port Interface Unit driver,
|
||||
|
@ -56,16 +69,22 @@ config CORESIGHT_SINK_TPIU
|
|||
connected to an external host for use case capturing more traces than
|
||||
the on-board coresight memory can handle.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-tpiu.
|
||||
|
||||
config CORESIGHT_SINK_ETBV10
|
||||
bool "Coresight ETBv1.0 driver"
|
||||
tristate "Coresight ETBv1.0 driver"
|
||||
depends on CORESIGHT_LINKS_AND_SINKS
|
||||
help
|
||||
This enables support for the Embedded Trace Buffer version 1.0 driver
|
||||
that complies with the generic implementation of the component without
|
||||
special enhancement or added features.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-etb10.
|
||||
|
||||
config CORESIGHT_SOURCE_ETM3X
|
||||
bool "CoreSight Embedded Trace Macrocell 3.x driver"
|
||||
tristate "CoreSight Embedded Trace Macrocell 3.x driver"
|
||||
depends on !ARM64
|
||||
select CORESIGHT_LINKS_AND_SINKS
|
||||
help
|
||||
|
@ -74,8 +93,11 @@ config CORESIGHT_SOURCE_ETM3X
|
|||
This is primarily useful for instruction level tracing. Depending
|
||||
the ETM version data tracing may also be available.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-etm3x.
|
||||
|
||||
config CORESIGHT_SOURCE_ETM4X
|
||||
bool "CoreSight Embedded Trace Macrocell 4.x driver"
|
||||
tristate "CoreSight Embedded Trace Macrocell 4.x driver"
|
||||
depends on ARM64
|
||||
select CORESIGHT_LINKS_AND_SINKS
|
||||
select PID_IN_CONTEXTIDR
|
||||
|
@ -85,8 +107,11 @@ config CORESIGHT_SOURCE_ETM4X
|
|||
for instruction level tracing. Depending on the implemented version
|
||||
data tracing may also be available.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-etm4x.
|
||||
|
||||
config CORESIGHT_STM
|
||||
bool "CoreSight System Trace Macrocell driver"
|
||||
tristate "CoreSight System Trace Macrocell driver"
|
||||
depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
|
||||
select CORESIGHT_LINKS_AND_SINKS
|
||||
select STM
|
||||
|
@ -96,6 +121,9 @@ config CORESIGHT_STM
|
|||
logging useful software events or data coming from various entities
|
||||
in the system, possibly running different OSs
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-stm.
|
||||
|
||||
config CORESIGHT_CPU_DEBUG
|
||||
tristate "CoreSight CPU Debug driver"
|
||||
depends on ARM || ARM64
|
||||
|
@ -110,8 +138,11 @@ config CORESIGHT_CPU_DEBUG
|
|||
properly, please refer Documentation/trace/coresight/coresight-cpu-debug.rst
|
||||
for detailed description and the example for usage.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-cpu-debug.
|
||||
|
||||
config CORESIGHT_CTI
|
||||
bool "CoreSight Cross Trigger Interface (CTI) driver"
|
||||
tristate "CoreSight Cross Trigger Interface (CTI) driver"
|
||||
depends on ARM || ARM64
|
||||
help
|
||||
This driver provides support for CoreSight CTI and CTM components.
|
||||
|
@ -122,6 +153,9 @@ config CORESIGHT_CTI
|
|||
halt compared to disabling sources and sinks normally in driver
|
||||
software.
|
||||
|
||||
To compile this driver as a module, choose M here: the
|
||||
module will be called coresight-cti.
|
||||
|
||||
config CORESIGHT_CTI_INTEGRATION_REGS
|
||||
bool "Access CTI CoreSight Integration Registers"
|
||||
depends on CORESIGHT_CTI
|
||||
|
|
|
@@ -2,22 +2,24 @@
 #
 # Makefile for CoreSight drivers.
 #
-obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o \
-		coresight-platform.o coresight-sysfs.o
-obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
-		coresight-tmc-etf.o \
-		coresight-tmc-etr.o
+obj-$(CONFIG_CORESIGHT) += coresight.o
+coresight-y := coresight-core.o coresight-etm-perf.o coresight-platform.o \
+		coresight-sysfs.o
+obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
+coresight-tmc-y := coresight-tmc-core.o coresight-tmc-etf.o \
+		coresight-tmc-etr.o
 obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o
 obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
 obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
 		coresight-replicator.o
-obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
-		coresight-etm3x-sysfs.o
-obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
-		coresight-etm4x-sysfs.o
+obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o
+coresight-etm3x-y := coresight-etm3x-core.o coresight-etm-cp14.o \
+		coresight-etm3x-sysfs.o
+obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
+coresight-etm4x-y := coresight-etm4x-core.o coresight-etm4x-sysfs.o
 obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
 obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
 obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
-obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o \
-		coresight-cti-platform.o \
-		coresight-cti-sysfs.o
+obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o
+coresight-cti-y := coresight-cti-core.o coresight-cti-platform.o \
+		coresight-cti-sysfs.o
||||
|
|
|
@@ -358,7 +358,7 @@ static int catu_alloc_etr_buf(struct tmc_drvdata *tmc_drvdata,
 	return 0;
 }
 
-const struct etr_buf_operations etr_catu_buf_ops = {
+static const struct etr_buf_operations etr_catu_buf_ops = {
 	.alloc = catu_alloc_etr_buf,
 	.free = catu_free_etr_buf,
 	.sync = catu_sync_etr_buf,
||||
|
@ -567,11 +567,21 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit catu_remove(struct amba_device *adev)
|
||||
{
|
||||
struct catu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct amba_id catu_ids[] = {
|
||||
CS_AMBA_ID(0x000bb9ee),
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, catu_ids);
|
||||
|
||||
static struct amba_driver catu_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-catu",
|
||||
|
@ -579,7 +589,30 @@ static struct amba_driver catu_driver = {
|
|||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = catu_probe,
|
||||
.remove = catu_remove,
|
||||
.id_table = catu_ids,
|
||||
};
|
||||
|
||||
builtin_amba_driver(catu_driver);
|
||||
static int __init catu_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = amba_driver_register(&catu_driver);
|
||||
if (ret)
|
||||
pr_info("Error registering catu driver\n");
|
||||
tmc_etr_set_catu_ops(&etr_catu_buf_ops);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit catu_exit(void)
|
||||
{
|
||||
tmc_etr_remove_catu_ops();
|
||||
amba_driver_unregister(&catu_driver);
|
||||
}
|
||||
|
||||
module_init(catu_init);
|
||||
module_exit(catu_exit);
|
||||
|
||||
MODULE_AUTHOR("Suzuki K Poulose <suzuki.poulose@arm.com>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Address Translation Unit (CATU) Driver");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@@ -108,6 +108,4 @@ static inline bool coresight_is_catu_device(struct coresight_device *csdev)
 	return true;
 }
 
-extern const struct etr_buf_operations etr_catu_buf_ops;
-
 #endif
||||
|
|
|
@ -53,7 +53,22 @@ static struct list_head *stm_path;
|
|||
* beginning of the data collected in a buffer. That way the decoder knows that
|
||||
* it needs to look for another sync sequence.
|
||||
*/
|
||||
const u32 barrier_pkt[4] = {0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff};
|
||||
const u32 coresight_barrier_pkt[4] = {0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff};
|
||||
EXPORT_SYMBOL_GPL(coresight_barrier_pkt);
|
||||
|
||||
static const struct cti_assoc_op *cti_assoc_ops;
|
||||
|
||||
void coresight_set_cti_ops(const struct cti_assoc_op *cti_op)
|
||||
{
|
||||
cti_assoc_ops = cti_op;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_set_cti_ops);
|
||||
|
||||
void coresight_remove_cti_ops(void)
|
||||
{
|
||||
cti_assoc_ops = NULL;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_remove_cti_ops);
|
||||
|
||||
static int coresight_id_match(struct device *dev, void *data)
|
||||
{
|
||||
|
@ -179,6 +194,7 @@ int coresight_claim_device_unlocked(void __iomem *base)
|
|||
coresight_clear_claim_tags(base);
|
||||
return -EBUSY;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_claim_device_unlocked);
|
||||
|
||||
int coresight_claim_device(void __iomem *base)
|
||||
{
|
||||
|
@ -190,6 +206,7 @@ int coresight_claim_device(void __iomem *base)
|
|||
|
||||
return rc;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_claim_device);
|
||||
|
||||
/*
|
||||
* coresight_disclaim_device_unlocked : Clear the claim tags for the device.
|
||||
|
@ -208,6 +225,7 @@ void coresight_disclaim_device_unlocked(void __iomem *base)
|
|||
*/
|
||||
WARN_ON_ONCE(1);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_disclaim_device_unlocked);
|
||||
|
||||
void coresight_disclaim_device(void __iomem *base)
|
||||
{
|
||||
|
@ -215,6 +233,7 @@ void coresight_disclaim_device(void __iomem *base)
|
|||
coresight_disclaim_device_unlocked(base);
|
||||
CS_LOCK(base);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_disclaim_device);
|
||||
|
||||
/* enable or disable an associated CTI device of the supplied CS device */
|
||||
static int
|
||||
|
@ -222,16 +241,32 @@ coresight_control_assoc_ectdev(struct coresight_device *csdev, bool enable)
|
|||
{
|
||||
int ect_ret = 0;
|
||||
struct coresight_device *ect_csdev = csdev->ect_dev;
|
||||
struct module *mod;
|
||||
|
||||
if (!ect_csdev)
|
||||
return 0;
|
||||
if ((!ect_ops(ect_csdev)->enable) || (!ect_ops(ect_csdev)->disable))
|
||||
return 0;
|
||||
|
||||
mod = ect_csdev->dev.parent->driver->owner;
|
||||
if (enable) {
|
||||
if (ect_ops(ect_csdev)->enable)
|
||||
if (try_module_get(mod)) {
|
||||
ect_ret = ect_ops(ect_csdev)->enable(ect_csdev);
|
||||
if (ect_ret) {
|
||||
module_put(mod);
|
||||
} else {
|
||||
get_device(ect_csdev->dev.parent);
|
||||
csdev->ect_enabled = true;
|
||||
}
|
||||
} else
|
||||
ect_ret = -ENODEV;
|
||||
} else {
|
||||
if (ect_ops(ect_csdev)->disable)
|
||||
if (csdev->ect_enabled) {
|
||||
ect_ret = ect_ops(ect_csdev)->disable(ect_csdev);
|
||||
put_device(ect_csdev->dev.parent);
|
||||
module_put(mod);
|
||||
csdev->ect_enabled = false;
|
||||
}
|
||||
}
|
||||
|
||||
/* output warning if ECT enable is preventing trace operation */
|
||||
|
@ -253,6 +288,7 @@ void coresight_set_assoc_ectdev_mutex(struct coresight_device *csdev,
|
|||
csdev->ect_dev = ect_csdev;
|
||||
mutex_unlock(&coresight_mutex);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_set_assoc_ectdev_mutex);
|
||||
|
||||
static int coresight_enable_sink(struct coresight_device *csdev,
|
||||
u32 mode, void *data)
|
||||
|
@ -467,6 +503,7 @@ void coresight_disable_path(struct list_head *path)
|
|||
{
|
||||
coresight_disable_path_from(path, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_disable_path);
|
||||
|
||||
int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data)
|
||||
{
|
||||
|
@ -540,50 +577,46 @@ struct coresight_device *coresight_get_sink(struct list_head *path)
|
|||
return csdev;
|
||||
}
|
||||
|
||||
static int coresight_enabled_sink(struct device *dev, const void *data)
|
||||
static struct coresight_device *
|
||||
coresight_find_enabled_sink(struct coresight_device *csdev)
|
||||
{
|
||||
const bool *reset = data;
|
||||
struct coresight_device *csdev = to_coresight_device(dev);
|
||||
int i;
|
||||
struct coresight_device *sink;
|
||||
|
||||
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
|
||||
csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
|
||||
csdev->activated) {
|
||||
/*
|
||||
* Now that we have a handle on the sink for this session,
|
||||
* disable the sysFS "enable_sink" flag so that possible
|
||||
* concurrent perf session that wish to use another sink don't
|
||||
* trip on it. Doing so has no ramification for the current
|
||||
* session.
|
||||
*/
|
||||
if (*reset)
|
||||
csdev->activated = false;
|
||||
csdev->activated)
|
||||
return csdev;
|
||||
|
||||
return 1;
|
||||
/*
|
||||
* Recursively explore each port found on this element.
|
||||
*/
|
||||
for (i = 0; i < csdev->pdata->nr_outport; i++) {
|
||||
struct coresight_device *child_dev;
|
||||
|
||||
child_dev = csdev->pdata->conns[i].child_dev;
|
||||
if (child_dev)
|
||||
sink = coresight_find_enabled_sink(child_dev);
|
||||
if (sink)
|
||||
return sink;
|
||||
}
|
||||
|
||||
return 0;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* coresight_get_enabled_sink - returns the first enabled sink found on the bus
|
||||
* @deactivate: Whether the 'enable_sink' flag should be reset
|
||||
* coresight_get_enabled_sink - returns the first enabled sink using
|
||||
* connection based search starting from the source reference
|
||||
*
|
||||
* When operated from perf the deactivate parameter should be set to 'true'.
|
||||
* That way the "enabled_sink" flag of the sink that was selected can be reset,
|
||||
* allowing for other concurrent perf sessions to choose a different sink.
|
||||
*
|
||||
* When operated from sysFS users have full control and as such the deactivate
|
||||
* parameter should be set to 'false', hence mandating users to explicitly
|
||||
* clear the flag.
|
||||
* @source: Coresight source device reference
|
||||
*/
|
||||
struct coresight_device *coresight_get_enabled_sink(bool deactivate)
|
||||
struct coresight_device *
|
||||
coresight_get_enabled_sink(struct coresight_device *source)
|
||||
{
|
||||
struct device *dev = NULL;
|
||||
if (!source)
|
||||
return NULL;
|
||||
|
||||
dev = bus_find_device(&coresight_bustype, NULL, &deactivate,
|
||||
coresight_enabled_sink);
|
||||
|
||||
return dev ? to_coresight_device(dev) : NULL;
|
||||
return coresight_find_enabled_sink(source);
|
||||
}
|
||||
|
||||
static int coresight_sink_by_id(struct device *dev, const void *data)
|
||||
|
@ -627,13 +660,45 @@ struct coresight_device *coresight_get_sink_by_id(u32 id)
|
|||
return dev ? to_coresight_device(dev) : NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* coresight_get_ref- Helper function to increase reference count to module
|
||||
* and device.
|
||||
* Return true in successful case and power up the device.
|
||||
* Return false when failed to get reference of module.
|
||||
*/
|
||||
static inline bool coresight_get_ref(struct coresight_device *csdev)
|
||||
{
|
||||
struct device *dev = csdev->dev.parent;
|
||||
|
||||
/* Make sure the driver can't be removed */
|
||||
if (!try_module_get(dev->driver->owner))
|
||||
return false;
|
||||
/* Make sure the device can't go away */
|
||||
get_device(dev);
|
||||
pm_runtime_get_sync(dev);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* coresight_put_ref- Helper function to decrease reference count to module
|
||||
* and device. Power off the device.
|
||||
*/
|
||||
static inline void coresight_put_ref(struct coresight_device *csdev)
|
||||
{
|
||||
struct device *dev = csdev->dev.parent;
|
||||
|
||||
pm_runtime_put(dev);
|
||||
put_device(dev);
|
||||
module_put(dev->driver->owner);
|
||||
}
/*
 * coresight_grab_device - Power up this device and any of the helper
 * devices connected to it for trace operation. Since the helper devices
 * don't appear on the trace path, they should be handled along with
 * the master device.
 */
static void coresight_grab_device(struct coresight_device *csdev)
static int coresight_grab_device(struct coresight_device *csdev)
{
	int i;

@@ -642,9 +707,20 @@ static void coresight_grab_device(struct coresight_device *csdev)

		child = csdev->pdata->conns[i].child_dev;
		if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
			pm_runtime_get_sync(child->dev.parent);
			if (!coresight_get_ref(child))
				goto err;
	}
	pm_runtime_get_sync(csdev->dev.parent);
	if (coresight_get_ref(csdev))
		return 0;
err:
	for (i--; i >= 0; i--) {
		struct coresight_device *child;

		child = csdev->pdata->conns[i].child_dev;
		if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
			coresight_put_ref(child);
	}
	return -ENODEV;
}

/*
@@ -655,13 +731,13 @@ static void coresight_drop_device(struct coresight_device *csdev)
{
	int i;

	pm_runtime_put(csdev->dev.parent);
	coresight_put_ref(csdev);
	for (i = 0; i < csdev->pdata->nr_outport; i++) {
		struct coresight_device *child;

		child = csdev->pdata->conns[i].child_dev;
		if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
			pm_runtime_put(child->dev.parent);
			coresight_put_ref(child);
	}
}

@@ -680,7 +756,7 @@ static int _coresight_build_path(struct coresight_device *csdev,
				 struct coresight_device *sink,
				 struct list_head *path)
{
	int i;
	int i, ret;
	bool found = false;
	struct coresight_node *node;

@@ -710,11 +786,14 @@ out:
	 * is tell the PM runtime core we need this element and add a node
	 * for it.
	 */
	ret = coresight_grab_device(csdev);
	if (ret)
		return ret;

	node = kzalloc(sizeof(struct coresight_node), GFP_KERNEL);
	if (!node)
		return -ENOMEM;

	coresight_grab_device(csdev);
	node->csdev = csdev;
	list_add(&node->link, path);

@ -988,11 +1067,7 @@ int coresight_enable(struct coresight_device *csdev)
|
|||
goto out;
|
||||
}
|
||||
|
||||
/*
|
||||
* Search for a valid sink for this session but don't reset the
|
||||
* "enable_sink" flag in sysFS. Users get to do that explicitly.
|
||||
*/
|
||||
sink = coresight_get_enabled_sink(false);
|
||||
sink = coresight_get_enabled_sink(csdev);
|
||||
if (!sink) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
|
@ -1188,7 +1263,6 @@ static void coresight_device_release(struct device *dev)
|
|||
{
|
||||
struct coresight_device *csdev = to_coresight_device(dev);
|
||||
|
||||
cti_remove_assoc_from_csdev(csdev);
|
||||
fwnode_handle_put(csdev->dev.fwnode);
|
||||
kfree(csdev->refcnt);
|
||||
kfree(csdev);
|
||||
|
@ -1376,16 +1450,7 @@ int coresight_timeout(void __iomem *addr, u32 offset, int position, int value)
|
|||
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
||||
struct bus_type coresight_bustype = {
|
||||
.name = "coresight",
|
||||
};
|
||||
|
||||
static int __init coresight_init(void)
|
||||
{
|
||||
return bus_register(&coresight_bustype);
|
||||
}
|
||||
postcore_initcall(coresight_init);
|
||||
EXPORT_SYMBOL_GPL(coresight_timeout);
|
||||
|
||||
/*
|
||||
* coresight_release_platform_data: Release references to the devices connected
|
||||
|
@ -1498,8 +1563,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
|
|||
ret = coresight_fixup_device_conns(csdev);
|
||||
if (!ret)
|
||||
ret = coresight_fixup_orphan_conns(csdev);
|
||||
if (!ret)
|
||||
cti_add_assoc_to_csdev(csdev);
|
||||
if (!ret && cti_assoc_ops && cti_assoc_ops->add)
|
||||
cti_assoc_ops->add(csdev);
|
||||
|
||||
mutex_unlock(&coresight_mutex);
|
||||
if (ret) {
|
||||
|
@ -1522,6 +1587,8 @@ void coresight_unregister(struct coresight_device *csdev)
|
|||
{
|
||||
etm_perf_del_symlink_sink(csdev);
|
||||
/* Remove references of that device in the topology */
|
||||
if (cti_assoc_ops && cti_assoc_ops->remove)
|
||||
cti_assoc_ops->remove(csdev);
|
||||
coresight_remove_conns(csdev);
|
||||
coresight_clear_default_sink(csdev);
|
||||
coresight_release_platform_data(csdev, csdev->pdata);
|
||||
|
@ -1552,6 +1619,7 @@ bool coresight_loses_context_with_cpu(struct device *dev)
|
|||
return fwnode_property_present(dev_fwnode(dev),
|
||||
"arm,coresight-loses-context-with-cpu");
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_loses_context_with_cpu);
|
||||
|
||||
/*
|
||||
* coresight_alloc_device_name - Get an index for a given device in the
|
||||
|
@ -1592,3 +1660,35 @@ done:
|
|||
return name;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_alloc_device_name);
|
||||
|
||||
struct bus_type coresight_bustype = {
|
||||
.name = "coresight",
|
||||
};
|
||||
|
||||
static int __init coresight_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = bus_register(&coresight_bustype);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = etm_perf_init();
|
||||
if (ret)
|
||||
bus_unregister(&coresight_bustype);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit coresight_exit(void)
|
||||
{
|
||||
etm_perf_exit();
|
||||
bus_unregister(&coresight_bustype);
|
||||
}
|
||||
|
||||
module_init(coresight_init);
|
||||
module_exit(coresight_exit);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight tracer driver");
|
|
@ -665,6 +665,8 @@ static const struct amba_id debug_ids[] = {
|
|||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, debug_ids);
|
||||
|
||||
static struct amba_driver debug_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-cpu-debug",
@ -86,22 +86,16 @@ void cti_write_all_hw_regs(struct cti_drvdata *drvdata)
|
|||
CS_LOCK(drvdata->base);
|
||||
}
|
||||
|
||||
static void cti_enable_hw_smp_call(void *info)
|
||||
{
|
||||
struct cti_drvdata *drvdata = info;
|
||||
|
||||
cti_write_all_hw_regs(drvdata);
|
||||
}
|
||||
|
||||
/* write regs to hardware and enable */
|
||||
static int cti_enable_hw(struct cti_drvdata *drvdata)
|
||||
{
|
||||
struct cti_config *config = &drvdata->config;
|
||||
struct device *dev = &drvdata->csdev->dev;
|
||||
unsigned long flags;
|
||||
int rc = 0;
|
||||
|
||||
pm_runtime_get_sync(dev->parent);
|
||||
spin_lock(&drvdata->spinlock);
|
||||
spin_lock_irqsave(&drvdata->spinlock, flags);
|
||||
|
||||
/* no need to do anything if enabled or unpowered*/
|
||||
if (config->hw_enabled || !config->hw_powered)
|
||||
|
@ -112,19 +106,11 @@ static int cti_enable_hw(struct cti_drvdata *drvdata)
|
|||
if (rc)
|
||||
goto cti_err_not_enabled;
|
||||
|
||||
if (drvdata->ctidev.cpu >= 0) {
|
||||
rc = smp_call_function_single(drvdata->ctidev.cpu,
|
||||
cti_enable_hw_smp_call,
|
||||
drvdata, 1);
|
||||
if (rc)
|
||||
goto cti_err_not_enabled;
|
||||
} else {
|
||||
cti_write_all_hw_regs(drvdata);
|
||||
}
|
||||
cti_write_all_hw_regs(drvdata);
|
||||
|
||||
config->hw_enabled = true;
|
||||
atomic_inc(&drvdata->config.enable_req_count);
|
||||
spin_unlock(&drvdata->spinlock);
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
return rc;
|
||||
|
||||
cti_state_unchanged:
|
||||
|
@ -132,7 +118,7 @@ cti_state_unchanged:
|
|||
|
||||
/* cannot enable due to error */
|
||||
cti_err_not_enabled:
|
||||
spin_unlock(&drvdata->spinlock);
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
pm_runtime_put(dev->parent);
|
||||
return rc;
|
||||
}
|
||||
|
@ -141,9 +127,7 @@ cti_err_not_enabled:
|
|||
static void cti_cpuhp_enable_hw(struct cti_drvdata *drvdata)
|
||||
{
|
||||
struct cti_config *config = &drvdata->config;
|
||||
struct device *dev = &drvdata->csdev->dev;
|
||||
|
||||
pm_runtime_get_sync(dev->parent);
|
||||
spin_lock(&drvdata->spinlock);
|
||||
config->hw_powered = true;
|
||||
|
||||
|
@ -163,7 +147,6 @@ static void cti_cpuhp_enable_hw(struct cti_drvdata *drvdata)
|
|||
/* did not re-enable due to no claim / no request */
|
||||
cti_hp_not_enabled:
|
||||
spin_unlock(&drvdata->spinlock);
|
||||
pm_runtime_put(dev->parent);
|
||||
}
|
||||
|
||||
/* disable hardware */
|
||||
|
@ -511,12 +494,15 @@ static bool cti_add_sysfs_link(struct cti_drvdata *drvdata,
|
|||
return !link_err;
|
||||
}
|
||||
|
||||
static void cti_remove_sysfs_link(struct cti_trig_con *tc)
|
||||
static void cti_remove_sysfs_link(struct cti_drvdata *drvdata,
|
||||
struct cti_trig_con *tc)
|
||||
{
|
||||
struct coresight_sysfs_link link_info;
|
||||
|
||||
link_info.orig = drvdata->csdev;
|
||||
link_info.orig_name = tc->con_dev_name;
|
||||
link_info.target = tc->con_dev;
|
||||
link_info.target_name = dev_name(&drvdata->csdev->dev);
|
||||
coresight_remove_sysfs_link(&link_info);
|
||||
}
|
||||
|
||||
|
@ -556,7 +542,7 @@ cti_match_fixup_csdev(struct cti_device *ctidev, const char *node_name,
|
|||
* This will set the association if CTI declared before the CS device.
|
||||
* (called from coresight_register() with coresight_mutex locked).
|
||||
*/
|
||||
void cti_add_assoc_to_csdev(struct coresight_device *csdev)
|
||||
static void cti_add_assoc_to_csdev(struct coresight_device *csdev)
|
||||
{
|
||||
struct cti_drvdata *ect_item;
|
||||
struct cti_device *ctidev;
|
||||
|
@ -589,13 +575,12 @@ void cti_add_assoc_to_csdev(struct coresight_device *csdev)
|
|||
cti_add_done:
|
||||
mutex_unlock(&ect_mutex);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cti_add_assoc_to_csdev);
|
||||
|
||||
/*
|
||||
* Removing the associated devices is easier.
|
||||
* A CTI will not have a value for csdev->ect_dev.
|
||||
*/
|
||||
void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
|
||||
static void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
|
||||
{
|
||||
struct cti_drvdata *ctidrv;
|
||||
struct cti_trig_con *tc;
|
||||
|
@ -606,8 +591,8 @@ void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
|
|||
ctidrv = csdev_to_cti_drvdata(csdev->ect_dev);
|
||||
ctidev = &ctidrv->ctidev;
|
||||
list_for_each_entry(tc, &ctidev->trig_cons, node) {
|
||||
if (tc->con_dev == csdev->ect_dev) {
|
||||
cti_remove_sysfs_link(tc);
|
||||
if (tc->con_dev == csdev) {
|
||||
cti_remove_sysfs_link(ctidrv, tc);
|
||||
tc->con_dev = NULL;
|
||||
break;
|
||||
}
|
||||
|
@ -616,7 +601,15 @@ void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
|
|||
}
|
||||
mutex_unlock(&ect_mutex);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cti_remove_assoc_from_csdev);
|
||||
|
||||
/*
|
||||
* Operations to add and remove associated CTI.
|
||||
* Register to coresight core driver as call back function.
|
||||
*/
|
||||
static struct cti_assoc_op cti_assoc_ops = {
|
||||
.add = cti_add_assoc_to_csdev,
|
||||
.remove = cti_remove_assoc_from_csdev
|
||||
};
|
||||
|
||||
/*
|
||||
* Update the cross references where the associated device was found
|
||||
|
@ -651,7 +644,7 @@ static void cti_remove_conn_xrefs(struct cti_drvdata *drvdata)
|
|||
if (tc->con_dev) {
|
||||
coresight_set_assoc_ectdev_mutex(tc->con_dev,
|
||||
NULL);
|
||||
cti_remove_sysfs_link(tc);
|
||||
cti_remove_sysfs_link(drvdata, tc);
|
||||
tc->con_dev = NULL;
|
||||
}
|
||||
}
|
||||
|
@ -742,7 +735,8 @@ static int cti_dying_cpu(unsigned int cpu)
|
|||
|
||||
spin_lock(&drvdata->spinlock);
|
||||
drvdata->config.hw_powered = false;
|
||||
coresight_disclaim_device(drvdata->base);
|
||||
if (drvdata->config.hw_enabled)
|
||||
coresight_disclaim_device(drvdata->base);
|
||||
spin_unlock(&drvdata->spinlock);
|
||||
return 0;
|
||||
}
|
||||
|
@ -828,7 +822,6 @@ static void cti_device_release(struct device *dev)
|
|||
struct cti_drvdata *ect_item, *ect_tmp;
|
||||
|
||||
mutex_lock(&ect_mutex);
|
||||
cti_remove_conn_xrefs(drvdata);
|
||||
cti_pm_release(drvdata);
|
||||
|
||||
/* remove from the list */
|
||||
|
@ -843,6 +836,18 @@ static void cti_device_release(struct device *dev)
|
|||
if (drvdata->csdev_release)
|
||||
drvdata->csdev_release(dev);
|
||||
}
|
||||
static int __exit cti_remove(struct amba_device *adev)
|
||||
{
|
||||
struct cti_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
mutex_lock(&ect_mutex);
|
||||
cti_remove_conn_xrefs(drvdata);
|
||||
mutex_unlock(&ect_mutex);
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int cti_probe(struct amba_device *adev, const struct amba_id *id)
|
||||
{
|
||||
|
@ -963,6 +968,8 @@ static const struct amba_id cti_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, cti_ids);
|
||||
|
||||
static struct amba_driver cti_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-cti",
|
||||
|
@ -970,6 +977,30 @@ static struct amba_driver cti_driver = {
|
|||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = cti_probe,
|
||||
.remove = cti_remove,
|
||||
.id_table = cti_ids,
|
||||
};
|
||||
builtin_amba_driver(cti_driver);
|
||||
|
||||
static int __init cti_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = amba_driver_register(&cti_driver);
|
||||
if (ret)
|
||||
pr_info("Error registering cti driver\n");
|
||||
coresight_set_cti_ops(&cti_assoc_ops);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit cti_exit(void)
|
||||
{
|
||||
coresight_remove_cti_ops();
|
||||
amba_driver_unregister(&cti_driver);
|
||||
}
|
||||
|
||||
module_init(cti_init);
|
||||
module_exit(cti_exit);
|
||||
|
||||
MODULE_AUTHOR("Mike Leach <mike.leach@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight CTI Driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -525,7 +525,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
|
|||
|
||||
cur = buf->cur;
|
||||
offset = buf->offset;
|
||||
barrier = barrier_pkt;
|
||||
barrier = coresight_barrier_pkt;
|
||||
|
||||
for (i = 0; i < to_read; i += 4) {
|
||||
buf_ptr = buf->data_pages[cur] + offset;
|
||||
|
@ -801,6 +801,21 @@ err_misc_register:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit etb_remove(struct amba_device *adev)
|
||||
{
|
||||
struct etb_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
/*
|
||||
* Since misc_open() holds a refcount on the f_ops, which is
|
||||
* etb fops in this case, device is there until last file
|
||||
* handler to this device is closed.
|
||||
*/
|
||||
misc_deregister(&drvdata->miscdev);
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
static int etb_runtime_suspend(struct device *dev)
|
||||
{
|
||||
|
@ -835,6 +850,8 @@ static const struct amba_id etb_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, etb_ids);
|
||||
|
||||
static struct amba_driver etb_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-etb10",
|
||||
|
@ -844,6 +861,13 @@ static struct amba_driver etb_driver = {
|
|||
|
||||
},
|
||||
.probe = etb_probe,
|
||||
.remove = etb_remove,
|
||||
.id_table = etb_ids,
|
||||
};
|
||||
builtin_amba_driver(etb_driver);
|
||||
|
||||
module_amba_driver(etb_driver);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Embedded Trace Buffer driver");
|
||||
MODULE_LICENSE("GPL v2");
@ -126,10 +126,10 @@ static void free_sink_buffer(struct etm_event_data *event_data)
|
|||
cpumask_t *mask = &event_data->mask;
|
||||
struct coresight_device *sink;
|
||||
|
||||
if (WARN_ON(cpumask_empty(mask)))
|
||||
if (!event_data->snk_config)
|
||||
return;
|
||||
|
||||
if (!event_data->snk_config)
|
||||
if (WARN_ON(cpumask_empty(mask)))
|
||||
return;
|
||||
|
||||
cpu = cpumask_first(mask);
|
||||
|
@ -222,8 +222,6 @@ static void *etm_setup_aux(struct perf_event *event, void **pages,
|
|||
if (event->attr.config2) {
|
||||
id = (u32)event->attr.config2;
|
||||
sink = coresight_get_sink_by_id(id);
|
||||
} else {
|
||||
sink = coresight_get_enabled_sink(true);
|
||||
}
|
||||
|
||||
mask = &event_data->mask;
|
||||
|
@ -321,6 +319,16 @@ static void etm_event_start(struct perf_event *event, int flags)
|
|||
if (!event_data)
|
||||
goto fail;
|
||||
|
||||
/*
|
||||
* Check if this ETM is allowed to trace, as decided
|
||||
* at etm_setup_aux(). This could be due to an unreachable
|
||||
* sink from this ETM. We can't do much in this case if
|
||||
* the sink was specified or hinted to the driver. For
|
||||
* now, simply don't record anything on this ETM.
|
||||
*/
|
||||
if (!cpumask_test_cpu(cpu, &event_data->mask))
|
||||
goto fail_end_stop;
|
||||
|
||||
path = etm_event_cpu_path(event_data, cpu);
|
||||
/* We need a sink, no need to continue without one */
|
||||
sink = coresight_get_sink(path);
|
||||
|
@ -517,6 +525,7 @@ int etm_perf_symlink(struct coresight_device *csdev, bool link)
|
|||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(etm_perf_symlink);
|
||||
|
||||
static ssize_t etm_perf_sink_name_show(struct device *dev,
|
||||
struct device_attribute *dattr,
|
||||
|
@ -590,7 +599,7 @@ void etm_perf_del_symlink_sink(struct coresight_device *csdev)
|
|||
csdev->ea = NULL;
|
||||
}
|
||||
|
||||
static int __init etm_perf_init(void)
|
||||
int __init etm_perf_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
|
@ -617,4 +626,8 @@ static int __init etm_perf_init(void)
|
|||
|
||||
return ret;
|
||||
}
|
||||
device_initcall(etm_perf_init);
|
||||
|
||||
void __exit etm_perf_exit(void)
|
||||
{
|
||||
perf_pmu_unregister(&etm_pmu);
|
||||
}
|
||||
|
|
|
@ -57,7 +57,7 @@ struct etm_event_data {
|
|||
struct list_head * __percpu *path;
|
||||
};
|
||||
|
||||
#ifdef CONFIG_CORESIGHT
|
||||
#if IS_ENABLED(CONFIG_CORESIGHT)
|
||||
int etm_perf_symlink(struct coresight_device *csdev, bool link);
|
||||
int etm_perf_add_symlink_sink(struct coresight_device *csdev);
|
||||
void etm_perf_del_symlink_sink(struct coresight_device *csdev);
|
||||
|
@ -82,4 +82,7 @@ static inline void *etm_perf_sink_config(struct perf_output_handle *handle)
|
|||
|
||||
#endif /* CONFIG_CORESIGHT */
|
||||
|
||||
int __init etm_perf_init(void);
|
||||
void __exit etm_perf_exit(void);
|
||||
|
||||
#endif
|
||||
|
|
|
@ -40,8 +40,6 @@
|
|||
static int boot_enable;
|
||||
module_param_named(boot_enable, boot_enable, int, S_IRUGO);
|
||||
|
||||
/* The number of ETM/PTM currently registered */
|
||||
static int etm_count;
|
||||
static struct etm_drvdata *etmdrvdata[NR_CPUS];
|
||||
|
||||
static enum cpuhp_state hp_online;
|
||||
|
@ -782,6 +780,42 @@ static void etm_init_trace_id(struct etm_drvdata *drvdata)
|
|||
drvdata->traceid = coresight_get_trace_id(drvdata->cpu);
|
||||
}
|
||||
|
||||
static int __init etm_hp_setup(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
|
||||
"arm/coresight:starting",
|
||||
etm_starting_cpu, etm_dying_cpu);
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
|
||||
"arm/coresight:online",
|
||||
etm_online_cpu, NULL);
|
||||
|
||||
/* HP dyn state ID returned in ret on success */
|
||||
if (ret > 0) {
|
||||
hp_online = ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* failed dyn state - remove others */
|
||||
cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void etm_hp_clear(void)
|
||||
{
|
||||
cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
if (hp_online) {
|
||||
cpuhp_remove_state_nocalls(hp_online);
|
||||
hp_online = 0;
|
||||
}
|
||||
}
|
||||
|
||||
static int etm_probe(struct amba_device *adev, const struct amba_id *id)
|
||||
{
|
||||
int ret;
|
||||
|
@ -823,39 +857,20 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
if (!desc.name)
|
||||
return -ENOMEM;
|
||||
|
||||
cpus_read_lock();
|
||||
etmdrvdata[drvdata->cpu] = drvdata;
|
||||
|
||||
if (smp_call_function_single(drvdata->cpu,
|
||||
etm_init_arch_data, drvdata, 1))
|
||||
dev_err(dev, "ETM arch init failed\n");
|
||||
|
||||
if (!etm_count++) {
|
||||
cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
|
||||
"arm/coresight:starting",
|
||||
etm_starting_cpu, etm_dying_cpu);
|
||||
ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
|
||||
"arm/coresight:online",
|
||||
etm_online_cpu, NULL);
|
||||
if (ret < 0)
|
||||
goto err_arch_supported;
|
||||
hp_online = ret;
|
||||
}
|
||||
cpus_read_unlock();
|
||||
|
||||
if (etm_arch_supported(drvdata->arch) == false) {
|
||||
ret = -EINVAL;
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (etm_arch_supported(drvdata->arch) == false)
|
||||
return -EINVAL;
|
||||
|
||||
etm_init_trace_id(drvdata);
|
||||
etm_set_default(&drvdata->config);
|
||||
|
||||
pdata = coresight_get_platform_data(dev);
|
||||
if (IS_ERR(pdata)) {
|
||||
ret = PTR_ERR(pdata);
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (IS_ERR(pdata))
|
||||
return PTR_ERR(pdata);
|
||||
|
||||
adev->dev.platform_data = pdata;
|
||||
|
||||
desc.type = CORESIGHT_DEV_TYPE_SOURCE;
|
||||
|
@ -865,17 +880,17 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
desc.dev = dev;
|
||||
desc.groups = coresight_etm_groups;
|
||||
drvdata->csdev = coresight_register(&desc);
|
||||
if (IS_ERR(drvdata->csdev)) {
|
||||
ret = PTR_ERR(drvdata->csdev);
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (IS_ERR(drvdata->csdev))
|
||||
return PTR_ERR(drvdata->csdev);
|
||||
|
||||
ret = etm_perf_symlink(drvdata->csdev, true);
|
||||
if (ret) {
|
||||
coresight_unregister(drvdata->csdev);
|
||||
goto err_arch_supported;
|
||||
return ret;
|
||||
}
|
||||
|
||||
etmdrvdata[drvdata->cpu] = drvdata;
|
||||
|
||||
pm_runtime_put(&adev->dev);
|
||||
dev_info(&drvdata->csdev->dev,
|
||||
"%s initialized\n", (char *)coresight_get_uci_data(id));
|
||||
|
@ -885,14 +900,40 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
err_arch_supported:
|
||||
if (--etm_count == 0) {
|
||||
cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
if (hp_online)
|
||||
cpuhp_remove_state_nocalls(hp_online);
|
||||
}
|
||||
return ret;
|
||||
static void __exit clear_etmdrvdata(void *info)
|
||||
{
|
||||
int cpu = *(int *)info;
|
||||
|
||||
etmdrvdata[cpu] = NULL;
|
||||
}
|
||||
|
||||
static int __exit etm_remove(struct amba_device *adev)
|
||||
{
|
||||
struct etm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
etm_perf_symlink(drvdata->csdev, false);
|
||||
|
||||
/*
|
||||
* Taking hotplug lock here to avoid racing between etm_remove and
|
||||
* CPU hotplug call backs.
|
||||
*/
|
||||
cpus_read_lock();
|
||||
/*
|
||||
* The readers for etmdrvdata[] are CPU hotplug call backs
|
||||
* and PM notification call backs. Change etmdrvdata[i] on
|
||||
* CPU i ensures these call backs has consistent view
|
||||
* inside one call back function.
|
||||
*/
|
||||
if (smp_call_function_single(drvdata->cpu, clear_etmdrvdata, &drvdata->cpu, 1))
|
||||
etmdrvdata[drvdata->cpu] = NULL;
|
||||
|
||||
cpus_read_unlock();
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
|
@ -937,6 +978,8 @@ static const struct amba_id etm_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, etm_ids);
|
||||
|
||||
static struct amba_driver etm_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-etm3x",
|
||||
|
@ -945,6 +988,39 @@ static struct amba_driver etm_driver = {
|
|||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = etm_probe,
|
||||
.remove = etm_remove,
|
||||
.id_table = etm_ids,
|
||||
};
|
||||
builtin_amba_driver(etm_driver);
|
||||
|
||||
static int __init etm_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = etm_hp_setup();
|
||||
|
||||
/* etm_hp_setup() does its own cleanup - exit on error */
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = amba_driver_register(&etm_driver);
|
||||
if (ret) {
|
||||
pr_err("Error registering etm3x driver\n");
|
||||
etm_hp_clear();
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit etm_exit(void)
|
||||
{
|
||||
amba_driver_unregister(&etm_driver);
|
||||
etm_hp_clear();
|
||||
}
|
||||
|
||||
module_init(etm_init);
|
||||
module_exit(etm_exit);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Program Flow Trace driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -48,12 +48,11 @@ module_param(pm_save_enable, int, 0444);
|
|||
MODULE_PARM_DESC(pm_save_enable,
|
||||
"Save/restore state on power down: 1 = never, 2 = self-hosted");
|
||||
|
||||
/* The number of ETMv4 currently registered */
|
||||
static int etm4_count;
|
||||
static struct etmv4_drvdata *etmdrvdata[NR_CPUS];
|
||||
static void etm4_set_default_config(struct etmv4_config *config);
|
||||
static int etm4_set_event_filters(struct etmv4_drvdata *drvdata,
|
||||
struct perf_event *event);
|
||||
static u64 etm4_get_access_type(struct etmv4_config *config);
|
||||
|
||||
static enum cpuhp_state hp_online;
|
||||
|
||||
|
@ -743,8 +742,14 @@ static void etm4_init_arch_data(void *info)
|
|||
	 * The number of resource pairs conveyed by the HW starts at 0, i.e. a
	 * value of 0x0 indicates 1 resource pair, 0x1 indicates two and so on.
	 * As such add 1 to the value of NUMRSPAIR for a better representation.
	 *
	 * For ETM v4.3 and later, 0x0 means 0, and no pairs are available -
	 * the default TRUE and FALSE resource selectors are omitted.
	 * Otherwise for values 0x1 and above the number is N + 1 as per v4.2.
	 */
	drvdata->nr_resource = BMVAL(etmidr4, 16, 19) + 1;
	drvdata->nr_resource = BMVAL(etmidr4, 16, 19);
	if ((drvdata->arch < ETM4X_ARCH_4V3) || (drvdata->nr_resource > 0))
		drvdata->nr_resource += 1;
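
/*
 * Editorial sketch, not part of the patch: the rule above expressed as a
 * standalone helper, assuming 'arch' is the driver's ETM architecture
 * version (compared against ETM4X_ARCH_4V3) and 'numrspair' is the raw
 * NUMRSPAIR field read from TRCIDR4[19:16].
 */
static unsigned int nr_resource_pairs(u8 arch, u32 numrspair)
{
	if (arch >= ETM4X_ARCH_4V3 && numrspair == 0)
		return 0;		/* TRUE/FALSE selectors omitted */
	return numrspair + 1;		/* 0 still means one pair before v4.3 */
}
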
|
||||
/*
|
||||
* NUMSSCC, bits[23:20] the number of single-shot
|
||||
* comparator control for tracing. Read any status regs as these
|
||||
|
@ -785,6 +790,22 @@ static void etm4_init_arch_data(void *info)
|
|||
CS_LOCK(drvdata->base);
|
||||
}
|
||||
|
||||
/* Set ELx trace filter access in the TRCVICTLR register */
|
||||
static void etm4_set_victlr_access(struct etmv4_config *config)
|
||||
{
|
||||
u64 access_type;
|
||||
|
||||
config->vinst_ctrl &= ~(ETM_EXLEVEL_S_VICTLR_MASK | ETM_EXLEVEL_NS_VICTLR_MASK);
|
||||
|
||||
/*
|
||||
* TRCVICTLR::EXLEVEL_NS:EXLEVELS: Set kernel / user filtering
|
||||
* bits in vinst_ctrl, same bit pattern as TRCACATRn values returned by
|
||||
* etm4_get_access_type() but with a relative shift in this register.
|
||||
*/
|
||||
access_type = etm4_get_access_type(config) << ETM_EXLEVEL_LSHIFT_TRCVICTLR;
|
||||
config->vinst_ctrl |= (u32)access_type;
|
||||
}
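
/*
 * Editorial sketch, not part of the patch: the helper above reuses the
 * TRCACATRn-style exception-level access bits and only shifts them into
 * place for TRCVICTLR, i.e. the mapping amounts to:
 */
static inline u32 exlevel_to_victlr(u64 access_type)
{
	return (u32)(access_type << ETM_EXLEVEL_LSHIFT_TRCVICTLR);
}
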
|
||||
|
||||
static void etm4_set_default_config(struct etmv4_config *config)
|
||||
{
|
||||
/* disable all events tracing */
|
||||
|
@ -802,6 +823,9 @@ static void etm4_set_default_config(struct etmv4_config *config)
|
|||
|
||||
/* TRCVICTLR::EVENT = 0x01, select the always on logic */
|
||||
config->vinst_ctrl = BIT(0);
|
||||
|
||||
/* TRCVICTLR::EXLEVEL_NS:EXLEVELS: Set kernel / user filtering */
|
||||
etm4_set_victlr_access(config);
|
||||
}
|
||||
|
||||
static u64 etm4_get_ns_access_type(struct etmv4_config *config)
|
||||
|
@ -1066,7 +1090,7 @@ out:
|
|||
|
||||
void etm4_config_trace_mode(struct etmv4_config *config)
|
||||
{
|
||||
u32 addr_acc, mode;
|
||||
u32 mode;
|
||||
|
||||
mode = config->mode;
|
||||
mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER);
|
||||
|
@ -1078,15 +1102,7 @@ void etm4_config_trace_mode(struct etmv4_config *config)
|
|||
if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER))
|
||||
return;
|
||||
|
||||
addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP];
|
||||
/* clear default config */
|
||||
addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS |
|
||||
ETM_EXLEVEL_NS_HYP);
|
||||
|
||||
addr_acc |= etm4_get_ns_access_type(config);
|
||||
|
||||
config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc;
|
||||
config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;
|
||||
etm4_set_victlr_access(config);
|
||||
}
|
||||
|
||||
static int etm4_online_cpu(unsigned int cpu)
|
||||
|
@ -1183,7 +1199,7 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
|
|||
state->trcvdsacctlr = readl(drvdata->base + TRCVDSACCTLR);
|
||||
state->trcvdarcctlr = readl(drvdata->base + TRCVDARCCTLR);
|
||||
|
||||
for (i = 0; i < drvdata->nrseqstate; i++)
|
||||
for (i = 0; i < drvdata->nrseqstate - 1; i++)
|
||||
state->trcseqevr[i] = readl(drvdata->base + TRCSEQEVRn(i));
|
||||
|
||||
state->trcseqrstevr = readl(drvdata->base + TRCSEQRSTEVR);
|
||||
|
@ -1227,7 +1243,7 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
|
|||
state->trccidcctlr1 = readl(drvdata->base + TRCCIDCCTLR1);
|
||||
|
||||
state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR0);
|
||||
state->trcvmidcctlr0 = readl(drvdata->base + TRCVMIDCCTLR1);
|
||||
state->trcvmidcctlr1 = readl(drvdata->base + TRCVMIDCCTLR1);
|
||||
|
||||
state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR);
|
||||
|
||||
|
@ -1288,7 +1304,7 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
|
|||
writel_relaxed(state->trcvdsacctlr, drvdata->base + TRCVDSACCTLR);
|
||||
writel_relaxed(state->trcvdarcctlr, drvdata->base + TRCVDARCCTLR);
|
||||
|
||||
for (i = 0; i < drvdata->nrseqstate; i++)
|
||||
for (i = 0; i < drvdata->nrseqstate - 1; i++)
|
||||
writel_relaxed(state->trcseqevr[i],
|
||||
drvdata->base + TRCSEQEVRn(i));
|
||||
|
||||
|
@ -1337,7 +1353,7 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
|
|||
writel_relaxed(state->trccidcctlr1, drvdata->base + TRCCIDCCTLR1);
|
||||
|
||||
writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR0);
|
||||
writel_relaxed(state->trcvmidcctlr0, drvdata->base + TRCVMIDCCTLR1);
|
||||
writel_relaxed(state->trcvmidcctlr1, drvdata->base + TRCVMIDCCTLR1);
|
||||
|
||||
writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET);
|
||||
|
||||
|
@ -1397,28 +1413,25 @@ static struct notifier_block etm4_cpu_pm_nb = {
|
|||
.notifier_call = etm4_cpu_pm_notify,
|
||||
};
|
||||
|
||||
/* Setup PM. Called with cpus locked. Deals with error conditions and counts */
|
||||
static int etm4_pm_setup_cpuslocked(void)
|
||||
/* Setup PM. Deals with error conditions and counts */
|
||||
static int __init etm4_pm_setup(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (etm4_count++)
|
||||
return 0;
|
||||
|
||||
ret = cpu_pm_register_notifier(&etm4_cpu_pm_nb);
|
||||
if (ret)
|
||||
goto reduce_count;
|
||||
return ret;
|
||||
|
||||
ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
|
||||
"arm/coresight4:starting",
|
||||
etm4_starting_cpu, etm4_dying_cpu);
|
||||
ret = cpuhp_setup_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING,
|
||||
"arm/coresight4:starting",
|
||||
etm4_starting_cpu, etm4_dying_cpu);
|
||||
|
||||
if (ret)
|
||||
goto unregister_notifier;
|
||||
|
||||
ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
|
||||
"arm/coresight4:online",
|
||||
etm4_online_cpu, NULL);
|
||||
ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
|
||||
"arm/coresight4:online",
|
||||
etm4_online_cpu, NULL);
|
||||
|
||||
/* HP dyn state ID returned in ret on success */
|
||||
if (ret > 0) {
|
||||
|
@ -1427,21 +1440,15 @@ static int etm4_pm_setup_cpuslocked(void)
|
|||
}
|
||||
|
||||
/* failed dyn state - remove others */
|
||||
cpuhp_remove_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
|
||||
unregister_notifier:
|
||||
cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
|
||||
|
||||
reduce_count:
|
||||
--etm4_count;
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void etm4_pm_clear(void)
|
||||
{
|
||||
if (--etm4_count != 0)
|
||||
return;
|
||||
|
||||
cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);
|
||||
cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);
|
||||
if (hp_online) {
|
||||
|
@ -1497,35 +1504,20 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
if (!desc.name)
|
||||
return -ENOMEM;
|
||||
|
||||
cpus_read_lock();
|
||||
etmdrvdata[drvdata->cpu] = drvdata;
|
||||
|
||||
if (smp_call_function_single(drvdata->cpu,
|
||||
etm4_init_arch_data, drvdata, 1))
|
||||
dev_err(dev, "ETM arch init failed\n");
|
||||
|
||||
ret = etm4_pm_setup_cpuslocked();
|
||||
cpus_read_unlock();
|
||||
|
||||
/* etm4_pm_setup_cpuslocked() does its own cleanup - exit on error */
|
||||
if (ret) {
|
||||
etmdrvdata[drvdata->cpu] = NULL;
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (etm4_arch_supported(drvdata->arch) == false) {
|
||||
ret = -EINVAL;
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (etm4_arch_supported(drvdata->arch) == false)
|
||||
return -EINVAL;
|
||||
|
||||
etm4_init_trace_id(drvdata);
|
||||
etm4_set_default(&drvdata->config);
|
||||
|
||||
pdata = coresight_get_platform_data(dev);
|
||||
if (IS_ERR(pdata)) {
|
||||
ret = PTR_ERR(pdata);
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (IS_ERR(pdata))
|
||||
return PTR_ERR(pdata);
|
||||
|
||||
adev->dev.platform_data = pdata;
|
||||
|
||||
desc.type = CORESIGHT_DEV_TYPE_SOURCE;
|
||||
|
@ -1535,17 +1527,17 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
desc.dev = dev;
|
||||
desc.groups = coresight_etmv4_groups;
|
||||
drvdata->csdev = coresight_register(&desc);
|
||||
if (IS_ERR(drvdata->csdev)) {
|
||||
ret = PTR_ERR(drvdata->csdev);
|
||||
goto err_arch_supported;
|
||||
}
|
||||
if (IS_ERR(drvdata->csdev))
|
||||
return PTR_ERR(drvdata->csdev);
|
||||
|
||||
ret = etm_perf_symlink(drvdata->csdev, true);
|
||||
if (ret) {
|
||||
coresight_unregister(drvdata->csdev);
|
||||
goto err_arch_supported;
|
||||
return ret;
|
||||
}
|
||||
|
||||
etmdrvdata[drvdata->cpu] = drvdata;
|
||||
|
||||
pm_runtime_put(&adev->dev);
|
||||
dev_info(&drvdata->csdev->dev, "CPU%d: ETM v%d.%d initialized\n",
|
||||
drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);
|
||||
|
@ -1556,11 +1548,6 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_arch_supported:
|
||||
etmdrvdata[drvdata->cpu] = NULL;
|
||||
etm4_pm_clear();
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct amba_cs_uci_id uci_id_etm4[] = {
|
||||
|
@ -1572,6 +1559,40 @@ static struct amba_cs_uci_id uci_id_etm4[] = {
|
|||
}
|
||||
};
|
||||
|
||||
static void __exit clear_etmdrvdata(void *info)
|
||||
{
|
||||
int cpu = *(int *)info;
|
||||
|
||||
etmdrvdata[cpu] = NULL;
|
||||
}
|
||||
|
||||
static int __exit etm4_remove(struct amba_device *adev)
|
||||
{
|
||||
struct etmv4_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
etm_perf_symlink(drvdata->csdev, false);
|
||||
|
||||
/*
|
||||
* Taking hotplug lock here to avoid racing between etm4_remove and
|
||||
* CPU hotplug call backs.
|
||||
*/
|
||||
cpus_read_lock();
|
||||
/*
|
||||
* The readers for etmdrvdata[] are CPU hotplug call backs
|
||||
* and PM notification call backs. Change etmdrvdata[i] on
|
||||
* CPU i ensures these call backs has consistent view
|
||||
* inside one call back function.
|
||||
*/
|
||||
if (smp_call_function_single(drvdata->cpu, clear_etmdrvdata, &drvdata->cpu, 1))
|
||||
etmdrvdata[drvdata->cpu] = NULL;
|
||||
|
||||
cpus_read_unlock();
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct amba_id etm4_ids[] = {
|
||||
CS_AMBA_ID(0x000bb95d), /* Cortex-A53 */
|
||||
CS_AMBA_ID(0x000bb95e), /* Cortex-A57 */
|
||||
|
@ -1586,15 +1607,53 @@ static const struct amba_id etm4_ids[] = {
|
|||
CS_AMBA_UCI_ID(0x000bb805, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A55 */
|
||||
CS_AMBA_UCI_ID(0x000bb804, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A76 */
|
||||
CS_AMBA_UCI_ID(0x000cc0af, uci_id_etm4),/* Marvell ThunderX2 */
|
||||
CS_AMBA_UCI_ID(0x000b6d01, uci_id_etm4),/* HiSilicon-Hip08 */
|
||||
CS_AMBA_UCI_ID(0x000b6d02, uci_id_etm4),/* HiSilicon-Hip09 */
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, etm4_ids);
|
||||
|
||||
static struct amba_driver etm4x_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-etm4x",
|
||||
.owner = THIS_MODULE,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = etm4_probe,
|
||||
.remove = etm4_remove,
|
||||
.id_table = etm4_ids,
|
||||
};
|
||||
builtin_amba_driver(etm4x_driver);
|
||||
|
||||
static int __init etm4x_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = etm4_pm_setup();
|
||||
|
||||
/* etm4_pm_setup() does its own cleanup - exit on error */
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = amba_driver_register(&etm4x_driver);
|
||||
if (ret) {
|
||||
pr_err("Error registering etm4x driver\n");
|
||||
etm4_pm_clear();
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit etm4x_exit(void)
|
||||
{
|
||||
amba_driver_unregister(&etm4x_driver);
|
||||
etm4_pm_clear();
|
||||
}
|
||||
|
||||
module_init(etm4x_init);
|
||||
module_exit(etm4x_exit);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Program Flow Trace v4.x driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -206,7 +206,7 @@ static ssize_t reset_store(struct device *dev,
|
|||
* each trace run.
|
||||
*/
|
||||
config->vinst_ctrl = BIT(0);
|
||||
if (drvdata->nr_addr_cmp == true) {
|
||||
if (drvdata->nr_addr_cmp > 0) {
|
||||
config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
|
||||
/* SSSTATUS, bit[9] */
|
||||
config->vinst_ctrl |= BIT(9);
|
||||
|
@ -236,7 +236,7 @@ static ssize_t reset_store(struct device *dev,
|
|||
}
|
||||
|
||||
config->res_idx = 0x0;
|
||||
for (i = 0; i < drvdata->nr_resource; i++)
|
||||
for (i = 2; i < 2 * drvdata->nr_resource; i++)
|
||||
config->res_ctrl[i] = 0x0;
|
||||
|
||||
config->ss_idx = 0x0;
|
||||
|
@ -1663,8 +1663,11 @@ static ssize_t res_idx_store(struct device *dev,
|
|||
|
||||
if (kstrtoul(buf, 16, &val))
|
||||
return -EINVAL;
|
||||
	/* Resource selector pair 0 is always implemented and reserved */
	if ((val == 0) || (val >= drvdata->nr_resource))
	/*
	 * Resource selector pair 0 is always implemented and reserved,
	 * i.e. an idx of 0 or 1 is illegal.
	 */
	if ((val < 2) || (val >= 2 * drvdata->nr_resource))
		return -EINVAL;
|
||||
|
||||
/*
|
||||
|
|
|
@ -192,11 +192,17 @@
|
|||
#define ETM_EXLEVEL_NS_HYP BIT(14)
|
||||
#define ETM_EXLEVEL_NS_NA BIT(15)
|
||||
|
||||
/* access level control in TRCVICTLR - same bits as TRCACATRn but shifted */
|
||||
#define ETM_EXLEVEL_LSHIFT_TRCVICTLR 8
|
||||
|
||||
/* secure / non secure masks - TRCVICTLR, IDR3 */
|
||||
#define ETM_EXLEVEL_S_VICTLR_MASK GENMASK(19, 16)
|
||||
/* NS MON (EL3) mode never implemented */
|
||||
#define ETM_EXLEVEL_NS_VICTLR_MASK GENMASK(22, 20)
|
||||
|
||||
/* Interpretation of resource numbers change at ETM v4.3 architecture */
|
||||
#define ETM4X_ARCH_4V3 0x43
|
||||
|
||||
/**
|
||||
* struct etmv4_config - configuration information related to an ETMv4
|
||||
* @mode: Controls various modes supported by this ETM.
|
||||
|
|
|
@ -274,6 +274,15 @@ out_disable_clk:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit funnel_remove(struct device *dev)
|
||||
{
|
||||
struct funnel_drvdata *drvdata = dev_get_drvdata(dev);
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
static int funnel_runtime_suspend(struct device *dev)
|
||||
{
|
||||
|
@ -319,29 +328,41 @@ static int static_funnel_probe(struct platform_device *pdev)
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit static_funnel_remove(struct platform_device *pdev)
|
||||
{
|
||||
funnel_remove(&pdev->dev);
|
||||
pm_runtime_disable(&pdev->dev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct of_device_id static_funnel_match[] = {
|
||||
{.compatible = "arm,coresight-static-funnel"},
|
||||
{}
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(of, static_funnel_match);
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static const struct acpi_device_id static_funnel_ids[] = {
|
||||
{"ARMHC9FE", 0},
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(acpi, static_funnel_ids);
|
||||
#endif
|
||||
|
||||
static struct platform_driver static_funnel_driver = {
|
||||
.probe = static_funnel_probe,
|
||||
.remove = static_funnel_remove,
|
||||
.driver = {
|
||||
.name = "coresight-static-funnel",
|
||||
.owner = THIS_MODULE,
|
||||
.of_match_table = static_funnel_match,
|
||||
.acpi_match_table = ACPI_PTR(static_funnel_ids),
|
||||
.pm = &funnel_dev_pm_ops,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
||||
};
|
||||
builtin_platform_driver(static_funnel_driver);
|
||||
|
||||
static int dynamic_funnel_probe(struct amba_device *adev,
|
||||
const struct amba_id *id)
|
||||
|
@ -349,6 +370,11 @@ static int dynamic_funnel_probe(struct amba_device *adev,
|
|||
return funnel_probe(&adev->dev, &adev->res);
|
||||
}
|
||||
|
||||
static int __exit dynamic_funnel_remove(struct amba_device *adev)
|
||||
{
|
||||
return funnel_remove(&adev->dev);
|
||||
}
|
||||
|
||||
static const struct amba_id dynamic_funnel_ids[] = {
|
||||
{
|
||||
.id = 0x000bb908,
|
||||
|
@ -362,6 +388,8 @@ static const struct amba_id dynamic_funnel_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, dynamic_funnel_ids);
|
||||
|
||||
static struct amba_driver dynamic_funnel_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-dynamic-funnel",
|
||||
|
@ -370,6 +398,39 @@ static struct amba_driver dynamic_funnel_driver = {
|
|||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = dynamic_funnel_probe,
|
||||
.remove = dynamic_funnel_remove,
|
||||
.id_table = dynamic_funnel_ids,
|
||||
};
|
||||
builtin_amba_driver(dynamic_funnel_driver);
|
||||
|
||||
static int __init funnel_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = platform_driver_register(&static_funnel_driver);
|
||||
if (ret) {
|
||||
pr_info("Error registering platform driver\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = amba_driver_register(&dynamic_funnel_driver);
|
||||
if (ret) {
|
||||
pr_info("Error registering amba driver\n");
|
||||
platform_driver_unregister(&static_funnel_driver);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit funnel_exit(void)
|
||||
{
|
||||
platform_driver_unregister(&static_funnel_driver);
|
||||
amba_driver_unregister(&dynamic_funnel_driver);
|
||||
}
|
||||
|
||||
module_init(funnel_init);
|
||||
module_exit(funnel_exit);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Funnel Driver");
|
||||
MODULE_LICENSE("GPL v2");
@ -75,6 +75,7 @@ coresight_find_csdev_by_fwnode(struct fwnode_handle *r_fwnode)
|
|||
}
|
||||
return csdev;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_find_csdev_by_fwnode);
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
|
||||
|
@ -711,11 +712,11 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
|
|||
return dir;
|
||||
|
||||
if (dir == ACPI_CORESIGHT_LINK_MASTER) {
|
||||
if (ptr->outport > pdata->nr_outport)
|
||||
pdata->nr_outport = ptr->outport;
|
||||
if (ptr->outport >= pdata->nr_outport)
|
||||
pdata->nr_outport = ptr->outport + 1;
|
||||
ptr++;
|
||||
} else {
|
||||
WARN_ON(pdata->nr_inport == ptr->child_port);
|
||||
WARN_ON(pdata->nr_inport == ptr->child_port + 1);
|
||||
/*
|
||||
* We do not track input port connections for a device.
|
||||
* However we need the highest port number described,
|
||||
|
@ -723,8 +724,8 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
|
|||
* record for an output connection. Hence, do not move
|
||||
* the ptr for input connections
|
||||
*/
|
||||
if (ptr->child_port > pdata->nr_inport)
|
||||
pdata->nr_inport = ptr->child_port;
|
||||
if (ptr->child_port >= pdata->nr_inport)
|
||||
pdata->nr_inport = ptr->child_port + 1;
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -66,8 +66,8 @@ static DEVICE_ATTR_RO(name)
|
|||
#define coresight_simple_reg64(type, name, lo_off, hi_off) \
|
||||
__coresight_simple_func(type, NULL, name, lo_off, hi_off)
|
||||
|
||||
extern const u32 barrier_pkt[4];
|
||||
#define CORESIGHT_BARRIER_PKT_SIZE (sizeof(barrier_pkt))
|
||||
extern const u32 coresight_barrier_pkt[4];
|
||||
#define CORESIGHT_BARRIER_PKT_SIZE (sizeof(coresight_barrier_pkt))
|
||||
|
||||
enum etm_addr_type {
|
||||
ETM_ADDR_TYPE_NONE,
|
||||
|
@ -104,10 +104,9 @@ struct cs_buffers {
|
|||
static inline void coresight_insert_barrier_packet(void *buf)
|
||||
{
|
||||
if (buf)
|
||||
memcpy(buf, barrier_pkt, CORESIGHT_BARRIER_PKT_SIZE);
|
||||
memcpy(buf, coresight_barrier_pkt, CORESIGHT_BARRIER_PKT_SIZE);
|
||||
}
|
||||
|
||||
|
||||
static inline void CS_LOCK(void __iomem *addr)
|
||||
{
|
||||
do {
|
||||
|
@ -148,7 +147,8 @@ static inline void coresight_write_reg_pair(void __iomem *addr, u64 val,
|
|||
void coresight_disable_path(struct list_head *path);
|
||||
int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data);
|
||||
struct coresight_device *coresight_get_sink(struct list_head *path);
|
||||
struct coresight_device *coresight_get_enabled_sink(bool reset);
|
||||
struct coresight_device *
|
||||
coresight_get_enabled_sink(struct coresight_device *source);
|
||||
struct coresight_device *coresight_get_sink_by_id(u32 id);
|
||||
struct coresight_device *
|
||||
coresight_find_default_sink(struct coresight_device *csdev);
|
||||
|
@ -165,7 +165,7 @@ int coresight_make_links(struct coresight_device *orig,
|
|||
void coresight_remove_links(struct coresight_device *orig,
|
||||
struct coresight_connection *conn);
|
||||
|
||||
#ifdef CONFIG_CORESIGHT_SOURCE_ETM3X
|
||||
#if IS_ENABLED(CONFIG_CORESIGHT_SOURCE_ETM3X)
|
||||
extern int etm_readl_cp14(u32 off, unsigned int *val);
|
||||
extern int etm_writel_cp14(u32 off, u32 val);
|
||||
#else
|
||||
|
@ -173,15 +173,13 @@ static inline int etm_readl_cp14(u32 off, unsigned int *val) { return 0; }
|
|||
static inline int etm_writel_cp14(u32 off, u32 val) { return 0; }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CORESIGHT_CTI
|
||||
extern void cti_add_assoc_to_csdev(struct coresight_device *csdev);
|
||||
extern void cti_remove_assoc_from_csdev(struct coresight_device *csdev);
|
||||
struct cti_assoc_op {
|
||||
void (*add)(struct coresight_device *csdev);
|
||||
void (*remove)(struct coresight_device *csdev);
|
||||
};
|
||||
|
||||
#else
|
||||
static inline void cti_add_assoc_to_csdev(struct coresight_device *csdev) {}
|
||||
static inline void
|
||||
cti_remove_assoc_from_csdev(struct coresight_device *csdev) {}
|
||||
#endif
|
||||
extern void coresight_set_cti_ops(const struct cti_assoc_op *cti_op);
|
||||
extern void coresight_remove_cti_ops(void);
|
||||
|
||||
/*
|
||||
* Macros and inline functions to handle CoreSight UCI data and driver
@ -291,6 +291,14 @@ out_disable_clk:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit replicator_remove(struct device *dev)
|
||||
{
|
||||
struct replicator_drvdata *drvdata = dev_get_drvdata(dev);
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int static_replicator_probe(struct platform_device *pdev)
|
||||
{
|
||||
int ret;
|
||||
|
@ -310,6 +318,13 @@ static int static_replicator_probe(struct platform_device *pdev)
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit static_replicator_remove(struct platform_device *pdev)
|
||||
{
|
||||
replicator_remove(&pdev->dev);
|
||||
pm_runtime_disable(&pdev->dev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
static int replicator_runtime_suspend(struct device *dev)
|
||||
{
|
||||
|
@ -343,24 +358,29 @@ static const struct of_device_id static_replicator_match[] = {
|
|||
{}
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(of, static_replicator_match);
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static const struct acpi_device_id static_replicator_acpi_ids[] = {
|
||||
{"ARMHC985", 0}, /* ARM CoreSight Static Replicator */
|
||||
{}
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(acpi, static_replicator_acpi_ids);
|
||||
#endif
|
||||
|
||||
static struct platform_driver static_replicator_driver = {
|
||||
.probe = static_replicator_probe,
|
||||
.remove = static_replicator_remove,
|
||||
.driver = {
|
||||
.name = "coresight-static-replicator",
|
||||
.owner = THIS_MODULE,
|
||||
.of_match_table = of_match_ptr(static_replicator_match),
|
||||
.acpi_match_table = ACPI_PTR(static_replicator_acpi_ids),
|
||||
.pm = &replicator_dev_pm_ops,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
||||
};
|
||||
builtin_platform_driver(static_replicator_driver);
|
||||
|
||||
static int dynamic_replicator_probe(struct amba_device *adev,
|
||||
const struct amba_id *id)
|
||||
|
@ -368,19 +388,60 @@ static int dynamic_replicator_probe(struct amba_device *adev,
|
|||
return replicator_probe(&adev->dev, &adev->res);
|
||||
}
|
||||
|
||||
static int __exit dynamic_replicator_remove(struct amba_device *adev)
|
||||
{
|
||||
return replicator_remove(&adev->dev);
|
||||
}
|
||||
|
||||
static const struct amba_id dynamic_replicator_ids[] = {
|
||||
CS_AMBA_ID(0x000bb909),
|
||||
CS_AMBA_ID(0x000bb9ec), /* Coresight SoC-600 */
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, dynamic_replicator_ids);
|
||||
|
||||
static struct amba_driver dynamic_replicator_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-dynamic-replicator",
|
||||
.pm = &replicator_dev_pm_ops,
|
||||
.owner = THIS_MODULE,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = dynamic_replicator_probe,
|
||||
.remove = dynamic_replicator_remove,
|
||||
.id_table = dynamic_replicator_ids,
|
||||
};
|
||||
builtin_amba_driver(dynamic_replicator_driver);
|
||||
|
||||
static int __init replicator_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = platform_driver_register(&static_replicator_driver);
|
||||
if (ret) {
|
||||
pr_info("Error registering platform driver\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = amba_driver_register(&dynamic_replicator_driver);
|
||||
if (ret) {
|
||||
pr_info("Error registering amba driver\n");
|
||||
platform_driver_unregister(&static_replicator_driver);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit replicator_exit(void)
|
||||
{
|
||||
platform_driver_unregister(&static_replicator_driver);
|
||||
amba_driver_unregister(&dynamic_replicator_driver);
|
||||
}
|
||||
|
||||
module_init(replicator_init);
|
||||
module_exit(replicator_exit);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Replicator Driver");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@ -412,6 +412,7 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
void __iomem *ch_addr;
|
||||
struct stm_drvdata *drvdata = container_of(stm_data,
|
||||
struct stm_drvdata, stm);
|
||||
unsigned int stm_flags;
|
||||
|
||||
if (!(drvdata && local_read(&drvdata->mode)))
|
||||
return -EACCES;
|
||||
|
@ -421,8 +422,9 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
|
||||
ch_addr = stm_channel_addr(drvdata, channel);
|
||||
|
||||
flags = (flags == STP_PACKET_TIMESTAMPED) ? STM_FLAG_TIMESTAMPED : 0;
|
||||
flags |= test_bit(channel, drvdata->chs.guaranteed) ?
|
||||
stm_flags = (flags & STP_PACKET_TIMESTAMPED) ?
|
||||
STM_FLAG_TIMESTAMPED : 0;
|
||||
stm_flags |= test_bit(channel, drvdata->chs.guaranteed) ?
|
||||
STM_FLAG_GUARANTEED : 0;
|
||||
|
||||
if (size > drvdata->write_bytes)
|
||||
|
@ -432,7 +434,7 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
|
||||
switch (packet) {
|
||||
case STP_PACKET_FLAG:
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_FLAG, flags);
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_FLAG, stm_flags);
|
||||
|
||||
/*
|
||||
* The generic STM core sets a size of '0' on flag packets.
|
||||
|
@ -444,7 +446,8 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
break;
|
||||
|
||||
case STP_PACKET_DATA:
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_DATA, flags);
|
||||
stm_flags |= (flags & STP_PACKET_MARKED) ? STM_FLAG_MARKED : 0;
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_DATA, stm_flags);
|
||||
stm_send(ch_addr, payload, size,
|
||||
drvdata->write_bytes);
|
||||
break;
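
/*
 * Editorial sketch, not part of the patch: the flag translation performed
 * above, pulled out into a helper for clarity. Only the generic STP flags
 * shown in this hunk are handled here; STM_FLAG_MARKED is added separately
 * for data packets.
 */
static unsigned int stp_to_stm_flags(unsigned int stp_flags, bool guaranteed)
{
	unsigned int hw_flags = 0;

	if (stp_flags & STP_PACKET_TIMESTAMPED)
		hw_flags |= STM_FLAG_TIMESTAMPED;
	if (guaranteed)
		hw_flags |= STM_FLAG_GUARANTEED;
	return hw_flags;
}
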
|
||||
|
@ -948,6 +951,17 @@ stm_unregister:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __exit stm_remove(struct amba_device *adev)
|
||||
{
|
||||
struct stm_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
stm_unregister_device(&drvdata->stm);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM
|
||||
static int stm_runtime_suspend(struct device *dev)
|
||||
{
|
||||
|
@ -980,6 +994,8 @@ static const struct amba_id stm_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, stm_ids);
|
||||
|
||||
static struct amba_driver stm_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-stm",
|
||||
|
@ -988,7 +1004,12 @@ static struct amba_driver stm_driver = {
|
|||
.suppress_bind_attrs = true,
|
||||
},
|
||||
.probe = stm_probe,
|
||||
.remove = stm_remove,
|
||||
.id_table = stm_ids,
|
||||
};
|
||||
|
||||
builtin_amba_driver(stm_driver);
|
||||
module_amba_driver(stm_driver);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight System Trace Macrocell driver");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@ -102,6 +102,7 @@ int coresight_add_sysfs_link(struct coresight_sysfs_link *info)
|
|||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_add_sysfs_link);
|
||||
|
||||
void coresight_remove_sysfs_link(struct coresight_sysfs_link *info)
|
||||
{
|
||||
|
@ -122,6 +123,7 @@ void coresight_remove_sysfs_link(struct coresight_sysfs_link *info)
|
|||
info->orig->nr_links--;
|
||||
info->target->nr_links--;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(coresight_remove_sysfs_link);
|
||||
|
||||
/*
|
||||
* coresight_make_links: Make a link for a connection from a @orig
|
||||
|
|
|
@ -559,6 +559,21 @@ out:
|
|||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
}
|
||||
|
||||
static int __exit tmc_remove(struct amba_device *adev)
|
||||
{
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(&adev->dev);
|
||||
|
||||
/*
|
||||
* Since misc_open() holds a refcount on the f_ops, which is
|
||||
* etb fops in this case, device is there until last file
|
||||
* handler to this device is closed.
|
||||
*/
|
||||
misc_deregister(&drvdata->miscdev);
|
||||
coresight_unregister(drvdata->csdev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct amba_id tmc_ids[] = {
|
||||
CS_AMBA_ID(0x000bb961),
|
||||
/* Coresight SoC 600 TMC-ETR/ETS */
|
||||
|
@ -570,6 +585,8 @@ static const struct amba_id tmc_ids[] = {
|
|||
{ 0, 0},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(amba, tmc_ids);
|
||||
|
||||
static struct amba_driver tmc_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-tmc",
|
||||
|
@ -578,6 +595,12 @@ static struct amba_driver tmc_driver = {
|
|||
},
|
||||
.probe = tmc_probe,
|
||||
.shutdown = tmc_shutdown,
|
||||
.remove = tmc_remove,
|
||||
.id_table = tmc_ids,
|
||||
};
|
||||
builtin_amba_driver(tmc_driver);
|
||||
|
||||
module_amba_driver(tmc_driver);
|
||||
|
||||
MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
|
||||
MODULE_DESCRIPTION("Arm CoreSight Trace Memory Controller driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -519,7 +519,7 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
|
||||
cur = buf->cur;
|
||||
offset = buf->offset;
|
||||
barrier = barrier_pkt;
|
||||
barrier = coresight_barrier_pkt;
|
||||
|
||||
/* for every byte to read */
|
||||
for (i = 0; i < to_read; i += 4) {
|
||||
|
|
|
@@ -255,6 +255,7 @@ void tmc_free_sg_table(struct tmc_sg_table *sg_table)
 	tmc_free_table_pages(sg_table);
 	tmc_free_data_pages(sg_table);
 }
+EXPORT_SYMBOL_GPL(tmc_free_sg_table);
 
 /*
  * Alloc pages for the table. Since this will be used by the device,
@@ -340,6 +341,7 @@ struct tmc_sg_table *tmc_alloc_sg_table(struct device *dev,
 
 	return sg_table;
 }
+EXPORT_SYMBOL_GPL(tmc_alloc_sg_table);
 
 /*
  * tmc_sg_table_sync_data_range: Sync the data buffer written
@@ -360,6 +362,7 @@ void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
 					PAGE_SIZE, DMA_FROM_DEVICE);
 	}
 }
+EXPORT_SYMBOL_GPL(tmc_sg_table_sync_data_range);
 
 /* tmc_sg_sync_table: Sync the page table */
 void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table)
@@ -372,6 +375,7 @@ void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table)
 		dma_sync_single_for_device(real_dev, table_pages->daddrs[i],
 					   PAGE_SIZE, DMA_TO_DEVICE);
 }
+EXPORT_SYMBOL_GPL(tmc_sg_table_sync_table);
 
 /*
  * tmc_sg_table_get_data: Get the buffer pointer for data @offset
@@ -401,6 +405,7 @@ ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table,
 		*bufpp = page_address(data_pages->pages[pg_idx]) + pg_offset;
 	return len;
 }
+EXPORT_SYMBOL_GPL(tmc_sg_table_get_data);
 
 #ifdef ETR_SG_DEBUG
 /* Map a dma address to virtual address */
@@ -766,6 +771,7 @@ tmc_etr_get_catu_device(struct tmc_drvdata *drvdata)
 
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(tmc_etr_get_catu_device);
 
 static inline int tmc_etr_enable_catu(struct tmc_drvdata *drvdata,
 				      struct etr_buf *etr_buf)
@@ -788,10 +794,21 @@ static inline void tmc_etr_disable_catu(struct tmc_drvdata *drvdata)
 static const struct etr_buf_operations *etr_buf_ops[] = {
 	[ETR_MODE_FLAT] = &etr_flat_buf_ops,
 	[ETR_MODE_ETR_SG] = &etr_sg_buf_ops,
-	[ETR_MODE_CATU] = IS_ENABLED(CONFIG_CORESIGHT_CATU)
-						? &etr_catu_buf_ops : NULL,
+	[ETR_MODE_CATU] = NULL,
 };
 
+void tmc_etr_set_catu_ops(const struct etr_buf_operations *catu)
+{
+	etr_buf_ops[ETR_MODE_CATU] = catu;
+}
+EXPORT_SYMBOL_GPL(tmc_etr_set_catu_ops);
+
+void tmc_etr_remove_catu_ops(void)
+{
+	etr_buf_ops[ETR_MODE_CATU] = NULL;
+}
+EXPORT_SYMBOL_GPL(tmc_etr_remove_catu_ops);
+
 static inline int tmc_etr_mode_alloc_buf(int mode,
 					 struct tmc_drvdata *drvdata,
 					 struct etr_buf *etr_buf, int node,
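The final hunk above replaces the compile-time IS_ENABLED(CONFIG_CORESIGHT_CATU) selection with a NULL slot that is filled at runtime, so a CATU driver built as a module can plug its etr_buf_operations into the TMC-ETR core when it loads and remove them when it unloads. A hedged sketch of how the CATU module might use the new hooks follows; the catu_init()/catu_exit() wrappers are illustrative, while etr_catu_buf_ops and catu_driver follow the names visible in the hunks.

#include <linux/amba/bus.h>
#include <linux/module.h>

/*
 * Illustrative CATU module init/exit: register the AMBA driver, then
 * hand the CATU buffer operations to the ETR core; on unload, unhook
 * the ops first so ETR stops selecting ETR_MODE_CATU.
 */
static int __init catu_init(void)
{
	int ret;

	ret = amba_driver_register(&catu_driver);
	if (ret)
		return ret;

	tmc_etr_set_catu_ops(&etr_catu_buf_ops);
	return 0;
}
module_init(catu_init);

static void __exit catu_exit(void)
{
	tmc_etr_remove_catu_ops();
	amba_driver_unregister(&catu_driver);
}
module_exit(catu_exit);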
@@ -326,4 +326,7 @@ tmc_sg_table_buf_size(struct tmc_sg_table *sg_table)
 
 struct coresight_device *tmc_etr_get_catu_device(struct tmc_drvdata *drvdata);
 
+void tmc_etr_set_catu_ops(const struct etr_buf_operations *catu);
+void tmc_etr_remove_catu_ops(void);
+
 #endif
@@ -173,6 +173,15 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
 	return PTR_ERR(drvdata->csdev);
 }
 
+static int __exit tpiu_remove(struct amba_device *adev)
+{
+	struct tpiu_drvdata *drvdata = dev_get_drvdata(&adev->dev);
+
+	coresight_unregister(drvdata->csdev);
+
+	return 0;
+}
+
 #ifdef CONFIG_PM
 static int tpiu_runtime_suspend(struct device *dev)
 {
@@ -216,6 +225,8 @@ static const struct amba_id tpiu_ids[] = {
 	{ 0, 0},
 };
 
+MODULE_DEVICE_TABLE(amba, tpiu_ids);
+
 static struct amba_driver tpiu_driver = {
 	.drv = {
 		.name = "coresight-tpiu",
@@ -224,6 +235,13 @@ static struct amba_driver tpiu_driver = {
 		.suppress_bind_attrs = true,
 	},
 	.probe = tpiu_probe,
+	.remove = tpiu_remove,
 	.id_table = tpiu_ids,
 };
-builtin_amba_driver(tpiu_driver);
+
+module_amba_driver(tpiu_driver);
+
+MODULE_AUTHOR("Pratik Patel <pratikp@codeaurora.org>");
+MODULE_AUTHOR("Mathieu Poirier <mathieu.poirier@linaro.org>");
+MODULE_DESCRIPTION("Arm CoreSight TPIU (Trace Port Interface Unit) driver");
+MODULE_LICENSE("GPL v2");