Char/Misc driver patches for 5.4-rc1
Merge tag 'char-misc-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big char/misc driver pull request for 5.4-rc1.

  As has been happening in previous releases, more and more individual
  driver subsystem trees are ending up in here. Now if that is good or
  bad I can't tell, but hopefully it makes your life easier as it's
  more of an aggregation of trees together to one merge point for you.

  Anyway, lots of stuff in here:

   - habanalabs driver updates
   - thunderbolt driver updates
   - misc driver updates
   - coresight and intel_th hwtracing driver updates
   - fpga driver updates
   - extcon driver updates
   - some dma driver updates
   - char driver updates
   - android binder driver updates
   - nvmem driver updates
   - phy driver updates
   - parport driver fixes
   - pcmcia driver fix
   - uio driver updates
   - w1 driver updates
   - configfs fixes
   - other assorted driver updates

  All of these have been in linux-next for a long time with no reported
  issues"

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* tag 'char-misc-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (200 commits)
  misc: mic: Use PTR_ERR_OR_ZERO rather than its implementation
  habanalabs: correctly cast variable to __le32
  habanalabs: show correct id in error print
  habanalabs: stop using the acronym KMD
  habanalabs: display card name as sensors header
  habanalabs: add uapi to retrieve aggregate H/W events
  habanalabs: add uapi to retrieve device utilization
  habanalabs: Make the Coresight timestamp perpetual
  habanalabs: explicitly set the queue-id enumerated numbers
  habanalabs: print to kernel log when reset is finished
  habanalabs: replace __le32_to_cpu with le32_to_cpu
  habanalabs: replace __cpu_to_le32/64 with cpu_to_le32/64
  habanalabs: Handle HW_IP_INFO if device disabled or in reset
  habanalabs: Expose devices after initialization is done
  habanalabs: improve security in Debug IOCTL
  habanalabs: use default structure for user input in Debug IOCTL
  habanalabs: Add descriptive name to PSOC app status register
  habanalabs: Add descriptive names to PSOC scratch-pad registers
  habanalabs: create two char devices per ASIC
  habanalabs: change device_setup_cdev() to be more generic
  ...
commit 6cfae0c26b
@@ -12,7 +12,8 @@ Description:	(RW) Configure MSC operating mode:
	- "single", for contiguous buffer mode (high-order alloc);
	- "multi", for multiblock mode;
	- "ExI", for DCI handler mode;
-	- "debug", for debug mode.
+	- "debug", for debug mode;
+	- any of the currently loaded buffer sinks.
	If operating mode changes, existing buffer is deallocated,
	provided there are no active users and tracing is not enabled,
	otherwise the write will fail.
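Since the mode attribute above is a plain text sysfs file, switching the MSC is a single write. A minimal C sketch; the device path (instance numbers in particular) is an assumption, as the excerpt does not show the attribute's What: line:

```c
#include <stdio.h>

/* Hypothetical MSC instance; the real path depends on the system. */
#define MSC_MODE "/sys/bus/intel_th/devices/0-msc0/mode"

int main(void)
{
	FILE *f = fopen(MSC_MODE, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* Any of the modes listed above; per the doc, the write fails
	 * if there are active users or tracing is enabled. */
	fputs("multi", f);
	if (fclose(f)) {	/* the write error is reported on close */
		perror("mode switch");
		return 1;
	}
	return 0;
}
```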
@@ -0,0 +1,128 @@
Intel Stratix10 Remote System Update (RSU) device attributes

What:		/sys/devices/platform/stratix10-rsu.0/current_image
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the address in flash of the currently running image.

What:		/sys/devices/platform/stratix10-rsu.0/fail_image
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the address in flash of the failed image.

What:		/sys/devices/platform/stratix10-rsu.0/state
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the state of the RSU system.
		The state field has two parts: major error code in
		upper 16 bits and minor error code in lower 16 bits.

		b[15:0]
			Currently used only when major error is 0xF006
			(CPU watchdog timeout), in which case the minor
			error code is the value reported by the CPU to
			firmware through the RSU notify command before
			the watchdog timeout occurs.

		b[31:16]
			0xF001	bitstream error
			0xF002	hardware access failure
			0xF003	bitstream corruption
			0xF004	internal error
			0xF005	device error
			0xF006	CPU watchdog timeout
			0xF007	internal unknown error
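The state word above packs two codes into one 32-bit value, so decoding it is a shift and a mask. A minimal C sketch; that the attribute prints a single hexadecimal value is an assumption, not stated in the doc:

```c
#include <stdio.h>

int main(void)
{
	unsigned int state, major, minor;
	FILE *f = fopen("/sys/devices/platform/stratix10-rsu.0/state", "r");

	if (!f || fscanf(f, "%x", &state) != 1) {
		perror("state");
		return 1;
	}
	fclose(f);

	major = state >> 16;	/* b[31:16]: major error code */
	minor = state & 0xffff;	/* b[15:0]:  minor error code */
	printf("major 0x%04x minor 0x%04x\n", major, minor);
	if (major == 0xf006)	/* CPU watchdog timeout */
		printf("notify code before timeout: 0x%04x\n", minor);
	return 0;
}
```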
What:		/sys/devices/platform/stratix10-rsu.0/version
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the version number of the RSU firmware. Version 19.3
		or later includes information about the firmware which
		reported the error.

		pre 19.3:
		b[31:0]
			0x0	version number

		19.3 or later:
		b[15:0]
			0x1	version number
		b[31:16]
			0x0	no error
			0x0DCF	Decision CMF error
			0x0ACF	Application CMF error

What:		/sys/devices/platform/stratix10-rsu.0/error_location
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the error offset inside the image that failed.

What:		/sys/devices/platform/stratix10-rsu.0/error_details
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) error code.

What:		/sys/devices/platform/stratix10-rsu.0/retry_counter
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(RO) the current image's retry counter, which tells the
		user how many more times the image is allowed to reload
		itself before giving up and starting the RSU fail-over
		flow.

What:		/sys/devices/platform/stratix10-rsu.0/reboot_image
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(WO) the address in flash of the image to be loaded on
		the next reboot command.

What:		/sys/devices/platform/stratix10-rsu.0/notify
Date:		August 2019
KernelVersion:	5.4
Contact:	Richard Gong <richard.gong@linux.intel.com>
Description:
		(WO) used by a client to notify firmware of different
		actions.

		b[15:0]
			inform firmware of the current software execution
			stage.
			0	the first stage bootloader didn't run or
				didn't reach the point of launching the
				second stage bootloader.
			1	failed in the second bootloader or didn't
				get to the point of launching the
				operating system.
			2	both first and second stage bootloaders ran
				and the operating system launch was
				attempted.

		b[16]
			1	firmware to reset the current image retry
				counter.
			0	no action.

		b[17]
			1	firmware to clear the RSU log.
			0	no action.

		b[18]
			this is negative logic
			1	no action.
			0	firmware records the notify code defined
				in b[15:0].
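Because notify is write-only and b[18] uses negative logic, it is easy to get the encoding wrong. A minimal C sketch of composing and writing the value; the macro names are hypothetical, and the accepted input format (hex string) is an assumption:

```c
#include <stdio.h>

/* Bit positions from the notify description above (names are made up). */
#define RSU_NOTIFY_RESET_RETRY	(1u << 16)	/* reset image retry counter */
#define RSU_NOTIFY_CLEAR_LOG	(1u << 17)	/* clear the RSU log */
#define RSU_NOTIFY_IGNORE_STAGE	(1u << 18)	/* negative logic: 1 = don't record b[15:0] */

int main(void)
{
	/* Stage 2: both bootloaders ran and the OS launch was attempted;
	 * also reset the retry counter. b[18] stays 0 so the stage code
	 * is recorded by firmware. */
	unsigned int value = 2u | RSU_NOTIFY_RESET_RETRY;
	FILE *f = fopen("/sys/devices/platform/stratix10-rsu.0/notify", "w");

	if (!f) {
		perror("notify");
		return 1;
	}
	fprintf(f, "0x%x\n", value);
	return fclose(f) ? 1 : 0;
}
```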
@@ -57,6 +57,7 @@ KernelVersion:	5.1
Contact:	oded.gabbay@gmail.com
Description:	Allows the user to set the maximum clock frequency for MME, TPC
		and IC when the power management profile is set to "automatic".
+		This property is valid only for the Goya ASIC family

What:		/sys/class/habanalabs/hl<n>/ic_clk
Date:		Jan 2019

@@ -127,8 +128,8 @@ Description:	Power management profile. Values are "auto", "manual". In "auto"
		the max clock frequency to a low value when there are no user
		processes that are opened on the device's file. In "manual"
		mode, the user sets the maximum clock frequency by writing to
-		ic_clk, mme_clk and tpc_clk
+		ic_clk, mme_clk and tpc_clk. This property is valid only for
+		the Goya ASIC family

What:		/sys/class/habanalabs/hl<n>/preboot_btl_ver
Date:		Jan 2019

@@ -186,11 +187,4 @@ What:		/sys/class/habanalabs/hl<n>/uboot_ver
Date:		Jan 2019
KernelVersion:	5.1
Contact:	oded.gabbay@gmail.com
Description:	Version of the u-boot running on the device's CPU
-
-What:		/sys/class/habanalabs/hl<n>/write_open_cnt
-Date:		Jan 2019
-KernelVersion:	5.1
-Contact:	oded.gabbay@gmail.com
-Description:	Displays the total number of user processes that are currently
-		opened on the device's file
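Per the profile description above, pinning a clock is a two-step sysfs sequence: switch the profile to "manual", then write the frequency. A minimal C sketch; the profile attribute's name is not shown in this excerpt, so "pm_mng_profile" and the hl0 instance are assumptions:

```c
#include <stdio.h>

#define HL_SYSFS "/sys/class/habanalabs/hl0/"	/* instance is illustrative */

static int write_attr(const char *attr, const char *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), HL_SYSFS "%s", attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* Switch to manual profile, then pin the IC clock (Goya only). */
	if (write_attr("pm_mng_profile", "manual") ||
	    write_attr("ic_clk", "400000000")) {
		perror("habanalabs sysfs");
		return 1;
	}
	return 0;
}
```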
@@ -21,3 +21,88 @@ Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns Bitstream (static FPGA region) meta
		data, which includes the synthesis date, seed and other
		information of this static FPGA region.

What:		/sys/bus/platform/devices/dfl-fme.0/cache_size
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns cache size of this FPGA device.

What:		/sys/bus/platform/devices/dfl-fme.0/fabric_version
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns fabric version of this FPGA device.
		Userspace applications need this information to select
		best data channels per different fabric design.

What:		/sys/bus/platform/devices/dfl-fme.0/socket_id
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns socket_id to indicate which socket
		this FPGA belongs to, only valid for integrated solution.
		User only needs this information, in case standard numa node
		can't provide correct information.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/pcie0_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-Write. Read this file for errors detected on pcie0 link.
		Write this file to clear errors logged in pcie0_errors. Write
		fails with -EINVAL if input parsing fails or input error code
		doesn't match.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/pcie1_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-Write. Read this file for errors detected on pcie1 link.
		Write this file to clear errors logged in pcie1_errors. Write
		fails with -EINVAL if input parsing fails or input error code
		doesn't match.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/nonfatal_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns non-fatal errors detected.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/catfatal_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It returns catastrophic and fatal errors detected.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/inject_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-Write. Read this file to check errors injected. Write this
		file to inject errors for testing purpose. Write fails with
		-EINVAL if input parsing fails or input inject error code isn't
		supported.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/fme_errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-Write. Read this file to get errors detected on FME.
		Write this file to clear errors logged in fme_errors. Write
		fails with -EINVAL if input parsing fails or input error code
		doesn't match.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/first_error
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the first error detected by
		hardware.

What:		/sys/bus/platform/devices/dfl-fme.0/errors/next_error
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the second error detected by
		hardware.
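The error attributes above use a read-then-write-back convention: the write must echo the logged error code or the kernel returns -EINVAL. A minimal C sketch of clearing pcie0_errors; that the file prints a single hex value is an assumption:

```c
#include <stdio.h>

#define PCIE0_ERRORS "/sys/bus/platform/devices/dfl-fme.0/errors/pcie0_errors"

int main(void)
{
	unsigned long long err;
	FILE *f = fopen(PCIE0_ERRORS, "r");

	if (!f || fscanf(f, "%llx", &err) != 1) {
		perror("read pcie0_errors");
		return 1;
	}
	fclose(f);

	f = fopen(PCIE0_ERRORS, "w");
	if (!f) {
		perror("open for clear");
		return 1;
	}
	/* Write back the code we just read; a mismatch yields -EINVAL. */
	fprintf(f, "%#llx\n", err);
	return fclose(f) ? 1 : 0;
}
```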
@@ -14,3 +14,88 @@ Description:	Read-only. User can program different PR bitstreams to FPGA
		Accelerator Function Unit (AFU) for different functions. It
		returns uuid which could be used to identify which PR bitstream
		is programmed in this AFU.

What:		/sys/bus/platform/devices/dfl-port.0/power_state
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. It reports the APx (AFU Power) state, different APx
		means different throttling level. When reading this file, it
		returns "0" - Normal / "1" - AP1 / "2" - AP2 / "6" - AP6.

What:		/sys/bus/platform/devices/dfl-port.0/ap1_event
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-write. Read this file for AP1 (AFU Power State 1) event.
		It's used to indicate transient AP1 state. Write 1 to this
		file to clear AP1 event.

What:		/sys/bus/platform/devices/dfl-port.0/ap2_event
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-write. Read this file for AP2 (AFU Power State 2) event.
		It's used to indicate transient AP2 state. Write 1 to this
		file to clear AP2 event.

What:		/sys/bus/platform/devices/dfl-port.0/ltr
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-write. Read or set AFU latency tolerance reporting value.
		Set ltr to 1 if the AFU can tolerate latency >= 40us or set it
		to 0 if it is latency sensitive.

What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcmd
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Write-only. User writes command to this interface to set
		userclock to AFU.

What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqsts
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the status of issued command
		to userclk_freqcmd.

What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcntrcmd
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Write-only. User writes command to this interface to set
		userclock counter.

What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcntrsts
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the status of issued command
		to userclk_freqcntrcmd.

What:		/sys/bus/platform/devices/dfl-port.0/errors/errors
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-Write. Read this file to get errors detected on port and
		Accelerated Function Unit (AFU). Write error code to this file
		to clear errors. Write fails with -EINVAL if input parsing
		fails or input error code doesn't match. Write fails with
		-EBUSY or -ETIMEDOUT if the error can't be cleared because the
		hardware is in a low power state (-EBUSY) or not responding
		(-ETIMEDOUT).

What:		/sys/bus/platform/devices/dfl-port.0/errors/first_error
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the first error detected by
		hardware.

What:		/sys/bus/platform/devices/dfl-port.0/errors/first_malformed_req
Date:		August 2019
KernelVersion:	5.4
Contact:	Wu Hao <hao.wu@intel.com>
Description:	Read-only. Read this file to get the first malformed request
		captured by hardware.
@@ -136,7 +136,9 @@ Required properties:
OCOTP bindings based on SCU Message Protocol
------------------------------------------------------------
Required properties:
-- compatible:		Should be "fsl,imx8qxp-scu-ocotp"
+- compatible:		Should be one of:
+			"fsl,imx8qm-scu-ocotp",
+			"fsl,imx8qxp-scu-ocotp".
- #address-cells:	Must be 1. Contains byte index
- #size-cells:		Must be 1. Contains byte length

@@ -72,5 +72,5 @@ codec: wm8280@0 {
		1 2 1 /* MICDET2 MICBIAS2 GPIO=high */
	>;

-	wlf,gpsw = <0>;
+	wlf,gpsw = <ARIZONA_GPSW_OPEN>;
};

@@ -5,7 +5,9 @@ controlled using I2C and enables USB data, stereo and mono audio, video,
microphone, and UART data to use a common connector port.

Required properties:
-- compatible : Must be "fcs,fsa9480"
+- compatible : Must be one of
+	"fcs,fsa9480"
+	"fcs,fsa880"
- reg : Specifies i2c slave address. Must be 0x25.
- interrupts : Should contain one entry specifying interrupt signal of
  interrupt parent to which interrupt pin of the chip is connected.

@@ -3,10 +3,7 @@ Altera FPGA To SDRAM Bridge Driver
Required properties:
- compatible : Should contain "altr,socfpga-fpga2sdram-bridge"

-Optional properties:
-- bridge-enable : 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.

Example:
	fpga_bridge3: fpga-bridge@ffc25080 {

@@ -10,10 +10,7 @@ Required properties:
- compatible : Should contain "altr,freeze-bridge-controller"
- regs : base address and size for freeze bridge module

-Optional properties:
-- bridge-enable : 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.

Example:
	freeze-controller@100000450 {

@@ -9,10 +9,7 @@ Required properties:
- resets : Phandle and reset specifier for this bridge's reset
- clocks : Clocks used by this module.

-Optional properties:
-- bridge-enable : 0 if driver should disable bridge at startup.
-		  1 if driver should enable bridge at startup.
-		  Default is to leave bridge in its current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.

Example:
	fpga_bridge0: fpga-bridge@ff400000 {

@@ -0,0 +1,13 @@
FPGA Bridge Device Tree Binding

Optional properties:
- bridge-enable : 0 if driver should disable bridge at startup
		  1 if driver should enable bridge at startup
		  Default is to leave bridge in current state.

Example:
	fpga_bridge3: fpga-bridge@ffc25080 {
		compatible = "altr,socfpga-fpga2sdram-bridge";
		reg = <0xffc25080 0x4>;
		bridge-enable = <0>;
	};

@@ -18,12 +18,8 @@ Required properties:
- clocks : input clock to IP
- clock-names : should contain "aclk"

-Optional properties:
-- bridge-enable : 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
-
-See Documentation/devicetree/bindings/fpga/fpga-region.txt for generic bindings.
+See Documentation/devicetree/bindings/fpga/fpga-region.txt and
+Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.

Example:
	fpga-bridge@100000450 {
@@ -0,0 +1,45 @@
Qualcomm QCS404 Network-On-Chip interconnect driver binding
-----------------------------------------------------------

Required properties :
- compatible : shall contain only one of the following:
			"qcom,qcs404-bimc"
			"qcom,qcs404-pcnoc"
			"qcom,qcs404-snoc"
- #interconnect-cells : should contain 1

reg : specifies the physical base address and size of registers
clocks : list of phandles and specifiers to all interconnect bus clocks
clock-names : clock names should include both "bus" and "bus_a"

Example:

soc {
	...
	bimc: interconnect@400000 {
		reg = <0x00400000 0x80000>;
		compatible = "qcom,qcs404-bimc";
		#interconnect-cells = <1>;
		clock-names = "bus", "bus_a";
		clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
			 <&rpmcc RPM_SMD_BIMC_A_CLK>;
	};

	pnoc: interconnect@500000 {
		reg = <0x00500000 0x15080>;
		compatible = "qcom,qcs404-pcnoc";
		#interconnect-cells = <1>;
		clock-names = "bus", "bus_a";
		clocks = <&rpmcc RPM_SMD_PNOC_CLK>,
			 <&rpmcc RPM_SMD_PNOC_A_CLK>;
	};

	snoc: interconnect@580000 {
		reg = <0x00580000 0x23080>;
		compatible = "qcom,qcs404-snoc";
		#interconnect-cells = <1>;
		clock-names = "bus", "bus_a";
		clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
			 <&rpmcc RPM_SMD_SNOC_A_CLK>;
	};
};
@@ -2,7 +2,7 @@ Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings

This binding represents the on-chip eFuse OTP controller found on
i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ, i.MX6SLL,
-i.MX7D/S, i.MX7ULP and i.MX8MQ SoCs.
+i.MX7D/S, i.MX7ULP, i.MX8MQ, i.MX8MM and i.MX8MN SoCs.

Required properties:
- compatible: should be one of

@@ -16,6 +16,7 @@ Required properties:
	"fsl,imx7ulp-ocotp" (i.MX7ULP),
	"fsl,imx8mq-ocotp" (i.MX8MQ),
	"fsl,imx8mm-ocotp" (i.MX8MM),
+	"fsl,imx8mn-ocotp" (i.MX8MN),
	followed by "syscon".
- #address-cells : Should be 1
- #size-cells : Should be 1

@@ -17,6 +17,14 @@ Required properties:
	name must be "core" for the first clock and "reg" for the second
	one

+Optional properties:
+- phys: phandle(s) to PHY node(s) following the generic PHY bindings.
+	Either 1, 2 or 4 PHYs might be needed depending on the number of
+	PCIe lanes.
+- phy-names: names of the PHYs corresponding to the number of lanes.
+	Must be "cp0-pcie0-x4-lane0-phy", "cp0-pcie0-x4-lane1-phy" for
+	2 PHYs.

Example:

pcie@f2600000 {
@@ -0,0 +1,95 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/lantiq,vrx200-pcie-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Lantiq VRX200 and ARX300 PCIe PHY Device Tree Bindings

maintainers:
  - Martin Blumenstingl <martin.blumenstingl@googlemail.com>

properties:
  "#phy-cells":
    const: 1
    description: selects the PHY mode as defined in <dt-bindings/phy/phy-lantiq-vrx200-pcie.h>

  compatible:
    enum:
      - lantiq,vrx200-pcie-phy
      - lantiq,arx300-pcie-phy

  reg:
    maxItems: 1

  clocks:
    items:
      - description: PHY module clock
      - description: PDI register clock

  clock-names:
    items:
      - const: phy
      - const: pdi

  resets:
    items:
      - description: exclusive PHY reset line
      - description: shared reset line between the PCIe PHY and PCIe controller

  reset-names:
    items:
      - const: phy
      - const: pcie

  lantiq,rcu:
    $ref: /schemas/types.yaml#/definitions/phandle
    description: phandle to the RCU syscon

  lantiq,rcu-endian-offset:
    $ref: /schemas/types.yaml#/definitions/uint32
    description: the offset of the endian registers for this PHY instance in the RCU syscon

  lantiq,rcu-big-endian-mask:
    $ref: /schemas/types.yaml#/definitions/uint32
    description: the mask to set the PDI (PHY) registers for this PHY instance to big endian

  big-endian:
    description: Configures the PDI (PHY) registers in big-endian mode
    type: boolean

  little-endian:
    description: Configures the PDI (PHY) registers in little-endian mode
    type: boolean

required:
  - "#phy-cells"
  - compatible
  - reg
  - clocks
  - clock-names
  - resets
  - reset-names
  - lantiq,rcu
  - lantiq,rcu-endian-offset
  - lantiq,rcu-big-endian-mask

additionalProperties: false

examples:
  - |
    pcie0_phy: phy@106800 {
        compatible = "lantiq,vrx200-pcie-phy";
        reg = <0x106800 0x100>;
        lantiq,rcu = <&rcu0>;
        lantiq,rcu-endian-offset = <0x4c>;
        lantiq,rcu-big-endian-mask = <0x80>; /* bit 7 */
        big-endian;
        clocks = <&pmu 32>, <&pmu 36>;
        clock-names = "phy", "pdi";
        resets = <&reset0 12 24>, <&reset0 22 22>;
        reset-names = "phy", "pcie";
        #phy-cells = <1>;
    };

...
@@ -25,6 +25,13 @@ Required properties:
- #address-cells: should be 1.
- #size-cells: should be 0.

+Optional properties:
+
+- clocks: pointers to the reference clocks for this device (CP110 only),
+	consequently: MG clock, MG Core clock, AXI clock.
+- clock-names: names of used clocks for CP110 only, must be :
+	"mg_clk", "mg_core_clk" and "axi_clk".

A sub-node is required for each comphy lane provided by the comphy.

Required properties (child nodes):

@@ -39,6 +46,9 @@ Examples:
	compatible = "marvell,comphy-cp110";
	reg = <0x120000 0x6000>;
	marvell,system-controller = <&cpm_syscon0>;
+	clocks = <&CP110_LABEL(clk) 1 5>, <&CP110_LABEL(clk) 1 6>,
+		 <&CP110_LABEL(clk) 1 18>;
+	clock-names = "mg_clk", "mg_core_clk", "axi_clk";
	#address-cells = <1>;
	#size-cells = <0>;
@@ -408,6 +408,13 @@ handler code. You also do not need to know anything about the chip's
internal registers to create the kernel part of the driver. All you need
to know is the irq number of the pin the chip is connected to.

+When used in a device-tree enabled system, the driver needs to be
+probed with the ``"of_id"`` module parameter set to the ``"compatible"``
+string of the node the driver is supposed to handle. By default, the
+node's name (without the unit address) is exposed as the name of the
+UIO device in userspace. To set a custom name, a property named
+``"linux,uio-name"`` may be specified in the DT node.

Using uio_dmem_genirq for platform devices
------------------------------------------
@@ -87,6 +87,8 @@ The following functions are exposed through ioctls:
- Get driver API version (DFL_FPGA_GET_API_VERSION)
- Check for extensions (DFL_FPGA_CHECK_EXTENSION)
- Program bitstream (DFL_FPGA_FME_PORT_PR)
+- Assign port to PF (DFL_FPGA_FME_PORT_ASSIGN)
+- Release port from PF (DFL_FPGA_FME_PORT_RELEASE)

More functions are exposed through sysfs
(/sys/class/fpga_region/regionX/dfl-fme.n/):

@@ -102,6 +104,10 @@ More functions are exposed through sysfs
	one FPGA device may have more than one port, this sysfs interface indicates
	how many ports the FPGA device has.

+Global error reporting management (errors/)
+	error reporting sysfs interfaces allow user to read errors detected by the
+	hardware, and clear the logged errors.

FIU - PORT
==========

@@ -143,6 +149,10 @@ More functions are exposed through sysfs:
Read Accelerator GUID (afu_id)
	afu_id indicates which PR bitstream is programmed to this AFU.

+Error reporting (errors/)
+	error reporting sysfs interfaces allow user to read port/afu errors
+	detected by the hardware, and clear the logged errors.

DFL Framework Overview
======================
@@ -218,6 +228,101 @@ the compat_id exposed by the target FPGA region. This check is usually done by
userspace before calling the reconfiguration IOCTL.


FPGA virtualization - PCIe SRIOV
================================
This section describes the virtualization support on DFL based FPGA devices to
enable accessing an accelerator from applications running in a virtual machine
(VM). This section only describes the PCIe based FPGA device with SRIOV support.

Features supported by the particular FPGA device are exposed through Device
Feature Lists, as illustrated below:

::

  +-------------------------------+    +-------------+
  |              PF               |    |     VF      |
  +-------------------------------+    +-------------+
        ^            ^         ^              ^
        |            |         |              |
  +-----|------------|---------|--------------|-------+
  |     |            |         |              |       |
  |  +-----+     +-------+ +-------+      +-------+   |
  |  | FME |     | Port0 | | Port1 |      | Port2 |   |
  |  +-----+     +-------+ +-------+      +-------+   |
  |                  ^         ^              ^       |
  |                  |         |              |       |
  |              +-------+ +------+       +-------+   |
  |              |  AFU  | | AFU  |       |  AFU  |   |
  |              +-------+ +------+       +-------+   |
  |                                                   |
  |            DFL based FPGA PCIe Device             |
  +---------------------------------------------------+

FME is always accessed through the physical function (PF).

Ports (and related AFUs) are accessed via PF by default, but could be exposed
through virtual function (VF) devices via PCIe SRIOV. Each VF only contains
1 Port and 1 AFU for isolation. Users could assign individual VFs (accelerators)
created via PCIe SRIOV interface, to virtual machines.

The driver organization in virtualization case is illustrated below:

::

  +-------++------++------+                |
  | FME   || FME  || FME  |                |
  | FPGA  || FPGA || FPGA |                |
  |Manager||Bridge||Region|                |
  +-------++------++------+                |
  +-----------------------+  +--------+    |  +--------+
  |          FME          |  |  AFU   |    |  |  AFU   |
  |         Module        |  | Module |    |  | Module |
  +-----------------------+  +--------+    |  +--------+
  +-----------------------+                |  +-----------------------+
  | FPGA Container Device |                |  | FPGA Container Device |
  |  (FPGA Base Region)   |                |  |  (FPGA Base Region)   |
  +-----------------------+                |  +-----------------------+
      +------------------+                 |      +------------------+
      | FPGA PCIE Module |                 | Virtual  | FPGA PCIE Module |
      +------------------+      Host      | Machine  +------------------+
  ---------------------------------------- | ---------------------------------
        +---------------+                  |        +---------------+
        | PCI PF Device |                  |        | PCI VF Device |
        +---------------+                  |        +---------------+

The FPGA PCIe device driver is always loaded first once an FPGA PCIe PF or VF
device is detected. It:

* Finishes enumeration on both FPGA PCIe PF and VF device using common
  interfaces from DFL framework.
* Supports SRIOV.

The FME device driver plays a management role in this driver architecture; it
provides ioctls to release a Port from the PF and assign a Port to the PF.
After a port is released from the PF, it is safe to expose this port through a
VF via the PCIe SRIOV sysfs interface.

To enable accessing an accelerator from applications running in a VM, the
respective AFU's port needs to be assigned to a VF using the following steps:

#. The PF owns all AFU ports by default. Any port that needs to be
   reassigned to a VF must first be released through the
   DFL_FPGA_FME_PORT_RELEASE ioctl on the FME device (a minimal sketch of
   this call follows the list).

#. Once N ports are released from the PF, the user can use the command below
   to enable SRIOV and VFs. Each VF owns only one Port with an AFU.

   ::

      echo N > $PCI_DEVICE_PATH/sriov_numvfs

#. Pass through the VFs to VMs.

#. The AFU under the VF is accessible from applications in the VM (using the
   same driver inside the VF).

Note that an FME can't be assigned to a VF, thus PR and other management
functions are only available via the PF.
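A minimal C sketch of step 1, releasing port 0 so it can later be bound to a VF. The ioctl name comes from the list above; the FME device path and the int-by-pointer argument convention are assumptions based on the uapi header include/uapi/linux/fpga-dfl.h:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fpga-dfl.h>	/* DFL_FPGA_FME_PORT_RELEASE */

int main(void)
{
	int port_id = 0;	/* port to hand over to a VF */
	/* Device path is an assumption; it depends on enumeration. */
	int fme = open("/dev/dfl-fme.0", O_RDWR);

	if (fme < 0) {
		perror("open fme");
		return 1;
	}
	if (ioctl(fme, DFL_FPGA_FME_PORT_RELEASE, &port_id)) {
		perror("port release");
		close(fme);
		return 1;
	}
	close(fme);
	return 0;	/* next: echo 1 > $PCI_DEVICE_PATH/sriov_numvfs */
}
```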
Device enumeration
==================
This section introduces how applications enumerate the fpga device from
@@ -20,3 +20,4 @@ fit into other categories.
   isl29003
   lis3lv02d
   max6875
+   xilinx_sdfec
MAINTAINERS
@@ -8360,6 +8360,17 @@ F:	drivers/platform/x86/intel_speed_select_if/
F:	tools/power/x86/intel-speed-select/
F:	include/uapi/linux/isst_if.h

+INTEL STRATIX10 FIRMWARE DRIVERS
+M:	Richard Gong <richard.gong@linux.intel.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/firmware/stratix10-rsu.c
+F:	drivers/firmware/stratix10-svc.c
+F:	include/linux/firmware/intel/stratix10-smc.h
+F:	include/linux/firmware/intel/stratix10-svc-client.h
+F:	Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
+F:	Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt

INTEL TELEMETRY DRIVER
M:	Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com>
M:	"David E. Box" <david.e.box@linux.intel.com>

@@ -8411,6 +8422,7 @@ M:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
S:	Supported
F:	Documentation/trace/intel_th.rst
F:	drivers/hwtracing/intel_th/
+F:	include/linux/intel_th.h

INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT)
M:	Ning Sun <ning.sun@intel.com>

@@ -17760,6 +17772,17 @@ F:	Documentation/devicetree/bindings/media/xilinx/
F:	drivers/media/platform/xilinx/
F:	include/uapi/linux/xilinx-v4l2-controls.h

+XILINX SD-FEC IP CORES
+M:	Derek Kiernan <derek.kiernan@xilinx.com>
+M:	Dragan Cvetic <dragan.cvetic@xilinx.com>
+S:	Maintained
+F:	Documentation/devicetree/bindings/misc/xlnx,sd-fec.txt
+F:	Documentation/misc-devices/xilinx_sdfec.rst
+F:	drivers/misc/xilinx_sdfec.c
+F:	drivers/misc/Kconfig
+F:	drivers/misc/Makefile
+F:	include/uapi/misc/xilinx_sdfec.h

XILLYBUS DRIVER
M:	Eli Billauer <eli.billauer@gmail.com>
L:	linux-kernel@vger.kernel.org
@@ -39,6 +39,12 @@ static const guid_t prp_guids[] = {
	/* External facing port GUID: efcc06cc-73ac-4bc3-bff0-76143807c389 */
	GUID_INIT(0xefcc06cc, 0x73ac, 0x4bc3,
		  0xbf, 0xf0, 0x76, 0x14, 0x38, 0x07, 0xc3, 0x89),
+	/* Thunderbolt GUID for IMR_VALID: c44d002f-69f9-4e7d-a904-a7baabdf43f7 */
+	GUID_INIT(0xc44d002f, 0x69f9, 0x4e7d,
+		  0xa9, 0x04, 0xa7, 0xba, 0xab, 0xdf, 0x43, 0xf7),
+	/* Thunderbolt GUID for WAKE_SUPPORTED: 6c501103-c189-4296-ba72-9bf5a26ebe5d */
+	GUID_INIT(0x6c501103, 0xc189, 0x4296,
+		  0xba, 0x72, 0x9b, 0xf5, 0xa2, 0x6e, 0xbe, 0x5d),
};

/* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
@@ -122,7 +122,7 @@ static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
	BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
module_param_named(debug_mask, binder_debug_mask, uint, 0644);

-static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
+char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
module_param_named(devices, binder_devices_param, charp, 0444);

static DECLARE_WAIT_QUEUE_HEAD(binder_user_error_wait);
@@ -196,30 +196,8 @@ static inline void binder_stats_created(enum binder_stat_types type)
	atomic_inc(&binder_stats.obj_created[type]);
}

-struct binder_transaction_log_entry {
-	int debug_id;
-	int debug_id_done;
-	int call_type;
-	int from_proc;
-	int from_thread;
-	int target_handle;
-	int to_proc;
-	int to_thread;
-	int to_node;
-	int data_size;
-	int offsets_size;
-	int return_error_line;
-	uint32_t return_error;
-	uint32_t return_error_param;
-	const char *context_name;
-};
-struct binder_transaction_log {
-	atomic_t cur;
-	bool full;
-	struct binder_transaction_log_entry entry[32];
-};
-static struct binder_transaction_log binder_transaction_log;
-static struct binder_transaction_log binder_transaction_log_failed;
+struct binder_transaction_log binder_transaction_log;
+struct binder_transaction_log binder_transaction_log_failed;

static struct binder_transaction_log_entry *binder_transaction_log_add(
	struct binder_transaction_log *log)

@@ -480,6 +458,7 @@ enum binder_deferred_state {
 * @inner_lock:           can nest under outer_lock and/or node lock
 * @outer_lock:           no nesting under inner or node lock
 *                        Lock order: 1) outer, 2) node, 3) inner
+ * @binderfs_entry:       process-specific binderfs log file
 *
 * Bookkeeping structure for binder processes
 */
@@ -509,6 +488,7 @@ struct binder_proc {
	struct binder_context *context;
	spinlock_t inner_lock;
	spinlock_t outer_lock;
+	struct dentry *binderfs_entry;
};

enum {

@@ -5230,6 +5210,8 @@ static int binder_open(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc;
	struct binder_device *binder_dev;
+	struct binderfs_info *info;
+	struct dentry *binder_binderfs_dir_entry_proc = NULL;

	binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
		     current->group_leader->pid, current->pid);

@@ -5244,11 +5226,14 @@ static int binder_open(struct inode *nodp, struct file *filp)
	INIT_LIST_HEAD(&proc->todo);
	proc->default_priority = task_nice(current);
	/* binderfs stashes devices in i_private */
-	if (is_binderfs_device(nodp))
+	if (is_binderfs_device(nodp)) {
		binder_dev = nodp->i_private;
-	else
+		info = nodp->i_sb->s_fs_info;
+		binder_binderfs_dir_entry_proc = info->proc_log_dir;
+	} else {
		binder_dev = container_of(filp->private_data,
					  struct binder_device, miscdev);
+	}
	proc->context = &binder_dev->context;
	binder_alloc_init(&proc->alloc);
@@ -5279,6 +5264,35 @@ static int binder_open(struct inode *nodp, struct file *filp)
			&proc_fops);
	}

+	if (binder_binderfs_dir_entry_proc) {
+		char strbuf[11];
+		struct dentry *binderfs_entry;
+
+		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
+		/*
+		 * Similar to debugfs, the process specific log file is shared
+		 * between contexts. If the file has already been created for a
+		 * process, the following binderfs_create_file() call will
+		 * fail with error code EEXIST if another context of the same
+		 * process invoked binder_open(). This is ok since same as
+		 * debugfs, the log file will contain information on all
+		 * contexts of a given PID.
+		 */
+		binderfs_entry = binderfs_create_file(binder_binderfs_dir_entry_proc,
+			strbuf, &proc_fops, (void *)(unsigned long)proc->pid);
+		if (!IS_ERR(binderfs_entry)) {
+			proc->binderfs_entry = binderfs_entry;
+		} else {
+			int error;
+
+			error = PTR_ERR(binderfs_entry);
+			if (error != -EEXIST) {
+				pr_warn("Unable to create file %s in binderfs (error %d)\n",
+					strbuf, error);
+			}
+		}
+	}
+
	return 0;
}
@@ -5318,6 +5332,12 @@ static int binder_release(struct inode *nodp, struct file *filp)
	struct binder_proc *proc = filp->private_data;

	debugfs_remove(proc->debugfs_entry);

+	if (proc->binderfs_entry) {
+		binderfs_remove_file(proc->binderfs_entry);
+		proc->binderfs_entry = NULL;
+	}
+
	binder_defer_work(proc, BINDER_DEFERRED_RELEASE);

	return 0;

@@ -5907,7 +5927,7 @@ static void print_binder_proc_stats(struct seq_file *m,
}


-static int state_show(struct seq_file *m, void *unused)
+int binder_state_show(struct seq_file *m, void *unused)
{
	struct binder_proc *proc;
	struct binder_node *node;
@@ -5946,7 +5966,7 @@ static int state_show(struct seq_file *m, void *unused)
	return 0;
}

-static int stats_show(struct seq_file *m, void *unused)
+int binder_stats_show(struct seq_file *m, void *unused)
{
	struct binder_proc *proc;

@@ -5962,7 +5982,7 @@ static int stats_show(struct seq_file *m, void *unused)
	return 0;
}

-static int transactions_show(struct seq_file *m, void *unused)
+int binder_transactions_show(struct seq_file *m, void *unused)
{
	struct binder_proc *proc;

@@ -6018,7 +6038,7 @@ static void print_binder_transaction_log_entry(struct seq_file *m,
			"\n" : " (incomplete)\n");
}

-static int transaction_log_show(struct seq_file *m, void *unused)
+int binder_transaction_log_show(struct seq_file *m, void *unused)
{
	struct binder_transaction_log *log = m->private;
	unsigned int log_cur = atomic_read(&log->cur);

@@ -6050,11 +6070,6 @@ const struct file_operations binder_fops = {
	.release = binder_release,
};

-DEFINE_SHOW_ATTRIBUTE(state);
-DEFINE_SHOW_ATTRIBUTE(stats);
-DEFINE_SHOW_ATTRIBUTE(transactions);
-DEFINE_SHOW_ATTRIBUTE(transaction_log);
-
static int __init init_binder_device(const char *name)
{
	int ret;
@@ -6108,30 +6123,31 @@ static int __init binder_init(void)
				    0444,
				    binder_debugfs_dir_entry_root,
				    NULL,
-				    &state_fops);
+				    &binder_state_fops);
		debugfs_create_file("stats",
				    0444,
				    binder_debugfs_dir_entry_root,
				    NULL,
-				    &stats_fops);
+				    &binder_stats_fops);
		debugfs_create_file("transactions",
				    0444,
				    binder_debugfs_dir_entry_root,
				    NULL,
-				    &transactions_fops);
+				    &binder_transactions_fops);
		debugfs_create_file("transaction_log",
				    0444,
				    binder_debugfs_dir_entry_root,
				    &binder_transaction_log,
-				    &transaction_log_fops);
+				    &binder_transaction_log_fops);
		debugfs_create_file("failed_transaction_log",
				    0444,
				    binder_debugfs_dir_entry_root,
				    &binder_transaction_log_failed,
-				    &transaction_log_fops);
+				    &binder_transaction_log_fops);
	}

-	if (strcmp(binder_devices_param, "") != 0) {
+	if (!IS_ENABLED(CONFIG_ANDROID_BINDERFS) &&
+	    strcmp(binder_devices_param, "") != 0) {
		/*
		 * Copy the module_parameter string, because we don't want to
		 * tokenize it in-place.
@@ -35,15 +35,63 @@ struct binder_device {
	struct inode *binderfs_inode;
};

/**
 * binderfs_mount_opts - mount options for binderfs
 * @max: maximum number of allocatable binderfs binder devices
 * @stats_mode: enable binder stats in binderfs.
 */
struct binderfs_mount_opts {
	int max;
	int stats_mode;
};

/**
 * binderfs_info - information about a binderfs mount
 * @ipc_ns:         The ipc namespace the binderfs mount belongs to.
 * @control_dentry: This records the dentry of this binderfs mount
 *                  binder-control device.
 * @root_uid:       uid that needs to be used when a new binder device is
 *                  created.
 * @root_gid:       gid that needs to be used when a new binder device is
 *                  created.
 * @mount_opts:     The mount options in use.
 * @device_count:   The current number of allocated binder devices.
 * @proc_log_dir:   Pointer to the directory dentry containing process-specific
 *                  logs.
 */
struct binderfs_info {
	struct ipc_namespace *ipc_ns;
	struct dentry *control_dentry;
	kuid_t root_uid;
	kgid_t root_gid;
	struct binderfs_mount_opts mount_opts;
	int device_count;
	struct dentry *proc_log_dir;
};

extern const struct file_operations binder_fops;

extern char *binder_devices_param;

#ifdef CONFIG_ANDROID_BINDERFS
extern bool is_binderfs_device(const struct inode *inode);
extern struct dentry *binderfs_create_file(struct dentry *dir, const char *name,
					   const struct file_operations *fops,
					   void *data);
extern void binderfs_remove_file(struct dentry *dentry);
#else
static inline bool is_binderfs_device(const struct inode *inode)
{
	return false;
}
static inline struct dentry *binderfs_create_file(struct dentry *dir,
						  const char *name,
						  const struct file_operations *fops,
						  void *data)
{
	return NULL;
}
static inline void binderfs_remove_file(struct dentry *dentry) {}
#endif

#ifdef CONFIG_ANDROID_BINDERFS
@@ -55,4 +103,42 @@ static inline int __init init_binderfs(void)
}
#endif

+int binder_stats_show(struct seq_file *m, void *unused);
+DEFINE_SHOW_ATTRIBUTE(binder_stats);
+
+int binder_state_show(struct seq_file *m, void *unused);
+DEFINE_SHOW_ATTRIBUTE(binder_state);
+
+int binder_transactions_show(struct seq_file *m, void *unused);
+DEFINE_SHOW_ATTRIBUTE(binder_transactions);
+
+int binder_transaction_log_show(struct seq_file *m, void *unused);
+DEFINE_SHOW_ATTRIBUTE(binder_transaction_log);
+
+struct binder_transaction_log_entry {
+	int debug_id;
+	int debug_id_done;
+	int call_type;
+	int from_proc;
+	int from_thread;
+	int target_handle;
+	int to_proc;
+	int to_thread;
+	int to_node;
+	int data_size;
+	int offsets_size;
+	int return_error_line;
+	uint32_t return_error;
+	uint32_t return_error_param;
+	const char *context_name;
+};
+
+struct binder_transaction_log {
+	atomic_t cur;
+	bool full;
+	struct binder_transaction_log_entry entry[32];
+};
+
+extern struct binder_transaction_log binder_transaction_log;
+extern struct binder_transaction_log binder_transaction_log_failed;
#endif /* _LINUX_BINDER_INTERNAL_H */
@@ -48,45 +48,23 @@ static dev_t binderfs_dev;
static DEFINE_MUTEX(binderfs_minors_mutex);
static DEFINE_IDA(binderfs_minors);

-/**
- * binderfs_mount_opts - mount options for binderfs
- * @max: maximum number of allocatable binderfs binder devices
- */
-struct binderfs_mount_opts {
-	int max;
-};
-
enum {
	Opt_max,
+	Opt_stats_mode,
	Opt_err
};

+enum binderfs_stats_mode {
+	STATS_NONE,
+	STATS_GLOBAL,
+};
+
static const match_table_t tokens = {
	{ Opt_max,	"max=%d" },
+	{ Opt_stats_mode, "stats=%s" },
	{ Opt_err,	NULL	}
};

-/**
- * binderfs_info - information about a binderfs mount
- * @ipc_ns:         The ipc namespace the binderfs mount belongs to.
- * @control_dentry: This records the dentry of this binderfs mount
- *                  binder-control device.
- * @root_uid:       uid that needs to be used when a new binder device is
- *                  created.
- * @root_gid:       gid that needs to be used when a new binder device is
- *                  created.
- * @mount_opts:     The mount options in use.
- * @device_count:   The current number of allocated binder devices.
- */
-struct binderfs_info {
-	struct ipc_namespace *ipc_ns;
-	struct dentry *control_dentry;
-	kuid_t root_uid;
-	kgid_t root_gid;
-	struct binderfs_mount_opts mount_opts;
-	int device_count;
-};
-
static inline struct binderfs_info *BINDERFS_I(const struct inode *inode)
{
	return inode->i_sb->s_fs_info;
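The new stats=global option is passed at mount time like any other binderfs option. A minimal C sketch; that the binderfs filesystem type is registered under the name "binder" and the chosen mount point are assumptions here:

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Mount a fresh binderfs instance with global binder stats;
	 * per the parsing code below, stats=global needs CAP_SYS_ADMIN. */
	if (mount(NULL, "/dev/binderfs", "binder", 0,
		  "max=4096,stats=global")) {
		perror("mount binderfs");
		return 1;
	}
	/* binder_logs/stats, binder_logs/state etc. now appear in the mount. */
	return 0;
}
```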
@@ -186,8 +164,7 @@ static int binderfs_binder_device_create(struct inode *ref_inode,
	req->major = MAJOR(binderfs_dev);
	req->minor = minor;

-	ret = copy_to_user(userp, req, sizeof(*req));
-	if (ret) {
+	if (userp && copy_to_user(userp, req, sizeof(*req))) {
		ret = -EFAULT;
		goto err;
	}

@@ -272,7 +249,7 @@ static void binderfs_evict_inode(struct inode *inode)

	clear_inode(inode);

-	if (!device)
+	if (!S_ISCHR(inode->i_mode) || !device)
		return;

	mutex_lock(&binderfs_minors_mutex);

@@ -291,8 +268,9 @@ static void binderfs_evict_inode(struct inode *inode)
static int binderfs_parse_mount_opts(char *data,
				     struct binderfs_mount_opts *opts)
{
-	char *p;
+	char *p, *stats;
	opts->max = BINDERFS_MAX_MINOR;
+	opts->stats_mode = STATS_NONE;

	while ((p = strsep(&data, ",")) != NULL) {
		substring_t args[MAX_OPT_ARGS];
@@ -312,6 +290,22 @@ static int binderfs_parse_mount_opts(char *data,

			opts->max = max_devices;
			break;
+		case Opt_stats_mode:
+			if (!capable(CAP_SYS_ADMIN))
+				return -EINVAL;
+
+			stats = match_strdup(&args[0]);
+			if (!stats)
+				return -ENOMEM;
+
+			if (strcmp(stats, "global") != 0) {
+				kfree(stats);
+				return -EINVAL;
+			}
+
+			opts->stats_mode = STATS_GLOBAL;
+			kfree(stats);
+			break;
		default:
			pr_err("Invalid mount options\n");
			return -EINVAL;
@@ -323,8 +317,21 @@ static int binderfs_parse_mount_opts(char *data,

static int binderfs_remount(struct super_block *sb, int *flags, char *data)
{
+	int prev_stats_mode, ret;
	struct binderfs_info *info = sb->s_fs_info;
-	return binderfs_parse_mount_opts(data, &info->mount_opts);
+
+	prev_stats_mode = info->mount_opts.stats_mode;
+	ret = binderfs_parse_mount_opts(data, &info->mount_opts);
+	if (ret)
+		return ret;
+
+	if (prev_stats_mode != info->mount_opts.stats_mode) {
+		pr_err("Binderfs stats mode cannot be changed during a remount\n");
+		info->mount_opts.stats_mode = prev_stats_mode;
+		return -EINVAL;
+	}
+
+	return 0;
}

static int binderfs_show_mount_opts(struct seq_file *seq, struct dentry *root)

@@ -334,6 +341,8 @@ static int binderfs_show_mount_opts(struct seq_file *seq, struct dentry *root)
	info = root->d_sb->s_fs_info;
	if (info->mount_opts.max <= BINDERFS_MAX_MINOR)
		seq_printf(seq, ",max=%d", info->mount_opts.max);
+	if (info->mount_opts.stats_mode == STATS_GLOBAL)
+		seq_printf(seq, ",stats=global");

	return 0;
}
@@ -462,11 +471,192 @@ static const struct inode_operations binderfs_dir_inode_operations = {
	.unlink = binderfs_unlink,
};

static struct inode *binderfs_make_inode(struct super_block *sb, int mode)
{
	struct inode *ret;

	ret = new_inode(sb);
	if (ret) {
		ret->i_ino = iunique(sb, BINDERFS_MAX_MINOR + INODE_OFFSET);
		ret->i_mode = mode;
		ret->i_atime = ret->i_mtime = ret->i_ctime = current_time(ret);
	}
	return ret;
}

static struct dentry *binderfs_create_dentry(struct dentry *parent,
					     const char *name)
{
	struct dentry *dentry;

	dentry = lookup_one_len(name, parent, strlen(name));
	if (IS_ERR(dentry))
		return dentry;

	/* Return error if the file/dir already exists. */
	if (d_really_is_positive(dentry)) {
		dput(dentry);
		return ERR_PTR(-EEXIST);
	}

	return dentry;
}

void binderfs_remove_file(struct dentry *dentry)
{
	struct inode *parent_inode;

	parent_inode = d_inode(dentry->d_parent);
	inode_lock(parent_inode);
	if (simple_positive(dentry)) {
		dget(dentry);
		simple_unlink(parent_inode, dentry);
		d_delete(dentry);
		dput(dentry);
	}
	inode_unlock(parent_inode);
}

struct dentry *binderfs_create_file(struct dentry *parent, const char *name,
				    const struct file_operations *fops,
				    void *data)
{
	struct dentry *dentry;
	struct inode *new_inode, *parent_inode;
	struct super_block *sb;

	parent_inode = d_inode(parent);
	inode_lock(parent_inode);

	dentry = binderfs_create_dentry(parent, name);
	if (IS_ERR(dentry))
		goto out;

	sb = parent_inode->i_sb;
	new_inode = binderfs_make_inode(sb, S_IFREG | 0444);
	if (!new_inode) {
		dput(dentry);
		dentry = ERR_PTR(-ENOMEM);
		goto out;
	}

	new_inode->i_fop = fops;
	new_inode->i_private = data;
	d_instantiate(dentry, new_inode);
	fsnotify_create(parent_inode, dentry);

out:
	inode_unlock(parent_inode);
	return dentry;
}

static struct dentry *binderfs_create_dir(struct dentry *parent,
					  const char *name)
{
	struct dentry *dentry;
	struct inode *new_inode, *parent_inode;
	struct super_block *sb;

	parent_inode = d_inode(parent);
	inode_lock(parent_inode);

	dentry = binderfs_create_dentry(parent, name);
	if (IS_ERR(dentry))
		goto out;

	sb = parent_inode->i_sb;
	new_inode = binderfs_make_inode(sb, S_IFDIR | 0755);
	if (!new_inode) {
		dput(dentry);
		dentry = ERR_PTR(-ENOMEM);
		goto out;
	}

	new_inode->i_fop = &simple_dir_operations;
	new_inode->i_op = &simple_dir_inode_operations;

	set_nlink(new_inode, 2);
	d_instantiate(dentry, new_inode);
	inc_nlink(parent_inode);
	fsnotify_mkdir(parent_inode, dentry);

out:
	inode_unlock(parent_inode);
	return dentry;
}

static int init_binder_logs(struct super_block *sb)
{
	struct dentry *binder_logs_root_dir, *dentry, *proc_log_dir;
	struct binderfs_info *info;
	int ret = 0;

	binder_logs_root_dir = binderfs_create_dir(sb->s_root,
						   "binder_logs");
	if (IS_ERR(binder_logs_root_dir)) {
		ret = PTR_ERR(binder_logs_root_dir);
		goto out;
	}

	dentry = binderfs_create_file(binder_logs_root_dir, "stats",
				      &binder_stats_fops, NULL);
	if (IS_ERR(dentry)) {
		ret = PTR_ERR(dentry);
		goto out;
	}

	dentry = binderfs_create_file(binder_logs_root_dir, "state",
				      &binder_state_fops, NULL);
	if (IS_ERR(dentry)) {
		ret = PTR_ERR(dentry);
		goto out;
	}

	dentry = binderfs_create_file(binder_logs_root_dir, "transactions",
				      &binder_transactions_fops, NULL);
	if (IS_ERR(dentry)) {
		ret = PTR_ERR(dentry);
		goto out;
	}

	dentry = binderfs_create_file(binder_logs_root_dir,
				      "transaction_log",
				      &binder_transaction_log_fops,
				      &binder_transaction_log);
	if (IS_ERR(dentry)) {
		ret = PTR_ERR(dentry);
		goto out;
	}

	dentry = binderfs_create_file(binder_logs_root_dir,
				      "failed_transaction_log",
				      &binder_transaction_log_fops,
				      &binder_transaction_log_failed);
	if (IS_ERR(dentry)) {
		ret = PTR_ERR(dentry);
		goto out;
	}

	proc_log_dir = binderfs_create_dir(binder_logs_root_dir, "proc");
	if (IS_ERR(proc_log_dir)) {
		ret = PTR_ERR(proc_log_dir);
		goto out;
	}
	info = sb->s_fs_info;
	info->proc_log_dir = proc_log_dir;

out:
	return ret;
}

static int binderfs_fill_super(struct super_block *sb, void *data, int silent)
{
	int ret;
	struct binderfs_info *info;
	struct inode *inode = NULL;
	struct binderfs_device device_info = { 0 };
	const char *name;
	size_t len;

	sb->s_blocksize = PAGE_SIZE;
	sb->s_blocksize_bits = PAGE_SHIFT;
@ -521,7 +711,25 @@ static int binderfs_fill_super(struct super_block *sb, void *data, int silent)
|
|||
if (!sb->s_root)
|
||||
return -ENOMEM;
|
||||
|
||||
return binderfs_binder_ctl_create(sb);
|
||||
ret = binderfs_binder_ctl_create(sb);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
name = binder_devices_param;
|
||||
for (len = strcspn(name, ","); len > 0; len = strcspn(name, ",")) {
|
||||
strscpy(device_info.name, name, len + 1);
|
||||
ret = binderfs_binder_device_create(inode, NULL, &device_info);
|
||||
if (ret)
|
||||
return ret;
|
||||
name += len;
|
||||
if (*name == ',')
|
||||
name++;
|
||||
}
|
||||
|
||||
if (info->mount_opts.stats_mode == STATS_GLOBAL)
|
||||
return init_binder_logs(sb);
|
||||
|
||||
return 0;
|
||||
}
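The device-name loop above is easy to misread: strscpy(device_info.name, name, len + 1) copies at most len characters and NUL-terminates, and the cursor then steps over the token and its trailing comma. A minimal userspace sketch of the same strcspn()-based parsing pattern (the "binder,hwbinder,vndbinder" list and the MAX_NAME bound are illustrative, not from the patch):

#include <stdio.h>
#include <string.h>

#define MAX_NAME 255

/* Walk a comma-separated list without modifying it, the same way
 * binderfs_fill_super() and init_binderfs() do: strcspn() yields the
 * length of the next token, then the cursor skips the delimiter.
 */
int main(void)
{
	const char *name = "binder,hwbinder,vndbinder";
	char token[MAX_NAME + 1];
	size_t len;

	for (len = strcspn(name, ","); len > 0; len = strcspn(name, ",")) {
		if (len > MAX_NAME)
			return 1;
		memcpy(token, name, len);
		token[len] = '\0';
		printf("device: %s\n", token);
		name += len;
		if (*name == ',')
			name++;
	}
	return 0;
}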
static struct dentry *binderfs_mount(struct file_system_type *fs_type,

@@ -553,6 +761,18 @@ static struct file_system_type binder_fs_type = {
int __init init_binderfs(void)
{
	int ret;
	const char *name;
	size_t len;

	/* Verify that the default binderfs device names are valid. */
	name = binder_devices_param;
	for (len = strcspn(name, ","); len > 0; len = strcspn(name, ",")) {
		if (len > BINDERFS_MAX_NAME)
			return -E2BIG;
		name += len;
		if (*name == ',')
			name++;
	}

	/* Allocate new major number for binderfs. */
	ret = alloc_chrdev_region(&binderfs_dev, 0, BINDERFS_MAX_MINOR,
@@ -97,6 +97,13 @@ void __weak unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
}
#endif

static inline bool should_stop_iteration(void)
{
	if (need_resched())
		cond_resched();
	return fatal_signal_pending(current);
}

/*
 * This function reads the *physical* memory. The f_pos points directly to the
 * memory location.
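The helper gives these long copy loops a voluntary preemption point and an early exit when the task has a fatal signal pending. A hedged sketch of the loop shape the hunks below add it to (copy_chunk() is a hypothetical stand-in for the per-chunk copy, not a function from mem.c):

/* Sketch: a long /dev/mem style copy loop with a scheduling point
 * per chunk, assuming a hypothetical copy_chunk() helper.
 */
static ssize_t copy_loop_example(char __user *buf, size_t count, loff_t *ppos)
{
	size_t done = 0;

	while (count > 0) {
		size_t sz = min_t(size_t, count, PAGE_SIZE);

		if (copy_chunk(buf + done, *ppos + done, sz))
			return -EFAULT;
		done += sz;
		count -= sz;
		/* reschedule if needed; bail out on a fatal signal */
		if (should_stop_iteration())
			break;
	}
	*ppos += done;
	return done;
}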
@@ -175,6 +182,8 @@ static ssize_t read_mem(struct file *file, char __user *buf,
		p += sz;
		count -= sz;
		read += sz;
		if (should_stop_iteration())
			break;
	}
	kfree(bounce);

@@ -251,6 +260,8 @@ static ssize_t write_mem(struct file *file, const char __user *buf,
		p += sz;
		count -= sz;
		written += sz;
		if (should_stop_iteration())
			break;
	}

	*ppos += written;

@@ -468,6 +479,10 @@ static ssize_t read_kmem(struct file *file, char __user *buf,
			read += sz;
			low_count -= sz;
			count -= sz;
			if (should_stop_iteration()) {
				count = 0;
				break;
			}
		}
	}

@@ -492,6 +507,8 @@ static ssize_t read_kmem(struct file *file, char __user *buf,
			buf += sz;
			read += sz;
			p += sz;
			if (should_stop_iteration())
				break;
		}
		free_page((unsigned long)kbuf);
	}

@@ -544,6 +561,8 @@ static ssize_t do_write_kmem(unsigned long p, const char __user *buf,
		p += sz;
		count -= sz;
		written += sz;
		if (should_stop_iteration())
			break;
	}

	*ppos += written;

@@ -595,6 +614,8 @@ static ssize_t write_kmem(struct file *file, const char __user *buf,
			buf += sz;
			virtr += sz;
			p += sz;
			if (should_stop_iteration())
				break;
		}
		free_page((unsigned long)kbuf);
	}
@@ -737,7 +737,7 @@ static int pp_release(struct inode *inode, struct file *file)
			 "negotiated back to compatibility mode because user-space forgot\n");
	}

	if (pp->flags & PP_CLAIMED) {
	if ((pp->flags & PP_CLAIMED) && pp->pdev) {
		struct ieee1284_info *info;

		info = &pp->pdev->port->ieee1284;
@@ -373,7 +373,7 @@ static int tosh_get_machine_id(void __iomem *bios)
	   value. This has been verified on a Satellite Pro 430CDT,
	   Tecra 750CDT, Tecra 780DVD and Satellite 310CDT. */
#if TOSH_DEBUG
	printk("toshiba: debugging ID ebx=0x%04x\n", regs.ebx);
	pr_debug("toshiba: debugging ID ebx=0x%04x\n", regs.ebx);
#endif
	bx = 0xe6f5;

@@ -417,7 +417,7 @@ static int tosh_probe(void)

	for (i=0;i<7;i++) {
		if (readb(bios+0xe010+i)!=signature[i]) {
			printk("toshiba: not a supported Toshiba laptop\n");
			pr_err("toshiba: not a supported Toshiba laptop\n");
			iounmap(bios);
			return -ENODEV;
		}

@@ -433,7 +433,7 @@ static int tosh_probe(void)
	/* if this is not a Toshiba laptop carry flag is set and ah=0x86 */

	if ((flag==1) || ((regs.eax & 0xff00)==0x8600)) {
		printk("toshiba: not a supported Toshiba laptop\n");
		pr_err("toshiba: not a supported Toshiba laptop\n");
		iounmap(bios);
		return -ENODEV;
	}

@@ -486,7 +486,7 @@ static int __init toshiba_init(void)
	if (tosh_probe())
		return -ENODEV;

	printk(KERN_INFO "Toshiba System Management Mode driver v" TOSH_VERSION "\n");
	pr_info("Toshiba System Management Mode driver v" TOSH_VERSION "\n");

	/* set the port to use for Fn status if not specified as a parameter */
	if (tosh_fn==0x00)
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2018, The Linux Foundation. All rights reserved.
 * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
 */

#include <linux/clk-provider.h>

@@ -12,23 +12,13 @@
#include <linux/platform_device.h>
#include <soc/qcom/cmd-db.h>
#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>

#include <dt-bindings/clock/qcom,rpmh.h>

#define CLK_RPMH_ARC_EN_OFFSET		0
#define CLK_RPMH_VRM_EN_OFFSET		4

#define BCM_TCS_CMD_COMMIT_MASK		0x40000000
#define BCM_TCS_CMD_VALID_SHIFT		29
#define BCM_TCS_CMD_VOTE_MASK		0x3fff
#define BCM_TCS_CMD_VOTE_SHIFT		0

#define BCM_TCS_CMD(valid, vote)		\
	(BCM_TCS_CMD_COMMIT_MASK |		\
	((valid) << BCM_TCS_CMD_VALID_SHIFT) |	\
	((vote & BCM_TCS_CMD_VOTE_MASK)		\
	<< BCM_TCS_CMD_VOTE_SHIFT))

/**
 * struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager(BCM)
 * @unit: divisor used to convert Hz value to an RPMh msg

@@ -269,7 +259,7 @@ static int clk_rpmh_bcm_send_cmd(struct clk_rpmh *c, bool enable)
	}

	cmd.addr = c->res_addr;
	cmd.data = BCM_TCS_CMD(enable, cmd_state);
	cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state);

	ret = rpmh_write_async(c->dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);
	if (ret) {
@@ -140,10 +140,8 @@ static int adc_jack_probe(struct platform_device *pdev)
		return err;

	data->irq = platform_get_irq(pdev, 0);
	if (data->irq < 0) {
		dev_err(&pdev->dev, "platform_get_irq failed\n");
	if (data->irq < 0)
		return -ENODEV;
	}

	err = request_any_context_irq(data->irq, adc_jack_irq_thread,
				      pdata->irq_flags, pdata->name, data);
@@ -1253,7 +1253,7 @@ static int arizona_extcon_get_micd_configs(struct device *dev,
	int i, j;
	u32 *vals;

	nconfs = device_property_read_u32_array(arizona->dev, prop, NULL, 0);
	nconfs = device_property_count_u32(arizona->dev, prop);
	if (nconfs <= 0)
		return 0;
@@ -121,7 +121,6 @@ static const char * const axp288_pwr_up_down_info[] = {
	"Last shutdown caused by PMIC UVLO threshold",
	"Last shutdown caused by SOC initiated cold off",
	"Last shutdown caused by user pressing the power button",
	NULL,
};

/*

@@ -130,18 +129,21 @@ static const char * const axp288_pwr_up_down_info[] = {
 */
static void axp288_extcon_log_rsi(struct axp288_extcon_info *info)
{
	const char * const *rsi;
	unsigned int val, i, clear_mask = 0;
	unsigned long bits;
	int ret;

	ret = regmap_read(info->regmap, AXP288_PS_BOOT_REASON_REG, &val);
	for (i = 0, rsi = axp288_pwr_up_down_info; *rsi; rsi++, i++) {
		if (val & BIT(i)) {
			dev_dbg(info->dev, "%s\n", *rsi);
			clear_mask |= BIT(i);
		}
	if (ret < 0) {
		dev_err(info->dev, "failed to read reset source indicator\n");
		return;
	}

	bits = val & GENMASK(ARRAY_SIZE(axp288_pwr_up_down_info) - 1, 0);
	for_each_set_bit(i, &bits, ARRAY_SIZE(axp288_pwr_up_down_info))
		dev_dbg(info->dev, "%s\n", axp288_pwr_up_down_info[i]);
	clear_mask = bits;

	/* Clear the register value for next reboot (write 1 to clear bit) */
	regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask);
}
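The rework trades the manual index-and-test loop for a bitmap walk; note the patch also drops the NULL terminator from the array, so ARRAY_SIZE() now counts only real strings and GENMASK() can bound the register value to them. A hedged sketch of the pattern in isolation (the 0x05 sample value and the log_rsi_example() wrapper are illustrative, not part of the patch):

/* Sketch: visit only the set bits of a register value bounded to the
 * documented reset reasons. Bits 0 and 2 are set in the sample value.
 */
static void log_rsi_example(struct axp288_extcon_info *info)
{
	unsigned long bits;
	unsigned int i;

	bits = 0x05 & GENMASK(ARRAY_SIZE(axp288_pwr_up_down_info) - 1, 0);
	for_each_set_bit(i, &bits, ARRAY_SIZE(axp288_pwr_up_down_info))
		dev_dbg(info->dev, "%s\n", axp288_pwr_up_down_info[i]);
}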
@@ -363,6 +363,7 @@ MODULE_DEVICE_TABLE(i2c, fsa9480_id);

static const struct of_device_id fsa9480_of_match[] = {
	{ .compatible = "fcs,fsa9480", },
	{ .compatible = "fcs,fsa880", },
	{ },
};
MODULE_DEVICE_TABLE(of, fsa9480_of_match);
@@ -22,26 +22,22 @@
/**
 * struct gpio_extcon_data - A simple GPIO-controlled extcon device state container.
 * @edev:		Extcon device.
 * @irq:		Interrupt line for the external connector.
 * @work:		Work fired by the interrupt.
 * @debounce_jiffies:	Number of jiffies to wait for the GPIO to stabilize, from the debounce
 *			value.
 * @gpiod:		GPIO descriptor for this external connector.
 * @extcon_id:		The unique id of specific external connector.
 * @debounce:		Debounce time for GPIO IRQ in ms.
 * @irq_flags:		IRQ Flags (e.g., IRQF_TRIGGER_LOW).
 * @check_on_resume:	Boolean describing whether to check the state of gpio
 *			while resuming from sleep.
 */
struct gpio_extcon_data {
	struct extcon_dev *edev;
	int irq;
	struct delayed_work work;
	unsigned long debounce_jiffies;
	struct gpio_desc *gpiod;
	unsigned int extcon_id;
	unsigned long debounce;
	unsigned long irq_flags;
	bool check_on_resume;
};

@@ -69,6 +65,8 @@ static int gpio_extcon_probe(struct platform_device *pdev)
{
	struct gpio_extcon_data *data;
	struct device *dev = &pdev->dev;
	unsigned long irq_flags;
	int irq;
	int ret;

	data = devm_kzalloc(dev, sizeof(struct gpio_extcon_data), GFP_KERNEL);

@@ -82,15 +80,26 @@ static int gpio_extcon_probe(struct platform_device *pdev)
	 * developed to get the extcon id from device-tree or others.
	 * This will have to be solved later.
	 */
	if (!data->irq_flags || data->extcon_id > EXTCON_NONE)
	if (data->extcon_id > EXTCON_NONE)
		return -EINVAL;

	data->gpiod = devm_gpiod_get(dev, "extcon", GPIOD_IN);
	if (IS_ERR(data->gpiod))
		return PTR_ERR(data->gpiod);
	data->irq = gpiod_to_irq(data->gpiod);
	if (data->irq <= 0)
		return data->irq;
	irq = gpiod_to_irq(data->gpiod);
	if (irq <= 0)
		return irq;

	/*
	 * It is unlikely that this is an acknowledged interrupt that goes
	 * away after handling, what we are looking for are falling edges
	 * if the signal is active low, and rising edges if the signal is
	 * active high.
	 */
	if (gpiod_is_active_low(data->gpiod))
		irq_flags = IRQF_TRIGGER_FALLING;
	else
		irq_flags = IRQF_TRIGGER_RISING;

	/* Allocate the memory of extcon device and register extcon device */
	data->edev = devm_extcon_dev_allocate(dev, &data->extcon_id);

@@ -109,8 +118,8 @@ static int gpio_extcon_probe(struct platform_device *pdev)
	 * Request the interrupt of gpio to detect whether external connector
	 * is attached or detached.
	 */
	ret = devm_request_any_context_irq(dev, data->irq,
					   gpio_irq_handler, data->irq_flags,
	ret = devm_request_any_context_irq(dev, irq,
					   gpio_irq_handler, irq_flags,
					   pdev->name, data);
	if (ret < 0)
		return ret;
@@ -774,12 +774,12 @@ static int max77843_init_muic_regmap(struct max77693_dev *max77843)
{
	int ret;

	max77843->i2c_muic = i2c_new_dummy(max77843->i2c->adapter,
	max77843->i2c_muic = i2c_new_dummy_device(max77843->i2c->adapter,
			I2C_ADDR_MUIC);
	if (!max77843->i2c_muic) {
	if (IS_ERR(max77843->i2c_muic)) {
		dev_err(&max77843->i2c->dev,
			"Cannot allocate I2C device for MUIC\n");
		return -ENOMEM;
		return PTR_ERR(max77843->i2c_muic);
	}

	i2c_set_clientdata(max77843->i2c_muic, max77843);
@@ -597,7 +597,7 @@ static int sm5022_muic_i2c_probe(struct i2c_client *i2c,

	ret = devm_request_threaded_irq(info->dev, virq, NULL,
					sm5502_muic_irq_handler,
					IRQF_NO_SUSPEND,
					IRQF_NO_SUSPEND | IRQF_ONESHOT,
					muic_irq->name, info);
	if (ret) {
		dev_err(info->dev,
@@ -216,6 +216,24 @@ config INTEL_STRATIX10_SERVICE

	  Say Y here if you want Stratix10 service layer support.

config INTEL_STRATIX10_RSU
	tristate "Intel Stratix10 Remote System Update"
	depends on INTEL_STRATIX10_SERVICE
	help
	  The Intel Remote System Update (RSU) driver exposes interfaces,
	  accessed through the Intel Service Layer, to user space via sysfs
	  device attribute nodes. The RSU interfaces report/control some of
	  the optional RSU features of the Stratix 10 SoC FPGA.

	  The RSU provides a way for customers to update the boot
	  configuration of a Stratix 10 SoC device with significantly reduced
	  risk of corrupting the bitstream storage and bricking the system.

	  Enable RSU support if you are using an Intel SoC FPGA with the RSU
	  feature enabled and you want Linux user space control.

	  Say Y here if you want Intel RSU support.

config QCOM_SCM
	bool
	depends on ARM || ARM64
@@ -11,6 +11,7 @@ obj-$(CONFIG_EDD)		+= edd.o
obj-$(CONFIG_EFI_PCDP)		+= pcdp.o
obj-$(CONFIG_DMIID)		+= dmi-id.o
obj-$(CONFIG_INTEL_STRATIX10_SERVICE) += stratix10-svc.o
obj-$(CONFIG_INTEL_STRATIX10_RSU)     += stratix10-rsu.o
obj-$(CONFIG_ISCSI_IBFT_FIND)	+= iscsi_ibft_find.o
obj-$(CONFIG_ISCSI_IBFT)	+= iscsi_ibft.o
obj-$(CONFIG_FIRMWARE_MEMMAP)	+= memmap.o
@@ -92,8 +92,8 @@ static int vpd_section_check_key_name(const u8 *key, s32 key_len)
	return VPD_OK;
}

static int vpd_section_attrib_add(const u8 *key, s32 key_len,
				  const u8 *value, s32 value_len,
static int vpd_section_attrib_add(const u8 *key, u32 key_len,
				  const u8 *value, u32 value_len,
				  void *arg)
{
	int ret;

@@ -9,8 +9,8 @@

#include "vpd_decode.h"

static int vpd_decode_len(const s32 max_len, const u8 *in,
			  s32 *length, s32 *decoded_len)
static int vpd_decode_len(const u32 max_len, const u8 *in,
			  u32 *length, u32 *decoded_len)
{
	u8 more;
	int i = 0;

@@ -30,18 +30,39 @@ static int vpd_decode_len(const s32 max_len, const u8 *in,
	} while (more);

	*decoded_len = i;

	return VPD_OK;
}

int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed,
static int vpd_decode_entry(const u32 max_len, const u8 *input_buf,
			    u32 *_consumed, const u8 **entry, u32 *entry_len)
{
	u32 decoded_len;
	u32 consumed = *_consumed;

	if (vpd_decode_len(max_len - consumed, &input_buf[consumed],
			   entry_len, &decoded_len) != VPD_OK)
		return VPD_FAIL;
	if (max_len - consumed < decoded_len)
		return VPD_FAIL;

	consumed += decoded_len;
	*entry = input_buf + consumed;

	/* entry_len is untrusted data and must be checked again. */
	if (max_len - consumed < *entry_len)
		return VPD_FAIL;

	consumed += *entry_len;
	*_consumed = consumed;
	return VPD_OK;
}

int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed,
		      vpd_decode_callback callback, void *callback_arg)
{
	int type;
	int res;
	s32 key_len;
	s32 value_len;
	s32 decoded_len;
	u32 key_len;
	u32 value_len;
	const u8 *key;
	const u8 *value;

@@ -56,26 +77,14 @@ int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed,
	case VPD_TYPE_STRING:
		(*consumed)++;

		/* key */
		res = vpd_decode_len(max_len - *consumed, &input_buf[*consumed],
				     &key_len, &decoded_len);
		if (res != VPD_OK || *consumed + decoded_len >= max_len)
		if (vpd_decode_entry(max_len, input_buf, consumed, &key,
				     &key_len) != VPD_OK)
			return VPD_FAIL;

		*consumed += decoded_len;
		key = &input_buf[*consumed];
		*consumed += key_len;

		/* value */
		res = vpd_decode_len(max_len - *consumed, &input_buf[*consumed],
				     &value_len, &decoded_len);
		if (res != VPD_OK || *consumed + decoded_len > max_len)
		if (vpd_decode_entry(max_len, input_buf, consumed, &value,
				     &value_len) != VPD_OK)
			return VPD_FAIL;

		*consumed += decoded_len;
		value = &input_buf[*consumed];
		*consumed += value_len;

		if (type == VPD_TYPE_STRING)
			return callback(key, key_len, value, value_len,
					callback_arg);

@@ -25,8 +25,8 @@ enum {
};

/* Callback for vpd_decode_string to invoke. */
typedef int vpd_decode_callback(const u8 *key, s32 key_len,
				const u8 *value, s32 value_len,
typedef int vpd_decode_callback(const u8 *key, u32 key_len,
				const u8 *value, u32 value_len,
				void *arg);

/*

@@ -44,7 +44,7 @@ typedef int vpd_decode_callback(const u8 *key, s32 key_len,
 * If one entry is successfully decoded, sends it to callback and returns the
 * result.
 */
int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed,
		      vpd_decode_callback callback, void *callback_arg);

#endif /* __VPD_DECODE_H */
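The VPD length field is a varint: each byte carries seven bits of length and the high bit flags a continuation. The hunk containing the loop body of vpd_decode_len() is elided above, so treat the accumulation order in this small userspace model as an assumption matching the coreboot VPD format (the sample buffer is illustrative):

#include <stdint.h>
#include <stdio.h>

/* Userspace model of vpd_decode_len(): 7 bits of length per byte,
 * high bit set means another length byte follows.
 */
static int decode_len(uint32_t max_len, const uint8_t *in,
		      uint32_t *length, uint32_t *decoded_len)
{
	uint8_t more;
	uint32_t i = 0;

	*length = 0;
	do {
		if (i >= max_len)
			return -1;
		more = in[i] & 0x80;
		*length = (*length << 7) | (in[i] & 0x7f);
		i++;
	} while (more);
	*decoded_len = i;
	return 0;
}

int main(void)
{
	/* 0x81 0x05 encodes (1 << 7) | 5 = 133 in two bytes */
	const uint8_t buf[] = { 0x81, 0x05 };
	uint32_t len, used;

	if (!decode_len(sizeof(buf), buf, &len, &used))
		printf("length %u, consumed %u bytes\n", len, used);
	return 0;
}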
@@ -0,0 +1,451 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2018-2019, Intel Corporation
 */

#include <linux/arm-smccc.h>
#include <linux/bitfield.h>
#include <linux/completion.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/firmware/intel/stratix10-svc-client.h>
#include <linux/string.h>
#include <linux/sysfs.h>

#define RSU_STATE_MASK			GENMASK_ULL(31, 0)
#define RSU_VERSION_MASK		GENMASK_ULL(63, 32)
#define RSU_ERROR_LOCATION_MASK		GENMASK_ULL(31, 0)
#define RSU_ERROR_DETAIL_MASK		GENMASK_ULL(63, 32)
#define RSU_FW_VERSION_MASK		GENMASK_ULL(15, 0)

#define RSU_TIMEOUT	(msecs_to_jiffies(SVC_RSU_REQUEST_TIMEOUT_MS))

#define INVALID_RETRY_COUNTER		0xFFFFFFFF

typedef void (*rsu_callback)(struct stratix10_svc_client *client,
			     struct stratix10_svc_cb_data *data);
/**
 * struct stratix10_rsu_priv - rsu data structure
 * @chan: pointer to the allocated service channel
 * @client: active service client
 * @completion: state for callback completion
 * @lock: a mutex to protect callback completion state
 * @status.current_image: address of image currently running in flash
 * @status.fail_image: address of failed image in flash
 * @status.version: the version number of RSU firmware
 * @status.state: the state of RSU system
 * @status.error_details: error code
 * @status.error_location: the error offset inside the image that failed
 * @retry_counter: the current image's retry counter
 */
struct stratix10_rsu_priv {
	struct stratix10_svc_chan *chan;
	struct stratix10_svc_client client;
	struct completion completion;
	struct mutex lock;
	struct {
		unsigned long current_image;
		unsigned long fail_image;
		unsigned int version;
		unsigned int state;
		unsigned int error_details;
		unsigned int error_location;
	} status;
	unsigned int retry_counter;
};

/**
 * rsu_status_callback() - Status callback from Intel Service Layer
 * @client: pointer to service client
 * @data: pointer to callback data structure
 *
 * Callback from Intel service layer for RSU status request. Status is
 * only updated after a system reboot, so a status request is made
 * during driver probe.
 */
static void rsu_status_callback(struct stratix10_svc_client *client,
				struct stratix10_svc_cb_data *data)
{
	struct stratix10_rsu_priv *priv = client->priv;
	struct arm_smccc_res *res = (struct arm_smccc_res *)data->kaddr1;

	if (data->status == BIT(SVC_STATUS_RSU_OK)) {
		priv->status.version = FIELD_GET(RSU_VERSION_MASK,
						 res->a2);
		priv->status.state = FIELD_GET(RSU_STATE_MASK, res->a2);
		priv->status.fail_image = res->a1;
		priv->status.current_image = res->a0;
		priv->status.error_location =
			FIELD_GET(RSU_ERROR_LOCATION_MASK, res->a3);
		priv->status.error_details =
			FIELD_GET(RSU_ERROR_DETAIL_MASK, res->a3);
	} else {
		dev_err(client->dev, "COMMAND_RSU_STATUS returned 0x%lX\n",
			res->a0);
		priv->status.version = 0;
		priv->status.state = 0;
		priv->status.fail_image = 0;
		priv->status.current_image = 0;
		priv->status.error_location = 0;
		priv->status.error_details = 0;
	}

	complete(&priv->completion);
}
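FIELD_GET() with the GENMASK_ULL() masks defined above is what splits each 64-bit SMC result register into its packed fields. A tiny sketch with an illustrative value (not taken from hardware):

/* Sketch: unpacking a packed 64-bit status word with FIELD_GET(). */
u64 a2 = 0x0000000300000001ULL;				/* illustrative */
unsigned int version = FIELD_GET(RSU_VERSION_MASK, a2);	/* bits 63:32 -> 3 */
unsigned int state = FIELD_GET(RSU_STATE_MASK, a2);	/* bits 31:0  -> 1 */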
/**
 * rsu_command_callback() - Update callback from Intel Service Layer
 * @client: pointer to client
 * @data: pointer to callback data structure
 *
 * Callback from Intel service layer for RSU commands.
 */
static void rsu_command_callback(struct stratix10_svc_client *client,
				 struct stratix10_svc_cb_data *data)
{
	struct stratix10_rsu_priv *priv = client->priv;

	if (data->status != BIT(SVC_STATUS_RSU_OK))
		dev_err(client->dev, "RSU returned status is %i\n",
			data->status);
	complete(&priv->completion);
}

/**
 * rsu_retry_callback() - Callback from Intel service layer for getting
 * the current image's retry counter from firmware
 * @client: pointer to client
 * @data: pointer to callback data structure
 *
 * Callback from Intel service layer for the retry counter, which tells
 * the user how many more times the image is allowed to reload itself
 * before giving up and starting the RSU fail-over flow.
 */
static void rsu_retry_callback(struct stratix10_svc_client *client,
			       struct stratix10_svc_cb_data *data)
{
	struct stratix10_rsu_priv *priv = client->priv;
	unsigned int *counter = (unsigned int *)data->kaddr1;

	if (data->status == BIT(SVC_STATUS_RSU_OK))
		priv->retry_counter = *counter;
	else
		dev_err(client->dev, "Failed to get retry counter %i\n",
			data->status);

	complete(&priv->completion);
}

/**
 * rsu_send_msg() - send a message to Intel service layer
 * @priv: pointer to rsu private data
 * @command: RSU status or update command
 * @arg: the request argument, the bitstream address or notify status
 * @callback: function pointer for the callback (status or update)
 *
 * Start an Intel service layer transaction to perform the SMC call that
 * is necessary to get the RSU boot log or set the address of the
 * bitstream to boot after reboot.
 *
 * Returns 0 on success or -ETIMEDOUT on error.
 */
static int rsu_send_msg(struct stratix10_rsu_priv *priv,
			enum stratix10_svc_command_code command,
			unsigned long arg,
			rsu_callback callback)
{
	struct stratix10_svc_client_msg msg;
	int ret;

	mutex_lock(&priv->lock);
	reinit_completion(&priv->completion);
	priv->client.receive_cb = callback;

	msg.command = command;
	if (arg)
		msg.arg[0] = arg;

	ret = stratix10_svc_send(priv->chan, &msg);
	if (ret < 0)
		goto status_done;

	ret = wait_for_completion_interruptible_timeout(&priv->completion,
							RSU_TIMEOUT);
	if (!ret) {
		dev_err(priv->client.dev,
			"timeout waiting for SMC call\n");
		ret = -ETIMEDOUT;
		goto status_done;
	} else if (ret < 0) {
		dev_err(priv->client.dev,
			"error %d waiting for SMC call\n", ret);
		goto status_done;
	} else {
		ret = 0;
	}

status_done:
	stratix10_svc_done(priv->chan);
	mutex_unlock(&priv->lock);
	return ret;
}

/*
 * This driver exposes some optional features of the Intel Stratix 10 SoC FPGA.
 * The sysfs interfaces exposed here are FPGA Remote System Update (RSU)
 * related. They allow user space software to query the configuration system
 * status and to request optional reboot behavior specific to Intel FPGAs.
 */

static ssize_t current_image_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08lx\n", priv->status.current_image);
}

static ssize_t fail_image_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08lx\n", priv->status.fail_image);
}

static ssize_t version_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08x\n", priv->status.version);
}

static ssize_t state_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08x\n", priv->status.state);
}

static ssize_t error_location_show(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08x\n", priv->status.error_location);
}

static ssize_t error_details_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08x\n", priv->status.error_details);
}

static ssize_t retry_counter_show(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);

	if (!priv)
		return -ENODEV;

	return sprintf(buf, "0x%08x\n", priv->retry_counter);
}

static ssize_t reboot_image_store(struct device *dev,
				  struct device_attribute *attr,
				  const char *buf, size_t count)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);
	unsigned long address;
	int ret;

	if (priv == 0)
		return -ENODEV;

	ret = kstrtoul(buf, 0, &address);
	if (ret)
		return ret;

	ret = rsu_send_msg(priv, COMMAND_RSU_UPDATE,
			   address, rsu_command_callback);
	if (ret) {
		dev_err(dev, "Error, RSU update returned %i\n", ret);
		return ret;
	}

	return count;
}

static ssize_t notify_store(struct device *dev,
			    struct device_attribute *attr,
			    const char *buf, size_t count)
{
	struct stratix10_rsu_priv *priv = dev_get_drvdata(dev);
	unsigned long status;
	int ret;

	if (priv == 0)
		return -ENODEV;

	ret = kstrtoul(buf, 0, &status);
	if (ret)
		return ret;

	ret = rsu_send_msg(priv, COMMAND_RSU_NOTIFY,
			   status, rsu_command_callback);
	if (ret) {
		dev_err(dev, "Error, RSU notify returned %i\n", ret);
		return ret;
	}

	/* to get the updated state */
	ret = rsu_send_msg(priv, COMMAND_RSU_STATUS,
			   0, rsu_status_callback);
	if (ret) {
		dev_err(dev, "Error, getting RSU status %i\n", ret);
		return ret;
	}

	/* only firmware 19.3 or later supports the retry counter feature */
	if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) {
		ret = rsu_send_msg(priv, COMMAND_RSU_RETRY,
				   0, rsu_retry_callback);
		if (ret) {
			dev_err(dev,
				"Error, getting RSU retry %i\n", ret);
			return ret;
		}
	}

	return count;
}

static DEVICE_ATTR_RO(current_image);
static DEVICE_ATTR_RO(fail_image);
static DEVICE_ATTR_RO(state);
static DEVICE_ATTR_RO(version);
static DEVICE_ATTR_RO(error_location);
static DEVICE_ATTR_RO(error_details);
static DEVICE_ATTR_RO(retry_counter);
static DEVICE_ATTR_WO(reboot_image);
static DEVICE_ATTR_WO(notify);

static struct attribute *rsu_attrs[] = {
	&dev_attr_current_image.attr,
	&dev_attr_fail_image.attr,
	&dev_attr_state.attr,
	&dev_attr_version.attr,
	&dev_attr_error_location.attr,
	&dev_attr_error_details.attr,
	&dev_attr_retry_counter.attr,
	&dev_attr_reboot_image.attr,
	&dev_attr_notify.attr,
	NULL
};

ATTRIBUTE_GROUPS(rsu);

static int stratix10_rsu_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct stratix10_rsu_priv *priv;
	int ret;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->client.dev = dev;
	priv->client.receive_cb = NULL;
	priv->client.priv = priv;
	priv->status.current_image = 0;
	priv->status.fail_image = 0;
	priv->status.error_location = 0;
	priv->status.error_details = 0;
	priv->status.version = 0;
	priv->status.state = 0;
	priv->retry_counter = INVALID_RETRY_COUNTER;

	mutex_init(&priv->lock);
	priv->chan = stratix10_svc_request_channel_byname(&priv->client,
							  SVC_CLIENT_RSU);
	if (IS_ERR(priv->chan)) {
		dev_err(dev, "couldn't get service channel %s\n",
			SVC_CLIENT_RSU);
		return PTR_ERR(priv->chan);
	}

	init_completion(&priv->completion);
	platform_set_drvdata(pdev, priv);

	/* get the initial state from firmware */
	ret = rsu_send_msg(priv, COMMAND_RSU_STATUS,
			   0, rsu_status_callback);
	if (ret) {
		dev_err(dev, "Error, getting RSU status %i\n", ret);
		stratix10_svc_free_channel(priv->chan);
	}

	/* only firmware 19.3 or later supports the retry counter feature */
	if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) {
		ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 0,
				   rsu_retry_callback);
		if (ret) {
			dev_err(dev,
				"Error, getting RSU retry %i\n", ret);
			stratix10_svc_free_channel(priv->chan);
		}
	}

	return ret;
}

static int stratix10_rsu_remove(struct platform_device *pdev)
{
	struct stratix10_rsu_priv *priv = platform_get_drvdata(pdev);

	stratix10_svc_free_channel(priv->chan);
	return 0;
}

static struct platform_driver stratix10_rsu_driver = {
	.probe = stratix10_rsu_probe,
	.remove = stratix10_rsu_remove,
	.driver = {
		.name = "stratix10-rsu",
		.dev_groups = rsu_groups,
	},
};

module_platform_driver(stratix10_rsu_driver);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Intel Remote System Update Driver");
MODULE_AUTHOR("Richard Gong <richard.gong@intel.com>");
@@ -38,12 +38,23 @@
#define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS	200
#define FPGA_CONFIG_STATUS_TIMEOUT_SEC		30

/* stratix10 service layer clients */
#define STRATIX10_RSU				"stratix10-rsu"

typedef void (svc_invoke_fn)(unsigned long, unsigned long, unsigned long,
			     unsigned long, unsigned long, unsigned long,
			     unsigned long, unsigned long,
			     struct arm_smccc_res *);
struct stratix10_svc_chan;

/**
 * struct stratix10_svc - svc private data
 * @stratix10_svc_rsu: pointer to stratix10 RSU device
 */
struct stratix10_svc {
	struct platform_device *stratix10_svc_rsu;
};

/**
 * struct stratix10_svc_sh_memory - service shared memory structure
 * @sync_complete: state for a completion

@@ -296,8 +307,13 @@ static void svc_thread_recv_status_ok(struct stratix10_svc_data *p_data,
		cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED);
		break;
	case COMMAND_RSU_UPDATE:
	case COMMAND_RSU_NOTIFY:
		cb_data->status = BIT(SVC_STATUS_RSU_OK);
		break;
	case COMMAND_RSU_RETRY:
		cb_data->status = BIT(SVC_STATUS_RSU_OK);
		cb_data->kaddr1 = &res.a1;
		break;
	default:
		pr_warn("it shouldn't happen\n");
		break;

@@ -386,6 +402,16 @@ static int svc_normal_to_secure_thread(void *data)
			a1 = pdata->arg[0];
			a2 = 0;
			break;
		case COMMAND_RSU_NOTIFY:
			a0 = INTEL_SIP_SMC_RSU_NOTIFY;
			a1 = pdata->arg[0];
			a2 = 0;
			break;
		case COMMAND_RSU_RETRY:
			a0 = INTEL_SIP_SMC_RSU_RETRY_COUNTER;
			a1 = 0;
			a2 = 0;
			break;
		default:
			pr_warn("it shouldn't happen\n");
			break;

@@ -438,7 +464,28 @@ static int svc_normal_to_secure_thread(void *data)
			pr_debug("%s: STATUS_REJECTED\n", __func__);
			break;
		case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR:
		case INTEL_SIP_SMC_RSU_ERROR:
			pr_err("%s: STATUS_ERROR\n", __func__);
			switch (pdata->command) {
			/* for FPGA mgr */
			case COMMAND_RECONFIG_DATA_CLAIM:
			case COMMAND_RECONFIG:
			case COMMAND_RECONFIG_DATA_SUBMIT:
			case COMMAND_RECONFIG_STATUS:
				cbdata->status =
					BIT(SVC_STATUS_RECONFIG_ERROR);
				break;

			/* for RSU */
			case COMMAND_RSU_STATUS:
			case COMMAND_RSU_UPDATE:
			case COMMAND_RSU_NOTIFY:
			case COMMAND_RSU_RETRY:
				cbdata->status =
					BIT(SVC_STATUS_RSU_ERROR);
				break;
			}

			cbdata->status = BIT(SVC_STATUS_RECONFIG_ERROR);
			cbdata->kaddr1 = NULL;
			cbdata->kaddr2 = NULL;

@@ -530,7 +577,7 @@ static int svc_get_sh_memory(struct platform_device *pdev,

	if (!sh_memory->addr || !sh_memory->size) {
		dev_err(dev,
			"fails to get shared memory info from secure world\n");
			"failed to get shared memory info from secure world\n");
		return -ENOMEM;
	}

@@ -768,7 +815,7 @@ int stratix10_svc_send(struct stratix10_svc_chan *chan, void *msg)
				       "svc_smc_hvc_thread");
		if (IS_ERR(chan->ctrl->task)) {
			dev_err(chan->ctrl->dev,
				"fails to create svc_smc_hvc_thread\n");
				"failed to create svc_smc_hvc_thread\n");
			kfree(p_data);
			return -EINVAL;
		}

@@ -913,6 +960,8 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
	struct stratix10_svc_chan *chans;
	struct gen_pool *genpool;
	struct stratix10_svc_sh_memory *sh_memory;
	struct stratix10_svc *svc;

	svc_invoke_fn *invoke_fn;
	size_t fifo_size;
	int ret;

@@ -957,7 +1006,7 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
	fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO;
	ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL);
	if (ret) {
		dev_err(dev, "fails to allocate FIFO\n");
		dev_err(dev, "failed to allocate FIFO\n");
		return ret;
	}
	spin_lock_init(&controller->svc_fifo_lock);

@@ -975,6 +1024,24 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
	list_add_tail(&controller->node, &svc_ctrl);
	platform_set_drvdata(pdev, controller);

	/* add svc client device(s) */
	svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL);
	if (!svc)
		return -ENOMEM;

	svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0);
	if (!svc->stratix10_svc_rsu) {
		dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU);
		return -ENOMEM;
	}

	ret = platform_device_add(svc->stratix10_svc_rsu);
	if (ret) {
		platform_device_put(svc->stratix10_svc_rsu);
		return ret;
	}
	dev_set_drvdata(dev, svc);

	pr_info("Intel Service Layer Driver Initialized\n");

	return ret;

@@ -982,8 +1049,11 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)

static int stratix10_svc_drv_remove(struct platform_device *pdev)
{
	struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev);
	struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev);

	platform_device_unregister(svc->stratix10_svc_rsu);

	kfifo_free(&ctrl->svc_fifo);
	if (ctrl->task) {
		kthread_stop(ctrl->task);
@@ -46,11 +46,11 @@ config FPGA_MGR_ALTERA_PS_SPI
	  using the passive serial interface over SPI.

config FPGA_MGR_ALTERA_CVP
	tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager"
	tristate "Altera CvP FPGA Manager"
	depends on PCI
	help
	  FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V
	  and Arria 10 Altera FPGAs using the CvP interface over PCIe.
	  FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V,
	  Arria 10 and Stratix10 Altera FPGAs using the CvP interface over PCIe.

config FPGA_MGR_ZYNQ_FPGA
	tristate "Xilinx Zynq FPGA"
@@ -39,8 +39,9 @@ obj-$(CONFIG_FPGA_DFL_FME_BRIDGE)	+= dfl-fme-br.o
obj-$(CONFIG_FPGA_DFL_FME_REGION)	+= dfl-fme-region.o
obj-$(CONFIG_FPGA_DFL_AFU)		+= dfl-afu.o

dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o
dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o dfl-fme-error.o
dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o
dfl-afu-objs += dfl-afu-error.o

# Drivers for FPGAs which implement DFL
obj-$(CONFIG_FPGA_DFL_PCI)		+= dfl-pci.o
@@ -22,10 +22,10 @@
#define TIMEOUT_US	2000	/* CVP STATUS timeout for USERMODE polling */

/* Vendor Specific Extended Capability Registers */
#define VSE_PCIE_EXT_CAP_ID		0x200
#define VSE_PCIE_EXT_CAP_ID		0x0
#define VSE_PCIE_EXT_CAP_ID_VAL		0x000b	/* 16bit */

#define VSE_CVP_STATUS			0x21c	/* 32bit */
#define VSE_CVP_STATUS			0x1c	/* 32bit */
#define VSE_CVP_STATUS_CFG_RDY		BIT(18)	/* CVP_CONFIG_READY */
#define VSE_CVP_STATUS_CFG_ERR		BIT(19)	/* CVP_CONFIG_ERROR */
#define VSE_CVP_STATUS_CVP_EN		BIT(20)	/* ctrl block is enabling CVP */

@@ -33,41 +33,93 @@
#define VSE_CVP_STATUS_CFG_DONE		BIT(23)	/* CVP_CONFIG_DONE */
#define VSE_CVP_STATUS_PLD_CLK_IN_USE	BIT(24)	/* PLD_CLK_IN_USE */

#define VSE_CVP_MODE_CTRL		0x220	/* 32bit */
#define VSE_CVP_MODE_CTRL		0x20	/* 32bit */
#define VSE_CVP_MODE_CTRL_CVP_MODE	BIT(0)	/* CVP (1) or normal mode (0) */
#define VSE_CVP_MODE_CTRL_HIP_CLK_SEL	BIT(1)	/* PMA (1) or fabric clock (0) */
#define VSE_CVP_MODE_CTRL_NUMCLKS_OFF	8	/* NUMCLKS bits offset */
#define VSE_CVP_MODE_CTRL_NUMCLKS_MASK	GENMASK(15, 8)

#define VSE_CVP_DATA			0x228	/* 32bit */
#define VSE_CVP_PROG_CTRL		0x22c	/* 32bit */
#define VSE_CVP_DATA			0x28	/* 32bit */
#define VSE_CVP_PROG_CTRL		0x2c	/* 32bit */
#define VSE_CVP_PROG_CTRL_CONFIG	BIT(0)
#define VSE_CVP_PROG_CTRL_START_XFER	BIT(1)
#define VSE_CVP_PROG_CTRL_MASK		GENMASK(1, 0)

#define VSE_UNCOR_ERR_STATUS		0x234	/* 32bit */
#define VSE_UNCOR_ERR_STATUS		0x34	/* 32bit */
#define VSE_UNCOR_ERR_CVP_CFG_ERR	BIT(5)	/* CVP_CONFIG_ERROR_LATCHED */

#define V1_VSEC_OFFSET			0x200	/* Vendor Specific Offset V1 */
/* V2 Defines */
#define VSE_CVP_TX_CREDITS		0x49	/* 8bit */

#define V2_CREDIT_TIMEOUT_US		20000
#define V2_CHECK_CREDIT_US		10
#define V2_POLL_TIMEOUT_US		1000000
#define V2_USER_TIMEOUT_US		500000

#define V1_POLL_TIMEOUT_US		10

#define DRV_NAME		"altera-cvp"
#define ALTERA_CVP_MGR_NAME	"Altera CvP FPGA Manager"

/* Write block sizes */
#define ALTERA_CVP_V1_SIZE	4
#define ALTERA_CVP_V2_SIZE	4096

/* Optional CvP config error status check for debugging */
static bool altera_cvp_chkcfg;

struct cvp_priv;

struct altera_cvp_conf {
	struct fpga_manager	*mgr;
	struct pci_dev		*pci_dev;
	void __iomem		*map;
	void			(*write_data)(struct altera_cvp_conf *, u32);
	void			(*write_data)(struct altera_cvp_conf *conf,
					      u32 data);
	char			mgr_name[64];
	u8			numclks;
	u32			sent_packets;
	u32			vsec_offset;
	const struct cvp_priv	*priv;
};

struct cvp_priv {
	void	(*switch_clk)(struct altera_cvp_conf *conf);
	int	(*clear_state)(struct altera_cvp_conf *conf);
	int	(*wait_credit)(struct fpga_manager *mgr, u32 blocks);
	size_t	block_size;
	int	poll_time_us;
	int	user_time_us;
};

static int altera_read_config_byte(struct altera_cvp_conf *conf,
				   int where, u8 *val)
{
	return pci_read_config_byte(conf->pci_dev, conf->vsec_offset + where,
				    val);
}

static int altera_read_config_dword(struct altera_cvp_conf *conf,
				    int where, u32 *val)
{
	return pci_read_config_dword(conf->pci_dev, conf->vsec_offset + where,
				     val);
}

static int altera_write_config_dword(struct altera_cvp_conf *conf,
				     int where, u32 val)
{
	return pci_write_config_dword(conf->pci_dev, conf->vsec_offset + where,
				      val);
}

static enum fpga_mgr_states altera_cvp_state(struct fpga_manager *mgr)
{
	struct altera_cvp_conf *conf = mgr->priv;
	u32 status;

	pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &status);
	altera_read_config_dword(conf, VSE_CVP_STATUS, &status);

	if (status & VSE_CVP_STATUS_CFG_DONE)
		return FPGA_MGR_STATE_OPERATING;

@@ -85,7 +137,8 @@ static void altera_cvp_write_data_iomem(struct altera_cvp_conf *conf, u32 val)

static void altera_cvp_write_data_config(struct altera_cvp_conf *conf, u32 val)
{
	pci_write_config_dword(conf->pci_dev, VSE_CVP_DATA, val);
	pci_write_config_dword(conf->pci_dev, conf->vsec_offset + VSE_CVP_DATA,
			       val);
}

/* switches between CvP clock and internal clock */

@@ -95,10 +148,10 @@ static void altera_cvp_dummy_write(struct altera_cvp_conf *conf)
	u32 val;

	/* set 1 CVP clock cycle for every CVP Data Register Write */
	pci_read_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, &val);
	altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
	val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
	val |= 1 << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
	pci_write_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, val);
	altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);

	for (i = 0; i < CVP_DUMMY_WR; i++)
		conf->write_data(conf, 0); /* dummy data, could be any value */

@@ -115,7 +168,7 @@ static int altera_cvp_wait_status(struct altera_cvp_conf *conf, u32 status_mask,
		retries++;

	do {
		pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
		altera_read_config_dword(conf, VSE_CVP_STATUS, &val);
		if ((val & status_mask) == status_val)
			return 0;

@@ -126,32 +179,136 @@ static int altera_cvp_wait_status(struct altera_cvp_conf *conf, u32 status_mask,
	return -ETIMEDOUT;
}

static int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes)
{
	struct altera_cvp_conf *conf = mgr->priv;
	u32 val;
	int ret;

	/* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */
	ret = altera_read_config_dword(conf, VSE_CVP_STATUS, &val);
	if (ret || (val & VSE_CVP_STATUS_CFG_ERR)) {
		dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n",
			bytes);
		return -EPROTO;
	}
	return 0;
}

/*
 * CvP Version2 Functions
 * Recent Intel FPGAs use a credit mechanism to throttle incoming
 * bitstreams and a different method of clearing the state.
 */

static int altera_cvp_v2_clear_state(struct altera_cvp_conf *conf)
{
	u32 val;
	int ret;

	/* Clear the START_XFER and CVP_CONFIG bits */
	ret = altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
	if (ret) {
		dev_err(&conf->pci_dev->dev,
			"Error reading CVP Program Control Register\n");
		return ret;
	}

	val &= ~VSE_CVP_PROG_CTRL_MASK;
	ret = altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
	if (ret) {
		dev_err(&conf->pci_dev->dev,
			"Error writing CVP Program Control Register\n");
		return ret;
	}

	return altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0,
				      conf->priv->poll_time_us);
}

static int altera_cvp_v2_wait_for_credit(struct fpga_manager *mgr,
					 u32 blocks)
{
	u32 timeout = V2_CREDIT_TIMEOUT_US / V2_CHECK_CREDIT_US;
	struct altera_cvp_conf *conf = mgr->priv;
	int ret;
	u8 val;

	do {
		ret = altera_read_config_byte(conf, VSE_CVP_TX_CREDITS, &val);
		if (ret) {
			dev_err(&conf->pci_dev->dev,
				"Error reading CVP Credit Register\n");
			return ret;
		}

		/* Return if there is space in FIFO */
		if (val - (u8)conf->sent_packets)
			return 0;

		ret = altera_cvp_chk_error(mgr, blocks * ALTERA_CVP_V2_SIZE);
		if (ret) {
			dev_err(&conf->pci_dev->dev,
				"CE Bit error credit reg[0x%x]:sent[0x%x]\n",
				val, conf->sent_packets);
			return -EAGAIN;
		}

		/* Limit the check credit byte traffic */
		usleep_range(V2_CHECK_CREDIT_US, V2_CHECK_CREDIT_US + 1);
	} while (timeout--);

	dev_err(&conf->pci_dev->dev, "Timeout waiting for credit\n");
	return -ETIMEDOUT;
}

static int altera_cvp_send_block(struct altera_cvp_conf *conf,
				 const u32 *data, size_t len)
{
	u32 mask, words = len / sizeof(u32);
	int i, remainder;

	for (i = 0; i < words; i++)
		conf->write_data(conf, *data++);

	/* write up to 3 trailing bytes, if any */
	remainder = len % sizeof(u32);
	if (remainder) {
		mask = BIT(remainder * 8) - 1;
		if (mask)
			conf->write_data(conf, *data & mask);
	}

	return 0;
}
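The end of altera_cvp_send_block() handles bitstreams whose length is not a multiple of four; a quick check of the mask arithmetic (values illustrative):

/*
 * Worked example of the trailing-byte mask: for len % 4 == 3,
 * BIT(3 * 8) - 1 == 0x00ffffff, so only the three valid low bytes of
 * the final word reach the CvP data register; for len % 4 == 1 the
 * mask is 0x000000ff.
 */
u32 remainder = 3;			/* illustrative leftover byte count */
u32 mask = BIT(remainder * 8) - 1;	/* 0x00ffffff */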
|
||||
|
||||
static int altera_cvp_teardown(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
struct pci_dev *pdev = conf->pci_dev;
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
/* STEP 12 - reset START_XFER bit */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
|
||||
val &= ~VSE_CVP_PROG_CTRL_START_XFER;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 13 - reset CVP_CONFIG bit */
|
||||
val &= ~VSE_CVP_PROG_CTRL_CONFIG;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/*
|
||||
* STEP 14
|
||||
* - set CVP_NUMCLKS to 1 and then issue CVP_DUMMY_WR dummy
|
||||
* writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf); /* from CVP clock to internal clock */
|
||||
if (conf->priv->switch_clk)
|
||||
conf->priv->switch_clk(conf);
|
||||
|
||||
/* STEP 15 - poll CVP_CONFIG_READY bit for 0 with 10us timeout */
|
||||
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0, 10);
|
||||
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0,
|
||||
conf->priv->poll_time_us);
|
||||
if (ret)
|
||||
dev_err(&mgr->dev, "CFG_RDY == 0 timeout\n");
|
||||
|
||||
|
@ -163,7 +320,6 @@ static int altera_cvp_write_init(struct fpga_manager *mgr,
|
|||
const char *buf, size_t count)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
struct pci_dev *pdev = conf->pci_dev;
|
||||
u32 iflags, val;
|
||||
int ret;
|
||||
|
||||
|
@ -183,7 +339,7 @@ static int altera_cvp_write_init(struct fpga_manager *mgr,
|
|||
conf->numclks = 1; /* for uncompressed and unencrypted images */
|
||||
|
||||
/* STEP 1 - read CVP status and check CVP_EN flag */
|
||||
pci_read_config_dword(pdev, VSE_CVP_STATUS, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_STATUS, &val);
|
||||
if (!(val & VSE_CVP_STATUS_CVP_EN)) {
|
||||
dev_err(&mgr->dev, "CVP mode off: 0x%04x\n", val);
|
||||
return -ENODEV;
|
||||
|
@ -201,30 +357,42 @@ static int altera_cvp_write_init(struct fpga_manager *mgr,
|
|||
* - set HIP_CLK_SEL and CVP_MODE (must be set in the order mentioned)
|
||||
*/
|
||||
/* switch from fabric to PMA clock */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
|
||||
val |= VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
/* set CVP mode */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
|
||||
val |= VSE_CVP_MODE_CTRL_CVP_MODE;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
/*
|
||||
* STEP 3
|
||||
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf);
|
||||
if (conf->priv->switch_clk)
|
||||
conf->priv->switch_clk(conf);
|
||||
|
||||
if (conf->priv->clear_state) {
|
||||
ret = conf->priv->clear_state(conf);
|
||||
if (ret) {
|
||||
dev_err(&mgr->dev, "Problem clearing out state\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
conf->sent_packets = 0;
|
||||
|
||||
/* STEP 4 - set CVP_CONFIG bit */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
|
||||
/* request control block to begin transfer using CVP */
|
||||
val |= VSE_CVP_PROG_CTRL_CONFIG;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 5 - poll CVP_CONFIG READY for 1 with 10us timeout */
|
||||
/* STEP 5 - poll CVP_CONFIG READY for 1 with timeout */
|
||||
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY,
|
||||
VSE_CVP_STATUS_CFG_RDY, 10);
|
||||
VSE_CVP_STATUS_CFG_RDY,
|
||||
conf->priv->poll_time_us);
|
||||
if (ret) {
|
||||
dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
|
||||
return ret;
|
||||
|
@ -234,33 +402,28 @@ static int altera_cvp_write_init(struct fpga_manager *mgr,
|
|||
* STEP 6
|
||||
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf);
|
||||
if (conf->priv->switch_clk)
|
||||
conf->priv->switch_clk(conf);
|
||||
|
||||
if (altera_cvp_chkcfg) {
|
||||
ret = altera_cvp_chk_error(mgr, 0);
|
||||
if (ret) {
|
||||
dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
/* STEP 7 - set START_XFER */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
|
||||
val |= VSE_CVP_PROG_CTRL_START_XFER;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 8 - start transfer (set CVP_NUMCLKS for bitstream) */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
|
||||
val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
u32 val;
|
||||
|
||||
/* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */
|
||||
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
|
||||
if (val & VSE_CVP_STATUS_CFG_ERR) {
|
||||
dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n",
|
||||
bytes);
|
||||
return -EPROTO;
|
||||
if (conf->priv->switch_clk) {
|
||||
altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
|
||||
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
|
||||
val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
|
||||
altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
@@ -269,20 +432,32 @@ static int altera_cvp_write(struct fpga_manager *mgr, const char *buf,
 			    size_t count)
 {
 	struct altera_cvp_conf *conf = mgr->priv;
+	size_t done, remaining, len;
 	const u32 *data;
-	size_t done, remaining;
 	int status = 0;
-	u32 mask;

 	/* STEP 9 - write 32-bit data from RBF file to CVP data register */
 	data = (u32 *)buf;
 	remaining = count;
 	done = 0;

-	while (remaining >= 4) {
-		conf->write_data(conf, *data++);
-		done += 4;
-		remaining -= 4;
+	while (remaining) {
+		/* Use credit throttling if available */
+		if (conf->priv->wait_credit) {
+			status = conf->priv->wait_credit(mgr, done);
+			if (status) {
+				dev_err(&conf->pci_dev->dev,
+					"Wait Credit ERR: 0x%x\n", status);
+				return status;
+			}
+		}
+
+		len = min(conf->priv->block_size, remaining);
+		altera_cvp_send_block(conf, data, len);
+		data += len / sizeof(u32);
+		done += len;
+		remaining -= len;
+		conf->sent_packets++;

 		/*
 		 * STEP 10 (optional) and STEP 11
@@ -300,11 +475,6 @@ static int altera_cvp_write(struct fpga_manager *mgr, const char *buf,
 		}
 	}

-	/* write up to 3 trailing bytes, if any */
-	mask = BIT(remaining * 8) - 1;
-	if (mask)
-		conf->write_data(conf, *data & mask);
-
 	if (altera_cvp_chkcfg)
 		status = altera_cvp_chk_error(mgr, count);

@@ -315,31 +485,30 @@ static int altera_cvp_write_complete(struct fpga_manager *mgr,
 				     struct fpga_image_info *info)
 {
 	struct altera_cvp_conf *conf = mgr->priv;
-	struct pci_dev *pdev = conf->pci_dev;
+	u32 mask, val;
 	int ret;
-	u32 mask;
-	u32 val;

 	ret = altera_cvp_teardown(mgr, info);
 	if (ret)
 		return ret;

 	/* STEP 16 - check CVP_CONFIG_ERROR_LATCHED bit */
-	pci_read_config_dword(pdev, VSE_UNCOR_ERR_STATUS, &val);
+	altera_read_config_dword(conf, VSE_UNCOR_ERR_STATUS, &val);
 	if (val & VSE_UNCOR_ERR_CVP_CFG_ERR) {
 		dev_err(&mgr->dev, "detected CVP_CONFIG_ERROR_LATCHED!\n");
 		return -EPROTO;
 	}

 	/* STEP 17 - reset CVP_MODE and HIP_CLK_SEL bit */
-	pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
+	altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
 	val &= ~VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
 	val &= ~VSE_CVP_MODE_CTRL_CVP_MODE;
-	pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
+	altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);

 	/* STEP 18 - poll PLD_CLK_IN_USE and USER_MODE bits */
 	mask = VSE_CVP_STATUS_PLD_CLK_IN_USE | VSE_CVP_STATUS_USERMODE;
-	ret = altera_cvp_wait_status(conf, mask, mask, TIMEOUT_US);
+	ret = altera_cvp_wait_status(conf, mask, mask,
+				     conf->priv->user_time_us);
 	if (ret)
 		dev_err(&mgr->dev, "PLD_CLK_IN_USE|USERMODE timeout\n");

@@ -353,6 +522,21 @@ static const struct fpga_manager_ops altera_cvp_ops = {
 	.write_complete = altera_cvp_write_complete,
 };

+static const struct cvp_priv cvp_priv_v1 = {
+	.switch_clk = altera_cvp_dummy_write,
+	.block_size = ALTERA_CVP_V1_SIZE,
+	.poll_time_us = V1_POLL_TIMEOUT_US,
+	.user_time_us = TIMEOUT_US,
+};
+
+static const struct cvp_priv cvp_priv_v2 = {
+	.clear_state = altera_cvp_v2_clear_state,
+	.wait_credit = altera_cvp_v2_wait_for_credit,
+	.block_size = ALTERA_CVP_V2_SIZE,
+	.poll_time_us = V2_POLL_TIMEOUT_US,
+	.user_time_us = V2_USER_TIMEOUT_US,
+};
+
 static ssize_t chkcfg_show(struct device_driver *dev, char *buf)
 {
 	return snprintf(buf, 3, "%d\n", altera_cvp_chkcfg);
@@ -394,22 +578,29 @@ static int altera_cvp_probe(struct pci_dev *pdev,
 {
 	struct altera_cvp_conf *conf;
 	struct fpga_manager *mgr;
+	int ret, offset;
 	u16 cmd, val;
 	u32 regval;
-	int ret;

+	/* Discover the Vendor Specific Offset for this device */
+	offset = pci_find_next_ext_capability(pdev, 0, PCI_EXT_CAP_ID_VNDR);
+	if (!offset) {
+		dev_err(&pdev->dev, "No Vendor Specific Offset.\n");
+		return -ENODEV;
+	}
+
 	/*
 	 * First check if this is the expected FPGA device. PCI config
 	 * space access works without enabling the PCI device, memory
 	 * space access is enabled further down.
 	 */
-	pci_read_config_word(pdev, VSE_PCIE_EXT_CAP_ID, &val);
+	pci_read_config_word(pdev, offset + VSE_PCIE_EXT_CAP_ID, &val);
 	if (val != VSE_PCIE_EXT_CAP_ID_VAL) {
 		dev_err(&pdev->dev, "Wrong EXT_CAP_ID value 0x%x\n", val);
 		return -ENODEV;
 	}

-	pci_read_config_dword(pdev, VSE_CVP_STATUS, &regval);
+	pci_read_config_dword(pdev, offset + VSE_CVP_STATUS, &regval);
 	if (!(regval & VSE_CVP_STATUS_CVP_EN)) {
 		dev_err(&pdev->dev,
 			"CVP is disabled for this device: CVP_STATUS Reg 0x%x\n",
@@ -421,6 +612,8 @@ static int altera_cvp_probe(struct pci_dev *pdev,
 	if (!conf)
 		return -ENOMEM;

+	conf->vsec_offset = offset;
+
 	/*
 	 * Enable memory BAR access. We cannot use pci_enable_device() here
 	 * because it will make the driver unusable with FPGA devices that
@@ -445,6 +638,11 @@ static int altera_cvp_probe(struct pci_dev *pdev,
 	conf->pci_dev = pdev;
 	conf->write_data = altera_cvp_write_data_iomem;

+	if (conf->vsec_offset == V1_VSEC_OFFSET)
+		conf->priv = &cvp_priv_v1;
+	else
+		conf->priv = &cvp_priv_v2;
+
 	conf->map = pci_iomap(pdev, CVP_BAR, 0);
 	if (!conf->map) {
 		dev_warn(&pdev->dev, "Mapping CVP BAR failed\n");
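Taken together, the altera-cvp.c hunks above replace every absolute pci_{read,write}_config_dword(pdev, REG, ...) access with altera_{read,write}_config_dword(conf, REG, ...), because the Vendor Specific capability is no longer at a fixed offset: probe now locates it with pci_find_next_ext_capability() and stores it in conf->vsec_offset. The wrapper itself sits outside these hunks; as a sketch (illustrative, not the patch's verbatim code), it only has to rebase the register offset:

	static int altera_read_config_dword(struct altera_cvp_conf *conf,
					    int where, u32 *val)
	{
		/* rebase the fixed register offset onto the discovered VSEC */
		return pci_read_config_dword(conf->pci_dev,
					     conf->vsec_offset + where, val);
	}

The per-version differences (block size, credit throttling, poll and user-mode timeouts) then live entirely in the cvp_priv_v1/cvp_priv_v2 tables selected at probe time, so the STEP 1-18 CvP sequence itself stays version-agnostic.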
@@ -32,7 +32,9 @@ static int alt_pr_platform_remove(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;

-	return alt_pr_unregister(dev);
+	alt_pr_unregister(dev);
+
+	return 0;
 }

 static const struct of_device_id alt_pr_of_match[] = {
@@ -201,15 +201,13 @@ int alt_pr_register(struct device *dev, void __iomem *reg_base)
 }
 EXPORT_SYMBOL_GPL(alt_pr_register);

-int alt_pr_unregister(struct device *dev)
+void alt_pr_unregister(struct device *dev)
 {
 	struct fpga_manager *mgr = dev_get_drvdata(dev);

 	dev_dbg(dev, "%s\n", __func__);

 	fpga_mgr_unregister(mgr);
-
-	return 0;
 }
 EXPORT_SYMBOL_GPL(alt_pr_unregister);
@@ -0,0 +1,230 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Accelerated Function Unit (AFU) Error Reporting
+ *
+ * Copyright 2019 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Wu Hao <hao.wu@linux.intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Mitchel Henry <henry.mitchel@intel.com>
+ */
+
+#include <linux/uaccess.h>
+
+#include "dfl-afu.h"
+
+#define PORT_ERROR_MASK		0x8
+#define PORT_ERROR		0x10
+#define PORT_FIRST_ERROR	0x18
+#define PORT_MALFORMED_REQ0	0x20
+#define PORT_MALFORMED_REQ1	0x28
+
+#define ERROR_MASK		GENMASK_ULL(63, 0)
+
+/* mask or unmask port errors by the error mask register. */
+static void __afu_port_err_mask(struct device *dev, bool mask)
+{
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+	writeq(mask ? ERROR_MASK : 0, base + PORT_ERROR_MASK);
+}
+
+static void afu_port_err_mask(struct device *dev, bool mask)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+
+	mutex_lock(&pdata->lock);
+	__afu_port_err_mask(dev, mask);
+	mutex_unlock(&pdata->lock);
+}
+
+/* clear port errors. */
+static int afu_port_err_clear(struct device *dev, u64 err)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	struct platform_device *pdev = to_platform_device(dev);
+	void __iomem *base_err, *base_hdr;
+	int ret = -EBUSY;
+	u64 v;
+
+	base_err = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+	base_hdr = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+
+	/*
+	 * clear Port Errors
+	 *
+	 * - Check for AP6 State
+	 * - Halt Port by keeping Port in reset
+	 * - Set PORT Error mask to all 1 to mask errors
+	 * - Clear all errors
+	 * - Set Port mask to all 0 to enable errors
+	 * - All errors start capturing new errors
+	 * - Enable Port by pulling the port out of reset
+	 */
+
+	/* if device is still in AP6 power state, can not clear any error. */
+	v = readq(base_hdr + PORT_HDR_STS);
+	if (FIELD_GET(PORT_STS_PWR_STATE, v) == PORT_STS_PWR_STATE_AP6) {
+		dev_err(dev, "Could not clear errors, device in AP6 state.\n");
+		goto done;
+	}
+
+	/* Halt Port by keeping Port in reset */
+	ret = __afu_port_disable(pdev);
+	if (ret)
+		goto done;
+
+	/* Mask all errors */
+	__afu_port_err_mask(dev, true);
+
+	/* Clear errors if err input matches with current port errors.*/
+	v = readq(base_err + PORT_ERROR);
+
+	if (v == err) {
+		writeq(v, base_err + PORT_ERROR);
+
+		v = readq(base_err + PORT_FIRST_ERROR);
+		writeq(v, base_err + PORT_FIRST_ERROR);
+	} else {
+		ret = -EINVAL;
+	}
+
+	/* Clear mask */
+	__afu_port_err_mask(dev, false);
+
+	/* Enable the Port by clear the reset */
+	__afu_port_enable(pdev);
+
+done:
+	mutex_unlock(&pdata->lock);
+	return ret;
+}
+
+static ssize_t errors_show(struct device *dev, struct device_attribute *attr,
+			   char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 error;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+	mutex_lock(&pdata->lock);
+	error = readq(base + PORT_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)error);
+}
+
+static ssize_t errors_store(struct device *dev, struct device_attribute *attr,
+			    const char *buff, size_t count)
+{
+	u64 value;
+	int ret;
+
+	if (kstrtou64(buff, 0, &value))
+		return -EINVAL;
+
+	ret = afu_port_err_clear(dev, value);
+
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(errors);
+
+static ssize_t first_error_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 error;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+	mutex_lock(&pdata->lock);
+	error = readq(base + PORT_FIRST_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)error);
+}
+static DEVICE_ATTR_RO(first_error);
+
+static ssize_t first_malformed_req_show(struct device *dev,
+					struct device_attribute *attr,
+					char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 req0, req1;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+	mutex_lock(&pdata->lock);
+	req0 = readq(base + PORT_MALFORMED_REQ0);
+	req1 = readq(base + PORT_MALFORMED_REQ1);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%016llx%016llx\n",
+		       (unsigned long long)req1, (unsigned long long)req0);
+}
+static DEVICE_ATTR_RO(first_malformed_req);
+
+static struct attribute *port_err_attrs[] = {
+	&dev_attr_errors.attr,
+	&dev_attr_first_error.attr,
+	&dev_attr_first_malformed_req.attr,
+	NULL,
+};
+
+static umode_t port_err_attrs_visible(struct kobject *kobj,
+				      struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+
+	/*
+	 * sysfs entries are visible only if related private feature is
+	 * enumerated.
+	 */
+	if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_ERROR))
+		return 0;
+
+	return attr->mode;
+}
+
+const struct attribute_group port_err_group = {
+	.name       = "errors",
+	.attrs      = port_err_attrs,
+	.is_visible = port_err_attrs_visible,
+};
+
+static int port_err_init(struct platform_device *pdev,
+			 struct dfl_feature *feature)
+{
+	afu_port_err_mask(&pdev->dev, false);
+
+	return 0;
+}
+
+static void port_err_uinit(struct platform_device *pdev,
+			   struct dfl_feature *feature)
+{
+	afu_port_err_mask(&pdev->dev, true);
+}
+
+const struct dfl_feature_id port_err_id_table[] = {
+	{.id = PORT_FEATURE_ID_ERROR,},
+	{0,}
+};
+
+const struct dfl_feature_ops port_err_ops = {
+	.init = port_err_init,
+	.uinit = port_err_uinit,
+};
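The errors attribute created above is write-one-to-clear in a strict sense: afu_port_err_clear() only clears the latched bits when the written value matches the current PORT_ERROR contents, returning -EINVAL otherwise (and -EBUSY if the port cannot be halted or sits in AP6). A userspace sketch of that read-then-write-back handshake; the sysfs path is an assumption for illustration, not something this patch defines:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical DFL port device; adjust to the real path */
		const char *p =
			"/sys/bus/platform/devices/dfl-port.0/errors/errors";
		unsigned long long err;
		FILE *f = fopen(p, "r+");

		if (!f)
			return 1;
		if (fscanf(f, "%llx", &err) == 1) {
			fseek(f, 0, SEEK_SET);	/* reposition before writing */
			fprintf(f, "0x%llx", err);	/* clear what we saw */
		}
		fclose(f);
		return 0;
	}

The value-match rule makes the clear safe against races: if new errors latch between the read and the write, the mismatch rejects the clear instead of silently discarding them.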
@@ -22,14 +22,17 @@
 #include "dfl-afu.h"

 /**
- * port_enable - enable a port
+ * __afu_port_enable - enable a port by clear reset
  * @pdev: port platform device.
  *
  * Enable Port by clear the port soft reset bit, which is set by default.
  * The AFU is unable to respond to any MMIO access while in reset.
- * port_enable function should only be used after port_disable function.
+ * __afu_port_enable function should only be used after __afu_port_disable
+ * function.
  *
  * The caller needs to hold lock for protection.
  */
-static void port_enable(struct platform_device *pdev)
+void __afu_port_enable(struct platform_device *pdev)
 {
 	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
 	void __iomem *base;
@@ -52,13 +55,14 @@ static void port_enable(struct platform_device *pdev)
 #define RST_POLL_TIMEOUT 1000 /* us */

 /**
- * port_disable - disable a port
+ * __afu_port_disable - disable a port by hold reset
  * @pdev: port platform device.
  *
- * Disable Port by setting the port soft reset bit, it puts the port into
- * reset.
+ * Disable Port by setting the port soft reset bit, it puts the port into reset.
  *
  * The caller needs to hold lock for protection.
  */
-static int port_disable(struct platform_device *pdev)
+int __afu_port_disable(struct platform_device *pdev)
 {
 	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
 	void __iomem *base;
@@ -104,9 +108,9 @@ static int __port_reset(struct platform_device *pdev)
 {
 	int ret;

-	ret = port_disable(pdev);
+	ret = __afu_port_disable(pdev);
 	if (!ret)
-		port_enable(pdev);
+		__afu_port_enable(pdev);

 	return ret;
 }
@@ -141,27 +145,267 @@ id_show(struct device *dev, struct device_attribute *attr, char *buf)
 }
 static DEVICE_ATTR_RO(id);

-static const struct attribute *port_hdr_attrs[] = {
+static ssize_t
+ltr_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + PORT_HDR_CTRL);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_CTRL_LATENCY, v));
+}
+
+static ssize_t
+ltr_store(struct device *dev, struct device_attribute *attr,
+	  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	bool ltr;
+	u64 v;
+
+	if (kstrtobool(buf, &ltr))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + PORT_HDR_CTRL);
+	v &= ~PORT_CTRL_LATENCY;
+	v |= FIELD_PREP(PORT_CTRL_LATENCY, ltr ? 1 : 0);
+	writeq(v, base + PORT_HDR_CTRL);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_RW(ltr);
+
+static ssize_t
+ap1_event_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + PORT_HDR_STS);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_STS_AP1_EVT, v));
+}
+
+static ssize_t
+ap1_event_store(struct device *dev, struct device_attribute *attr,
+		const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	bool clear;
+
+	if (kstrtobool(buf, &clear) || !clear)
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	writeq(PORT_STS_AP1_EVT, base + PORT_HDR_STS);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_RW(ap1_event);
+
+static ssize_t
+ap2_event_show(struct device *dev, struct device_attribute *attr,
+	       char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + PORT_HDR_STS);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_STS_AP2_EVT, v));
+}
+
+static ssize_t
+ap2_event_store(struct device *dev, struct device_attribute *attr,
+		const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	bool clear;
+
+	if (kstrtobool(buf, &clear) || !clear)
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	writeq(PORT_STS_AP2_EVT, base + PORT_HDR_STS);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_RW(ap2_event);
+
+static ssize_t
+power_state_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + PORT_HDR_STS);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%x\n", (u8)FIELD_GET(PORT_STS_PWR_STATE, v));
+}
+static DEVICE_ATTR_RO(power_state);
+
+static ssize_t
+userclk_freqcmd_store(struct device *dev, struct device_attribute *attr,
+		      const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	u64 userclk_freq_cmd;
+	void __iomem *base;
+
+	if (kstrtou64(buf, 0, &userclk_freq_cmd))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	writeq(userclk_freq_cmd, base + PORT_HDR_USRCLK_CMD0);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_WO(userclk_freqcmd);
+
+static ssize_t
+userclk_freqcntrcmd_store(struct device *dev, struct device_attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	u64 userclk_freqcntr_cmd;
+	void __iomem *base;
+
+	if (kstrtou64(buf, 0, &userclk_freqcntr_cmd))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	writeq(userclk_freqcntr_cmd, base + PORT_HDR_USRCLK_CMD1);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_WO(userclk_freqcntrcmd);
+
+static ssize_t
+userclk_freqsts_show(struct device *dev, struct device_attribute *attr,
+		     char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	u64 userclk_freqsts;
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	userclk_freqsts = readq(base + PORT_HDR_USRCLK_STS0);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)userclk_freqsts);
+}
+static DEVICE_ATTR_RO(userclk_freqsts);
+
+static ssize_t
+userclk_freqcntrsts_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	u64 userclk_freqcntrsts;
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	userclk_freqcntrsts = readq(base + PORT_HDR_USRCLK_STS1);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)userclk_freqcntrsts);
+}
+static DEVICE_ATTR_RO(userclk_freqcntrsts);
+
+static struct attribute *port_hdr_attrs[] = {
 	&dev_attr_id.attr,
+	&dev_attr_ltr.attr,
+	&dev_attr_ap1_event.attr,
+	&dev_attr_ap2_event.attr,
+	&dev_attr_power_state.attr,
+	&dev_attr_userclk_freqcmd.attr,
+	&dev_attr_userclk_freqcntrcmd.attr,
+	&dev_attr_userclk_freqsts.attr,
+	&dev_attr_userclk_freqcntrsts.attr,
 	NULL,
 };

+static umode_t port_hdr_attrs_visible(struct kobject *kobj,
+				      struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	umode_t mode = attr->mode;
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+	if (dfl_feature_revision(base) > 0) {
+		/*
+		 * userclk sysfs interfaces are only visible in case port
+		 * revision is 0, as hardware with revision >0 doesn't
+		 * support this.
+		 */
+		if (attr == &dev_attr_userclk_freqcmd.attr ||
+		    attr == &dev_attr_userclk_freqcntrcmd.attr ||
+		    attr == &dev_attr_userclk_freqsts.attr ||
+		    attr == &dev_attr_userclk_freqcntrsts.attr)
+			mode = 0;
+	}
+
+	return mode;
+}
+
+static const struct attribute_group port_hdr_group = {
+	.attrs      = port_hdr_attrs,
+	.is_visible = port_hdr_attrs_visible,
+};
+
 static int port_hdr_init(struct platform_device *pdev,
 			 struct dfl_feature *feature)
 {
-	dev_dbg(&pdev->dev, "PORT HDR Init.\n");
-
 	port_reset(pdev);

-	return sysfs_create_files(&pdev->dev.kobj, port_hdr_attrs);
-}
-
-static void port_hdr_uinit(struct platform_device *pdev,
-			   struct dfl_feature *feature)
-{
-	dev_dbg(&pdev->dev, "PORT HDR UInit.\n");
-
-	sysfs_remove_files(&pdev->dev.kobj, port_hdr_attrs);
+	return 0;
 }

 static long
@@ -185,9 +429,13 @@ port_hdr_ioctl(struct platform_device *pdev, struct dfl_feature *feature,
 	return ret;
 }

+static const struct dfl_feature_id port_hdr_id_table[] = {
+	{.id = PORT_FEATURE_ID_HEADER,},
+	{0,}
+};
+
 static const struct dfl_feature_ops port_hdr_ops = {
 	.init = port_hdr_init,
-	.uinit = port_hdr_uinit,
 	.ioctl = port_hdr_ioctl,
 };
@@ -214,51 +462,90 @@ afu_id_show(struct device *dev, struct device_attribute *attr, char *buf)
 }
 static DEVICE_ATTR_RO(afu_id);

-static const struct attribute *port_afu_attrs[] = {
+static struct attribute *port_afu_attrs[] = {
 	&dev_attr_afu_id.attr,
 	NULL
 };

+static umode_t port_afu_attrs_visible(struct kobject *kobj,
+				      struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+
+	/*
+	 * sysfs entries are visible only if related private feature is
+	 * enumerated.
+	 */
+	if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_AFU))
+		return 0;
+
+	return attr->mode;
+}
+
+static const struct attribute_group port_afu_group = {
+	.attrs      = port_afu_attrs,
+	.is_visible = port_afu_attrs_visible,
+};
+
 static int port_afu_init(struct platform_device *pdev,
 			 struct dfl_feature *feature)
 {
 	struct resource *res = &pdev->resource[feature->resource_index];
-	int ret;
-
-	dev_dbg(&pdev->dev, "PORT AFU Init.\n");
-
-	ret = afu_mmio_region_add(dev_get_platdata(&pdev->dev),
-				  DFL_PORT_REGION_INDEX_AFU, resource_size(res),
-				  res->start, DFL_PORT_REGION_READ |
-				  DFL_PORT_REGION_WRITE | DFL_PORT_REGION_MMAP);
-	if (ret)
-		return ret;

-	return sysfs_create_files(&pdev->dev.kobj, port_afu_attrs);
+	return afu_mmio_region_add(dev_get_platdata(&pdev->dev),
+				   DFL_PORT_REGION_INDEX_AFU,
+				   resource_size(res), res->start,
+				   DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ |
+				   DFL_PORT_REGION_WRITE);
 }

-static void port_afu_uinit(struct platform_device *pdev,
-			   struct dfl_feature *feature)
-{
-	dev_dbg(&pdev->dev, "PORT AFU UInit.\n");
-
-	sysfs_remove_files(&pdev->dev.kobj, port_afu_attrs);
-}
+static const struct dfl_feature_id port_afu_id_table[] = {
+	{.id = PORT_FEATURE_ID_AFU,},
+	{0,}
+};

 static const struct dfl_feature_ops port_afu_ops = {
 	.init = port_afu_init,
-	.uinit = port_afu_uinit,
 };

+static int port_stp_init(struct platform_device *pdev,
+			 struct dfl_feature *feature)
+{
+	struct resource *res = &pdev->resource[feature->resource_index];
+
+	return afu_mmio_region_add(dev_get_platdata(&pdev->dev),
+				   DFL_PORT_REGION_INDEX_STP,
+				   resource_size(res), res->start,
+				   DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ |
+				   DFL_PORT_REGION_WRITE);
+}
+
+static const struct dfl_feature_id port_stp_id_table[] = {
+	{.id = PORT_FEATURE_ID_STP,},
+	{0,}
+};
+
+static const struct dfl_feature_ops port_stp_ops = {
+	.init = port_stp_init,
+};
+
 static struct dfl_feature_driver port_feature_drvs[] = {
 	{
-		.id = PORT_FEATURE_ID_HEADER,
+		.id_table = port_hdr_id_table,
 		.ops = &port_hdr_ops,
 	},
 	{
-		.id = PORT_FEATURE_ID_AFU,
+		.id_table = port_afu_id_table,
 		.ops = &port_afu_ops,
 	},
 	{
+		.id_table = port_err_id_table,
+		.ops = &port_err_ops,
+	},
+	{
+		.id_table = port_stp_id_table,
+		.ops = &port_stp_ops,
+	},
+	{
 		.ops = NULL,
 	}
@@ -545,9 +832,9 @@ static int port_enable_set(struct platform_device *pdev, bool enable)

 	mutex_lock(&pdata->lock);
 	if (enable)
-		port_enable(pdev);
+		__afu_port_enable(pdev);
 	else
-		ret = port_disable(pdev);
+		ret = __afu_port_disable(pdev);
 	mutex_unlock(&pdata->lock);

 	return ret;
@@ -599,9 +886,17 @@ static int afu_remove(struct platform_device *pdev)
 	return 0;
 }

+static const struct attribute_group *afu_dev_groups[] = {
+	&port_hdr_group,
+	&port_afu_group,
+	&port_err_group,
+	NULL
+};
+
 static struct platform_driver afu_driver = {
 	.driver	= {
-		.name = DFL_FPGA_FEATURE_DEV_PORT,
+		.name       = DFL_FPGA_FEATURE_DEV_PORT,
+		.dev_groups = afu_dev_groups,
 	},
 	.probe   = afu_probe,
 	.remove  = afu_remove,
@@ -79,6 +79,10 @@ struct dfl_afu {
 	struct dfl_feature_platform_data *pdata;
 };

+/* hold pdata->lock when call __afu_port_enable/disable */
+void __afu_port_enable(struct platform_device *pdev);
+int __afu_port_disable(struct platform_device *pdev);
+
 void afu_mmio_region_init(struct dfl_feature_platform_data *pdata);
 int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
 			u32 region_index, u64 region_size, u64 phys, u32 flags);
@@ -97,4 +101,9 @@ int afu_dma_unmap_region(struct dfl_feature_platform_data *pdata, u64 iova);
 struct dfl_afu_dma_region *
 afu_dma_region_find(struct dfl_feature_platform_data *pdata,
 		    u64 iova, u64 size);
+
+extern const struct dfl_feature_ops port_err_ops;
+extern const struct dfl_feature_id port_err_id_table[];
+extern const struct attribute_group port_err_group;

 #endif /* __DFL_AFU_H */
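The new comment in dfl-afu.h pins down the locking contract: __afu_port_enable() and __afu_port_disable() poke PORT_HDR_CTRL and must run under pdata->lock. A condensed caller sketch (illustrative only; it mirrors __port_reset() and port_enable_set() from the hunks above):

	static int example_port_cycle(struct platform_device *pdev)
	{
		struct dfl_feature_platform_data *pdata =
			dev_get_platdata(&pdev->dev);
		int ret;

		mutex_lock(&pdata->lock);
		ret = __afu_port_disable(pdev);	/* hold port in reset */
		if (!ret)
			__afu_port_enable(pdev);	/* release reset */
		mutex_unlock(&pdata->lock);

		return ret;
	}

Exporting this pair (instead of keeping static port_enable/port_disable) is what lets dfl-afu-error.c halt the port around its error-clear sequence.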
@@ -0,0 +1,359 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Management Engine Error Management
+ *
+ * Copyright 2019 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Kang Luwei <luwei.kang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Mitchel, Henry <henry.mitchel@intel.com>
+ */
+
+#include <linux/uaccess.h>
+
+#include "dfl.h"
+#include "dfl-fme.h"
+
+#define FME_ERROR_MASK		0x8
+#define FME_ERROR		0x10
+#define MBP_ERROR		BIT_ULL(6)
+#define PCIE0_ERROR_MASK	0x18
+#define PCIE0_ERROR		0x20
+#define PCIE1_ERROR_MASK	0x28
+#define PCIE1_ERROR		0x30
+#define FME_FIRST_ERROR		0x38
+#define FME_NEXT_ERROR		0x40
+#define RAS_NONFAT_ERROR_MASK	0x48
+#define RAS_NONFAT_ERROR	0x50
+#define RAS_CATFAT_ERROR_MASK	0x58
+#define RAS_CATFAT_ERROR	0x60
+#define RAS_ERROR_INJECT	0x68
+#define INJECT_ERROR_MASK	GENMASK_ULL(2, 0)
+
+#define ERROR_MASK		GENMASK_ULL(63, 0)
+
+static ssize_t pcie0_errors_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + PCIE0_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t pcie0_errors_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	int ret = 0;
+	u64 v, val;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + PCIE0_ERROR_MASK);
+
+	v = readq(base + PCIE0_ERROR);
+	if (val == v)
+		writeq(v, base + PCIE0_ERROR);
+	else
+		ret = -EINVAL;
+
+	writeq(0ULL, base + PCIE0_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(pcie0_errors);
+
+static ssize_t pcie1_errors_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + PCIE1_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t pcie1_errors_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	int ret = 0;
+	u64 v, val;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + PCIE1_ERROR_MASK);
+
+	v = readq(base + PCIE1_ERROR);
+	if (val == v)
+		writeq(v, base + PCIE1_ERROR);
+	else
+		ret = -EINVAL;
+
+	writeq(0ULL, base + PCIE1_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(pcie1_errors);
+
+static ssize_t nonfatal_errors_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)readq(base + RAS_NONFAT_ERROR));
+}
+static DEVICE_ATTR_RO(nonfatal_errors);
+
+static ssize_t catfatal_errors_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)readq(base + RAS_CATFAT_ERROR));
+}
+static DEVICE_ATTR_RO(catfatal_errors);
+
+static ssize_t inject_errors_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + RAS_ERROR_INJECT);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n",
+		       (unsigned long long)FIELD_GET(INJECT_ERROR_MASK, v));
+}
+
+static ssize_t inject_errors_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u8 inject_error;
+	u64 v;
+
+	if (kstrtou8(buf, 0, &inject_error))
+		return -EINVAL;
+
+	if (inject_error & ~INJECT_ERROR_MASK)
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	v = readq(base + RAS_ERROR_INJECT);
+	v &= ~INJECT_ERROR_MASK;
+	v |= FIELD_PREP(INJECT_ERROR_MASK, inject_error);
+	writeq(v, base + RAS_ERROR_INJECT);
+	mutex_unlock(&pdata->lock);
+
+	return count;
+}
+static DEVICE_ATTR_RW(inject_errors);
+
+static ssize_t fme_errors_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+
+static ssize_t fme_errors_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 v, val;
+	int ret = 0;
+
+	if (kstrtou64(buf, 0, &val))
+		return -EINVAL;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	writeq(GENMASK_ULL(63, 0), base + FME_ERROR_MASK);
+
+	v = readq(base + FME_ERROR);
+	if (val == v)
+		writeq(v, base + FME_ERROR);
+	else
+		ret = -EINVAL;
+
+	/* Workaround: disable MBP_ERROR if feature revision is 0 */
+	writeq(dfl_feature_revision(base) ? 0ULL : MBP_ERROR,
+	       base + FME_ERROR_MASK);
+	mutex_unlock(&pdata->lock);
+	return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(fme_errors);
+
+static ssize_t first_error_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_FIRST_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+static DEVICE_ATTR_RO(first_error);
+
+static ssize_t next_error_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+	u64 value;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+	value = readq(base + FME_NEXT_ERROR);
+	mutex_unlock(&pdata->lock);
+
+	return sprintf(buf, "0x%llx\n", (unsigned long long)value);
+}
+static DEVICE_ATTR_RO(next_error);
+
+static struct attribute *fme_global_err_attrs[] = {
+	&dev_attr_pcie0_errors.attr,
+	&dev_attr_pcie1_errors.attr,
+	&dev_attr_nonfatal_errors.attr,
+	&dev_attr_catfatal_errors.attr,
+	&dev_attr_inject_errors.attr,
+	&dev_attr_fme_errors.attr,
+	&dev_attr_first_error.attr,
+	&dev_attr_next_error.attr,
+	NULL,
+};
+
+static umode_t fme_global_err_attrs_visible(struct kobject *kobj,
+					    struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+
+	/*
+	 * sysfs entries are visible only if related private feature is
+	 * enumerated.
+	 */
+	if (!dfl_get_feature_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR))
+		return 0;
+
+	return attr->mode;
+}
+
+const struct attribute_group fme_global_err_group = {
+	.name       = "errors",
+	.attrs      = fme_global_err_attrs,
+	.is_visible = fme_global_err_attrs_visible,
+};
+
+static void fme_err_mask(struct device *dev, bool mask)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	void __iomem *base;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR);
+
+	mutex_lock(&pdata->lock);
+
+	/* Workaround: keep MBP_ERROR always masked if revision is 0 */
+	if (dfl_feature_revision(base))
+		writeq(mask ? ERROR_MASK : 0, base + FME_ERROR_MASK);
+	else
+		writeq(mask ? ERROR_MASK : MBP_ERROR, base + FME_ERROR_MASK);
+
+	writeq(mask ? ERROR_MASK : 0, base + PCIE0_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + PCIE1_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + RAS_NONFAT_ERROR_MASK);
+	writeq(mask ? ERROR_MASK : 0, base + RAS_CATFAT_ERROR_MASK);
+
+	mutex_unlock(&pdata->lock);
+}
+
+static int fme_global_err_init(struct platform_device *pdev,
+			       struct dfl_feature *feature)
+{
+	fme_err_mask(&pdev->dev, false);
+
+	return 0;
+}
+
+static void fme_global_err_uinit(struct platform_device *pdev,
+				 struct dfl_feature *feature)
+{
+	fme_err_mask(&pdev->dev, true);
+}
+
+const struct dfl_feature_id fme_global_err_id_table[] = {
+	{.id = FME_FEATURE_ID_GLOBAL_ERR,},
+	{0,}
+};
+
+const struct dfl_feature_ops fme_global_err_ops = {
+	.init = fme_global_err_init,
+	.uinit = fme_global_err_uinit,
+};
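Unlike the status files, inject_errors accepts only the low three bits (INJECT_ERROR_MASK is GENMASK_ULL(2, 0)); anything wider is rejected with -EINVAL before the register is touched. Which error class each bit triggers is hardware-defined. A minimal userspace sketch (the device path below is a placeholder, not defined by this patch):

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical FME device; adjust to the real path */
		FILE *f = fopen(
			"/sys/bus/platform/devices/dfl-fme.0/errors/inject_errors",
			"w");

		if (!f)
			return 1;
		fprintf(f, "0x1\n");	/* set one of bits 2:0 */
		fclose(f);
		return 0;
	}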
@@ -16,6 +16,7 @@

 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/uaccess.h>
 #include <linux/fpga-dfl.h>

 #include "dfl.h"
@@ -72,50 +73,126 @@ static ssize_t bitstream_metadata_show(struct device *dev,
 }
 static DEVICE_ATTR_RO(bitstream_metadata);

-static const struct attribute *fme_hdr_attrs[] = {
+static ssize_t cache_size_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
+
+	v = readq(base + FME_HDR_CAP);
+
+	return sprintf(buf, "%u\n",
+		       (unsigned int)FIELD_GET(FME_CAP_CACHE_SIZE, v));
+}
+static DEVICE_ATTR_RO(cache_size);
+
+static ssize_t fabric_version_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
+
+	v = readq(base + FME_HDR_CAP);
+
+	return sprintf(buf, "%u\n",
+		       (unsigned int)FIELD_GET(FME_CAP_FABRIC_VERID, v));
+}
+static DEVICE_ATTR_RO(fabric_version);
+
+static ssize_t socket_id_show(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	void __iomem *base;
+	u64 v;
+
+	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
+
+	v = readq(base + FME_HDR_CAP);
+
+	return sprintf(buf, "%u\n",
+		       (unsigned int)FIELD_GET(FME_CAP_SOCKET_ID, v));
+}
+static DEVICE_ATTR_RO(socket_id);
+
+static struct attribute *fme_hdr_attrs[] = {
 	&dev_attr_ports_num.attr,
 	&dev_attr_bitstream_id.attr,
 	&dev_attr_bitstream_metadata.attr,
+	&dev_attr_cache_size.attr,
+	&dev_attr_fabric_version.attr,
+	&dev_attr_socket_id.attr,
 	NULL,
 };

-static int fme_hdr_init(struct platform_device *pdev,
-			struct dfl_feature *feature)
+static const struct attribute_group fme_hdr_group = {
+	.attrs = fme_hdr_attrs,
+};
+
+static long fme_hdr_ioctl_release_port(struct dfl_feature_platform_data *pdata,
+				       unsigned long arg)
 {
-	void __iomem *base = feature->ioaddr;
-	int ret;
+	struct dfl_fpga_cdev *cdev = pdata->dfl_cdev;
+	int port_id;

-	dev_dbg(&pdev->dev, "FME HDR Init.\n");
-	dev_dbg(&pdev->dev, "FME cap %llx.\n",
-		(unsigned long long)readq(base + FME_HDR_CAP));
+	if (get_user(port_id, (int __user *)arg))
+		return -EFAULT;

-	ret = sysfs_create_files(&pdev->dev.kobj, fme_hdr_attrs);
-	if (ret)
-		return ret;
-
-	return 0;
+	return dfl_fpga_cdev_release_port(cdev, port_id);
 }

-static void fme_hdr_uinit(struct platform_device *pdev,
-			  struct dfl_feature *feature)
+static long fme_hdr_ioctl_assign_port(struct dfl_feature_platform_data *pdata,
+				      unsigned long arg)
 {
-	dev_dbg(&pdev->dev, "FME HDR UInit.\n");
-	sysfs_remove_files(&pdev->dev.kobj, fme_hdr_attrs);
+	struct dfl_fpga_cdev *cdev = pdata->dfl_cdev;
+	int port_id;
+
+	if (get_user(port_id, (int __user *)arg))
+		return -EFAULT;
+
+	return dfl_fpga_cdev_assign_port(cdev, port_id);
+}
+
+static long fme_hdr_ioctl(struct platform_device *pdev,
+			  struct dfl_feature *feature,
+			  unsigned int cmd, unsigned long arg)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
+
+	switch (cmd) {
+	case DFL_FPGA_FME_PORT_RELEASE:
+		return fme_hdr_ioctl_release_port(pdata, arg);
+	case DFL_FPGA_FME_PORT_ASSIGN:
+		return fme_hdr_ioctl_assign_port(pdata, arg);
+	}
+
+	return -ENODEV;
 }

+static const struct dfl_feature_id fme_hdr_id_table[] = {
+	{.id = FME_FEATURE_ID_HEADER,},
+	{0,}
+};
+
 static const struct dfl_feature_ops fme_hdr_ops = {
-	.init = fme_hdr_init,
-	.uinit = fme_hdr_uinit,
+	.ioctl = fme_hdr_ioctl,
 };

 static struct dfl_feature_driver fme_feature_drvs[] = {
 	{
-		.id = FME_FEATURE_ID_HEADER,
+		.id_table = fme_hdr_id_table,
 		.ops = &fme_hdr_ops,
 	},
 	{
-		.id = FME_FEATURE_ID_PR_MGMT,
-		.ops = &pr_mgmt_ops,
+		.id_table = fme_pr_mgmt_id_table,
+		.ops = &fme_pr_mgmt_ops,
 	},
 	{
+		.id_table = fme_global_err_id_table,
+		.ops = &fme_global_err_ops,
+	},
+	{
 		.ops = NULL,
 	}
@@ -263,9 +340,16 @@ static int fme_remove(struct platform_device *pdev)
 	return 0;
 }

+static const struct attribute_group *fme_dev_groups[] = {
+	&fme_hdr_group,
+	&fme_global_err_group,
+	NULL
+};
+
 static struct platform_driver fme_driver = {
 	.driver	= {
-		.name = DFL_FPGA_FEATURE_DEV_FME,
+		.name       = DFL_FPGA_FEATURE_DEV_FME,
+		.dev_groups = fme_dev_groups,
 	},
 	.probe   = fme_probe,
 	.remove  = fme_remove,
@@ -470,7 +470,12 @@ static long fme_pr_ioctl(struct platform_device *pdev,
 	return ret;
 }

-const struct dfl_feature_ops pr_mgmt_ops = {
+const struct dfl_feature_id fme_pr_mgmt_id_table[] = {
+	{.id = FME_FEATURE_ID_PR_MGMT,},
+	{0}
+};
+
+const struct dfl_feature_ops fme_pr_mgmt_ops = {
 	.init = pr_mgmt_init,
 	.uinit = pr_mgmt_uinit,
 	.ioctl = fme_pr_ioctl,
@@ -33,6 +33,10 @@ struct dfl_fme {
 	struct dfl_feature_platform_data *pdata;
 };

-extern const struct dfl_feature_ops pr_mgmt_ops;
+extern const struct dfl_feature_ops fme_pr_mgmt_ops;
+extern const struct dfl_feature_id fme_pr_mgmt_id_table[];
+extern const struct dfl_feature_ops fme_global_err_ops;
+extern const struct dfl_feature_id fme_global_err_id_table[];
+extern const struct attribute_group fme_global_err_group;

 #endif /* __DFL_FME_H */
@@ -223,8 +223,43 @@ disable_error_report_exit:
 	return ret;
 }

+static int cci_pci_sriov_configure(struct pci_dev *pcidev, int num_vfs)
+{
+	struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
+	struct dfl_fpga_cdev *cdev = drvdata->cdev;
+	int ret = 0;
+
+	if (!num_vfs) {
+		/*
+		 * disable SRIOV and then put released ports back to default
+		 * PF access mode.
+		 */
+		pci_disable_sriov(pcidev);
+
+		dfl_fpga_cdev_config_ports_pf(cdev);
+
+	} else {
+		/*
+		 * before enable SRIOV, put released ports into VF access mode
+		 * first of all.
+		 */
+		ret = dfl_fpga_cdev_config_ports_vf(cdev, num_vfs);
+		if (ret)
+			return ret;
+
+		ret = pci_enable_sriov(pcidev, num_vfs);
+		if (ret)
+			dfl_fpga_cdev_config_ports_pf(cdev);
+	}
+
+	return ret;
+}
+
 static void cci_pci_remove(struct pci_dev *pcidev)
 {
+	if (dev_is_pf(&pcidev->dev))
+		cci_pci_sriov_configure(pcidev, 0);
+
 	cci_remove_feature_devs(pcidev);
 	pci_disable_pcie_error_reporting(pcidev);
 }
@@ -234,6 +269,7 @@ static struct pci_driver cci_pci_driver = {
 	.id_table = cci_pcie_id_tbl,
 	.probe = cci_pci_probe,
 	.remove = cci_pci_remove,
+	.sriov_configure = cci_pci_sriov_configure,
 };

 module_pci_driver(cci_pci_driver);
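With .sriov_configure wired up, VF management uses the standard PCI sysfs knob, but cci_pci_sriov_configure() adds the DFL-specific precondition: the requested VF count must equal the number of ports already released through the FME ioctl, since each released port maps to exactly one VF. A sketch of driving it from userspace (the PCI address below is a placeholder):

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>

	static int set_num_vfs(int n)
	{
		/* substitute the FPGA device's real PCI address */
		int fd = open("/sys/bus/pci/devices/0000:00:00.0/sriov_numvfs",
			      O_WRONLY);
		char buf[16];
		int len, ret;

		if (fd < 0)
			return -1;
		len = snprintf(buf, sizeof(buf), "%d", n);
		ret = (int)write(fd, buf, len);
		close(fd);
		return ret == len ? 0 : -1;
	}

Writing 0 reverses the sequence: pci_disable_sriov() runs first, then the released ports are flipped back to PF access mode.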
@@ -231,16 +231,20 @@ EXPORT_SYMBOL_GPL(dfl_fpga_port_ops_del);
 */
 int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id)
 {
-	struct dfl_fpga_port_ops *port_ops = dfl_fpga_port_ops_get(pdev);
-	int port_id;
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	struct dfl_fpga_port_ops *port_ops;

+	if (pdata->id != FEATURE_DEV_ID_UNUSED)
+		return pdata->id == *(int *)pport_id;
+
+	port_ops = dfl_fpga_port_ops_get(pdev);
 	if (!port_ops || !port_ops->get_id)
 		return 0;

-	port_id = port_ops->get_id(pdev);
+	pdata->id = port_ops->get_id(pdev);
 	dfl_fpga_port_ops_put(port_ops);

-	return port_id == *(int *)pport_id;
+	return pdata->id == *(int *)pport_id;
 }
 EXPORT_SYMBOL_GPL(dfl_fpga_check_port_id);
@@ -255,7 +259,8 @@ void dfl_fpga_dev_feature_uinit(struct platform_device *pdev)

 	dfl_fpga_dev_for_each_feature(pdata, feature)
 		if (feature->ops) {
-			feature->ops->uinit(pdev, feature);
+			if (feature->ops->uinit)
+				feature->ops->uinit(pdev, feature);
 			feature->ops = NULL;
 		}
 }
@@ -266,17 +271,34 @@ static int dfl_feature_instance_init(struct platform_device *pdev,
 				     struct dfl_feature *feature,
 				     struct dfl_feature_driver *drv)
 {
-	int ret;
+	int ret = 0;

-	ret = drv->ops->init(pdev, feature);
-	if (ret)
-		return ret;
+	if (drv->ops->init) {
+		ret = drv->ops->init(pdev, feature);
+		if (ret)
+			return ret;
+	}

 	feature->ops = drv->ops;

 	return ret;
 }

+static bool dfl_feature_drv_match(struct dfl_feature *feature,
+				  struct dfl_feature_driver *driver)
+{
+	const struct dfl_feature_id *ids = driver->id_table;
+
+	if (ids) {
+		while (ids->id) {
+			if (ids->id == feature->id)
+				return true;
+			ids++;
+		}
+	}
+	return false;
+}
+
 /**
  * dfl_fpga_dev_feature_init - init for sub features of dfl feature device
  * @pdev: feature device.
@@ -297,8 +319,7 @@ int dfl_fpga_dev_feature_init(struct platform_device *pdev,

 	while (drv->ops) {
 		dfl_fpga_dev_for_each_feature(pdata, feature) {
-			/* match feature and drv using id */
-			if (feature->id == drv->id) {
+			if (dfl_feature_drv_match(feature, drv)) {
 				ret = dfl_feature_instance_init(pdev, pdata,
 								feature, drv);
 				if (ret)
@@ -474,6 +495,7 @@ static int build_info_commit_dev(struct build_feature_devs_info *binfo)
 	pdata->dev = fdev;
 	pdata->num = binfo->feature_num;
 	pdata->dfl_cdev = binfo->cdev;
+	pdata->id = FEATURE_DEV_ID_UNUSED;
 	mutex_init(&pdata->lock);
 	lockdep_set_class_and_name(&pdata->lock, &dfl_pdata_keys[type],
 				   dfl_pdata_key_strings[type]);
@@ -973,25 +995,27 @@ void dfl_fpga_feature_devs_remove(struct dfl_fpga_cdev *cdev)
 {
 	struct dfl_feature_platform_data *pdata, *ptmp;

-	remove_feature_devs(cdev);
-
 	mutex_lock(&cdev->lock);
-	if (cdev->fme_dev) {
-		/* the fme should be unregistered. */
-		WARN_ON(device_is_registered(cdev->fme_dev));
+	if (cdev->fme_dev)
 		put_device(cdev->fme_dev);
-	}

 	list_for_each_entry_safe(pdata, ptmp, &cdev->port_dev_list, node) {
 		struct platform_device *port_dev = pdata->dev;

-		/* the port should be unregistered. */
-		WARN_ON(device_is_registered(&port_dev->dev));
+		/* remove released ports */
+		if (!device_is_registered(&port_dev->dev)) {
+			dfl_id_free(feature_dev_id_type(port_dev),
+				    port_dev->id);
+			platform_device_put(port_dev);
+		}
+
 		list_del(&pdata->node);
 		put_device(&port_dev->dev);
 	}
 	mutex_unlock(&cdev->lock);

+	remove_feature_devs(cdev);
+
 	fpga_region_unregister(cdev->region);
 	devm_kfree(cdev->parent, cdev);
 }
@ -1042,6 +1066,170 @@ static int __init dfl_fpga_init(void)
|
|||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
 * dfl_fpga_cdev_release_port - release a port platform device
 *
 * @cdev: parent container device.
 * @port_id: id of the port platform device.
 *
 * This function allows user to release a port platform device. This is a
 * mandatory step before turning a port from PF into VF for SRIOV support.
 *
 * Return: 0 on success, negative error code otherwise.
 */
int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id)
{
	struct platform_device *port_pdev;
	int ret = -ENODEV;

	mutex_lock(&cdev->lock);
	port_pdev = __dfl_fpga_cdev_find_port(cdev, &port_id,
					      dfl_fpga_check_port_id);
	if (!port_pdev)
		goto unlock_exit;

	if (!device_is_registered(&port_pdev->dev)) {
		ret = -EBUSY;
		goto put_dev_exit;
	}

	ret = dfl_feature_dev_use_begin(dev_get_platdata(&port_pdev->dev));
	if (ret)
		goto put_dev_exit;

	platform_device_del(port_pdev);
	cdev->released_port_num++;
put_dev_exit:
	put_device(&port_pdev->dev);
unlock_exit:
	mutex_unlock(&cdev->lock);
	return ret;
}
EXPORT_SYMBOL_GPL(dfl_fpga_cdev_release_port);

/**
 * dfl_fpga_cdev_assign_port - assign a port platform device back
 *
 * @cdev: parent container device.
 * @port_id: id of the port platform device.
 *
 * This function allows user to assign a port platform device back. This is
 * a mandatory step after disabling SRIOV support.
 *
 * Return: 0 on success, negative error code otherwise.
 */
int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id)
{
	struct platform_device *port_pdev;
	int ret = -ENODEV;

	mutex_lock(&cdev->lock);
	port_pdev = __dfl_fpga_cdev_find_port(cdev, &port_id,
					      dfl_fpga_check_port_id);
	if (!port_pdev)
		goto unlock_exit;

	if (device_is_registered(&port_pdev->dev)) {
		ret = -EBUSY;
		goto put_dev_exit;
	}

	ret = platform_device_add(port_pdev);
	if (ret)
		goto put_dev_exit;

	dfl_feature_dev_use_end(dev_get_platdata(&port_pdev->dev));
	cdev->released_port_num--;
put_dev_exit:
	put_device(&port_pdev->dev);
unlock_exit:
	mutex_unlock(&cdev->lock);
	return ret;
}
EXPORT_SYMBOL_GPL(dfl_fpga_cdev_assign_port);

static void config_port_access_mode(struct device *fme_dev, int port_id,
				    bool is_vf)
{
	void __iomem *base;
	u64 v;

	base = dfl_get_feature_ioaddr_by_id(fme_dev, FME_FEATURE_ID_HEADER);

	v = readq(base + FME_HDR_PORT_OFST(port_id));

	v &= ~FME_PORT_OFST_ACC_CTRL;
	v |= FIELD_PREP(FME_PORT_OFST_ACC_CTRL,
			is_vf ? FME_PORT_OFST_ACC_VF : FME_PORT_OFST_ACC_PF);

	writeq(v, base + FME_HDR_PORT_OFST(port_id));
}

#define config_port_vf_mode(dev, id) config_port_access_mode(dev, id, true)
#define config_port_pf_mode(dev, id) config_port_access_mode(dev, id, false)

/**
 * dfl_fpga_cdev_config_ports_pf - configure ports to PF access mode
 *
 * @cdev: parent container device.
 *
 * This function is needed in the SRIOV configuration routine. It can be used
 * to configure all released ports from VF access mode back to PF.
 */
void dfl_fpga_cdev_config_ports_pf(struct dfl_fpga_cdev *cdev)
{
	struct dfl_feature_platform_data *pdata;

	mutex_lock(&cdev->lock);
	list_for_each_entry(pdata, &cdev->port_dev_list, node) {
		if (device_is_registered(&pdata->dev->dev))
			continue;

		config_port_pf_mode(cdev->fme_dev, pdata->id);
	}
	mutex_unlock(&cdev->lock);
}
EXPORT_SYMBOL_GPL(dfl_fpga_cdev_config_ports_pf);

/**
 * dfl_fpga_cdev_config_ports_vf - configure ports to VF access mode
 *
 * @cdev: parent container device.
 * @num_vfs: VF device number.
 *
 * This function is needed in the SRIOV configuration routine. It can be used
 * to configure the released ports from PF access mode to VF.
 *
 * Return: 0 on success, negative error code otherwise.
 */
int dfl_fpga_cdev_config_ports_vf(struct dfl_fpga_cdev *cdev, int num_vfs)
{
	struct dfl_feature_platform_data *pdata;
	int ret = 0;

	mutex_lock(&cdev->lock);
	/*
	 * We can't turn multiple ports into one VF device; only one port per
	 * VF device is supported, so if the released port number doesn't
	 * match the VF device number, reject the request with -EINVAL.
	 */
	if (cdev->released_port_num != num_vfs) {
		ret = -EINVAL;
		goto done;
	}

	list_for_each_entry(pdata, &cdev->port_dev_list, node) {
		if (device_is_registered(&pdata->dev->dev))
			continue;

		config_port_vf_mode(cdev->fme_dev, pdata->id);
	}
done:
	mutex_unlock(&cdev->lock);
	return ret;
}
EXPORT_SYMBOL_GPL(dfl_fpga_cdev_config_ports_vf);
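
Taken together, the four helpers above are meant to be driven from a PCIe
driver's sriov_configure() callback. The following is a minimal sketch of
that flow; to_dfl_cdev() is a hypothetical accessor standing in for however
the driver reaches its struct dfl_fpga_cdev, and the error handling is
deliberately simplified.

/* Sketch only: to_dfl_cdev() is a placeholder, not part of this series. */
static int example_sriov_configure(struct pci_dev *pcidev, int num_vfs)
{
	struct dfl_fpga_cdev *cdev = to_dfl_cdev(pcidev);
	int ret;

	if (!num_vfs) {
		/* Tearing down: drop the VFs, then flip ports back to PF. */
		pci_disable_sriov(pcidev);
		dfl_fpga_cdev_config_ports_pf(cdev);
		return 0;
	}

	/* Exactly one previously released port is required per VF. */
	ret = dfl_fpga_cdev_config_ports_vf(cdev, num_vfs);
	if (ret)
		return ret;

	ret = pci_enable_sriov(pcidev, num_vfs);
	if (ret) {
		dfl_fpga_cdev_config_ports_pf(cdev);
		return ret;
	}

	return num_vfs;
}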

static void __exit dfl_fpga_exit(void)
{
	dfl_chardev_uinit();

@@ -30,8 +30,8 @@
/* plus one for fme device */
#define MAX_DFL_FEATURE_DEV_NUM	(MAX_DFL_FPGA_PORT_NUM + 1)

/* Reserved 0x0 for Header Group Register and 0xff for AFU */
#define FEATURE_ID_FIU_HEADER	0x0
/* Reserved 0xfe for Header Group Register and 0xff for AFU */
#define FEATURE_ID_FIU_HEADER	0xfe
#define FEATURE_ID_AFU		0xff

#define FME_FEATURE_ID_HEADER	FEATURE_ID_FIU_HEADER

@@ -119,6 +119,11 @@
#define PORT_HDR_NEXT_AFU	NEXT_AFU
#define PORT_HDR_CAP		0x30
#define PORT_HDR_CTRL		0x38
#define PORT_HDR_STS		0x40
#define PORT_HDR_USRCLK_CMD0	0x50
#define PORT_HDR_USRCLK_CMD1	0x58
#define PORT_HDR_USRCLK_STS0	0x60
#define PORT_HDR_USRCLK_STS1	0x68

/* Port Capability Register Bitfield */
#define PORT_CAP_PORT_NUM	GENMASK_ULL(1, 0)	/* ID of this port */

@@ -130,6 +135,16 @@
/* Latency tolerance reporting. '1' >= 40us, '0' < 40us.*/
#define PORT_CTRL_LATENCY	BIT_ULL(2)
#define PORT_CTRL_SFTRST_ACK	BIT_ULL(4)		/* HW ack for reset */

/* Port Status Register Bitfield */
#define PORT_STS_AP2_EVT	BIT_ULL(13)		/* AP2 event detected */
#define PORT_STS_AP1_EVT	BIT_ULL(12)		/* AP1 event detected */
#define PORT_STS_PWR_STATE	GENMASK_ULL(11, 8)	/* AFU power states */
#define PORT_STS_PWR_STATE_NORM	0
#define PORT_STS_PWR_STATE_AP1	1			/* 50% throttling */
#define PORT_STS_PWR_STATE_AP2	2			/* 90% throttling */
#define PORT_STS_PWR_STATE_AP6	6			/* 100% throttling */

/**
 * struct dfl_fpga_port_ops - port ops
 *

@@ -154,13 +169,22 @@ void dfl_fpga_port_ops_put(struct dfl_fpga_port_ops *ops);
int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id);

/**
 * struct dfl_feature_driver - sub feature's driver
 * struct dfl_feature_id - dfl private feature id
 *
 * @id: sub feature id.
 * @ops: ops of this sub feature.
 * @id: unique dfl private feature id.
 */
struct dfl_feature_id {
	u64 id;
};

/**
 * struct dfl_feature_driver - dfl private feature driver
 *
 * @id_table: id_table for dfl private features supported by this driver.
 * @ops: ops of this dfl private feature driver.
 */
struct dfl_feature_driver {
	u64 id;
	const struct dfl_feature_id *id_table;
	const struct dfl_feature_ops *ops;
};
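
With the feature id moved out into its own dfl_feature_id table, a driver
that matches several private features would describe them along these lines.
This is an illustrative sketch only: PORT_FEATURE_ID_ERROR and port_err_ops
are placeholder names, not identifiers introduced by this diff.

static const struct dfl_feature_id port_err_id_table[] = {
	{.id = PORT_FEATURE_ID_ERROR,},		/* hypothetical feature id */
	{0,}
};

static const struct dfl_feature_driver port_feature_drvs[] = {
	{
		.id_table = port_err_id_table,
		.ops = &port_err_ops,		/* hypothetical ops */
	},
	{
		.ops = NULL,
	},
};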

@@ -183,6 +207,8 @@ struct dfl_feature {

#define DEV_STATUS_IN_USE	0

#define FEATURE_DEV_ID_UNUSED	(-1)

/**
 * struct dfl_feature_platform_data - platform data for feature devices
 *

@@ -191,6 +217,7 @@ struct dfl_feature {
 * @cdev: cdev of feature dev.
 * @dev: ptr to platform device linked with this platform data.
 * @dfl_cdev: ptr to container device.
 * @id: id used for this feature device.
 * @disable_count: count for port disable.
 * @num: number for sub features.
 * @dev_status: dev status (e.g. DEV_STATUS_IN_USE).

@@ -203,6 +230,7 @@ struct dfl_feature_platform_data {
	struct cdev cdev;
	struct platform_device *dev;
	struct dfl_fpga_cdev *dfl_cdev;
	int id;
	unsigned int disable_count;
	unsigned long dev_status;
	void *private;

@@ -331,6 +359,11 @@ static inline bool dfl_feature_is_port(void __iomem *base)
		(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_PORT);
}

static inline u8 dfl_feature_revision(void __iomem *base)
{
	return (u8)FIELD_GET(DFH_REVISION, readq(base + DFH));
}

/**
 * struct dfl_fpga_enum_info - DFL FPGA enumeration information
 *

@@ -373,6 +406,7 @@ void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info);
 * @fme_dev: FME feature device under this container device.
 * @lock: mutex lock to protect the port device list.
 * @port_dev_list: list of all port feature devices under this container device.
 * @released_port_num: released port number under this container device.
 */
struct dfl_fpga_cdev {
	struct device *parent;

@@ -380,6 +414,7 @@ struct dfl_fpga_cdev {
	struct device *fme_dev;
	struct mutex lock;
	struct list_head port_dev_list;
	int released_port_num;
};

struct dfl_fpga_cdev *

@@ -407,4 +442,9 @@ dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data,

	return pdev;
}

int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id);
int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id);
void dfl_fpga_cdev_config_ports_pf(struct dfl_fpga_cdev *cdev);
int dfl_fpga_cdev_config_ports_vf(struct dfl_fpga_cdev *cdev, int num_vf);
#endif /* __FPGA_DFL_H */

@@ -646,24 +646,23 @@ static int debug_remove(struct amba_device *adev)
	return 0;
}

static const struct amba_cs_uci_id uci_id_debug[] = {
	{
		/* CPU Debug UCI data */
		.devarch	= 0x47706a15,
		.devarch_mask	= 0xfff0ffff,
		.devtype	= 0x00000015,
	}
};

static const struct amba_id debug_ids[] = {
	{ /* Debug for Cortex-A53 */
		.id	= 0x000bbd03,
		.mask	= 0x000fffff,
	},
	{ /* Debug for Cortex-A57 */
		.id	= 0x000bbd07,
		.mask	= 0x000fffff,
	},
	{ /* Debug for Cortex-A72 */
		.id	= 0x000bbd08,
		.mask	= 0x000fffff,
	},
	{ /* Debug for Cortex-A73 */
		.id	= 0x000bbd09,
		.mask	= 0x000fffff,
	},
	{ 0, 0 },
	CS_AMBA_ID(0x000bbd03),				/* Cortex-A53 */
	CS_AMBA_ID(0x000bbd07),				/* Cortex-A57 */
	CS_AMBA_ID(0x000bbd08),				/* Cortex-A72 */
	CS_AMBA_ID(0x000bbd09),				/* Cortex-A73 */
	CS_AMBA_UCI_ID(0x000f0205, uci_id_debug),	/* Qualcomm Kryo */
	CS_AMBA_UCI_ID(0x000f0211, uci_id_debug),	/* Qualcomm Kryo */
	{},
};

static struct amba_driver debug_driver = {

@@ -296,11 +296,8 @@ static ssize_t mode_store(struct device *dev,

	spin_lock(&drvdata->spinlock);
	config->mode = val & ETMv4_MODE_ALL;

	if (config->mode & ETM_MODE_EXCLUDE)
		etm4_set_mode_exclude(drvdata, true);
	else
		etm4_set_mode_exclude(drvdata, false);
	etm4_set_mode_exclude(drvdata,
			      config->mode & ETM_MODE_EXCLUDE ? true : false);

	if (drvdata->instrp0 == true) {
		/* start by clearing instruction P0 field */

@@ -999,10 +996,8 @@ static ssize_t addr_range_store(struct device *dev,
	 * Program include or exclude control bits for vinst or vdata
	 * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
	 */
	if (config->mode & ETM_MODE_EXCLUDE)
		etm4_set_mode_exclude(drvdata, true);
	else
		etm4_set_mode_exclude(drvdata, false);
	etm4_set_mode_exclude(drvdata,
			      config->mode & ETM_MODE_EXCLUDE ? true : false);

	spin_unlock(&drvdata->spinlock);
	return size;

@@ -34,7 +34,8 @@
#include "coresight-etm-perf.h"

static int boot_enable;
module_param_named(boot_enable, boot_enable, int, S_IRUGO);
module_param(boot_enable, int, 0444);
MODULE_PARM_DESC(boot_enable, "Enable tracing on boot");

/* The number of ETMv4 currently registered */
static int etm4_count;

@@ -47,7 +48,7 @@ static enum cpuhp_state hp_online;

static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
{
	/* Writing any value to ETMOSLAR unlocks the trace registers */
	/* Writing 0 to TRCOSLAR unlocks the trace registers */
	writel_relaxed(0x0, drvdata->base + TRCOSLAR);
	drvdata->os_unlock = true;
	isb();

@@ -188,6 +189,13 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
		dev_err(etm_dev,
			"timeout while waiting for Idle Trace Status\n");

	/*
	 * As recommended by section 4.3.7 ("Synchronization when using the
	 * memory-mapped interface") of ARM IHI 0064D
	 */
	dsb(sy);
	isb();

done:
	CS_LOCK(drvdata->base);

@@ -453,8 +461,12 @@ static void etm4_disable_hw(void *info)
	/* EN, bit[0] Trace unit enable bit */
	control &= ~0x1;

	/* make sure everything completes before disabling */
	mb();
	/*
	 * Make sure everything completes before disabling, as recommended
	 * by section 7.3.77 ("TRCVICTLR, ViewInst Main Control Register,
	 * SSTATUS") of ARM IHI 0064D
	 */
	dsb(sy);
	isb();
	writel_relaxed(control, drvdata->base + TRCPRGCTLR);

@@ -1047,10 +1059,8 @@ static int etm4_starting_cpu(unsigned int cpu)
		return 0;

	spin_lock(&etmdrvdata[cpu]->spinlock);
	if (!etmdrvdata[cpu]->os_unlock) {
	if (!etmdrvdata[cpu]->os_unlock)
		etm4_os_unlock(etmdrvdata[cpu]);
		etmdrvdata[cpu]->os_unlock = true;
	}

	if (local_read(&etmdrvdata[cpu]->mode))
		etm4_enable_hw(etmdrvdata[cpu]);

@@ -1192,11 +1202,15 @@ static struct amba_cs_uci_id uci_id_etm4[] = {
};

static const struct amba_id etm4_ids[] = {
	CS_AMBA_ID(0x000bb95d),			/* Cortex-A53 */
	CS_AMBA_ID(0x000bb95e),			/* Cortex-A57 */
	CS_AMBA_ID(0x000bb95a),			/* Cortex-A72 */
	CS_AMBA_ID(0x000bb959),			/* Cortex-A73 */
	CS_AMBA_UCI_ID(0x000bb9da, uci_id_etm4),	/* Cortex-A35 */
	CS_AMBA_ID(0x000bb95d),			/* Cortex-A53 */
	CS_AMBA_ID(0x000bb95e),			/* Cortex-A57 */
	CS_AMBA_ID(0x000bb95a),			/* Cortex-A72 */
	CS_AMBA_ID(0x000bb959),			/* Cortex-A73 */
	CS_AMBA_UCI_ID(0x000bb9da, uci_id_etm4),/* Cortex-A35 */
	CS_AMBA_UCI_ID(0x000f0205, uci_id_etm4),/* Qualcomm Kryo */
	CS_AMBA_UCI_ID(0x000f0211, uci_id_etm4),/* Qualcomm Kryo */
	CS_AMBA_ID(0x000bb802),			/* Qualcomm Kryo 385 Cortex-A55 */
	CS_AMBA_ID(0x000bb803),			/* Qualcomm Kryo 385 Cortex-A75 */
	{},
};

@@ -5,6 +5,7 @@
 * Description: CoreSight Funnel driver
 */

#include <linux/acpi.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>

@@ -192,7 +193,7 @@ static int funnel_probe(struct device *dev, struct resource *res)

	if (is_of_node(dev_fwnode(dev)) &&
	    of_device_is_compatible(dev->of_node, "arm,coresight-funnel"))
		pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n");
		dev_warn_once(dev, "Uses OBSOLETE CoreSight funnel binding\n");

	desc.name = coresight_alloc_device_name(&funnel_devs, dev);
	if (!desc.name)

@@ -302,11 +303,19 @@ static const struct of_device_id static_funnel_match[] = {
	{}
};

#ifdef CONFIG_ACPI
static const struct acpi_device_id static_funnel_ids[] = {
	{"ARMHC9FE", 0},
	{},
};
#endif

static struct platform_driver static_funnel_driver = {
	.probe	= static_funnel_probe,
	.driver	= {
		.name	= "coresight-static-funnel",
		.of_match_table = static_funnel_match,
		.acpi_match_table = ACPI_PTR(static_funnel_ids),
		.pm	= &funnel_dev_pm_ops,
		.suppress_bind_attrs = true,
	},

@@ -185,11 +185,11 @@ static inline int etm_writel_cp14(u32 off, u32 val) { return 0; }
}

/* coresight AMBA ID, full UCI structure: id table entry. */
#define CS_AMBA_UCI_ID(pid, uci_ptr)	\
	{				\
		.id	= pid,		\
		.mask	= 0x000fffff,	\
		.data	= uci_ptr	\
#define CS_AMBA_UCI_ID(pid, uci_ptr)	\
	{				\
		.id	= pid,		\
		.mask	= 0x000fffff,	\
		.data	= (void *)uci_ptr \
	}

/* extract the data value from a UCI structure given amba_id pointer. */
|
|||
|
||||
if (is_of_node(dev_fwnode(dev)) &&
|
||||
of_device_is_compatible(dev->of_node, "arm,coresight-replicator"))
|
||||
pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n");
|
||||
dev_warn_once(dev,
|
||||
"Uses OBSOLETE CoreSight replicator binding\n");
|
||||
|
||||
desc.name = coresight_alloc_device_name(&replicator_devs, dev);
|
||||
if (!desc.name)
|
||||
|
|
|

@@ -479,30 +479,11 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
	 * traces.
	 */
	if (!buf->snapshot && to_read > handle->size) {
		u32 mask = 0;

		/*
		 * The value written to RRP must be byte-address aligned to
		 * the width of the trace memory databus _and_ to a frame
		 * boundary (16 byte), whichever is the biggest. For example,
		 * for 32-bit, 64-bit and 128-bit wide trace memory, the four
		 * LSBs must be 0s. For 256-bit wide trace memory, the five
		 * LSBs must be 0s.
		 */
		switch (drvdata->memwidth) {
		case TMC_MEM_INTF_WIDTH_32BITS:
		case TMC_MEM_INTF_WIDTH_64BITS:
		case TMC_MEM_INTF_WIDTH_128BITS:
			mask = GENMASK(31, 4);
			break;
		case TMC_MEM_INTF_WIDTH_256BITS:
			mask = GENMASK(31, 5);
			break;
		}
		u32 mask = tmc_get_memwidth_mask(drvdata);

		/*
		 * Make sure the new size is aligned in accordance with the
		 * requirement explained above.
		 * requirement explained in function tmc_get_memwidth_mask().
		 */
		to_read = handle->size & mask;
		/* Move the RAM read pointer up */

@@ -871,6 +871,7 @@ static struct etr_buf *tmc_alloc_etr_buf(struct tmc_drvdata *drvdata,
		return ERR_PTR(rc);
	}

	refcount_set(&etr_buf->refcount, 1);
	dev_dbg(dev, "allocated buffer of size %ldKB in mode %d\n",
		(unsigned long)size >> 10, etr_buf->mode);
	return etr_buf;

@@ -927,15 +928,24 @@ static void tmc_sync_etr_buf(struct tmc_drvdata *drvdata)
	rrp = tmc_read_rrp(drvdata);
	rwp = tmc_read_rwp(drvdata);
	status = readl_relaxed(drvdata->base + TMC_STS);

	/*
	 * If there were memory errors in the session, truncate the
	 * buffer.
	 */
	if (WARN_ON_ONCE(status & TMC_STS_MEMERR)) {
		dev_dbg(&drvdata->csdev->dev,
			"tmc memory error detected, truncating buffer\n");
		etr_buf->len = 0;
		etr_buf->full = 0;
		return;
	}

	etr_buf->full = status & TMC_STS_FULL;

	WARN_ON(!etr_buf->ops || !etr_buf->ops->sync);

	etr_buf->ops->sync(etr_buf, rrp, rwp);

	/* Insert barrier packets at the beginning, if there was an overflow */
	if (etr_buf->full)
		tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset);
}

static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)

@@ -1072,6 +1082,13 @@ static void tmc_etr_sync_sysfs_buf(struct tmc_drvdata *drvdata)
		drvdata->sysfs_buf = NULL;
	} else {
		tmc_sync_etr_buf(drvdata);
		/*
		 * Insert barrier packets at the beginning, if there was
		 * an overflow.
		 */
		if (etr_buf->full)
			tmc_etr_buf_insert_barrier_packet(etr_buf,
							  etr_buf->offset);
	}
}

@@ -1263,8 +1280,6 @@ retry:
	if (IS_ERR(etr_buf))
		return etr_buf;

	refcount_set(&etr_buf->refcount, 1);

	/* Now that we have a buffer, add it to the IDR. */
	mutex_lock(&drvdata->idr_mutex);
	ret = idr_alloc(&drvdata->idr, etr_buf, pid, pid + 1, GFP_KERNEL);

@@ -1291,19 +1306,11 @@ get_perf_etr_buf_per_thread(struct tmc_drvdata *drvdata,
			    struct perf_event *event, int nr_pages,
			    void **pages, bool snapshot)
{
	struct etr_buf *etr_buf;

	/*
	 * In per-thread mode the etr_buf isn't shared, so just go ahead
	 * with memory allocation.
	 */
	etr_buf = alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot);
	if (IS_ERR(etr_buf))
		goto out;

	refcount_set(&etr_buf->refcount, 1);
out:
	return etr_buf;
	return alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot);
}

static struct etr_buf *

@@ -1410,10 +1417,12 @@ free_etr_perf_buffer:
 * tmc_etr_sync_perf_buffer: Copy the actual trace data from the hardware
 * buffer to the perf ring buffer.
 */
static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf,
				     unsigned long src_offset,
				     unsigned long to_copy)
{
	long bytes, to_copy;
	long pg_idx, pg_offset, src_offset;
	long bytes;
	long pg_idx, pg_offset;
	unsigned long head = etr_perf->head;
	char **dst_pages, *src_buf;
	struct etr_buf *etr_buf = etr_perf->etr_buf;

@@ -1422,8 +1431,6 @@ static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
	pg_idx = head >> PAGE_SHIFT;
	pg_offset = head & (PAGE_SIZE - 1);
	dst_pages = (char **)etr_perf->pages;
	src_offset = etr_buf->offset;
	to_copy = etr_buf->len;

	while (to_copy > 0) {
		/*

@@ -1434,6 +1441,8 @@ static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
		 * 3) what is available in the destination page.
		 * in one iteration.
		 */
		if (src_offset >= etr_buf->size)
			src_offset -= etr_buf->size;
		bytes = tmc_etr_buf_get_data(etr_buf, src_offset, to_copy,
					     &src_buf);
		if (WARN_ON_ONCE(bytes <= 0))

@@ -1454,8 +1463,6 @@ static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)

		/* Move source pointers */
		src_offset += bytes;
		if (src_offset >= etr_buf->size)
			src_offset -= etr_buf->size;
	}
}

@@ -1471,7 +1478,7 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
		      void *config)
{
	bool lost = false;
	unsigned long flags, size = 0;
	unsigned long flags, offset, size = 0;
	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
	struct etr_perf_buffer *etr_perf = config;
	struct etr_buf *etr_buf = etr_perf->etr_buf;

@@ -1484,7 +1491,7 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
		goto out;
	}

	if (WARN_ON(drvdata->perf_data != etr_perf)) {
	if (WARN_ON(drvdata->perf_buf != etr_buf)) {
		lost = true;
		spin_unlock_irqrestore(&drvdata->spinlock, flags);
		goto out;

@@ -1496,12 +1503,38 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
	tmc_sync_etr_buf(drvdata);

	CS_LOCK(drvdata->base);
	/* Reset perf specific data */
	drvdata->perf_data = NULL;
	spin_unlock_irqrestore(&drvdata->spinlock, flags);

	lost = etr_buf->full;
	offset = etr_buf->offset;
	size = etr_buf->len;
	tmc_etr_sync_perf_buffer(etr_perf);

	/*
	 * The ETR buffer may be bigger than the space available in the
	 * perf ring buffer (handle->size). If so advance the offset so that we
	 * get the latest trace data. In snapshot mode none of that matters
	 * since we are expected to clobber stale data in favour of the latest
	 * traces.
	 */
	if (!etr_perf->snapshot && size > handle->size) {
		u32 mask = tmc_get_memwidth_mask(drvdata);

		/*
		 * Make sure the new size is aligned in accordance with the
		 * requirement explained in function tmc_get_memwidth_mask().
		 */
		size = handle->size & mask;
		offset = etr_buf->offset + etr_buf->len - size;

		if (offset >= etr_buf->size)
			offset -= etr_buf->size;
		lost = true;
	}

	/* Insert barrier packets at the beginning, if there was an overflow */
	if (lost)
		tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset);
	tmc_etr_sync_perf_buffer(etr_perf, offset, size);

	/*
	 * In snapshot mode we simply increment the head by the number of byte

@@ -1511,8 +1544,6 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
	 */
	if (etr_perf->snapshot)
		handle->head += size;

	lost |= etr_buf->full;
out:
	/*
	 * Don't set the TRUNCATED flag in snapshot mode because 1) the

@@ -1556,7 +1587,6 @@ static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data)
	}

	etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf);
	drvdata->perf_data = etr_perf;

	/*
	 * No HW configuration is needed if the sink is already in

@@ -1572,6 +1602,7 @@ static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data)
		/* Associate with monitored process. */
		drvdata->pid = pid;
		drvdata->mode = CS_MODE_PERF;
		drvdata->perf_buf = etr_perf->etr_buf;
		atomic_inc(csdev->refcnt);
	}

@@ -1617,6 +1648,8 @@ static int tmc_disable_etr_sink(struct coresight_device *csdev)
	/* Dissociate from monitored process. */
	drvdata->pid = -1;
	drvdata->mode = CS_MODE_DISABLED;
	/* Reset perf specific data */
	drvdata->perf_buf = NULL;

	spin_unlock_irqrestore(&drvdata->spinlock, flags);

@@ -70,6 +70,34 @@ void tmc_disable_hw(struct tmc_drvdata *drvdata)
	writel_relaxed(0x0, drvdata->base + TMC_CTL);
}

u32 tmc_get_memwidth_mask(struct tmc_drvdata *drvdata)
{
	u32 mask = 0;

	/*
	 * When moving RRP or an offset address forward, the new values must
	 * be byte-address aligned to the width of the trace memory databus
	 * _and_ to a frame boundary (16 byte), whichever is the biggest. For
	 * example, for 32-bit, 64-bit and 128-bit wide trace memory, the four
	 * LSBs must be 0s. For 256-bit wide trace memory, the five LSBs must
	 * be 0s.
	 */
	switch (drvdata->memwidth) {
	case TMC_MEM_INTF_WIDTH_32BITS:
	/* fallthrough */
	case TMC_MEM_INTF_WIDTH_64BITS:
	/* fallthrough */
	case TMC_MEM_INTF_WIDTH_128BITS:
		mask = GENMASK(31, 4);
		break;
	case TMC_MEM_INTF_WIDTH_256BITS:
		mask = GENMASK(31, 5);
		break;
	}

	return mask;
}
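
To make the alignment rule concrete: for 32-, 64- and 128-bit wide trace
memory the mask is GENMASK(31, 4) == 0xfffffff0, so masking rounds a value
down to the next 16-byte boundary (e.g. 4100 becomes 4096). A standalone
sketch of the arithmetic, outside the driver:

/* Illustration only; GENMASK32() approximates the kernel's GENMASK(). */
#include <stdint.h>
#include <stdio.h>

#define GENMASK32(h, l) \
	((~UINT32_C(0) << (l)) & (~UINT32_C(0) >> (31 - (h))))

int main(void)
{
	uint32_t mask = GENMASK32(31, 4);	/* 32/64/128-bit memwidth */
	uint32_t size = 4100;			/* e.g. handle->size */

	/* 4100 & 0xfffffff0 == 4096: trimmed down to a frame boundary */
	printf("aligned size = %u\n", size & mask);
	return 0;
}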

static int tmc_read_prepare(struct tmc_drvdata *drvdata)
{
	int ret = 0;

@@ -236,6 +264,7 @@ coresight_tmc_reg(ffcr, TMC_FFCR);
coresight_tmc_reg(mode, TMC_MODE);
coresight_tmc_reg(pscr, TMC_PSCR);
coresight_tmc_reg(axictl, TMC_AXICTL);
coresight_tmc_reg(authstatus, TMC_AUTHSTATUS);
coresight_tmc_reg(devid, CORESIGHT_DEVID);
coresight_tmc_reg64(rrp, TMC_RRP, TMC_RRPHI);
coresight_tmc_reg64(rwp, TMC_RWP, TMC_RWPHI);

@@ -255,6 +284,7 @@ static struct attribute *coresight_tmc_mgmt_attrs[] = {
	&dev_attr_devid.attr,
	&dev_attr_dba.attr,
	&dev_attr_axictl.attr,
	&dev_attr_authstatus.attr,
	NULL,
};

@@ -342,6 +372,13 @@ static inline bool tmc_etr_can_use_sg(struct device *dev)
	return fwnode_property_present(dev->fwnode, "arm,scatter-gather");
}

static inline bool tmc_etr_has_non_secure_access(struct tmc_drvdata *drvdata)
{
	u32 auth = readl_relaxed(drvdata->base + TMC_AUTHSTATUS);

	return (auth & TMC_AUTH_NSID_MASK) == 0x3;
}

/* Detect and initialise the capabilities of a TMC ETR */
static int tmc_etr_setup_caps(struct device *parent, u32 devid, void *dev_caps)
{

@@ -349,6 +386,9 @@ static int tmc_etr_setup_caps(struct device *parent, u32 devid, void *dev_caps)
	u32 dma_mask = 0;
	struct tmc_drvdata *drvdata = dev_get_drvdata(parent);

	if (!tmc_etr_has_non_secure_access(drvdata))
		return -EACCES;

	/* Set the unadvertised capabilities */
	tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps);

@@ -39,6 +39,7 @@
#define TMC_ITATBCTR2		0xef0
#define TMC_ITATBCTR1		0xef4
#define TMC_ITATBCTR0		0xef8
#define TMC_AUTHSTATUS		0xfb8

/* register description */
/* TMC_CTL - 0x020 */

@@ -47,6 +48,7 @@
#define TMC_STS_TMCREADY_BIT	2
#define TMC_STS_FULL		BIT(0)
#define TMC_STS_TRIGGERED	BIT(1)
#define TMC_STS_MEMERR		BIT(5)
/*
 * TMC_AXICTL - 0x110
 *

@@ -89,6 +91,8 @@
#define TMC_DEVID_AXIAW_SHIFT	17
#define TMC_DEVID_AXIAW_MASK	0x7f

#define TMC_AUTH_NSID_MASK	GENMASK(1, 0)

enum tmc_config_type {
	TMC_CONFIG_TYPE_ETB,
	TMC_CONFIG_TYPE_ETR,

@@ -178,8 +182,8 @@ struct etr_buf {
 *		device configuration register (DEVID)
 * @idr:	Holds etr_bufs allocated for this ETR.
 * @idr_mutex:	Access serialisation for idr.
 * @perf_data:	PERF buffer for ETR.
 * @sysfs_data:	SYSFS buffer for ETR.
 * @sysfs_buf:	SYSFS buffer for ETR.
 * @perf_buf:	PERF buffer for ETR.
 */
struct tmc_drvdata {
	void __iomem	*base;

@@ -202,7 +206,7 @@ struct tmc_drvdata {
	struct idr		idr;
	struct mutex		idr_mutex;
	struct etr_buf		*sysfs_buf;
	void			*perf_data;
	struct etr_buf		*perf_buf;
};

struct etr_buf_operations {

@@ -251,6 +255,7 @@ void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata);
void tmc_flush_and_stop(struct tmc_drvdata *drvdata);
void tmc_enable_hw(struct tmc_drvdata *drvdata);
void tmc_disable_hw(struct tmc_drvdata *drvdata);
u32 tmc_get_memwidth_mask(struct tmc_drvdata *drvdata);

/* ETB/ETF functions */
int tmc_read_prepare_etb(struct tmc_drvdata *drvdata);

@@ -20,3 +20,6 @@ intel_th_msu-y := msu.o

obj-$(CONFIG_INTEL_TH_PTI) += intel_th_pti.o
intel_th_pti-y := pti.o

obj-$(CONFIG_INTEL_TH_MSU) += intel_th_msu_sink.o
intel_th_msu_sink-y := msu-sink.o

@@ -0,0 +1,116 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * An example software sink buffer for Intel TH MSU.
 *
 * Copyright (C) 2019 Intel Corporation.
 */

#include <linux/intel_th.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>

#define MAX_SGTS 16

struct msu_sink_private {
	struct device	*dev;
	struct sg_table	**sgts;
	unsigned int	nr_sgts;
};

static void *msu_sink_assign(struct device *dev, int *mode)
{
	struct msu_sink_private *priv;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return NULL;

	priv->sgts = kcalloc(MAX_SGTS, sizeof(void *), GFP_KERNEL);
	if (!priv->sgts) {
		kfree(priv);
		return NULL;
	}

	priv->dev = dev;
	*mode = MSC_MODE_MULTI;

	return priv;
}

static void msu_sink_unassign(void *data)
{
	struct msu_sink_private *priv = data;

	kfree(priv->sgts);
	kfree(priv);
}

/* See also: msc.c: __msc_buffer_win_alloc() */
static int msu_sink_alloc_window(void *data, struct sg_table **sgt, size_t size)
{
	struct msu_sink_private *priv = data;
	unsigned int nents;
	struct scatterlist *sg_ptr;
	void *block;
	int ret, i;

	if (priv->nr_sgts == MAX_SGTS)
		return -ENOMEM;

	nents = DIV_ROUND_UP(size, PAGE_SIZE);

	ret = sg_alloc_table(*sgt, nents, GFP_KERNEL);
	if (ret)
		return -ENOMEM;

	priv->sgts[priv->nr_sgts++] = *sgt;

	for_each_sg((*sgt)->sgl, sg_ptr, nents, i) {
		block = dma_alloc_coherent(priv->dev->parent->parent,
					   PAGE_SIZE, &sg_dma_address(sg_ptr),
					   GFP_KERNEL);
		sg_set_buf(sg_ptr, block, PAGE_SIZE);
	}

	return nents;
}

/* See also: msc.c: __msc_buffer_win_free() */
static void msu_sink_free_window(void *data, struct sg_table *sgt)
{
	struct msu_sink_private *priv = data;
	struct scatterlist *sg_ptr;
	int i;

	for_each_sg(sgt->sgl, sg_ptr, sgt->nents, i) {
		dma_free_coherent(priv->dev->parent->parent, PAGE_SIZE,
				  sg_virt(sg_ptr), sg_dma_address(sg_ptr));
	}

	sg_free_table(sgt);
	priv->nr_sgts--;
}

static int msu_sink_ready(void *data, struct sg_table *sgt, size_t bytes)
{
	struct msu_sink_private *priv = data;

	intel_th_msc_window_unlock(priv->dev, sgt);

	return 0;
}

static const struct msu_buffer sink_mbuf = {
	.name		= "sink",
	.assign		= msu_sink_assign,
	.unassign	= msu_sink_unassign,
	.alloc_window	= msu_sink_alloc_window,
	.free_window	= msu_sink_free_window,
	.ready		= msu_sink_ready,
};

module_intel_th_msu_buffer(sink_mbuf);

MODULE_LICENSE("GPL v2");
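
module_intel_th_msu_buffer() follows the kernel's usual module_driver()
pattern. Assuming it expands the way its siblings do (the authoritative
definition lives in include/linux/intel_th.h), the line above is roughly
equivalent to this open-coded init/exit pair:

/* Sketch of the presumed expansion; not taken verbatim from the header. */
static int __init sink_mbuf_init(void)
{
	return intel_th_msu_buffer_register(&sink_mbuf, THIS_MODULE);
}
module_init(sink_mbuf_init);

static void __exit sink_mbuf_exit(void)
{
	intel_th_msu_buffer_unregister(&sink_mbuf);
}
module_exit(sink_mbuf_exit);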

@@ -17,21 +17,48 @@
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/io.h>
#include <linux/workqueue.h>
#include <linux/dma-mapping.h>

#ifdef CONFIG_X86
#include <asm/set_memory.h>
#endif

#include <linux/intel_th.h>
#include "intel_th.h"
#include "msu.h"

#define msc_dev(x) (&(x)->thdev->dev)

/*
 * Lockout state transitions:
 *   READY -> INUSE -+-> LOCKED -+-> READY -> etc.
 *                   \-----------/
 * WIN_READY:  window can be used by HW
 * WIN_INUSE:  window is in use
 * WIN_LOCKED: window is filled up and is being processed by the buffer
 *             handling code
 *
 * All state transitions happen automatically, except for the LOCKED->READY,
 * which needs to be signalled by the buffer code by calling
 * intel_th_msc_window_unlock().
 *
 * When the interrupt handler has to switch to the next window, it checks
 * whether it's READY, and if it is, it performs the switch and tracing
 * continues. If it's LOCKED, it stops the trace.
 */
enum lockout_state {
	WIN_READY = 0,
	WIN_INUSE,
	WIN_LOCKED
};

/**
 * struct msc_window - multiblock mode window descriptor
 * @entry:	window list linkage (msc::win_list)
 * @pgoff:	page offset into the buffer that this window starts at
 * @lockout:	lockout state, see comment below
 * @lo_lock:	lockout state serialization
 * @nr_blocks:	number of blocks (pages) in this window
 * @nr_segs:	number of segments in this window (<= @nr_blocks)
 * @_sgt:	array of block descriptors

@@ -40,6 +67,8 @@
struct msc_window {
	struct list_head	entry;
	unsigned long		pgoff;
	enum lockout_state	lockout;
	spinlock_t		lo_lock;
	unsigned int		nr_blocks;
	unsigned int		nr_segs;
	struct msc		*msc;

@@ -66,8 +95,8 @@ struct msc_iter {
	struct msc_window	*start_win;
	struct msc_window	*win;
	unsigned long		offset;
	int			start_block;
	int			block;
	struct scatterlist	*start_block;
	struct scatterlist	*block;
	unsigned int		block_off;
	unsigned int		wrap_count;
	unsigned int		eof;

@@ -77,6 +106,8 @@ struct msc_iter {
 * struct msc - MSC device representation
 * @reg_base:	register window base address
 * @thdev:	intel_th_device pointer
 * @mbuf:	MSU buffer, if assigned
 * @mbuf_priv:	MSU buffer's private data, if @mbuf
 * @win_list:	list of windows in multiblock mode
 * @single_sgt:	single mode buffer
 * @cur_win:	current window

@@ -100,6 +131,10 @@ struct msc {
	void __iomem		*msu_base;
	struct intel_th_device	*thdev;

	const struct msu_buffer	*mbuf;
	void			*mbuf_priv;

	struct work_struct	work;
	struct list_head	win_list;
	struct sg_table		single_sgt;
	struct msc_window	*cur_win;

@@ -108,6 +143,8 @@ struct msc {
	unsigned int		single_wrap : 1;
	void			*base;
	dma_addr_t		base_addr;
	u32			orig_addr;
	u32			orig_sz;

	/* <0: no buffer, 0: no users, >0: active users */
	atomic_t		user_count;

@@ -126,6 +163,101 @@ struct msc {
	unsigned int		index;
};

static LIST_HEAD(msu_buffer_list);
static DEFINE_MUTEX(msu_buffer_mutex);

/**
 * struct msu_buffer_entry - internal MSU buffer bookkeeping
 * @entry:	link to msu_buffer_list
 * @mbuf:	MSU buffer object
 * @owner:	module that provides this MSU buffer
 */
struct msu_buffer_entry {
	struct list_head	entry;
	const struct msu_buffer	*mbuf;
	struct module		*owner;
};

static struct msu_buffer_entry *__msu_buffer_entry_find(const char *name)
{
	struct msu_buffer_entry *mbe;

	lockdep_assert_held(&msu_buffer_mutex);

	list_for_each_entry(mbe, &msu_buffer_list, entry) {
		if (!strcmp(mbe->mbuf->name, name))
			return mbe;
	}

	return NULL;
}

static const struct msu_buffer *
msu_buffer_get(const char *name)
{
	struct msu_buffer_entry *mbe;

	mutex_lock(&msu_buffer_mutex);
	mbe = __msu_buffer_entry_find(name);
	if (mbe && !try_module_get(mbe->owner))
		mbe = NULL;
	mutex_unlock(&msu_buffer_mutex);

	return mbe ? mbe->mbuf : NULL;
}

static void msu_buffer_put(const struct msu_buffer *mbuf)
{
	struct msu_buffer_entry *mbe;

	mutex_lock(&msu_buffer_mutex);
	mbe = __msu_buffer_entry_find(mbuf->name);
	if (mbe)
		module_put(mbe->owner);
	mutex_unlock(&msu_buffer_mutex);
}

int intel_th_msu_buffer_register(const struct msu_buffer *mbuf,
				 struct module *owner)
{
	struct msu_buffer_entry *mbe;
	int ret = 0;

	mbe = kzalloc(sizeof(*mbe), GFP_KERNEL);
	if (!mbe)
		return -ENOMEM;

	mutex_lock(&msu_buffer_mutex);
	if (__msu_buffer_entry_find(mbuf->name)) {
		ret = -EEXIST;
		kfree(mbe);
		goto unlock;
	}

	mbe->mbuf = mbuf;
	mbe->owner = owner;
	list_add_tail(&mbe->entry, &msu_buffer_list);
unlock:
	mutex_unlock(&msu_buffer_mutex);

	return ret;
}
EXPORT_SYMBOL_GPL(intel_th_msu_buffer_register);

void intel_th_msu_buffer_unregister(const struct msu_buffer *mbuf)
{
	struct msu_buffer_entry *mbe;

	mutex_lock(&msu_buffer_mutex);
	mbe = __msu_buffer_entry_find(mbuf->name);
	if (mbe) {
		list_del(&mbe->entry);
		kfree(mbe);
	}
	mutex_unlock(&msu_buffer_mutex);
}
EXPORT_SYMBOL_GPL(intel_th_msu_buffer_unregister);

static inline bool msc_block_is_empty(struct msc_block_desc *bdesc)
{
	/* header hasn't been written */

@@ -139,28 +271,25 @@ static inline bool msc_block_is_empty(struct msc_block_desc *bdesc)
	return false;
}

static inline struct msc_block_desc *
msc_win_block(struct msc_window *win, unsigned int block)
static inline struct scatterlist *msc_win_base_sg(struct msc_window *win)
{
	return sg_virt(&win->sgt->sgl[block]);
	return win->sgt->sgl;
}

static inline size_t
msc_win_actual_bsz(struct msc_window *win, unsigned int block)
static inline struct msc_block_desc *msc_win_base(struct msc_window *win)
{
	return win->sgt->sgl[block].length;
	return sg_virt(msc_win_base_sg(win));
}

static inline dma_addr_t
msc_win_baddr(struct msc_window *win, unsigned int block)
static inline dma_addr_t msc_win_base_dma(struct msc_window *win)
{
	return sg_dma_address(&win->sgt->sgl[block]);
	return sg_dma_address(msc_win_base_sg(win));
}

static inline unsigned long
msc_win_bpfn(struct msc_window *win, unsigned int block)
msc_win_base_pfn(struct msc_window *win)
{
	return msc_win_baddr(win, block) >> PAGE_SHIFT;
	return PFN_DOWN(msc_win_base_dma(win));
}

/**

@@ -188,6 +317,26 @@ static struct msc_window *msc_next_window(struct msc_window *win)
	return list_next_entry(win, entry);
}

static size_t msc_win_total_sz(struct msc_window *win)
{
	struct scatterlist *sg;
	unsigned int blk;
	size_t size = 0;

	for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
		struct msc_block_desc *bdesc = sg_virt(sg);

		if (msc_block_wrapped(bdesc))
			return win->nr_blocks << PAGE_SHIFT;

		size += msc_total_sz(bdesc);
		if (msc_block_last_written(bdesc))
			break;
	}

	return size;
}

/**
 * msc_find_window() - find a window matching a given sg_table
 * @msc: MSC device

@@ -216,7 +365,7 @@ msc_find_window(struct msc *msc, struct sg_table *sgt, bool nonempty)
		found++;

		/* skip the empty ones */
		if (nonempty && msc_block_is_empty(msc_win_block(win, 0)))
		if (nonempty && msc_block_is_empty(msc_win_base(win)))
			continue;

		if (found)

@@ -250,44 +399,38 @@ static struct msc_window *msc_oldest_window(struct msc *msc)
}

/**
 * msc_win_oldest_block() - locate the oldest block in a given window
 * msc_win_oldest_sg() - locate the oldest block in a given window
 * @win: window to look at
 *
 * Return: index of the block with the oldest data
 */
static unsigned int msc_win_oldest_block(struct msc_window *win)
static struct scatterlist *msc_win_oldest_sg(struct msc_window *win)
{
	unsigned int blk;
	struct msc_block_desc *bdesc = msc_win_block(win, 0);
	struct scatterlist *sg;
	struct msc_block_desc *bdesc = msc_win_base(win);

	/* without wrapping, first block is the oldest */
	if (!msc_block_wrapped(bdesc))
		return 0;
		return msc_win_base_sg(win);

	/*
	 * with wrapping, last written block contains both the newest and the
	 * oldest data for this window.
	 */
	for (blk = 0; blk < win->nr_segs; blk++) {
		bdesc = msc_win_block(win, blk);
	for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
		struct msc_block_desc *bdesc = sg_virt(sg);

		if (msc_block_last_written(bdesc))
			return blk;
			return sg;
	}

	return 0;
	return msc_win_base_sg(win);
}

static struct msc_block_desc *msc_iter_bdesc(struct msc_iter *iter)
{
	return msc_win_block(iter->win, iter->block);
}

static void msc_iter_init(struct msc_iter *iter)
{
	memset(iter, 0, sizeof(*iter));
	iter->start_block = -1;
	iter->block = -1;
	return sg_virt(iter->block);
}

static struct msc_iter *msc_iter_install(struct msc *msc)

@@ -312,7 +455,6 @@ static struct msc_iter *msc_iter_install(struct msc *msc)
		goto unlock;
	}

	msc_iter_init(iter);
	iter->msc = msc;

	list_add_tail(&iter->entry, &msc->iter_list);

@@ -333,10 +475,10 @@ static void msc_iter_remove(struct msc_iter *iter, struct msc *msc)

static void msc_iter_block_start(struct msc_iter *iter)
{
	if (iter->start_block != -1)
	if (iter->start_block)
		return;

	iter->start_block = msc_win_oldest_block(iter->win);
	iter->start_block = msc_win_oldest_sg(iter->win);
	iter->block = iter->start_block;
	iter->wrap_count = 0;

@@ -360,7 +502,7 @@ static int msc_iter_win_start(struct msc_iter *iter, struct msc *msc)
		return -EINVAL;

	iter->win = iter->start_win;
	iter->start_block = -1;
	iter->start_block = NULL;

	msc_iter_block_start(iter);

@@ -370,7 +512,7 @@ static int msc_iter_win_start(struct msc_iter *iter, struct msc *msc)
static int msc_iter_win_advance(struct msc_iter *iter)
{
	iter->win = msc_next_window(iter->win);
	iter->start_block = -1;
	iter->start_block = NULL;

	if (iter->win == iter->start_win) {
		iter->eof++;

@@ -400,8 +542,10 @@ static int msc_iter_block_advance(struct msc_iter *iter)
		return msc_iter_win_advance(iter);

	/* block advance */
	if (++iter->block == iter->win->nr_segs)
		iter->block = 0;
	if (sg_is_last(iter->block))
		iter->block = msc_win_base_sg(iter->win);
	else
		iter->block = sg_next(iter->block);

	/* no wrapping, sanity check in case there is no last written block */
	if (!iter->wrap_count && iter->block == iter->start_block)

@@ -506,14 +650,15 @@ next_block:
static void msc_buffer_clear_hw_header(struct msc *msc)
{
	struct msc_window *win;
	struct scatterlist *sg;

	list_for_each_entry(win, &msc->win_list, entry) {
		unsigned int blk;
		size_t hw_sz = sizeof(struct msc_block_desc) -
			       offsetof(struct msc_block_desc, hw_tag);

		for (blk = 0; blk < win->nr_segs; blk++) {
			struct msc_block_desc *bdesc = msc_win_block(win, blk);
		for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
			struct msc_block_desc *bdesc = sg_virt(sg);

			memset(&bdesc->hw_tag, 0, hw_sz);
		}

@@ -527,6 +672,9 @@ static int intel_th_msu_init(struct msc *msc)
	if (!msc->do_irq)
		return 0;

	if (!msc->mbuf)
		return 0;

	mintctl = ioread32(msc->msu_base + REG_MSU_MINTCTL);
	mintctl |= msc->index ? M1BLIE : M0BLIE;
	iowrite32(mintctl, msc->msu_base + REG_MSU_MINTCTL);

@@ -554,6 +702,49 @@ static void intel_th_msu_deinit(struct msc *msc)
	iowrite32(mintctl, msc->msu_base + REG_MSU_MINTCTL);
}

static int msc_win_set_lockout(struct msc_window *win,
			       enum lockout_state expect,
			       enum lockout_state new)
{
	enum lockout_state old;
	unsigned long flags;
	int ret = 0;

	if (!win->msc->mbuf)
		return 0;

	spin_lock_irqsave(&win->lo_lock, flags);
	old = win->lockout;

	if (old != expect) {
		ret = -EINVAL;
		dev_warn_ratelimited(msc_dev(win->msc),
				     "expected lockout state %d, got %d\n",
				     expect, old);
		goto unlock;
	}

	win->lockout = new;

	if (old == expect && new == WIN_LOCKED)
		atomic_inc(&win->msc->user_count);
	else if (old == expect && old == WIN_LOCKED)
		atomic_dec(&win->msc->user_count);

unlock:
	spin_unlock_irqrestore(&win->lo_lock, flags);

	if (ret) {
		if (expect == WIN_READY && old == WIN_LOCKED)
			return -EBUSY;

		/* from intel_th_msc_window_unlock(), don't warn if not locked */
		if (expect == WIN_LOCKED && old == new)
			return 0;
	}

	return ret;
}
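
For orientation, the expect/new pairs passed to msc_win_set_lockout() later
in this patch line up with the state diagram at the top of the file:
msc_configure() moves the current window READY -> INUSE before handing it to
the hardware, msc_disable() moves it INUSE -> LOCKED for the buffer code,
and intel_th_msc_window_unlock() completes the LOCKED -> READY leg. A toy
caller, mirroring what msc_configure() does below:

/* Illustrative caller only; msc_configure() below is the real thing. */
static int example_claim_window(struct msc *msc)
{
	/* Hardware may only enter a window nobody is still processing. */
	if (msc_win_set_lockout(msc->cur_win, WIN_READY, WIN_INUSE))
		return -EBUSY;	/* still LOCKED: buffer code owns the data */

	return 0;
}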

/**
 * msc_configure() - set up MSC hardware
 * @msc: the MSC device to configure

@@ -571,8 +762,15 @@ static int msc_configure(struct msc *msc)
	if (msc->mode > MSC_MODE_MULTI)
		return -ENOTSUPP;

	if (msc->mode == MSC_MODE_MULTI)
	if (msc->mode == MSC_MODE_MULTI) {
		if (msc_win_set_lockout(msc->cur_win, WIN_READY, WIN_INUSE))
			return -EBUSY;

		msc_buffer_clear_hw_header(msc);
	}

	msc->orig_addr = ioread32(msc->reg_base + REG_MSU_MSC0BAR);
	msc->orig_sz = ioread32(msc->reg_base + REG_MSU_MSC0SIZE);

	reg = msc->base_addr >> PAGE_SHIFT;
	iowrite32(reg, msc->reg_base + REG_MSU_MSC0BAR);

@@ -594,10 +792,14 @@ static int msc_configure(struct msc *msc)

	iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL);

	intel_th_msu_init(msc);

	msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI;
	intel_th_trace_enable(msc->thdev);
	msc->enabled = 1;

	if (msc->mbuf && msc->mbuf->activate)
		msc->mbuf->activate(msc->mbuf_priv);

	return 0;
}

@@ -611,10 +813,17 @@ static int msc_configure(struct msc *msc)
 */
static void msc_disable(struct msc *msc)
{
	struct msc_window *win = msc->cur_win;
	u32 reg;

	lockdep_assert_held(&msc->buf_mutex);

	if (msc->mode == MSC_MODE_MULTI)
		msc_win_set_lockout(win, WIN_INUSE, WIN_LOCKED);

	if (msc->mbuf && msc->mbuf->deactivate)
		msc->mbuf->deactivate(msc->mbuf_priv);
	intel_th_msu_deinit(msc);
	intel_th_trace_disable(msc->thdev);

	if (msc->mode == MSC_MODE_SINGLE) {

@@ -630,16 +839,25 @@ static void msc_disable(struct msc *msc)
	reg = ioread32(msc->reg_base + REG_MSU_MSC0CTL);
	reg &= ~MSC_EN;
	iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL);

	if (msc->mbuf && msc->mbuf->ready)
		msc->mbuf->ready(msc->mbuf_priv, win->sgt,
				 msc_win_total_sz(win));

	msc->enabled = 0;

	iowrite32(0, msc->reg_base + REG_MSU_MSC0BAR);
	iowrite32(0, msc->reg_base + REG_MSU_MSC0SIZE);
	iowrite32(msc->orig_addr, msc->reg_base + REG_MSU_MSC0BAR);
	iowrite32(msc->orig_sz, msc->reg_base + REG_MSU_MSC0SIZE);

	dev_dbg(msc_dev(msc), "MSCnNWSA: %08x\n",
		ioread32(msc->reg_base + REG_MSU_MSC0NWSA));

	reg = ioread32(msc->reg_base + REG_MSU_MSC0STS);
	dev_dbg(msc_dev(msc), "MSCnSTS: %08x\n", reg);

	reg = ioread32(msc->reg_base + REG_MSU_MSUSTS);
	reg &= msc->index ? MSUSTS_MSC1BLAST : MSUSTS_MSC0BLAST;
	iowrite32(reg, msc->reg_base + REG_MSU_MSUSTS);
}

static int intel_th_msc_activate(struct intel_th_device *thdev)

@@ -791,10 +1009,9 @@ static int __msc_buffer_win_alloc(struct msc_window *win,
	return nr_segs;

err_nomem:
	for (i--; i >= 0; i--)
	for_each_sg(win->sgt->sgl, sg_ptr, i, ret)
		dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
				  msc_win_block(win, i),
				  msc_win_baddr(win, i));
				  sg_virt(sg_ptr), sg_dma_address(sg_ptr));

	sg_free_table(win->sgt);

@@ -804,20 +1021,26 @@ err_nomem:
#ifdef CONFIG_X86
static void msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs)
{
	struct scatterlist *sg_ptr;
	int i;

	for (i = 0; i < nr_segs; i++)
	for_each_sg(win->sgt->sgl, sg_ptr, nr_segs, i) {
		/* Set the page as uncached */
		set_memory_uc((unsigned long)msc_win_block(win, i), 1);
		set_memory_uc((unsigned long)sg_virt(sg_ptr),
			      PFN_DOWN(sg_ptr->length));
	}
}

static void msc_buffer_set_wb(struct msc_window *win)
{
	struct scatterlist *sg_ptr;
	int i;

	for (i = 0; i < win->nr_segs; i++)
	for_each_sg(win->sgt->sgl, sg_ptr, win->nr_segs, i) {
		/* Reset the page to write-back */
		set_memory_wb((unsigned long)msc_win_block(win, i), 1);
		set_memory_wb((unsigned long)sg_virt(sg_ptr),
			      PFN_DOWN(sg_ptr->length));
	}
}
#else /* !X86 */
static inline void

@@ -843,19 +1066,14 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
	if (!nr_blocks)
		return 0;

	/*
	 * This limitation hold as long as we need random access to the
	 * block. When that changes, this can go away.
	 */
	if (nr_blocks > SG_MAX_SINGLE_ALLOC)
		return -EINVAL;

	win = kzalloc(sizeof(*win), GFP_KERNEL);
	if (!win)
		return -ENOMEM;

	win->msc = msc;
	win->sgt = &win->_sgt;
	win->lockout = WIN_READY;
	spin_lock_init(&win->lo_lock);

	if (!list_empty(&msc->win_list)) {
		struct msc_window *prev = list_last_entry(&msc->win_list,

@@ -865,8 +1083,13 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
		win->pgoff = prev->pgoff + prev->nr_blocks;
	}

	ret = __msc_buffer_win_alloc(win, nr_blocks);
	if (ret < 0)
	if (msc->mbuf && msc->mbuf->alloc_window)
		ret = msc->mbuf->alloc_window(msc->mbuf_priv, &win->sgt,
					      nr_blocks << PAGE_SHIFT);
	else
		ret = __msc_buffer_win_alloc(win, nr_blocks);

	if (ret <= 0)
		goto err_nomem;

	msc_buffer_set_uc(win, ret);

@@ -875,8 +1098,8 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
	win->nr_blocks = nr_blocks;

	if (list_empty(&msc->win_list)) {
		msc->base = msc_win_block(win, 0);
		msc->base_addr = msc_win_baddr(win, 0);
		msc->base = msc_win_base(win);
		msc->base_addr = msc_win_base_dma(win);
		msc->cur_win = win;
	}

@@ -893,14 +1116,15 @@ err_nomem:

static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
{
	struct scatterlist *sg;
	int i;

	for (i = 0; i < win->nr_segs; i++) {
		struct page *page = sg_page(&win->sgt->sgl[i]);
	for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) {
		struct page *page = sg_page(sg);

		page->mapping = NULL;
		dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
				  msc_win_block(win, i), msc_win_baddr(win, i));
				  sg_virt(sg), sg_dma_address(sg));
	}
	sg_free_table(win->sgt);
}

@@ -925,7 +1149,10 @@ static void msc_buffer_win_free(struct msc *msc, struct msc_window *win)

	msc_buffer_set_wb(win);

	__msc_buffer_win_free(msc, win);
	if (msc->mbuf && msc->mbuf->free_window)
		msc->mbuf->free_window(msc->mbuf_priv, win->sgt);
	else
		__msc_buffer_win_free(msc, win);

	kfree(win);
}

@@ -943,6 +1170,7 @@ static void msc_buffer_relink(struct msc *msc)

	/* call with msc::mutex locked */
	list_for_each_entry(win, &msc->win_list, entry) {
		struct scatterlist *sg;
		unsigned int blk;
		u32 sw_tag = 0;

@@ -958,12 +1186,12 @@ static void msc_buffer_relink(struct msc *msc)
			next_win = list_next_entry(win, entry);
		}

		for (blk = 0; blk < win->nr_segs; blk++) {
			struct msc_block_desc *bdesc = msc_win_block(win, blk);
		for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
			struct msc_block_desc *bdesc = sg_virt(sg);

			memset(bdesc, 0, sizeof(*bdesc));

			bdesc->next_win = msc_win_bpfn(next_win, 0);
			bdesc->next_win = msc_win_base_pfn(next_win);

			/*
			 * Similarly to last window, last block should point

@@ -971,13 +1199,15 @@ static void msc_buffer_relink(struct msc *msc)
			 */
			if (blk == win->nr_segs - 1) {
				sw_tag |= MSC_SW_TAG_LASTBLK;
				bdesc->next_blk = msc_win_bpfn(win, 0);
				bdesc->next_blk = msc_win_base_pfn(win);
			} else {
				bdesc->next_blk = msc_win_bpfn(win, blk + 1);
				dma_addr_t addr = sg_dma_address(sg_next(sg));

				bdesc->next_blk = PFN_DOWN(addr);
			}

			bdesc->sw_tag = sw_tag;
			bdesc->block_sz = msc_win_actual_bsz(win, blk) / 64;
			bdesc->block_sz = sg->length / 64;
		}
	}

@@ -1136,6 +1366,7 @@ static int msc_buffer_free_unless_used(struct msc *msc)
static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff)
{
	struct msc_window *win;
	struct scatterlist *sg;
	unsigned int blk;

	if (msc->mode == MSC_MODE_SINGLE)

@@ -1150,9 +1381,9 @@ static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff)
found:
	pgoff -= win->pgoff;

	for (blk = 0; blk < win->nr_segs; blk++) {
		struct page *page = sg_page(&win->sgt->sgl[blk]);
		size_t pgsz = PFN_DOWN(msc_win_actual_bsz(win, blk));
	for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) {
		struct page *page = sg_page(sg);
		size_t pgsz = PFN_DOWN(sg->length);

		if (pgoff < pgsz)
			return page + pgoff;

@@ -1456,24 +1687,83 @@ static void msc_win_switch(struct msc *msc)
	else
		msc->cur_win = list_next_entry(msc->cur_win, entry);

	msc->base = msc_win_block(msc->cur_win, 0);
	msc->base_addr = msc_win_baddr(msc->cur_win, 0);
	msc->base = msc_win_base(msc->cur_win);
	msc->base_addr = msc_win_base_dma(msc->cur_win);

	intel_th_trace_switch(msc->thdev);
}

/**
 * intel_th_msc_window_unlock - put the window back in rotation
 * @dev: MSC device to which this relates
 * @sgt: buffer's sg_table for the window, does nothing if NULL
 */
void intel_th_msc_window_unlock(struct device *dev, struct sg_table *sgt)
{
	struct msc *msc = dev_get_drvdata(dev);
	struct msc_window *win;

	if (!sgt)
		return;

	win = msc_find_window(msc, sgt, false);
	if (!win)
		return;

	msc_win_set_lockout(win, WIN_LOCKED, WIN_READY);
}
EXPORT_SYMBOL_GPL(intel_th_msc_window_unlock);
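
A buffer implementation does not have to finish with the window inside its
ready() callback, which runs in interrupt context; it can stash the sg_table
and release the window later from process context. A sketch of that pattern,
in which everything named example_* is hypothetical and only
intel_th_msc_window_unlock() comes from this patch:

struct example_work {
	struct work_struct work;
	struct device *dev;
	struct sg_table *sgt;
};

static void example_consume(struct work_struct *work)
{
	struct example_work *ew = container_of(work, struct example_work, work);

	/* ... copy out or process the trace data in ew->sgt here ... */

	/* LOCKED -> READY: put the window back into rotation */
	intel_th_msc_window_unlock(ew->dev, ew->sgt);
	kfree(ew);
}

static int example_ready(void *data, struct sg_table *sgt, size_t bytes)
{
	struct msu_sink_private *priv = data;	/* as in msu-sink.c above */
	struct example_work *ew = kzalloc(sizeof(*ew), GFP_ATOMIC);

	if (!ew)
		return -ENOMEM;

	INIT_WORK(&ew->work, example_consume);
	ew->dev = priv->dev;
	ew->sgt = sgt;
	schedule_work(&ew->work);

	return 0;
}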
|
||||
|
||||
static void msc_work(struct work_struct *work)
|
||||
{
|
||||
struct msc *msc = container_of(work, struct msc, work);
|
||||
|
||||
intel_th_msc_deactivate(msc->thdev);
|
||||
}
|
||||
|
||||
static irqreturn_t intel_th_msc_interrupt(struct intel_th_device *thdev)
|
||||
{
|
||||
struct msc *msc = dev_get_drvdata(&thdev->dev);
|
||||
u32 msusts = ioread32(msc->msu_base + REG_MSU_MSUSTS);
|
||||
u32 mask = msc->index ? MSUSTS_MSC1BLAST : MSUSTS_MSC0BLAST;
|
||||
struct msc_window *win, *next_win;
|
||||
|
||||
if (!(msusts & mask)) {
|
||||
if (msc->enabled)
|
||||
return IRQ_HANDLED;
|
||||
if (!msc->do_irq || !msc->mbuf)
|
||||
return IRQ_NONE;
|
||||
|
||||
msusts &= mask;
|
||||
|
||||
if (!msusts)
|
||||
return msc->enabled ? IRQ_HANDLED : IRQ_NONE;
|
||||
|
||||
iowrite32(msusts, msc->msu_base + REG_MSU_MSUSTS);
|
||||
|
||||
if (!msc->enabled)
|
||||
return IRQ_NONE;
|
||||
|
||||
/* grab the window before we do the switch */
|
||||
win = msc->cur_win;
|
||||
if (!win)
|
||||
return IRQ_HANDLED;
|
||||
next_win = msc_next_window(win);
|
||||
if (!next_win)
|
||||
return IRQ_HANDLED;
|
||||
|
||||
/* next window: if READY, proceed, if LOCKED, stop the trace */
|
||||
if (msc_win_set_lockout(next_win, WIN_READY, WIN_INUSE)) {
|
||||
schedule_work(&msc->work);
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
/* current window: INUSE -> LOCKED */
|
||||
msc_win_set_lockout(win, WIN_INUSE, WIN_LOCKED);
|
||||
|
||||
msc_win_switch(msc);
|
||||
|
||||
if (msc->mbuf && msc->mbuf->ready)
|
||||
msc->mbuf->ready(msc->mbuf_priv, win->sgt,
|
||||
msc_win_total_sz(win));
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
|
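
For context, a minimal sketch of the sink side of the WIN_READY/WIN_INUSE/WIN_LOCKED hand-off above. Only intel_th_msc_window_unlock() and the ->ready() callback are taken from this diff; the example_sink type and its helpers are illustrative assumptions, not part of the patch:

	/* sketch: ->ready() runs after the interrupt moved the window INUSE -> LOCKED */
	static int example_sink_ready(void *priv, struct sg_table *sgt, size_t bytes)
	{
		struct example_sink *sink = priv;	/* hypothetical private data */

		/* hand the locked window off to a consumer; do not block here */
		example_sink_queue(sink, sgt, bytes);	/* hypothetical helper */

		return 0;
	}

	/* once drained, the sink puts the window back in rotation: LOCKED -> READY */
	static void example_sink_drained(struct example_sink *sink, struct sg_table *sgt)
	{
		intel_th_msc_window_unlock(sink->dev, sgt);
	}
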
@@ -1511,21 +1801,43 @@ wrap_store(struct device *dev, struct device_attribute *attr, const char *buf,

 static DEVICE_ATTR_RW(wrap);

+static void msc_buffer_unassign(struct msc *msc)
+{
+	lockdep_assert_held(&msc->buf_mutex);
+
+	if (!msc->mbuf)
+		return;
+
+	msc->mbuf->unassign(msc->mbuf_priv);
+	msu_buffer_put(msc->mbuf);
+	msc->mbuf_priv = NULL;
+	msc->mbuf = NULL;
+}
+
 static ssize_t
 mode_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
 	struct msc *msc = dev_get_drvdata(dev);
+	const char *mode = msc_mode[msc->mode];
+	ssize_t ret;

-	return scnprintf(buf, PAGE_SIZE, "%s\n", msc_mode[msc->mode]);
+	mutex_lock(&msc->buf_mutex);
+	if (msc->mbuf)
+		mode = msc->mbuf->name;
+	ret = scnprintf(buf, PAGE_SIZE, "%s\n", mode);
+	mutex_unlock(&msc->buf_mutex);
+
+	return ret;
 }

 static ssize_t
 mode_store(struct device *dev, struct device_attribute *attr, const char *buf,
 	   size_t size)
 {
+	const struct msu_buffer *mbuf = NULL;
 	struct msc *msc = dev_get_drvdata(dev);
 	size_t len = size;
-	char *cp;
+	char *cp, *mode;
 	int i, ret;

 	if (!capable(CAP_SYS_RAWIO))
@@ -1535,17 +1847,59 @@ mode_store(struct device *dev, struct device_attribute *attr, const char *buf,
 	if (cp)
 		len = cp - buf;

-	for (i = 0; i < ARRAY_SIZE(msc_mode); i++)
-		if (!strncmp(msc_mode[i], buf, len))
-			goto found;
+	mode = kstrndup(buf, len, GFP_KERNEL);
+	i = match_string(msc_mode, ARRAY_SIZE(msc_mode), mode);
+	if (i >= 0)
+		goto found;
+
+	/* Buffer sinks only work with a usable IRQ */
+	if (!msc->do_irq) {
+		kfree(mode);
+		return -EINVAL;
+	}
+
+	mbuf = msu_buffer_get(mode);
+	kfree(mode);
+	if (mbuf)
+		goto found;

 	return -EINVAL;

 found:
 	mutex_lock(&msc->buf_mutex);
+	ret = 0;
+
+	/* Same buffer: do nothing */
+	if (mbuf && mbuf == msc->mbuf) {
+		/* put the extra reference we just got */
+		msu_buffer_put(mbuf);
+		goto unlock;
+	}
+
 	ret = msc_buffer_unlocked_free_unless_used(msc);
-	if (!ret)
-		msc->mode = i;
+	if (ret)
+		goto unlock;
+
+	if (mbuf) {
+		void *mbuf_priv = mbuf->assign(dev, &i);
+
+		if (!mbuf_priv) {
+			ret = -ENOMEM;
+			goto unlock;
+		}
+
+		msc_buffer_unassign(msc);
+		msc->mbuf_priv = mbuf_priv;
+		msc->mbuf = mbuf;
+	} else {
+		msc_buffer_unassign(msc);
+	}
+
+	msc->mode = i;
+
+unlock:
+	if (ret && mbuf)
+		msu_buffer_put(mbuf);
 	mutex_unlock(&msc->buf_mutex);

 	return ret ? ret : size;
@@ -1667,7 +2021,12 @@ win_switch_store(struct device *dev, struct device_attribute *attr,
 		return -EINVAL;

 	mutex_lock(&msc->buf_mutex);
-	if (msc->mode != MSC_MODE_MULTI)
+	/*
+	 * Window switch can only happen in the "multi" mode.
+	 * If a external buffer is engaged, they have the full
+	 * control over window switching.
+	 */
+	if (msc->mode != MSC_MODE_MULTI || msc->mbuf)
 		ret = -ENOTSUPP;
 	else
 		msc_win_switch(msc);
@@ -1720,10 +2079,7 @@ static int intel_th_msc_probe(struct intel_th_device *thdev)
 	msc->reg_base = base + msc->index * 0x100;
+	msc->msu_base = base;

-	err = intel_th_msu_init(msc);
-	if (err)
-		return err;
-
+	INIT_WORK(&msc->work, msc_work);
 	err = intel_th_msc_init(msc);
 	if (err)
 		return err;
@@ -1739,7 +2095,6 @@ static void intel_th_msc_remove(struct intel_th_device *thdev)
 	int ret;

 	intel_th_msc_deactivate(thdev);
-	intel_th_msu_deinit(msc);

 	/*
 	 * Buffers should not be used at this point except if the

@@ -44,14 +44,6 @@ enum {
 #define M0BLIE	BIT(16)
 #define M1BLIE	BIT(24)

-/* MSC operating modes (MSC_MODE) */
-enum {
-	MSC_MODE_SINGLE = 0,
-	MSC_MODE_MULTI,
-	MSC_MODE_EXI,
-	MSC_MODE_DEBUG,
-};
-
 /* MSCnSTS bits */
 #define MSCSTS_WRAPSTAT	BIT(1)	/* Wrap occurred */
 #define MSCSTS_PLE	BIT(2)	/* Pipeline Empty */
@@ -93,6 +85,16 @@ static inline unsigned long msc_data_sz(struct msc_block_desc *bdesc)
 	return bdesc->valid_dw * 4 - MSC_BDESC;
 }

+static inline unsigned long msc_total_sz(struct msc_block_desc *bdesc)
+{
+	return bdesc->valid_dw * 4;
+}
+
+static inline unsigned long msc_block_sz(struct msc_block_desc *bdesc)
+{
+	return bdesc->block_sz * 64 - MSC_BDESC;
+}
+
 static inline bool msc_block_wrapped(struct msc_block_desc *bdesc)
 {
 	if (bdesc->hw_tag & (MSC_HW_TAG_BLOCKWRAP | MSC_HW_TAG_WINWRAP))
@@ -104,7 +106,7 @@ static inline bool msc_block_wrapped(struct msc_block_desc *bdesc)
 static inline bool msc_block_last_written(struct msc_block_desc *bdesc)
 {
 	if ((bdesc->hw_tag & MSC_HW_TAG_ENDBIT) ||
-	    (msc_data_sz(bdesc) != DATA_IN_PAGE))
+	    (msc_data_sz(bdesc) != msc_block_sz(bdesc)))
 		return true;

 	return false;

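A worked note on the size helpers above, assuming MSC_BDESC is the size of the block descriptor header at the start of each block:

	/* For a 4096-byte block:
	 *	bdesc->block_sz = 4096 / 64 = 64           (stored in 64-byte units)
	 *	msc_block_sz()  = 64 * 64 - MSC_BDESC      (payload capacity)
	 *	msc_data_sz()   = valid_dw * 4 - MSC_BDESC (payload actually written)
	 *
	 * msc_block_last_written() fires when capacity != written data, i.e.
	 * the block was only partially filled, or when the END hw_tag is set.
	 */
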
@@ -29,6 +29,7 @@ static struct dentry *icc_debugfs_dir;
  * @req_node: entry in list of requests for the particular @node
  * @node: the interconnect node to which this constraint applies
  * @dev: reference to the device that sets the constraints
+ * @tag: path tag (optional)
  * @avg_bw: an integer describing the average bandwidth in kBps
  * @peak_bw: an integer describing the peak bandwidth in kBps
  */
@@ -36,6 +37,7 @@ struct icc_req {
 	struct hlist_node req_node;
 	struct icc_node *node;
 	struct device *dev;
+	u32 tag;
 	u32 avg_bw;
 	u32 peak_bw;
 };
@@ -203,8 +205,11 @@ static int aggregate_requests(struct icc_node *node)
 	node->avg_bw = 0;
 	node->peak_bw = 0;

+	if (p->pre_aggregate)
+		p->pre_aggregate(node);
+
 	hlist_for_each_entry(r, &node->req_list, req_node)
-		p->aggregate(node, r->avg_bw, r->peak_bw,
+		p->aggregate(node, r->tag, r->avg_bw, r->peak_bw,
 			     &node->avg_bw, &node->peak_bw);

 	return 0;
@@ -385,6 +390,26 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
 }
 EXPORT_SYMBOL_GPL(of_icc_get);

+/**
+ * icc_set_tag() - set an optional tag on a path
+ * @path: the path we want to tag
+ * @tag: the tag value
+ *
+ * This function allows consumers to append a tag to the requests associated
+ * with a path, so that a different aggregation could be done based on this tag.
+ */
+void icc_set_tag(struct icc_path *path, u32 tag)
+{
+	int i;
+
+	if (!path)
+		return;
+
+	for (i = 0; i < path->num_nodes; i++)
+		path->reqs[i].tag = tag;
+}
+EXPORT_SYMBOL_GPL(icc_set_tag);
+
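A hedged consumer-side sketch of the new tag API. icc_set_tag(), of_icc_get() and icc_set_bw() are the real framework calls; the "memory" path name is an assumption, and the tag value is one of the sdm845 tags defined later in this series:

	struct icc_path *path;
	int ret;

	path = of_icc_get(dev, "memory");	/* assumed path name */
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* tag every request on the path before voting */
	icc_set_tag(path, QCOM_ICC_TAG_ACTIVE_ONLY);
	ret = icc_set_bw(path, avg_kbps, peak_kbps);
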
 /**
  * icc_set_bw() - set bandwidth constraints on an interconnect path
  * @path: reference to the path returned by icc_get()

@@ -5,6 +5,15 @@ config INTERCONNECT_QCOM
 	help
 	  Support for Qualcomm's Network-on-Chip interconnect hardware.

+config INTERCONNECT_QCOM_QCS404
+	tristate "Qualcomm QCS404 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	select INTERCONNECT_QCOM_SMD_RPM
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on qcs404-based
+	  platforms.
+
 config INTERCONNECT_QCOM_SDM845
 	tristate "Qualcomm SDM845 interconnect driver"
 	depends on INTERCONNECT_QCOM
@@ -12,3 +21,6 @@ config INTERCONNECT_QCOM_SDM845
 	help
 	  This is a driver for the Qualcomm Network-on-Chip on sdm845-based
 	  platforms.
+
+config INTERCONNECT_QCOM_SMD_RPM
+	tristate

@@ -1,5 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0

+qnoc-qcs404-objs := qcs404.o
 qnoc-sdm845-objs := sdm845.o
+icc-smd-rpm-objs := smd-rpm.o

+obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
+obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += icc-smd-rpm.o

@@ -0,0 +1,539 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Linaro Ltd
+ */
+
+#include <dt-bindings/interconnect/qcom,qcs404.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "smd-rpm.h"
+
+#define RPM_BUS_MASTER_REQ	0x73616d62
+#define RPM_BUS_SLAVE_REQ	0x766c7362
+
+enum {
+	QCS404_MASTER_AMPSS_M0 = 1,
+	QCS404_MASTER_GRAPHICS_3D,
+	QCS404_MASTER_MDP_PORT0,
+	QCS404_SNOC_BIMC_1_MAS,
+	QCS404_MASTER_TCU_0,
+	QCS404_MASTER_SPDM,
+	QCS404_MASTER_BLSP_1,
+	QCS404_MASTER_BLSP_2,
+	QCS404_MASTER_XM_USB_HS1,
+	QCS404_MASTER_CRYPTO_CORE0,
+	QCS404_MASTER_SDCC_1,
+	QCS404_MASTER_SDCC_2,
+	QCS404_SNOC_PNOC_MAS,
+	QCS404_MASTER_QPIC,
+	QCS404_MASTER_QDSS_BAM,
+	QCS404_BIMC_SNOC_MAS,
+	QCS404_PNOC_SNOC_MAS,
+	QCS404_MASTER_QDSS_ETR,
+	QCS404_MASTER_EMAC,
+	QCS404_MASTER_PCIE,
+	QCS404_MASTER_USB3,
+	QCS404_PNOC_INT_0,
+	QCS404_PNOC_INT_2,
+	QCS404_PNOC_INT_3,
+	QCS404_PNOC_SLV_0,
+	QCS404_PNOC_SLV_1,
+	QCS404_PNOC_SLV_2,
+	QCS404_PNOC_SLV_3,
+	QCS404_PNOC_SLV_4,
+	QCS404_PNOC_SLV_6,
+	QCS404_PNOC_SLV_7,
+	QCS404_PNOC_SLV_8,
+	QCS404_PNOC_SLV_9,
+	QCS404_PNOC_SLV_10,
+	QCS404_PNOC_SLV_11,
+	QCS404_SNOC_QDSS_INT,
+	QCS404_SNOC_INT_0,
+	QCS404_SNOC_INT_1,
+	QCS404_SNOC_INT_2,
+	QCS404_SLAVE_EBI_CH0,
+	QCS404_BIMC_SNOC_SLV,
+	QCS404_SLAVE_SPDM_WRAPPER,
+	QCS404_SLAVE_PDM,
+	QCS404_SLAVE_PRNG,
+	QCS404_SLAVE_TCSR,
+	QCS404_SLAVE_SNOC_CFG,
+	QCS404_SLAVE_MESSAGE_RAM,
+	QCS404_SLAVE_DISPLAY_CFG,
+	QCS404_SLAVE_GRAPHICS_3D_CFG,
+	QCS404_SLAVE_BLSP_1,
+	QCS404_SLAVE_TLMM_NORTH,
+	QCS404_SLAVE_PCIE_1,
+	QCS404_SLAVE_EMAC_CFG,
+	QCS404_SLAVE_BLSP_2,
+	QCS404_SLAVE_TLMM_EAST,
+	QCS404_SLAVE_TCU,
+	QCS404_SLAVE_PMIC_ARB,
+	QCS404_SLAVE_SDCC_1,
+	QCS404_SLAVE_SDCC_2,
+	QCS404_SLAVE_TLMM_SOUTH,
+	QCS404_SLAVE_USB_HS,
+	QCS404_SLAVE_USB3,
+	QCS404_SLAVE_CRYPTO_0_CFG,
+	QCS404_PNOC_SNOC_SLV,
+	QCS404_SLAVE_APPSS,
+	QCS404_SLAVE_WCSS,
+	QCS404_SNOC_BIMC_1_SLV,
+	QCS404_SLAVE_OCIMEM,
+	QCS404_SNOC_PNOC_SLV,
+	QCS404_SLAVE_QDSS_STM,
+	QCS404_SLAVE_CATS_128,
+	QCS404_SLAVE_OCMEM_64,
+	QCS404_SLAVE_LPASS,
+};
+
+#define to_qcom_provider(_provider) \
+	container_of(_provider, struct qcom_icc_provider, provider)
+
+static const struct clk_bulk_data bus_clocks[] = {
+	{ .id = "bus" },
+	{ .id = "bus_a" },
+};
+
+/**
+ * struct qcom_icc_provider - Qualcomm specific interconnect provider
+ * @provider: generic interconnect provider
+ * @bus_clks: the clk_bulk_data table of bus clocks
+ * @num_clks: the total number of clk_bulk_data entries
+ */
+struct qcom_icc_provider {
+	struct icc_provider provider;
+	struct clk_bulk_data *bus_clks;
+	int num_clks;
+};
+
+#define QCS404_MAX_LINKS	12
+
+/**
+ * struct qcom_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @id: a unique node identifier
+ * @links: an array of nodes where we can go next while traversing
+ * @num_links: the total number of @links
+ * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @mas_rpm_id: RPM id for devices that are bus masters
+ * @slv_rpm_id: RPM id for devices that are bus slaves
+ * @rate: current bus clock rate in Hz
+ */
+struct qcom_icc_node {
+	unsigned char *name;
+	u16 id;
+	u16 links[QCS404_MAX_LINKS];
+	u16 num_links;
+	u16 buswidth;
+	int mas_rpm_id;
+	int slv_rpm_id;
+	u64 rate;
+};
+
+struct qcom_icc_desc {
+	struct qcom_icc_node **nodes;
+	size_t num_nodes;
+};
+
+#define DEFINE_QNODE(_name, _id, _buswidth, _mas_rpm_id, _slv_rpm_id,	\
+		     ...)						\
+	static struct qcom_icc_node _name = {				\
+		.name = #_name,						\
+		.id = _id,						\
+		.buswidth = _buswidth,					\
+		.mas_rpm_id = _mas_rpm_id,				\
+		.slv_rpm_id = _slv_rpm_id,				\
+		.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })),	\
+		.links = { __VA_ARGS__ },				\
+	}
+
+DEFINE_QNODE(mas_apps_proc, QCS404_MASTER_AMPSS_M0, 8, 0, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
+DEFINE_QNODE(mas_oxili, QCS404_MASTER_GRAPHICS_3D, 8, 6, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
+DEFINE_QNODE(mas_mdp, QCS404_MASTER_MDP_PORT0, 8, 8, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
+DEFINE_QNODE(mas_snoc_bimc_1, QCS404_SNOC_BIMC_1_MAS, 8, 76, -1, QCS404_SLAVE_EBI_CH0);
+DEFINE_QNODE(mas_tcu_0, QCS404_MASTER_TCU_0, 8, -1, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV);
+DEFINE_QNODE(mas_spdm, QCS404_MASTER_SPDM, 4, -1, -1, QCS404_PNOC_INT_3);
+DEFINE_QNODE(mas_blsp_1, QCS404_MASTER_BLSP_1, 4, 41, -1, QCS404_PNOC_INT_3);
+DEFINE_QNODE(mas_blsp_2, QCS404_MASTER_BLSP_2, 4, 39, -1, QCS404_PNOC_INT_3);
+DEFINE_QNODE(mas_xi_usb_hs1, QCS404_MASTER_XM_USB_HS1, 8, 138, -1, QCS404_PNOC_INT_0);
+DEFINE_QNODE(mas_crypto, QCS404_MASTER_CRYPTO_CORE0, 8, 23, -1, QCS404_PNOC_SNOC_SLV, QCS404_PNOC_INT_2);
+DEFINE_QNODE(mas_sdcc_1, QCS404_MASTER_SDCC_1, 8, 33, -1, QCS404_PNOC_INT_0);
+DEFINE_QNODE(mas_sdcc_2, QCS404_MASTER_SDCC_2, 8, 35, -1, QCS404_PNOC_INT_0);
+DEFINE_QNODE(mas_snoc_pcnoc, QCS404_SNOC_PNOC_MAS, 8, 77, -1, QCS404_PNOC_INT_2);
+DEFINE_QNODE(mas_qpic, QCS404_MASTER_QPIC, 4, -1, -1, QCS404_PNOC_INT_0);
+DEFINE_QNODE(mas_qdss_bam, QCS404_MASTER_QDSS_BAM, 4, -1, -1, QCS404_SNOC_QDSS_INT);
+DEFINE_QNODE(mas_bimc_snoc, QCS404_BIMC_SNOC_MAS, 8, 21, -1, QCS404_SLAVE_OCMEM_64, QCS404_SLAVE_CATS_128, QCS404_SNOC_INT_0, QCS404_SNOC_INT_1);
+DEFINE_QNODE(mas_pcnoc_snoc, QCS404_PNOC_SNOC_MAS, 8, 29, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_2, QCS404_SNOC_INT_0);
+DEFINE_QNODE(mas_qdss_etr, QCS404_MASTER_QDSS_ETR, 8, -1, -1, QCS404_SNOC_QDSS_INT);
+DEFINE_QNODE(mas_emac, QCS404_MASTER_EMAC, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1);
+DEFINE_QNODE(mas_pcie, QCS404_MASTER_PCIE, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1);
+DEFINE_QNODE(mas_usb3, QCS404_MASTER_USB3, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1);
+DEFINE_QNODE(pcnoc_int_0, QCS404_PNOC_INT_0, 8, 85, 114, QCS404_PNOC_SNOC_SLV, QCS404_PNOC_INT_2);
+DEFINE_QNODE(pcnoc_int_2, QCS404_PNOC_INT_2, 8, 124, 184, QCS404_PNOC_SLV_10, QCS404_SLAVE_TCU, QCS404_PNOC_SLV_11, QCS404_PNOC_SLV_2, QCS404_PNOC_SLV_3, QCS404_PNOC_SLV_0, QCS404_PNOC_SLV_1, QCS404_PNOC_SLV_6, QCS404_PNOC_SLV_7, QCS404_PNOC_SLV_4, QCS404_PNOC_SLV_8, QCS404_PNOC_SLV_9);
+DEFINE_QNODE(pcnoc_int_3, QCS404_PNOC_INT_3, 8, 125, 185, QCS404_PNOC_SNOC_SLV);
+DEFINE_QNODE(pcnoc_s_0, QCS404_PNOC_SLV_0, 4, 89, 118, QCS404_SLAVE_PRNG, QCS404_SLAVE_SPDM_WRAPPER, QCS404_SLAVE_PDM);
+DEFINE_QNODE(pcnoc_s_1, QCS404_PNOC_SLV_1, 4, 90, 119, QCS404_SLAVE_TCSR);
+DEFINE_QNODE(pcnoc_s_2, QCS404_PNOC_SLV_2, 4, -1, -1, QCS404_SLAVE_GRAPHICS_3D_CFG);
+DEFINE_QNODE(pcnoc_s_3, QCS404_PNOC_SLV_3, 4, 92, 121, QCS404_SLAVE_MESSAGE_RAM);
+DEFINE_QNODE(pcnoc_s_4, QCS404_PNOC_SLV_4, 4, 93, 122, QCS404_SLAVE_SNOC_CFG);
+DEFINE_QNODE(pcnoc_s_6, QCS404_PNOC_SLV_6, 4, 94, 123, QCS404_SLAVE_BLSP_1, QCS404_SLAVE_TLMM_NORTH, QCS404_SLAVE_EMAC_CFG);
+DEFINE_QNODE(pcnoc_s_7, QCS404_PNOC_SLV_7, 4, 95, 124, QCS404_SLAVE_TLMM_SOUTH, QCS404_SLAVE_DISPLAY_CFG, QCS404_SLAVE_SDCC_1, QCS404_SLAVE_PCIE_1, QCS404_SLAVE_SDCC_2);
+DEFINE_QNODE(pcnoc_s_8, QCS404_PNOC_SLV_8, 4, 96, 125, QCS404_SLAVE_CRYPTO_0_CFG);
+DEFINE_QNODE(pcnoc_s_9, QCS404_PNOC_SLV_9, 4, 97, 126, QCS404_SLAVE_BLSP_2, QCS404_SLAVE_TLMM_EAST, QCS404_SLAVE_PMIC_ARB);
+DEFINE_QNODE(pcnoc_s_10, QCS404_PNOC_SLV_10, 4, 157, -1, QCS404_SLAVE_USB_HS);
+DEFINE_QNODE(pcnoc_s_11, QCS404_PNOC_SLV_11, 4, 158, 246, QCS404_SLAVE_USB3);
+DEFINE_QNODE(qdss_int, QCS404_SNOC_QDSS_INT, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1);
+DEFINE_QNODE(snoc_int_0, QCS404_SNOC_INT_0, 8, 99, 130, QCS404_SLAVE_LPASS, QCS404_SLAVE_APPSS, QCS404_SLAVE_WCSS);
+DEFINE_QNODE(snoc_int_1, QCS404_SNOC_INT_1, 8, 100, 131, QCS404_SNOC_PNOC_SLV, QCS404_SNOC_INT_2);
+DEFINE_QNODE(snoc_int_2, QCS404_SNOC_INT_2, 8, 134, 197, QCS404_SLAVE_QDSS_STM, QCS404_SLAVE_OCIMEM);
+DEFINE_QNODE(slv_ebi, QCS404_SLAVE_EBI_CH0, 8, -1, 0, 0);
+DEFINE_QNODE(slv_bimc_snoc, QCS404_BIMC_SNOC_SLV, 8, -1, 2, QCS404_BIMC_SNOC_MAS);
+DEFINE_QNODE(slv_spdm, QCS404_SLAVE_SPDM_WRAPPER, 4, -1, -1, 0);
+DEFINE_QNODE(slv_pdm, QCS404_SLAVE_PDM, 4, -1, 41, 0);
+DEFINE_QNODE(slv_prng, QCS404_SLAVE_PRNG, 4, -1, 44, 0);
+DEFINE_QNODE(slv_tcsr, QCS404_SLAVE_TCSR, 4, -1, 50, 0);
+DEFINE_QNODE(slv_snoc_cfg, QCS404_SLAVE_SNOC_CFG, 4, -1, 70, 0);
+DEFINE_QNODE(slv_message_ram, QCS404_SLAVE_MESSAGE_RAM, 4, -1, 55, 0);
+DEFINE_QNODE(slv_disp_ss_cfg, QCS404_SLAVE_DISPLAY_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_gpu_cfg, QCS404_SLAVE_GRAPHICS_3D_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_blsp_1, QCS404_SLAVE_BLSP_1, 4, -1, 39, 0);
+DEFINE_QNODE(slv_tlmm_north, QCS404_SLAVE_TLMM_NORTH, 4, -1, 214, 0);
+DEFINE_QNODE(slv_pcie, QCS404_SLAVE_PCIE_1, 4, -1, -1, 0);
+DEFINE_QNODE(slv_ethernet, QCS404_SLAVE_EMAC_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_blsp_2, QCS404_SLAVE_BLSP_2, 4, -1, 37, 0);
+DEFINE_QNODE(slv_tlmm_east, QCS404_SLAVE_TLMM_EAST, 4, -1, 213, 0);
+DEFINE_QNODE(slv_tcu, QCS404_SLAVE_TCU, 8, -1, -1, 0);
+DEFINE_QNODE(slv_pmic_arb, QCS404_SLAVE_PMIC_ARB, 4, -1, 59, 0);
+DEFINE_QNODE(slv_sdcc_1, QCS404_SLAVE_SDCC_1, 4, -1, 31, 0);
+DEFINE_QNODE(slv_sdcc_2, QCS404_SLAVE_SDCC_2, 4, -1, 33, 0);
+DEFINE_QNODE(slv_tlmm_south, QCS404_SLAVE_TLMM_SOUTH, 4, -1, -1, 0);
+DEFINE_QNODE(slv_usb_hs, QCS404_SLAVE_USB_HS, 4, -1, 40, 0);
+DEFINE_QNODE(slv_usb3, QCS404_SLAVE_USB3, 4, -1, 22, 0);
+DEFINE_QNODE(slv_crypto_0_cfg, QCS404_SLAVE_CRYPTO_0_CFG, 4, -1, 52, 0);
+DEFINE_QNODE(slv_pcnoc_snoc, QCS404_PNOC_SNOC_SLV, 8, -1, 45, QCS404_PNOC_SNOC_MAS);
+DEFINE_QNODE(slv_kpss_ahb, QCS404_SLAVE_APPSS, 4, -1, -1, 0);
+DEFINE_QNODE(slv_wcss, QCS404_SLAVE_WCSS, 4, -1, 23, 0);
+DEFINE_QNODE(slv_snoc_bimc_1, QCS404_SNOC_BIMC_1_SLV, 8, -1, 104, QCS404_SNOC_BIMC_1_MAS);
+DEFINE_QNODE(slv_imem, QCS404_SLAVE_OCIMEM, 8, -1, 26, 0);
+DEFINE_QNODE(slv_snoc_pcnoc, QCS404_SNOC_PNOC_SLV, 8, -1, 28, QCS404_SNOC_PNOC_MAS);
+DEFINE_QNODE(slv_qdss_stm, QCS404_SLAVE_QDSS_STM, 4, -1, 30, 0);
+DEFINE_QNODE(slv_cats_0, QCS404_SLAVE_CATS_128, 16, -1, -1, 0);
+DEFINE_QNODE(slv_cats_1, QCS404_SLAVE_OCMEM_64, 8, -1, -1, 0);
+DEFINE_QNODE(slv_lpass, QCS404_SLAVE_LPASS, 4, -1, -1, 0);
+
+static struct qcom_icc_node *qcs404_bimc_nodes[] = {
+	[MASTER_AMPSS_M0] = &mas_apps_proc,
+	[MASTER_OXILI] = &mas_oxili,
+	[MASTER_MDP_PORT0] = &mas_mdp,
+	[MASTER_SNOC_BIMC_1] = &mas_snoc_bimc_1,
+	[MASTER_TCU_0] = &mas_tcu_0,
+	[SLAVE_EBI_CH0] = &slv_ebi,
+	[SLAVE_BIMC_SNOC] = &slv_bimc_snoc,
+};
+
+static struct qcom_icc_desc qcs404_bimc = {
+	.nodes = qcs404_bimc_nodes,
+	.num_nodes = ARRAY_SIZE(qcs404_bimc_nodes),
+};
+
+static struct qcom_icc_node *qcs404_pcnoc_nodes[] = {
+	[MASTER_SPDM] = &mas_spdm,
+	[MASTER_BLSP_1] = &mas_blsp_1,
+	[MASTER_BLSP_2] = &mas_blsp_2,
+	[MASTER_XI_USB_HS1] = &mas_xi_usb_hs1,
+	[MASTER_CRYPT0] = &mas_crypto,
+	[MASTER_SDCC_1] = &mas_sdcc_1,
+	[MASTER_SDCC_2] = &mas_sdcc_2,
+	[MASTER_SNOC_PCNOC] = &mas_snoc_pcnoc,
+	[MASTER_QPIC] = &mas_qpic,
+	[PCNOC_INT_0] = &pcnoc_int_0,
+	[PCNOC_INT_2] = &pcnoc_int_2,
+	[PCNOC_INT_3] = &pcnoc_int_3,
+	[PCNOC_S_0] = &pcnoc_s_0,
+	[PCNOC_S_1] = &pcnoc_s_1,
+	[PCNOC_S_2] = &pcnoc_s_2,
+	[PCNOC_S_3] = &pcnoc_s_3,
+	[PCNOC_S_4] = &pcnoc_s_4,
+	[PCNOC_S_6] = &pcnoc_s_6,
+	[PCNOC_S_7] = &pcnoc_s_7,
+	[PCNOC_S_8] = &pcnoc_s_8,
+	[PCNOC_S_9] = &pcnoc_s_9,
+	[PCNOC_S_10] = &pcnoc_s_10,
+	[PCNOC_S_11] = &pcnoc_s_11,
+	[SLAVE_SPDM] = &slv_spdm,
+	[SLAVE_PDM] = &slv_pdm,
+	[SLAVE_PRNG] = &slv_prng,
+	[SLAVE_TCSR] = &slv_tcsr,
+	[SLAVE_SNOC_CFG] = &slv_snoc_cfg,
+	[SLAVE_MESSAGE_RAM] = &slv_message_ram,
+	[SLAVE_DISP_SS_CFG] = &slv_disp_ss_cfg,
+	[SLAVE_GPU_CFG] = &slv_gpu_cfg,
+	[SLAVE_BLSP_1] = &slv_blsp_1,
+	[SLAVE_BLSP_2] = &slv_blsp_2,
+	[SLAVE_TLMM_NORTH] = &slv_tlmm_north,
+	[SLAVE_PCIE] = &slv_pcie,
+	[SLAVE_ETHERNET] = &slv_ethernet,
+	[SLAVE_TLMM_EAST] = &slv_tlmm_east,
+	[SLAVE_TCU] = &slv_tcu,
+	[SLAVE_PMIC_ARB] = &slv_pmic_arb,
+	[SLAVE_SDCC_1] = &slv_sdcc_1,
+	[SLAVE_SDCC_2] = &slv_sdcc_2,
+	[SLAVE_TLMM_SOUTH] = &slv_tlmm_south,
+	[SLAVE_USB_HS] = &slv_usb_hs,
+	[SLAVE_USB3] = &slv_usb3,
+	[SLAVE_CRYPTO_0_CFG] = &slv_crypto_0_cfg,
+	[SLAVE_PCNOC_SNOC] = &slv_pcnoc_snoc,
+};
+
+static struct qcom_icc_desc qcs404_pcnoc = {
+	.nodes = qcs404_pcnoc_nodes,
+	.num_nodes = ARRAY_SIZE(qcs404_pcnoc_nodes),
+};
+
+static struct qcom_icc_node *qcs404_snoc_nodes[] = {
+	[MASTER_QDSS_BAM] = &mas_qdss_bam,
+	[MASTER_BIMC_SNOC] = &mas_bimc_snoc,
+	[MASTER_PCNOC_SNOC] = &mas_pcnoc_snoc,
+	[MASTER_QDSS_ETR] = &mas_qdss_etr,
+	[MASTER_EMAC] = &mas_emac,
+	[MASTER_PCIE] = &mas_pcie,
+	[MASTER_USB3] = &mas_usb3,
+	[QDSS_INT] = &qdss_int,
+	[SNOC_INT_0] = &snoc_int_0,
+	[SNOC_INT_1] = &snoc_int_1,
+	[SNOC_INT_2] = &snoc_int_2,
+	[SLAVE_KPSS_AHB] = &slv_kpss_ahb,
+	[SLAVE_WCSS] = &slv_wcss,
+	[SLAVE_SNOC_BIMC_1] = &slv_snoc_bimc_1,
+	[SLAVE_IMEM] = &slv_imem,
+	[SLAVE_SNOC_PCNOC] = &slv_snoc_pcnoc,
+	[SLAVE_QDSS_STM] = &slv_qdss_stm,
+	[SLAVE_CATS_0] = &slv_cats_0,
+	[SLAVE_CATS_1] = &slv_cats_1,
+	[SLAVE_LPASS] = &slv_lpass,
+};
+
+static struct qcom_icc_desc qcs404_snoc = {
+	.nodes = qcs404_snoc_nodes,
+	.num_nodes = ARRAY_SIZE(qcs404_snoc_nodes),
+};
+
+static int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+			      u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+
+static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct qcom_icc_provider *qp;
+	struct qcom_icc_node *qn;
+	struct icc_provider *provider;
+	struct icc_node *n;
+	u64 sum_bw;
+	u64 max_peak_bw;
+	u64 rate;
+	u32 agg_avg = 0;
+	u32 agg_peak = 0;
+	int ret, i;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_qcom_provider(provider);
+
+	list_for_each_entry(n, &provider->nodes, node_list)
+		qcom_icc_aggregate(n, 0, n->avg_bw, n->peak_bw,
+				   &agg_avg, &agg_peak);
+
+	sum_bw = icc_units_to_bps(agg_avg);
+	max_peak_bw = icc_units_to_bps(agg_peak);
+
+	/* send bandwidth request message to the RPM processor */
+	if (qn->mas_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_MASTER_REQ,
+					    qn->mas_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send mas %d error %d\n",
+			       qn->mas_rpm_id, ret);
+			return ret;
+		}
+	}
+
+	if (qn->slv_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_SLAVE_REQ,
+					    qn->slv_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send slv error %d\n",
+			       ret);
+			return ret;
+		}
+	}
+
+	rate = max(sum_bw, max_peak_bw);
+
+	do_div(rate, qn->buswidth);
+
+	if (qn->rate == rate)
+		return 0;
+
+	for (i = 0; i < qp->num_clks; i++) {
+		ret = clk_set_rate(qp->bus_clks[i].clk, rate);
+		if (ret) {
+			pr_err("%s clk_set_rate error: %d\n",
+			       qp->bus_clks[i].id, ret);
+			return ret;
+		}
+	}
+
+	qn->rate = rate;
+
+	return 0;
+}
+
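A worked example of the rate computation at the end of qcom_icc_set() above, assuming icc_units_to_bps() scales the kBps votes by 1000 (the numbers are illustrative):

	/* a 1 GB/s aggregate vote on an 8-byte-wide port */
	u64 rate = icc_units_to_bps(1000000);	/* 1000000 kBps -> 1e9 B/s, assumed */

	do_div(rate, 8);			/* qn->buswidth == 8 */
	/* rate == 125000000: both "bus" and "bus_a" clocks get set to 125 MHz */
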
+static int qnoc_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	const struct qcom_icc_desc *desc;
+	struct icc_onecell_data *data;
+	struct icc_provider *provider;
+	struct qcom_icc_node **qnodes;
+	struct qcom_icc_provider *qp;
+	struct icc_node *node;
+	size_t num_nodes, i;
+	int ret;
+
+	/* wait for the RPM proxy */
+	if (!qcom_icc_rpm_smd_available())
+		return -EPROBE_DEFER;
+
+	desc = of_device_get_match_data(dev);
+	if (!desc)
+		return -EINVAL;
+
+	qnodes = desc->nodes;
+	num_nodes = desc->num_nodes;
+
+	qp = devm_kzalloc(dev, sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return -ENOMEM;
+
+	data = devm_kcalloc(dev, num_nodes, sizeof(*node), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	qp->bus_clks = devm_kmemdup(dev, bus_clocks, sizeof(bus_clocks),
+				    GFP_KERNEL);
+	if (!qp->bus_clks)
+		return -ENOMEM;
+
+	qp->num_clks = ARRAY_SIZE(bus_clocks);
+	ret = devm_clk_bulk_get(dev, qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	provider = &qp->provider;
+	INIT_LIST_HEAD(&provider->nodes);
+	provider->dev = dev;
+	provider->set = qcom_icc_set;
+	provider->aggregate = qcom_icc_aggregate;
+	provider->xlate = of_icc_xlate_onecell;
+	provider->data = data;
+
+	ret = icc_provider_add(provider);
+	if (ret) {
+		dev_err(dev, "error adding interconnect provider: %d\n", ret);
+		clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+		return ret;
+	}
+
+	for (i = 0; i < num_nodes; i++) {
+		size_t j;
+
+		node = icc_node_create(qnodes[i]->id);
+		if (IS_ERR(node)) {
+			ret = PTR_ERR(node);
+			goto err;
+		}
+
+		node->name = qnodes[i]->name;
+		node->data = qnodes[i];
+		icc_node_add(node, provider);
+
+		dev_dbg(dev, "registered node %s\n", node->name);
+
+		/* populate links */
+		for (j = 0; j < qnodes[i]->num_links; j++)
+			icc_link_create(node, qnodes[i]->links[j]);
+
+		data->nodes[i] = node;
+	}
+	data->num_nodes = num_nodes;
+
+	platform_set_drvdata(pdev, qp);
+
+	return 0;
+err:
+	list_for_each_entry(node, &provider->nodes, node_list) {
+		icc_node_del(node);
+		icc_node_destroy(node->id);
+	}
+	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+	icc_provider_del(provider);
+
+	return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+	struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
+	struct icc_provider *provider = &qp->provider;
+	struct icc_node *n;
+
+	list_for_each_entry(n, &provider->nodes, node_list) {
+		icc_node_del(n);
+		icc_node_destroy(n->id);
+	}
+	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
+
+	return icc_provider_del(provider);
+}
+
+static const struct of_device_id qcs404_noc_of_match[] = {
+	{ .compatible = "qcom,qcs404-bimc", .data = &qcs404_bimc },
+	{ .compatible = "qcom,qcs404-pcnoc", .data = &qcs404_pcnoc },
+	{ .compatible = "qcom,qcs404-snoc", .data = &qcs404_snoc },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, qcs404_noc_of_match);
+
+static struct platform_driver qcs404_noc_driver = {
+	.probe = qnoc_probe,
+	.remove = qnoc_remove,
+	.driver = {
+		.name = "qnoc-qcs404",
+		.of_match_table = qcs404_noc_of_match,
+	},
+};
+module_platform_driver(qcs404_noc_driver);
+MODULE_DESCRIPTION("Qualcomm QCS404 NoC driver");
+MODULE_LICENSE("GPL v2");

@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
  *
  */

@@ -20,23 +20,6 @@
 #include <soc/qcom/rpmh.h>
 #include <soc/qcom/tcs.h>

-#define BCM_TCS_CMD_COMMIT_SHFT	30
-#define BCM_TCS_CMD_COMMIT_MASK	0x40000000
-#define BCM_TCS_CMD_VALID_SHFT	29
-#define BCM_TCS_CMD_VALID_MASK	0x20000000
-#define BCM_TCS_CMD_VOTE_X_SHFT	14
-#define BCM_TCS_CMD_VOTE_MASK	0x3fff
-#define BCM_TCS_CMD_VOTE_Y_SHFT	0
-#define BCM_TCS_CMD_VOTE_Y_MASK	0xfffc000
-
-#define BCM_TCS_CMD(commit, valid, vote_x, vote_y)		\
-	(((commit) << BCM_TCS_CMD_COMMIT_SHFT) |		\
-	((valid) << BCM_TCS_CMD_VALID_SHFT) |			\
-	((cpu_to_le32(vote_x) &					\
-	BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) |	\
-	((cpu_to_le32(vote_y) &					\
-	BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT))
-
 #define to_qcom_provider(_provider) \
 	container_of(_provider, struct qcom_icc_provider, provider)

@@ -66,6 +49,22 @@ struct bcm_db {
 #define SDM845_MAX_BCM_PER_NODE	2
 #define SDM845_MAX_VCD		10

+/*
+ * The AMC bucket denotes constraints that are applied to hardware when
+ * icc_set_bw() completes, whereas the WAKE and SLEEP constraints are applied
+ * when the execution environment transitions between active and low power mode.
+ */
+#define QCOM_ICC_BUCKET_AMC		0
+#define QCOM_ICC_BUCKET_WAKE		1
+#define QCOM_ICC_BUCKET_SLEEP		2
+#define QCOM_ICC_NUM_BUCKETS		3
+#define QCOM_ICC_TAG_AMC		BIT(QCOM_ICC_BUCKET_AMC)
+#define QCOM_ICC_TAG_WAKE		BIT(QCOM_ICC_BUCKET_WAKE)
+#define QCOM_ICC_TAG_SLEEP		BIT(QCOM_ICC_BUCKET_SLEEP)
+#define QCOM_ICC_TAG_ACTIVE_ONLY	(QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE)
+#define QCOM_ICC_TAG_ALWAYS		(QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE |\
+					 QCOM_ICC_TAG_SLEEP)
+
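A brief sketch of how a consumer tag selects the buckets defined above, mirroring the test that qcom_icc_aggregate() applies later in this diff (the helper name is illustrative):

	static bool vote_lands_in_bucket(u32 tag, int bucket)
	{
		if (!tag)			/* untagged votes go everywhere */
			tag = QCOM_ICC_TAG_ALWAYS;

		return tag & BIT(bucket);
	}

	/* vote_lands_in_bucket(QCOM_ICC_TAG_ACTIVE_ONLY, QCOM_ICC_BUCKET_SLEEP)
	 * is false: an active-only vote is dropped once the SoC goes to sleep.
	 */
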
 /**
  * struct qcom_icc_node - Qualcomm specific interconnect nodes
  * @name: the node name used in debugfs
@@ -86,8 +85,8 @@ struct qcom_icc_node {
 	u16 num_links;
 	u16 channels;
 	u16 buswidth;
-	u64 sum_avg;
-	u64 max_peak;
+	u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
+	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
 	struct qcom_icc_bcm *bcms[SDM845_MAX_BCM_PER_NODE];
 	size_t num_bcms;
 };
@@ -112,8 +111,8 @@ struct qcom_icc_bcm {
 	const char *name;
 	u32 type;
 	u32 addr;
-	u64 vote_x;
-	u64 vote_y;
+	u64 vote_x[QCOM_ICC_NUM_BUCKETS];
+	u64 vote_y[QCOM_ICC_NUM_BUCKETS];
 	bool dirty;
 	bool keepalive;
 	struct bcm_db aux_data;
@@ -555,7 +554,7 @@ inline void tcs_cmd_gen(struct tcs_cmd *cmd, u64 vote_x, u64 vote_y,
 	cmd->wait = true;
 }

-static void tcs_list_gen(struct list_head *bcm_list,
+static void tcs_list_gen(struct list_head *bcm_list, int bucket,
 			 struct tcs_cmd tcs_list[SDM845_MAX_VCD],
 			 int n[SDM845_MAX_VCD])
 {
@@ -573,8 +572,8 @@ static void tcs_list_gen(struct list_head *bcm_list,
 			commit = true;
 			cur_vcd_size = 0;
 		}
-		tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
-			    bcm->addr, commit);
+		tcs_cmd_gen(&tcs_list[idx], bcm->vote_x[bucket],
+			    bcm->vote_y[bucket], bcm->addr, commit);
 		idx++;
 		n[batch]++;
 		/*
@@ -595,38 +594,56 @@ static void tcs_list_gen(struct list_head *bcm_list,

 static void bcm_aggregate(struct qcom_icc_bcm *bcm)
 {
-	size_t i;
-	u64 agg_avg = 0;
-	u64 agg_peak = 0;
+	size_t i, bucket;
+	u64 agg_avg[QCOM_ICC_NUM_BUCKETS] = {0};
+	u64 agg_peak[QCOM_ICC_NUM_BUCKETS] = {0};
 	u64 temp;

-	for (i = 0; i < bcm->num_nodes; i++) {
-		temp = bcm->nodes[i]->sum_avg * bcm->aux_data.width;
-		do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
-		agg_avg = max(agg_avg, temp);
+	for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) {
+		for (i = 0; i < bcm->num_nodes; i++) {
+			temp = bcm->nodes[i]->sum_avg[bucket] * bcm->aux_data.width;
+			do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
+			agg_avg[bucket] = max(agg_avg[bucket], temp);

-		temp = bcm->nodes[i]->max_peak * bcm->aux_data.width;
-		do_div(temp, bcm->nodes[i]->buswidth);
-		agg_peak = max(agg_peak, temp);
+			temp = bcm->nodes[i]->max_peak[bucket] * bcm->aux_data.width;
+			do_div(temp, bcm->nodes[i]->buswidth);
+			agg_peak[bucket] = max(agg_peak[bucket], temp);
+		}

+		temp = agg_avg[bucket] * 1000ULL;
+		do_div(temp, bcm->aux_data.unit);
+		bcm->vote_x[bucket] = temp;

+		temp = agg_peak[bucket] * 1000ULL;
+		do_div(temp, bcm->aux_data.unit);
+		bcm->vote_y[bucket] = temp;
 	}

-	temp = agg_avg * 1000ULL;
-	do_div(temp, bcm->aux_data.unit);
-	bcm->vote_x = temp;
-
-	temp = agg_peak * 1000ULL;
-	do_div(temp, bcm->aux_data.unit);
-	bcm->vote_y = temp;
-
-	if (bcm->keepalive && bcm->vote_x == 0 && bcm->vote_y == 0) {
-		bcm->vote_x = 1;
-		bcm->vote_y = 1;
+	if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
+	    bcm->vote_y[QCOM_ICC_BUCKET_AMC] == 0) {
+		bcm->vote_x[QCOM_ICC_BUCKET_AMC] = 1;
+		bcm->vote_x[QCOM_ICC_BUCKET_WAKE] = 1;
+		bcm->vote_y[QCOM_ICC_BUCKET_AMC] = 1;
+		bcm->vote_y[QCOM_ICC_BUCKET_WAKE] = 1;
 	}

 	bcm->dirty = false;
 }

-static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw,
+static void qcom_icc_pre_aggregate(struct icc_node *node)
+{
+	size_t i;
+	struct qcom_icc_node *qn;
+
+	qn = node->data;
+
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		qn->sum_avg[i] = 0;
+		qn->max_peak[i] = 0;
+	}
+}
+
+static int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
 			      u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
 {
 	size_t i;
@@ -634,12 +651,19 @@ static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw,

 	qn = node->data;

+	if (!tag)
+		tag = QCOM_ICC_TAG_ALWAYS;
+
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		if (tag & BIT(i)) {
+			qn->sum_avg[i] += avg_bw;
+			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+		}
+	}
+
 	*agg_avg += avg_bw;
 	*agg_peak = max_t(u32, *agg_peak, peak_bw);

-	qn->sum_avg = *agg_avg;
-	qn->max_peak = *agg_peak;
-
 	for (i = 0; i < qn->num_bcms; i++)
 		qn->bcms[i]->dirty = true;

@@ -675,7 +699,7 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	 * Construct the command list based on a pre ordered list of BCMs
 	 * based on VCD.
 	 */
-	tcs_list_gen(&commit_list, cmds, commit_idx);
+	tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);

 	if (!commit_idx[0])
 		return ret;
@@ -693,6 +717,41 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 		return ret;
 	}

+	INIT_LIST_HEAD(&commit_list);
+
+	for (i = 0; i < qp->num_bcms; i++) {
+		/*
+		 * Only generate WAKE and SLEEP commands if a resource's
+		 * requirements change as the execution environment transitions
+		 * between different power states.
+		 */
+		if (qp->bcms[i]->vote_x[QCOM_ICC_BUCKET_WAKE] !=
+		    qp->bcms[i]->vote_x[QCOM_ICC_BUCKET_SLEEP] ||
+		    qp->bcms[i]->vote_y[QCOM_ICC_BUCKET_WAKE] !=
+		    qp->bcms[i]->vote_y[QCOM_ICC_BUCKET_SLEEP]) {
+			list_add_tail(&qp->bcms[i]->list, &commit_list);
+		}
+	}
+
+	if (list_empty(&commit_list))
+		return ret;
+
+	tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_WAKE, cmds, commit_idx);
+
+	ret = rpmh_write_batch(qp->dev, RPMH_WAKE_ONLY_STATE, cmds, commit_idx);
+	if (ret) {
+		pr_err("Error sending WAKE RPMH requests (%d)\n", ret);
+		return ret;
+	}
+
+	tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
+
+	ret = rpmh_write_batch(qp->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
+	if (ret) {
+		pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
+		return ret;
+	}
+
 	return ret;
 }

@@ -738,6 +797,7 @@ static int qnoc_probe(struct platform_device *pdev)
 	provider = &qp->provider;
 	provider->dev = &pdev->dev;
 	provider->set = qcom_icc_set;
+	provider->pre_aggregate = qcom_icc_pre_aggregate;
 	provider->aggregate = qcom_icc_aggregate;
 	provider->xlate = of_icc_xlate_onecell;
 	INIT_LIST_HEAD(&provider->nodes);

@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RPM over SMD communication wrapper for interconnects
+ *
+ * Copyright (C) 2019 Linaro Ltd
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <linux/interconnect-provider.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/soc/qcom/smd-rpm.h>
+
+#include "smd-rpm.h"
+
+#define RPM_KEY_BW	0x00007762
+
+static struct qcom_smd_rpm *icc_smd_rpm;
+
+struct icc_rpm_smd_req {
+	__le32 key;
+	__le32 nbytes;
+	__le32 value;
+};
+
+bool qcom_icc_rpm_smd_available(void)
+{
+	return !!icc_smd_rpm;
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_available);
+
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val)
+{
+	struct icc_rpm_smd_req req = {
+		.key = cpu_to_le32(RPM_KEY_BW),
+		.nbytes = cpu_to_le32(sizeof(u32)),
+		.value = cpu_to_le32(val),
+	};
+
+	return qcom_rpm_smd_write(icc_smd_rpm, ctx, rsc_type, id, &req,
+				  sizeof(req));
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_send);
+
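A note on the encoding above: the magic numbers are little-endian ASCII tags consumed by the RPM firmware (0x00007762 is "bw" for the bandwidth key; the qcs404 driver's RPM_BUS_MASTER_REQ 0x73616d62 and RPM_BUS_SLAVE_REQ 0x766c7362 decode to "bmas" and "bslv"), and each vote is a fixed 12-byte key/length/value record. For example (the vote value is illustrative):

	struct icc_rpm_smd_req req = {
		.key	= cpu_to_le32(RPM_KEY_BW),	/* bytes 62 77 00 00 == "bw" */
		.nbytes	= cpu_to_le32(sizeof(u32)),	/* payload length: 4 */
		.value	= cpu_to_le32(1000),		/* example 1000 kBps vote */
	};
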
+static int qcom_icc_rpm_smd_remove(struct platform_device *pdev)
+{
+	icc_smd_rpm = NULL;
+
+	return 0;
+}
+
+static int qcom_icc_rpm_smd_probe(struct platform_device *pdev)
+{
+	icc_smd_rpm = dev_get_drvdata(pdev->dev.parent);
+
+	if (!icc_smd_rpm) {
+		dev_err(&pdev->dev, "unable to retrieve handle to RPM\n");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static struct platform_driver qcom_interconnect_rpm_smd_driver = {
+	.driver = {
+		.name = "icc_smd_rpm",
+	},
+	.probe = qcom_icc_rpm_smd_probe,
+	.remove = qcom_icc_rpm_smd_remove,
+};
+module_platform_driver(qcom_interconnect_rpm_smd_driver);
+MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
+MODULE_DESCRIPTION("Qualcomm SMD RPM interconnect proxy driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:icc_smd_rpm");

@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2019, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+#define __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+
+#include <linux/soc/qcom/smd-rpm.h>
+
+bool qcom_icc_rpm_smd_available(void);
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val);
+
+#endif

@@ -362,15 +362,6 @@ config DS1682
 	  This driver can also be built as a module.  If so, the module
 	  will be called ds1682.

-config SPEAR13XX_PCIE_GADGET
-	bool "PCIe gadget support for SPEAr13XX platform"
-	depends on ARCH_SPEAR13XX && BROKEN
-	help
-	  This option enables gadget support for PCIe controller. If
-	  board file defines any controller as PCIe endpoint then a sysfs
-	  entry will be created for that controller. User can use these
-	  sysfs node to configure PCIe EP as per his requirements.
-
 config VMWARE_BALLOON
 	tristate "VMware Balloon Driver"
 	depends on VMWARE_VMCI && X86 && HYPERVISOR_GUEST

@@ -36,7 +36,6 @@ obj-$(CONFIG_C2PORT)		+= c2port/
 obj-$(CONFIG_HMC6352)		+= hmc6352.o
 obj-y				+= eeprom/
 obj-y				+= cb710/
-obj-$(CONFIG_SPEAR13XX_PCIE_GADGET)	+= spear13xx_pcie_gadget.o
 obj-$(CONFIG_VMWARE_BALLOON)	+= vmw_balloon.o
 obj-$(CONFIG_PCH_PHUB)		+= pch_phub.o
 obj-y				+= ti-st/

@@ -334,8 +334,7 @@ static void alcor_pci_remove(struct pci_dev *pdev)
 #ifdef CONFIG_PM_SLEEP
 static int alcor_suspend(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct alcor_pci_priv *priv = pci_get_drvdata(pdev);
+	struct alcor_pci_priv *priv = dev_get_drvdata(dev);

 	alcor_pci_aspm_ctrl(priv, 1);
 	return 0;
@@ -344,8 +343,7 @@ static int alcor_suspend(struct device *dev)
 static int alcor_resume(struct device *dev)
 {

-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct alcor_pci_priv *priv = pci_get_drvdata(pdev);
+	struct alcor_pci_priv *priv = dev_get_drvdata(dev);

 	alcor_pci_aspm_ctrl(priv, 0);
 	return 0;

@@ -45,13 +45,16 @@ config EEPROM_AT25
 	  will be called at25.

 config EEPROM_LEGACY
-	tristate "Old I2C EEPROM reader"
+	tristate "Old I2C EEPROM reader (DEPRECATED)"
 	depends on I2C && SYSFS
 	help
 	  If you say yes here you get read-only access to the EEPROM data
 	  available on modern memory DIMMs and Sony Vaio laptops via I2C. Such
 	  EEPROMs could theoretically be available on other devices as well.

+	  This driver is deprecated and will be removed soon, please use the
+	  better at24 driver instead.
+
 	  This driver can also be built as a module.  If so, the module
 	  will be called eeprom.

@@ -195,13 +195,13 @@ static int ee1004_probe(struct i2c_client *client,
 	mutex_lock(&ee1004_bus_lock);
 	if (++ee1004_dev_count == 1) {
 		for (cnr = 0; cnr < 2; cnr++) {
-			ee1004_set_page[cnr] = i2c_new_dummy(client->adapter,
+			ee1004_set_page[cnr] = i2c_new_dummy_device(client->adapter,
 						EE1004_ADDR_SET_PAGE + cnr);
-			if (!ee1004_set_page[cnr]) {
+			if (IS_ERR(ee1004_set_page[cnr])) {
 				dev_err(&client->dev,
 					"address 0x%02x unavailable\n",
 					EE1004_ADDR_SET_PAGE + cnr);
-				err = -EADDRINUSE;
+				err = PTR_ERR(ee1004_set_page[cnr]);
 				goto err_clients;
 			}
 		}

@@ -150,9 +150,9 @@ static int max6875_probe(struct i2c_client *client,
 		return -ENOMEM;

 	/* A fake client is created on the odd address */
-	data->fake_client = i2c_new_dummy(client->adapter, client->addr + 1);
-	if (!data->fake_client) {
-		err = -ENOMEM;
+	data->fake_client = i2c_new_dummy_device(client->adapter, client->addr + 1);
+	if (IS_ERR(data->fake_client)) {
+		err = PTR_ERR(data->fake_client);
 		goto exit_kfree;
 	}

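Both EEPROM conversions above follow the same pattern: i2c_new_dummy() signalled failure with NULL, while i2c_new_dummy_device() returns an ERR_PTR, so callers can propagate the real error instead of inventing one. The minimal caller shape (variable names illustrative):

	client = i2c_new_dummy_device(adapter, addr);
	if (IS_ERR(client))
		return PTR_ERR(client);	/* e.g. -EBUSY, not a made-up -ENOMEM */
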
@@ -33,7 +33,6 @@
 #define FASTRPC_INIT_HANDLE	1
 #define FASTRPC_CTXID_MASK	(0xFF0)
-#define INIT_FILELEN_MAX	(64 * 1024 * 1024)
 #define INIT_MEMLEN_MAX		(8 * 1024 * 1024)
 #define FASTRPC_DEVICE_NAME	"fastrpc"

 /* Retrives number of input buffers from the scalars parameter */
@@ -186,6 +185,7 @@ struct fastrpc_channel_ctx {
 	struct idr ctx_idr;
 	struct list_head users;
 	struct miscdevice miscdev;
+	struct kref refcount;
 };

 struct fastrpc_user {
@@ -279,8 +279,11 @@ static int fastrpc_buf_alloc(struct fastrpc_user *fl, struct device *dev,

 	buf->virt = dma_alloc_coherent(dev, buf->size, (dma_addr_t *)&buf->phys,
 				       GFP_KERNEL);
-	if (!buf->virt)
+	if (!buf->virt) {
+		mutex_destroy(&buf->lock);
+		kfree(buf);
 		return -ENOMEM;
+	}

 	if (fl->sctx && fl->sctx->sid)
 		buf->phys += ((u64)fl->sctx->sid << 32);
@@ -290,6 +293,25 @@ static int fastrpc_buf_alloc(struct fastrpc_user *fl, struct device *dev,
 	return 0;
 }

+static void fastrpc_channel_ctx_free(struct kref *ref)
+{
+	struct fastrpc_channel_ctx *cctx;
+
+	cctx = container_of(ref, struct fastrpc_channel_ctx, refcount);
+
+	kfree(cctx);
+}
+
+static void fastrpc_channel_ctx_get(struct fastrpc_channel_ctx *cctx)
+{
+	kref_get(&cctx->refcount);
+}
+
+static void fastrpc_channel_ctx_put(struct fastrpc_channel_ctx *cctx)
+{
+	kref_put(&cctx->refcount, fastrpc_channel_ctx_free);
+}
+
 static void fastrpc_context_free(struct kref *ref)
 {
 	struct fastrpc_invoke_ctx *ctx;
@@ -313,6 +335,8 @@ static void fastrpc_context_free(struct kref *ref)
 	kfree(ctx->maps);
 	kfree(ctx->olaps);
 	kfree(ctx);
+
+	fastrpc_channel_ctx_put(cctx);
 }

 static void fastrpc_context_get(struct fastrpc_invoke_ctx *ctx)
@@ -419,6 +443,9 @@ static struct fastrpc_invoke_ctx *fastrpc_context_alloc(
 		fastrpc_get_buff_overlaps(ctx);
 	}

+	/* Released in fastrpc_context_put() */
+	fastrpc_channel_ctx_get(cctx);
+
 	ctx->sc = sc;
 	ctx->retval = -1;
 	ctx->pid = current->pid;
@@ -448,6 +475,7 @@ err_idr:
 	spin_lock(&user->lock);
 	list_del(&ctx->node);
 	spin_unlock(&user->lock);
+	fastrpc_channel_ctx_put(cctx);
 	kfree(ctx->maps);
 	kfree(ctx->olaps);
 	kfree(ctx);
@@ -522,6 +550,7 @@ static void fastrpc_dma_buf_detatch(struct dma_buf *dmabuf,
 	mutex_lock(&buffer->lock);
 	list_del(&a->node);
 	mutex_unlock(&buffer->lock);
+	sg_free_table(&a->sgt);
 	kfree(a);
 }

@@ -884,6 +913,9 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
 	if (!fl->sctx)
 		return -EINVAL;

+	if (!fl->cctx->rpdev)
+		return -EPIPE;
+
 	ctx = fastrpc_context_alloc(fl, kernel, sc, args);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
@@ -1120,6 +1152,7 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
 	}

 	fastrpc_session_free(cctx, fl->sctx);
+	fastrpc_channel_ctx_put(cctx);

 	mutex_destroy(&fl->mutex);
 	kfree(fl);
@@ -1138,6 +1171,9 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
 	if (!fl)
 		return -ENOMEM;

+	/* Released in fastrpc_device_release() */
+	fastrpc_channel_ctx_get(cctx);
+
 	filp->private_data = fl;
 	spin_lock_init(&fl->lock);
 	mutex_init(&fl->mutex);
@@ -1163,26 +1199,6 @@ static int fastrpc_device_open(struct inode *inode, struct file *filp)
 	return 0;
 }

-static int fastrpc_dmabuf_free(struct fastrpc_user *fl, char __user *argp)
-{
-	struct dma_buf *buf;
-	int info;
-
-	if (copy_from_user(&info, argp, sizeof(info)))
-		return -EFAULT;
-
-	buf = dma_buf_get(info);
-	if (IS_ERR_OR_NULL(buf))
-		return -EINVAL;
-	/*
-	 * one for the last get and other for the ALLOC_DMA_BUFF ioctl
-	 */
-	dma_buf_put(buf);
-	dma_buf_put(buf);
-
-	return 0;
-}
-
 static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp)
 {
 	struct fastrpc_alloc_dma_buf bp;
@@ -1218,8 +1234,6 @@ static int fastrpc_dmabuf_alloc(struct fastrpc_user *fl, char __user *argp)
 		return -EFAULT;
 	}

-	get_dma_buf(buf->dmabuf);
-
 	return 0;
 }

@@ -1287,9 +1301,6 @@ static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
 	case FASTRPC_IOCTL_INIT_CREATE:
 		err = fastrpc_init_create_process(fl, argp);
 		break;
-	case FASTRPC_IOCTL_FREE_DMA_BUFF:
-		err = fastrpc_dmabuf_free(fl, argp);
-		break;
 	case FASTRPC_IOCTL_ALLOC_DMA_BUFF:
 		err = fastrpc_dmabuf_alloc(fl, argp);
 		break;
@@ -1395,10 +1406,6 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
 	int i, err, domain_id = -1;
 	const char *domain;

-	data = devm_kzalloc(rdev, sizeof(*data), GFP_KERNEL);
-	if (!data)
-		return -ENOMEM;
-
 	err = of_property_read_string(rdev->of_node, "label", &domain);
 	if (err) {
 		dev_info(rdev, "FastRPC Domain not specified in DT\n");
@@ -1417,6 +1424,10 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
 		return -EINVAL;
 	}

+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
 	data->miscdev.minor = MISC_DYNAMIC_MINOR;
 	data->miscdev.name = kasprintf(GFP_KERNEL, "fastrpc-%s",
 				       domains[domain_id]);
@@ -1425,6 +1436,8 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
 	if (err)
 		return err;

+	kref_init(&data->refcount);
+
 	dev_set_drvdata(&rpdev->dev, data);
 	dma_set_mask_and_coherent(rdev, DMA_BIT_MASK(32));
 	INIT_LIST_HEAD(&data->users);
@@ -1459,7 +1472,9 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)

 	misc_deregister(&cctx->miscdev);
 	of_platform_depopulate(&rpdev->dev);
-	kfree(cctx);
+
+	cctx->rpdev = NULL;
+	fastrpc_channel_ctx_put(cctx);
 }

 static int fastrpc_rpmsg_callback(struct rpmsg_device *rpdev, void *data,

@@ -18,7 +18,7 @@ int hl_asid_init(struct hl_device *hdev)

 	mutex_init(&hdev->asid_mutex);

-	/* ASID 0 is reserved for KMD and device CPU */
+	/* ASID 0 is reserved for the kernel driver and device CPU */
 	set_bit(0, hdev->asid_bitmap);

 	return 0;

@@ -397,7 +397,8 @@ struct hl_cb *hl_cb_kernel_create(struct hl_device *hdev, u32 cb_size)
 	rc = hl_cb_create(hdev, &hdev->kernel_cb_mgr, cb_size, &cb_handle,
 			HL_KERNEL_ASID_ID);
 	if (rc) {
-		dev_err(hdev->dev, "Failed to allocate CB for KMD %d\n", rc);
+		dev_err(hdev->dev,
+			"Failed to allocate CB for the kernel driver %d\n", rc);
 		return NULL;
 	}

@@ -178,11 +178,23 @@ static void cs_do_release(struct kref *ref)

 	/* We also need to update CI for internal queues */
 	if (cs->submitted) {
-		int cs_cnt = atomic_dec_return(&hdev->cs_active_cnt);
+		hdev->asic_funcs->hw_queues_lock(hdev);

-		WARN_ONCE((cs_cnt < 0),
-			"hl%d: error in CS active cnt %d\n",
-			hdev->id, cs_cnt);
+		hdev->cs_active_cnt--;
+		if (!hdev->cs_active_cnt) {
+			struct hl_device_idle_busy_ts *ts;
+
+			ts = &hdev->idle_busy_ts_arr[hdev->idle_busy_ts_idx++];
+			ts->busy_to_idle_ts = ktime_get();
+
+			if (hdev->idle_busy_ts_idx == HL_IDLE_BUSY_TS_ARR_SIZE)
+				hdev->idle_busy_ts_idx = 0;
+		} else if (hdev->cs_active_cnt < 0) {
+			dev_crit(hdev->dev, "CS active cnt %d is negative\n",
+				hdev->cs_active_cnt);
+		}
+
+		hdev->asic_funcs->hw_queues_unlock(hdev);

 		hl_int_hw_queue_update_ci(cs);

@@ -305,6 +317,8 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
 	other = ctx->cs_pending[fence->cs_seq & (HL_MAX_PENDING_CS - 1)];
 	if ((other) && (!dma_fence_is_signaled(other))) {
 		spin_unlock(&ctx->cs_lock);
+		dev_dbg(hdev->dev,
+			"Rejecting CS because of too many in-flights CS\n");
 		rc = -EAGAIN;
 		goto free_fence;
 	}
@@ -395,8 +409,9 @@ static struct hl_cb *validate_queue_index(struct hl_device *hdev,
 		return NULL;
 	}

-	if (hw_queue_prop->kmd_only) {
-		dev_err(hdev->dev, "Queue index %d is restricted for KMD\n",
+	if (hw_queue_prop->driver_only) {
+		dev_err(hdev->dev,
+			"Queue index %d is restricted for the kernel driver\n",
 			chunk->queue_index);
 		return NULL;
 	} else if (hw_queue_prop->type == QUEUE_TYPE_INT) {

@@ -26,12 +26,13 @@ static void hl_ctx_fini(struct hl_ctx *ctx)
dma_fence_put(ctx->cs_pending[i]);

if (ctx->asid != HL_KERNEL_ASID_ID) {
/*
* The engines are stopped as there is no executing CS, but the
/* The engines are stopped as there is no executing CS, but the
* Coresight might be still working by accessing addresses
* related to the stopped engines. Hence stop it explicitly.
* Stop only if this is the compute context, as there can be
* only one compute context
*/
if (hdev->in_debug)
if ((hdev->in_debug) && (hdev->compute_ctx == ctx))
hl_device_set_debug_mode(hdev, false);

hl_vm_ctx_fini(ctx);

@@ -67,29 +68,36 @@ int hl_ctx_create(struct hl_device *hdev, struct hl_fpriv *hpriv)
goto out_err;
}

rc = hl_ctx_init(hdev, ctx, false);
if (rc)
goto free_ctx;

hl_hpriv_get(hpriv);
ctx->hpriv = hpriv;

/* TODO: remove for multiple contexts */
hpriv->ctx = ctx;
hdev->user_ctx = ctx;

mutex_lock(&mgr->ctx_lock);
rc = idr_alloc(&mgr->ctx_handles, ctx, 1, 0, GFP_KERNEL);
mutex_unlock(&mgr->ctx_lock);

if (rc < 0) {
dev_err(hdev->dev, "Failed to allocate IDR for a new CTX\n");
hl_ctx_free(hdev, ctx);
goto out_err;
goto free_ctx;
}

ctx->handle = rc;

rc = hl_ctx_init(hdev, ctx, false);
if (rc)
goto remove_from_idr;

hl_hpriv_get(hpriv);
ctx->hpriv = hpriv;

/* TODO: remove for multiple contexts per process */
hpriv->ctx = ctx;

/* TODO: remove the following line for multiple process support */
hdev->compute_ctx = ctx;

return 0;

remove_from_idr:
mutex_lock(&mgr->ctx_lock);
idr_remove(&mgr->ctx_handles, ctx->handle);
mutex_unlock(&mgr->ctx_lock);
free_ctx:
kfree(ctx);
out_err:

@@ -120,7 +128,7 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx)
ctx->thread_ctx_switch_wait_token = 0;

if (is_kernel_ctx) {
ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */
ctx->asid = HL_KERNEL_ASID_ID; /* Kernel driver gets ASID 0 */
rc = hl_mmu_ctx_init(ctx);
if (rc) {
dev_err(hdev->dev, "Failed to init mmu ctx module\n");
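The hl_ctx_create() rewrite above moves hl_ctx_init() after the IDR allocation, so every failure label undoes exactly what has completed so far, in reverse order. A condensed, compilable sketch of that unwind discipline, with stubbed steps standing in for idr_alloc() and hl_ctx_init():

#include <stdio.h>
#include <stdlib.h>

/* Stubs standing in for the real steps; init fails to show the unwind. */
static int take_handle(void)  { return 0; }
static void drop_handle(void) { }
static int init_object(void)  { return -1; }

static int create_object(char **obj)
{
	int rc;

	*obj = malloc(16);
	if (!*obj)
		return -1;

	rc = take_handle();		/* step 1: reserve the handle */
	if (rc < 0)
		goto free_obj;

	rc = init_object();		/* step 2: heavier init, may fail */
	if (rc)
		goto put_handle;

	return 0;

put_handle:
	drop_handle();			/* undo step 1 */
free_obj:
	free(*obj);			/* undo the allocation */
	*obj = NULL;
	return rc;
}

int main(void)
{
	char *obj;

	printf("create_object: %d\n", create_object(&obj));
	return 0;
}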
@@ -29,7 +29,7 @@ static int hl_debugfs_i2c_read(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr,

memset(&pkt, 0, sizeof(pkt));

pkt.ctl = __cpu_to_le32(ARMCP_PACKET_I2C_RD <<
pkt.ctl = cpu_to_le32(ARMCP_PACKET_I2C_RD <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
pkt.i2c_bus = i2c_bus;
pkt.i2c_addr = i2c_addr;

@@ -55,12 +55,12 @@ static int hl_debugfs_i2c_write(struct hl_device *hdev, u8 i2c_bus, u8 i2c_addr,

memset(&pkt, 0, sizeof(pkt));

pkt.ctl = __cpu_to_le32(ARMCP_PACKET_I2C_WR <<
pkt.ctl = cpu_to_le32(ARMCP_PACKET_I2C_WR <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
pkt.i2c_bus = i2c_bus;
pkt.i2c_addr = i2c_addr;
pkt.i2c_reg = i2c_reg;
pkt.value = __cpu_to_le64(val);
pkt.value = cpu_to_le64(val);

rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
HL_DEVICE_TIMEOUT_USEC, NULL);

@@ -81,10 +81,10 @@ static void hl_debugfs_led_set(struct hl_device *hdev, u8 led, u8 state)

memset(&pkt, 0, sizeof(pkt));

pkt.ctl = __cpu_to_le32(ARMCP_PACKET_LED_SET <<
pkt.ctl = cpu_to_le32(ARMCP_PACKET_LED_SET <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
pkt.led_index = __cpu_to_le32(led);
pkt.value = __cpu_to_le64(state);
pkt.led_index = cpu_to_le32(led);
pkt.value = cpu_to_le64(state);

rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
HL_DEVICE_TIMEOUT_USEC, NULL);

@@ -370,7 +370,7 @@ static int mmu_show(struct seq_file *s, void *data)
if (dev_entry->mmu_asid == HL_KERNEL_ASID_ID)
ctx = hdev->kernel_ctx;
else
ctx = hdev->user_ctx;
ctx = hdev->compute_ctx;

if (!ctx) {
dev_err(hdev->dev, "no ctx available\n");

@@ -533,7 +533,7 @@ out:
static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
u64 *phys_addr)
{
struct hl_ctx *ctx = hdev->user_ctx;
struct hl_ctx *ctx = hdev->compute_ctx;
u64 hop_addr, hop_pte_addr, hop_pte;
u64 offset_mask = HOP4_MASK | OFFSET_MASK;
int rc = 0;
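The debugfs hunks above replace the double-underscore __cpu_to_le32/64 helpers with cpu_to_le32/64; the plain variants return sparse-annotated __le32/__le64 values, so mixing CPU-endian and device-endian data gets flagged at static-analysis time. As a reminder of what the conversion itself does, here is a small user-space sketch (driver code should keep using the kernel helpers, not hand-rolled shifts):

#include <stdint.h>
#include <stdio.h>

/* Portable little-endian store, independent of host byte order. */
static void put_le32(uint8_t *p, uint32_t v)
{
	p[0] = v & 0xff;
	p[1] = (v >> 8) & 0xff;
	p[2] = (v >> 16) & 0xff;
	p[3] = (v >> 24) & 0xff;
}

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(void)
{
	uint8_t buf[4];

	put_le32(buf, 0x12345678);
	printf("0x%08x\n", get_le32(buf));	/* prints 0x12345678 */
	return 0;
}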
@@ -42,10 +42,12 @@ static void hpriv_release(struct kref *ref)
{
struct hl_fpriv *hpriv;
struct hl_device *hdev;
struct hl_ctx *ctx;

hpriv = container_of(ref, struct hl_fpriv, refcount);

hdev = hpriv->hdev;
ctx = hpriv->ctx;

put_pid(hpriv->taskpid);

@@ -53,13 +55,12 @@ static void hpriv_release(struct kref *ref)

mutex_destroy(&hpriv->restore_phase_mutex);

mutex_lock(&hdev->fpriv_list_lock);
list_del(&hpriv->dev_node);
hdev->compute_ctx = NULL;
mutex_unlock(&hdev->fpriv_list_lock);

kfree(hpriv);

/* Now the FD is really closed */
atomic_dec(&hdev->fd_open_cnt);

/* This allows a new user context to open the device */
hdev->user_ctx = NULL;
}

void hl_hpriv_get(struct hl_fpriv *hpriv)

@@ -94,6 +95,24 @@ static int hl_device_release(struct inode *inode, struct file *filp)
return 0;
}

static int hl_device_release_ctrl(struct inode *inode, struct file *filp)
{
struct hl_fpriv *hpriv = filp->private_data;
struct hl_device *hdev;

filp->private_data = NULL;

hdev = hpriv->hdev;

mutex_lock(&hdev->fpriv_list_lock);
list_del(&hpriv->dev_node);
mutex_unlock(&hdev->fpriv_list_lock);

kfree(hpriv);

return 0;
}

/*
* hl_mmap - mmap function for habanalabs device
*

@@ -124,55 +143,102 @@ static const struct file_operations hl_ops = {
.compat_ioctl = hl_ioctl
};

static const struct file_operations hl_ctrl_ops = {
.owner = THIS_MODULE,
.open = hl_device_open_ctrl,
.release = hl_device_release_ctrl,
.unlocked_ioctl = hl_ioctl_control,
.compat_ioctl = hl_ioctl_control
};

static void device_release_func(struct device *dev)
{
kfree(dev);
}

/*
* device_setup_cdev - setup cdev and device for habanalabs device
* device_init_cdev - Initialize cdev and device for habanalabs device
*
* @hdev: pointer to habanalabs device structure
* @hclass: pointer to the class object of the device
* @minor: minor number of the specific device
* @fpos : file operations to install for this device
* @fpos: file operations to install for this device
* @name: name of the device as it will appear in the filesystem
* @cdev: pointer to the char device object that will be initialized
* @dev: pointer to the device object that will be initialized
*
* Create a cdev and a Linux device for habanalabs's device. Need to be
* called at the end of the habanalabs device initialization process,
* because this function exposes the device to the user
* Initialize a cdev and a Linux device for habanalabs's device.
*/
static int device_setup_cdev(struct hl_device *hdev, struct class *hclass,
int minor, const struct file_operations *fops)
static int device_init_cdev(struct hl_device *hdev, struct class *hclass,
int minor, const struct file_operations *fops,
char *name, struct cdev *cdev,
struct device **dev)
{
int err, devno = MKDEV(hdev->major, minor);
struct cdev *hdev_cdev = &hdev->cdev;
char *name;
cdev_init(cdev, fops);
cdev->owner = THIS_MODULE;

name = kasprintf(GFP_KERNEL, "hl%d", hdev->id);
if (!name)
*dev = kzalloc(sizeof(**dev), GFP_KERNEL);
if (!*dev)
return -ENOMEM;

cdev_init(hdev_cdev, fops);
hdev_cdev->owner = THIS_MODULE;
err = cdev_add(hdev_cdev, devno, 1);
if (err) {
pr_err("Failed to add char device %s\n", name);
goto err_cdev_add;
device_initialize(*dev);
(*dev)->devt = MKDEV(hdev->major, minor);
(*dev)->class = hclass;
(*dev)->release = device_release_func;
dev_set_drvdata(*dev, hdev);
dev_set_name(*dev, "%s", name);

return 0;
}

static int device_cdev_sysfs_add(struct hl_device *hdev)
{
int rc;

rc = cdev_device_add(&hdev->cdev, hdev->dev);
if (rc) {
dev_err(hdev->dev,
"failed to add a char device to the system\n");
return rc;
}

hdev->dev = device_create(hclass, NULL, devno, NULL, "%s", name);
if (IS_ERR(hdev->dev)) {
pr_err("Failed to create device %s\n", name);
err = PTR_ERR(hdev->dev);
goto err_device_create;
rc = cdev_device_add(&hdev->cdev_ctrl, hdev->dev_ctrl);
if (rc) {
dev_err(hdev->dev,
"failed to add a control char device to the system\n");
goto delete_cdev_device;
}

dev_set_drvdata(hdev->dev, hdev);
/* hl_sysfs_init() must be done after adding the device to the system */
rc = hl_sysfs_init(hdev);
if (rc) {
dev_err(hdev->dev, "failed to initialize sysfs\n");
goto delete_ctrl_cdev_device;
}

kfree(name);
hdev->cdev_sysfs_created = true;

return 0;

err_device_create:
cdev_del(hdev_cdev);
err_cdev_add:
kfree(name);
return err;
delete_ctrl_cdev_device:
cdev_device_del(&hdev->cdev_ctrl, hdev->dev_ctrl);
delete_cdev_device:
cdev_device_del(&hdev->cdev, hdev->dev);
return rc;
}

static void device_cdev_sysfs_del(struct hl_device *hdev)
{
/* device_release() won't be called so must free devices explicitly */
if (!hdev->cdev_sysfs_created) {
kfree(hdev->dev_ctrl);
kfree(hdev->dev);
return;
}

hl_sysfs_fini(hdev);
cdev_device_del(&hdev->cdev_ctrl, hdev->dev_ctrl);
cdev_device_del(&hdev->cdev, hdev->dev);
}

/*

@@ -227,20 +293,29 @@ static int device_early_init(struct hl_device *hdev)
goto free_eq_wq;
}

hdev->idle_busy_ts_arr = kmalloc_array(HL_IDLE_BUSY_TS_ARR_SIZE,
sizeof(struct hl_device_idle_busy_ts),
(GFP_KERNEL | __GFP_ZERO));
if (!hdev->idle_busy_ts_arr) {
rc = -ENOMEM;
goto free_chip_info;
}

hl_cb_mgr_init(&hdev->kernel_cb_mgr);

mutex_init(&hdev->fd_open_cnt_lock);
mutex_init(&hdev->send_cpu_message_lock);
mutex_init(&hdev->debug_lock);
mutex_init(&hdev->mmu_cache_lock);
INIT_LIST_HEAD(&hdev->hw_queues_mirror_list);
spin_lock_init(&hdev->hw_queues_mirror_lock);
INIT_LIST_HEAD(&hdev->fpriv_list);
mutex_init(&hdev->fpriv_list_lock);
atomic_set(&hdev->in_reset, 0);
atomic_set(&hdev->fd_open_cnt, 0);
atomic_set(&hdev->cs_active_cnt, 0);

return 0;

free_chip_info:
kfree(hdev->hl_chip_info);
free_eq_wq:
destroy_workqueue(hdev->eq_wq);
free_cq_wq:

@@ -266,8 +341,11 @@ static void device_early_fini(struct hl_device *hdev)
mutex_destroy(&hdev->debug_lock);
mutex_destroy(&hdev->send_cpu_message_lock);

mutex_destroy(&hdev->fpriv_list_lock);

hl_cb_mgr_fini(hdev, &hdev->kernel_cb_mgr);

kfree(hdev->idle_busy_ts_arr);
kfree(hdev->hl_chip_info);

destroy_workqueue(hdev->eq_wq);

@@ -277,8 +355,6 @@ static void device_early_fini(struct hl_device *hdev)

if (hdev->asic_funcs->early_fini)
hdev->asic_funcs->early_fini(hdev);

mutex_destroy(&hdev->fd_open_cnt_lock);
}

static void set_freq_to_low_job(struct work_struct *work)

@@ -286,9 +362,13 @@ static void set_freq_to_low_job(struct work_struct *work)
struct hl_device *hdev = container_of(work, struct hl_device,
work_freq.work);

if (atomic_read(&hdev->fd_open_cnt) == 0)
mutex_lock(&hdev->fpriv_list_lock);

if (!hdev->compute_ctx)
hl_device_set_frequency(hdev, PLL_LOW);

mutex_unlock(&hdev->fpriv_list_lock);

schedule_delayed_work(&hdev->work_freq,
usecs_to_jiffies(HL_PLL_LOW_JOB_FREQ_USEC));
}

@@ -338,7 +418,7 @@ static int device_late_init(struct hl_device *hdev)
hdev->high_pll = hdev->asic_prop.high_pll;

/* force setting to low frequency */
atomic_set(&hdev->curr_pll_profile, PLL_LOW);
hdev->curr_pll_profile = PLL_LOW;

if (hdev->pm_mng_profile == PM_AUTO)
hdev->asic_funcs->set_pll_profile(hdev, PLL_LOW);

@@ -381,44 +461,128 @@ static void device_late_fini(struct hl_device *hdev)
hdev->late_init_done = false;
}

uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms)
{
struct hl_device_idle_busy_ts *ts;
ktime_t zero_ktime, curr = ktime_get();
u32 overlap_cnt = 0, last_index = hdev->idle_busy_ts_idx;
s64 period_us, last_start_us, last_end_us, last_busy_time_us,
total_busy_time_us = 0, total_busy_time_ms;

zero_ktime = ktime_set(0, 0);
period_us = period_ms * USEC_PER_MSEC;
ts = &hdev->idle_busy_ts_arr[last_index];

/* check case that device is currently in idle */
if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime) &&
!ktime_compare(ts->idle_to_busy_ts, zero_ktime)) {

last_index--;
/* Handle case idle_busy_ts_idx was 0 */
if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE)
last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1;

ts = &hdev->idle_busy_ts_arr[last_index];
}

while (overlap_cnt < HL_IDLE_BUSY_TS_ARR_SIZE) {
/* Check if we are in last sample case. i.e. if the sample
* begun before the sampling period. This could be a real
* sample or 0 so need to handle both cases
*/
last_start_us = ktime_to_us(
ktime_sub(curr, ts->idle_to_busy_ts));

if (last_start_us > period_us) {

/* First check two cases:
* 1. If the device is currently busy
* 2. If the device was idle during the whole sampling
* period
*/

if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime)) {
/* Check if the device is currently busy */
if (ktime_compare(ts->idle_to_busy_ts,
zero_ktime))
return 100;

/* We either didn't have any activity or we
* reached an entry which is 0. Either way,
* exit and return what was accumulated so far
*/
break;
}

/* If sample has finished, check it is relevant */
last_end_us = ktime_to_us(
ktime_sub(curr, ts->busy_to_idle_ts));

if (last_end_us > period_us)
break;

/* It is relevant so add it but with adjustment */
last_busy_time_us = ktime_to_us(
ktime_sub(ts->busy_to_idle_ts,
ts->idle_to_busy_ts));
total_busy_time_us += last_busy_time_us -
(last_start_us - period_us);
break;
}

/* Check if the sample is finished or still open */
if (ktime_compare(ts->busy_to_idle_ts, zero_ktime))
last_busy_time_us = ktime_to_us(
ktime_sub(ts->busy_to_idle_ts,
ts->idle_to_busy_ts));
else
last_busy_time_us = ktime_to_us(
ktime_sub(curr, ts->idle_to_busy_ts));

total_busy_time_us += last_busy_time_us;

last_index--;
/* Handle case idle_busy_ts_idx was 0 */
if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE)
last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1;

ts = &hdev->idle_busy_ts_arr[last_index];

overlap_cnt++;
}

total_busy_time_ms = DIV_ROUND_UP_ULL(total_busy_time_us,
USEC_PER_MSEC);

return DIV_ROUND_UP_ULL(total_busy_time_ms * 100, period_ms);
}

/*
* hl_device_set_frequency - set the frequency of the device
*
* @hdev: pointer to habanalabs device structure
* @freq: the new frequency value
*
* Change the frequency if needed.
* We allose to set PLL to low only if there is no user process
* Returns 0 if no change was done, otherwise returns 1;
* Change the frequency if needed. This function has no protection against
* concurrency, therefore it is assumed that the calling function has protected
* itself against the case of calling this function from multiple threads with
* different values
*
* Returns 0 if no change was done, otherwise returns 1
*/
int hl_device_set_frequency(struct hl_device *hdev, enum hl_pll_frequency freq)
{
enum hl_pll_frequency old_freq =
(freq == PLL_HIGH) ? PLL_LOW : PLL_HIGH;
int ret;

if (hdev->pm_mng_profile == PM_MANUAL)
if ((hdev->pm_mng_profile == PM_MANUAL) ||
(hdev->curr_pll_profile == freq))
return 0;

ret = atomic_cmpxchg(&hdev->curr_pll_profile, old_freq, freq);
if (ret == freq)
return 0;

/*
* in case we want to lower frequency, check if device is not
* opened. We must have a check here to workaround race condition with
* hl_device_open
*/
if ((freq == PLL_LOW) && (atomic_read(&hdev->fd_open_cnt) > 0)) {
atomic_set(&hdev->curr_pll_profile, PLL_HIGH);
return 0;
}

dev_dbg(hdev->dev, "Changing device frequency to %s\n",
freq == PLL_HIGH ? "high" : "low");

hdev->asic_funcs->set_pll_profile(hdev, freq);

hdev->curr_pll_profile = freq;

return 1;
}

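hl_device_utilization() above walks the idle/busy timestamp ring backwards, clamps the oldest interval that crosses the sampling window, and rounds the busy time up to a percentage. A simplified stand-alone model of that windowed calculation, assuming closed intervals and no ring wrap-around:

#include <stdio.h>

struct interval {
	long long start_us;	/* idle -> busy */
	long long end_us;	/* busy -> idle */
};

/* Busy percentage over the window [now - period_us, now]. */
static int utilization(const struct interval *iv, int n,
		       long long now_us, long long period_us)
{
	long long busy_us = 0, win_start = now_us - period_us;
	int i;

	for (i = 0; i < n; i++) {
		long long s = iv[i].start_us, e = iv[i].end_us;

		if (e <= win_start)
			continue;	/* ended before the window */
		if (s < win_start)
			s = win_start;	/* clamp, like the driver does */
		busy_us += e - s;
	}

	/* Round up, mirroring DIV_ROUND_UP_ULL in the function above. */
	return (int)((busy_us * 100 + period_us - 1) / period_us);
}

int main(void)
{
	struct interval iv[] = { { 100, 400 }, { 600, 900 } };

	printf("%d%%\n", utilization(iv, 2, 1000, 1000));	/* 60% */
	return 0;
}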
@@ -449,19 +613,8 @@ int hl_device_set_debug_mode(struct hl_device *hdev, bool enable)
goto out;
}

mutex_lock(&hdev->fd_open_cnt_lock);

if (atomic_read(&hdev->fd_open_cnt) > 1) {
dev_err(hdev->dev,
"Failed to enable debug mode. More then a single user is using the device\n");
rc = -EPERM;
goto unlock_fd_open_lock;
}

hdev->in_debug = 1;

unlock_fd_open_lock:
mutex_unlock(&hdev->fd_open_cnt_lock);
out:
mutex_unlock(&hdev->debug_lock);

@@ -568,6 +721,7 @@ disable_device:
static void device_kill_open_processes(struct hl_device *hdev)
{
u16 pending_total, pending_cnt;
struct hl_fpriv *hpriv;
struct task_struct *task = NULL;

if (hdev->pldm)

@@ -575,32 +729,31 @@ static void device_kill_open_processes(struct hl_device *hdev)
else
pending_total = HL_PENDING_RESET_PER_SEC;

pending_cnt = pending_total;

/* Flush all processes that are inside hl_open */
mutex_lock(&hdev->fd_open_cnt_lock);

while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) {

pending_cnt--;

dev_info(hdev->dev,
"Can't HARD reset, waiting for user to close FD\n");
/* Giving time for user to close FD, and for processes that are inside
* hl_device_open to finish
*/
if (!list_empty(&hdev->fpriv_list))
ssleep(1);
}

if (atomic_read(&hdev->fd_open_cnt)) {
task = get_pid_task(hdev->user_ctx->hpriv->taskpid,
PIDTYPE_PID);
mutex_lock(&hdev->fpriv_list_lock);

/* This section must be protected because we are dereferencing
* pointers that are freed if the process exits
*/
list_for_each_entry(hpriv, &hdev->fpriv_list, dev_node) {
task = get_pid_task(hpriv->taskpid, PIDTYPE_PID);
if (task) {
dev_info(hdev->dev, "Killing user processes\n");
dev_info(hdev->dev, "Killing user process pid=%d\n",
task_pid_nr(task));
send_sig(SIGKILL, task, 1);
msleep(100);
usleep_range(1000, 10000);

put_task_struct(task);
}
}

mutex_unlock(&hdev->fpriv_list_lock);

/* We killed the open users, but because the driver cleans up after the
* user contexts are closed (e.g. mmu mappings), we need to wait again
* to make sure the cleaning phase is finished before continuing with

@@ -609,19 +762,18 @@ static void device_kill_open_processes(struct hl_device *hdev)

pending_cnt = pending_total;

while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) {
while ((!list_empty(&hdev->fpriv_list)) && (pending_cnt)) {
dev_info(hdev->dev,
"Waiting for all unmap operations to finish before hard reset\n");

pending_cnt--;

ssleep(1);
}

if (atomic_read(&hdev->fd_open_cnt))
if (!list_empty(&hdev->fpriv_list))
dev_crit(hdev->dev,
"Going to hard reset with open user contexts\n");

mutex_unlock(&hdev->fd_open_cnt_lock);

}

static void device_hard_reset_pending(struct work_struct *work)

@@ -630,8 +782,6 @@ static void device_hard_reset_pending(struct work_struct *work)
container_of(work, struct hl_device_reset_work, reset_work);
struct hl_device *hdev = device_reset_work->hdev;

device_kill_open_processes(hdev);

hl_device_reset(hdev, true, true);

kfree(device_reset_work);

@@ -679,13 +829,16 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
/* This also blocks future CS/VM/JOB completion operations */
hdev->disabled = true;

/*
* Flush anyone that is inside the critical section of enqueue
/* Flush anyone that is inside the critical section of enqueue
* jobs to the H/W
*/
hdev->asic_funcs->hw_queues_lock(hdev);
hdev->asic_funcs->hw_queues_unlock(hdev);

/* Flush anyone that is inside device open */
mutex_lock(&hdev->fpriv_list_lock);
mutex_unlock(&hdev->fpriv_list_lock);

dev_err(hdev->dev, "Going to RESET device!\n");
}

@@ -736,6 +889,13 @@ again:
/* Go over all the queues, release all CS and their jobs */
hl_cs_rollback_all(hdev);

/* Kill processes here after CS rollback. This is because the process
* can't really exit until all its CSs are done, which is what we
* do in cs rollback
*/
if (from_hard_reset_thread)
device_kill_open_processes(hdev);

/* Release kernel context */
if ((hard_reset) && (hl_ctx_put(hdev->kernel_ctx) == 1))
hdev->kernel_ctx = NULL;

@@ -754,12 +914,24 @@ again:
for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++)
hl_cq_reset(hdev, &hdev->completion_queue[i]);

hdev->idle_busy_ts_idx = 0;
hdev->idle_busy_ts_arr[0].busy_to_idle_ts = ktime_set(0, 0);
hdev->idle_busy_ts_arr[0].idle_to_busy_ts = ktime_set(0, 0);

if (hdev->cs_active_cnt)
dev_crit(hdev->dev, "CS active cnt %d is not 0 during reset\n",
hdev->cs_active_cnt);

mutex_lock(&hdev->fpriv_list_lock);

/* Make sure the context switch phase will run again */
if (hdev->user_ctx) {
atomic_set(&hdev->user_ctx->thread_ctx_switch_token, 1);
hdev->user_ctx->thread_ctx_switch_wait_token = 0;
if (hdev->compute_ctx) {
atomic_set(&hdev->compute_ctx->thread_ctx_switch_token, 1);
hdev->compute_ctx->thread_ctx_switch_wait_token = 0;
}

mutex_unlock(&hdev->fpriv_list_lock);

/* Finished tear-down, starting to re-initialize */

if (hard_reset) {

@@ -788,7 +960,7 @@ again:
goto out_err;
}

hdev->user_ctx = NULL;
hdev->compute_ctx = NULL;

rc = hl_ctx_init(hdev, hdev->kernel_ctx, true);
if (rc) {

@@ -849,6 +1021,8 @@ again:
else
hdev->soft_reset_cnt++;

dev_warn(hdev->dev, "Successfully finished resetting the device\n");

return 0;

out_err:

@@ -883,17 +1057,43 @@ out_err:
int hl_device_init(struct hl_device *hdev, struct class *hclass)
{
int i, rc, cq_ready_cnt;
char *name;
bool add_cdev_sysfs_on_err = false;

/* Create device */
rc = device_setup_cdev(hdev, hclass, hdev->id, &hl_ops);
name = kasprintf(GFP_KERNEL, "hl%d", hdev->id / 2);
if (!name) {
rc = -ENOMEM;
goto out_disabled;
}

/* Initialize cdev and device structures */
rc = device_init_cdev(hdev, hclass, hdev->id, &hl_ops, name,
&hdev->cdev, &hdev->dev);

kfree(name);

if (rc)
goto out_disabled;

name = kasprintf(GFP_KERNEL, "hl_controlD%d", hdev->id / 2);
if (!name) {
rc = -ENOMEM;
goto free_dev;
}

/* Initialize cdev and device structures for control device */
rc = device_init_cdev(hdev, hclass, hdev->id_control, &hl_ctrl_ops,
name, &hdev->cdev_ctrl, &hdev->dev_ctrl);

kfree(name);

if (rc)
goto free_dev;

/* Initialize ASIC function pointers and perform early init */
rc = device_early_init(hdev);
if (rc)
goto release_device;
goto free_dev_ctrl;

/*
* Start calling ASIC initialization. First S/W then H/W and finally

@@ -965,7 +1165,7 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
goto mmu_fini;
}

hdev->user_ctx = NULL;
hdev->compute_ctx = NULL;

rc = hl_ctx_init(hdev, hdev->kernel_ctx, true);
if (rc) {

@@ -980,12 +1180,6 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
goto release_ctx;
}

rc = hl_sysfs_init(hdev);
if (rc) {
dev_err(hdev->dev, "failed to initialize sysfs\n");
goto free_cb_pool;
}

hl_debugfs_add_device(hdev);

if (hdev->asic_funcs->get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) {

@@ -994,6 +1188,12 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
hdev->asic_funcs->hw_fini(hdev, true);
}

/*
* From this point, in case of an error, add char devices and create
* sysfs nodes as part of the error flow, to allow debugging.
*/
add_cdev_sysfs_on_err = true;

rc = hdev->asic_funcs->hw_init(hdev);
if (rc) {
dev_err(hdev->dev, "failed to initialize the H/W\n");

@@ -1030,9 +1230,24 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
}

/*
* hl_hwmon_init must be called after device_late_init, because only
* Expose devices and sysfs nodes to user.
* From here there is no need to add char devices and create sysfs nodes
* in case of an error.
*/
add_cdev_sysfs_on_err = false;
rc = device_cdev_sysfs_add(hdev);
if (rc) {
dev_err(hdev->dev,
"Failed to add char devices and sysfs nodes\n");
rc = 0;
goto out_disabled;
}

/*
* hl_hwmon_init() must be called after device_late_init(), because only
* there we get the information from the device about which
* hwmon-related sensors the device supports
* hwmon-related sensors the device supports.
* Furthermore, it must be done after adding the device to the system.
*/
rc = hl_hwmon_init(hdev);
if (rc) {

@@ -1048,8 +1263,6 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)

return 0;

free_cb_pool:
hl_cb_pool_fini(hdev);
release_ctx:
if (hl_ctx_put(hdev->kernel_ctx) != 1)
dev_err(hdev->dev,

@@ -1068,18 +1281,21 @@ sw_fini:
hdev->asic_funcs->sw_fini(hdev);
early_fini:
device_early_fini(hdev);
release_device:
device_destroy(hclass, hdev->dev->devt);
cdev_del(&hdev->cdev);
free_dev_ctrl:
kfree(hdev->dev_ctrl);
free_dev:
kfree(hdev->dev);
out_disabled:
hdev->disabled = true;
if (add_cdev_sysfs_on_err)
device_cdev_sysfs_add(hdev);
if (hdev->pdev)
dev_err(&hdev->pdev->dev,
"Failed to initialize hl%d. Device is NOT usable !\n",
hdev->id);
hdev->id / 2);
else
pr_err("Failed to initialize hl%d. Device is NOT usable !\n",
hdev->id);
hdev->id / 2);

return rc;
}

@@ -1120,16 +1336,17 @@ void hl_device_fini(struct hl_device *hdev)
/* Mark device as disabled */
hdev->disabled = true;

/*
* Flush anyone that is inside the critical section of enqueue
/* Flush anyone that is inside the critical section of enqueue
* jobs to the H/W
*/
hdev->asic_funcs->hw_queues_lock(hdev);
hdev->asic_funcs->hw_queues_unlock(hdev);

hdev->hard_reset_pending = true;
/* Flush anyone that is inside device open */
mutex_lock(&hdev->fpriv_list_lock);
mutex_unlock(&hdev->fpriv_list_lock);

device_kill_open_processes(hdev);
hdev->hard_reset_pending = true;

hl_hwmon_fini(hdev);

@@ -1137,8 +1354,6 @@ void hl_device_fini(struct hl_device *hdev)

hl_debugfs_remove_device(hdev);

hl_sysfs_fini(hdev);

/*
* Halt the engines and disable interrupts so we won't get any more
* completions from H/W and we won't have any accesses from the

@@ -1149,6 +1364,12 @@ void hl_device_fini(struct hl_device *hdev)
/* Go over all the queues, release all CS and their jobs */
hl_cs_rollback_all(hdev);

/* Kill processes here after CS rollback. This is because the process
* can't really exit until all its CSs are done, which is what we
* do in cs rollback
*/
device_kill_open_processes(hdev);

hl_cb_pool_fini(hdev);

/* Release kernel context */

@@ -1175,9 +1396,8 @@ void hl_device_fini(struct hl_device *hdev)

device_early_fini(hdev);

/* Hide device from user */
device_destroy(hdev->dev->class, hdev->dev->devt);
cdev_del(&hdev->cdev);
/* Hide devices and sysfs nodes from user */
device_cdev_sysfs_del(hdev);

pr_info("removed device successfully\n");
}
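device_init_cdev() and device_cdev_sysfs_add() above separate preparing a cdev/device pair from publishing it, so both the compute and control nodes appear only once the device can actually service them. A hedged kernel-style sketch of that split for a single node (minimal error handling, illustrative names):

#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/slab.h>

static void my_node_release(struct device *dev)
{
	kfree(dev);
}

/* Prepare a cdev/device pair without making it visible to user space. */
static int my_node_init(struct cdev *cdev, struct device **dev,
			const struct file_operations *fops,
			struct class *cls, dev_t devt, const char *name)
{
	cdev_init(cdev, fops);
	cdev->owner = THIS_MODULE;

	*dev = kzalloc(sizeof(**dev), GFP_KERNEL);
	if (!*dev)
		return -ENOMEM;

	device_initialize(*dev);
	(*dev)->devt = devt;
	(*dev)->class = cls;
	(*dev)->release = my_node_release;
	dev_set_name(*dev, "%s", name);
	return 0;
}

/* Later, once the hardware is ready, publish it in one step. */
static int my_node_expose(struct cdev *cdev, struct device *dev)
{
	return cdev_device_add(cdev, dev);	/* char dev + sysfs together */
}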
@@ -9,6 +9,7 @@
#include "include/hw_ip/mmu/mmu_general.h"
#include "include/hw_ip/mmu/mmu_v1_0.h"
#include "include/goya/asic_reg/goya_masks.h"
#include "include/goya/goya_reg_map.h"

#include <linux/pci.h>
#include <linux/genalloc.h>

@@ -41,8 +42,8 @@
* PQ, CQ and CP are not secured.
* PQ, CB and the data are on the SRAM/DRAM.
*
* Since QMAN DMA is secured, KMD is parsing the DMA CB:
* - KMD checks DMA pointer
* Since QMAN DMA is secured, the driver is parsing the DMA CB:
* - checks DMA pointer
* - WREG, MSG_PROT are not allowed.
* - MSG_LONG/SHORT are allowed.
*

@@ -55,15 +56,15 @@
* QMAN DMA: PQ, CQ and CP are secured.
* MMU is set to bypass on the Secure props register of the QMAN.
* The reasons we don't enable MMU for PQ, CQ and CP are:
* - PQ entry is in kernel address space and KMD doesn't map it.
* - PQ entry is in kernel address space and the driver doesn't map it.
* - CP writes to MSIX register and to kernel address space (completion
* queue).
*
* DMA is not secured but because CP is secured, KMD still needs to parse the
* CB, but doesn't need to check the DMA addresses.
* DMA is not secured but because CP is secured, the driver still needs to parse
* the CB, but doesn't need to check the DMA addresses.
*
* For QMAN DMA 0, DMA is also secured because only KMD uses this DMA and KMD
* doesn't map memory in MMU.
* For QMAN DMA 0, DMA is also secured because only the driver uses this DMA and
* the driver doesn't map memory in MMU.
*
* QMAN TPC/MME: PQ, CQ and CP aren't secured (no change from MMU disabled mode)
*

@@ -335,18 +336,18 @@ void goya_get_fixed_properties(struct hl_device *hdev)

for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++) {
prop->hw_queues_props[i].type = QUEUE_TYPE_EXT;
prop->hw_queues_props[i].kmd_only = 0;
prop->hw_queues_props[i].driver_only = 0;
}

for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES ; i++) {
prop->hw_queues_props[i].type = QUEUE_TYPE_CPU;
prop->hw_queues_props[i].kmd_only = 1;
prop->hw_queues_props[i].driver_only = 1;
}

for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES +
NUMBER_OF_INT_HW_QUEUES; i++) {
prop->hw_queues_props[i].type = QUEUE_TYPE_INT;
prop->hw_queues_props[i].kmd_only = 0;
prop->hw_queues_props[i].driver_only = 0;
}

for (; i < HL_MAX_QUEUES; i++)

@@ -1006,36 +1007,34 @@ int goya_init_cpu_queues(struct hl_device *hdev)

eq = &hdev->event_queue;

WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_0,
lower_32_bits(cpu_pq->bus_address));
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_1,
upper_32_bits(cpu_pq->bus_address));
WREG32(mmCPU_PQ_BASE_ADDR_LOW, lower_32_bits(cpu_pq->bus_address));
WREG32(mmCPU_PQ_BASE_ADDR_HIGH, upper_32_bits(cpu_pq->bus_address));

WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_2, lower_32_bits(eq->bus_address));
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_3, upper_32_bits(eq->bus_address));
WREG32(mmCPU_EQ_BASE_ADDR_LOW, lower_32_bits(eq->bus_address));
WREG32(mmCPU_EQ_BASE_ADDR_HIGH, upper_32_bits(eq->bus_address));

WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_8,
WREG32(mmCPU_CQ_BASE_ADDR_LOW,
lower_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR));
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_9,
WREG32(mmCPU_CQ_BASE_ADDR_HIGH,
upper_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR));

WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_5, HL_QUEUE_SIZE_IN_BYTES);
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_4, HL_EQ_SIZE_IN_BYTES);
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_10, HL_CPU_ACCESSIBLE_MEM_SIZE);
WREG32(mmCPU_PQ_LENGTH, HL_QUEUE_SIZE_IN_BYTES);
WREG32(mmCPU_EQ_LENGTH, HL_EQ_SIZE_IN_BYTES);
WREG32(mmCPU_CQ_LENGTH, HL_CPU_ACCESSIBLE_MEM_SIZE);

/* Used for EQ CI */
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, 0);
WREG32(mmCPU_EQ_CI, 0);

WREG32(mmCPU_IF_PF_PQ_PI, 0);

WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_7, PQ_INIT_STATUS_READY_FOR_CP);
WREG32(mmCPU_PQ_INIT_STATUS, PQ_INIT_STATUS_READY_FOR_CP);

WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR,
GOYA_ASYNC_EVENT_ID_PI_UPDATE);

err = hl_poll_timeout(
hdev,
mmPSOC_GLOBAL_CONF_SCRATCHPAD_7,
mmCPU_PQ_INIT_STATUS,
status,
(status == PQ_INIT_STATUS_READY_FOR_HOST),
1000,

@@ -2063,6 +2062,25 @@ static void goya_disable_msix(struct hl_device *hdev)
goya->hw_cap_initialized &= ~HW_CAP_MSIX;
}

static void goya_enable_timestamp(struct hl_device *hdev)
{
/* Disable the timestamp counter */
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0);

/* Zero the lower/upper parts of the 64-bit counter */
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0xC, 0);
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0x8, 0);

/* Enable the counter */
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 1);
}

static void goya_disable_timestamp(struct hl_device *hdev)
{
/* Disable the timestamp counter */
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0);
}

static void goya_halt_engines(struct hl_device *hdev, bool hard_reset)
{
u32 wait_timeout_ms, cpu_timeout_ms;

@@ -2103,6 +2121,8 @@ static void goya_halt_engines(struct hl_device *hdev, bool hard_reset)
goya_disable_external_queues(hdev);
goya_disable_internal_queues(hdev);

goya_disable_timestamp(hdev);

if (hard_reset) {
goya_disable_msix(hdev);
goya_mmu_remove_device_cpu_mappings(hdev);

@@ -2205,12 +2225,12 @@ static void goya_read_device_fw_version(struct hl_device *hdev,

switch (fwc) {
case FW_COMP_UBOOT:
ver_off = RREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_29);
ver_off = RREG32(mmUBOOT_VER_OFFSET);
dest = hdev->asic_prop.uboot_ver;
name = "U-Boot";
break;
case FW_COMP_PREBOOT:
ver_off = RREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_28);
ver_off = RREG32(mmPREBOOT_VER_OFFSET);
dest = hdev->asic_prop.preboot_ver;
name = "Preboot";
break;

@@ -2469,7 +2489,7 @@ static int goya_hw_init(struct hl_device *hdev)
* we need to reset the chip before doing H/W init. This register is
* cleared by the H/W upon H/W reset
*/
WREG32(mmPSOC_GLOBAL_CONF_APP_STATUS, HL_DEVICE_HW_STATE_DIRTY);
WREG32(mmHW_STATE, HL_DEVICE_HW_STATE_DIRTY);

rc = goya_init_cpu(hdev, GOYA_CPU_TIMEOUT_USEC);
if (rc) {

@@ -2505,6 +2525,8 @@ static int goya_hw_init(struct hl_device *hdev)

goya_init_tpc_qmans(hdev);

goya_enable_timestamp(hdev);

/* MSI-X must be enabled before CPU queues are initialized */
rc = goya_enable_msix(hdev);
if (rc)

@@ -2831,7 +2853,7 @@ static int goya_send_job_on_qman0(struct hl_device *hdev, struct hl_cs_job *job)

if (!hdev->asic_funcs->is_device_idle(hdev, NULL, NULL)) {
dev_err_ratelimited(hdev->dev,
"Can't send KMD job on QMAN0 because the device is not idle\n");
"Can't send driver job on QMAN0 because the device is not idle\n");
return -EBUSY;
}

@@ -3949,7 +3971,7 @@ void goya_add_end_of_cb_packets(struct hl_device *hdev, u64 kernel_address,

void goya_update_eq_ci(struct hl_device *hdev, u32 val)
{
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, val);
WREG32(mmCPU_EQ_CI, val);
}

void goya_restore_phase_topology(struct hl_device *hdev)

@@ -4447,6 +4469,7 @@ void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
struct goya_device *goya = hdev->asic_specific;

goya->events_stat[event_type]++;
goya->events_stat_aggregate[event_type]++;

switch (event_type) {
case GOYA_ASYNC_EVENT_ID_PCIE_IF:

@@ -4528,12 +4551,16 @@ void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
}
}

void *goya_get_events_stat(struct hl_device *hdev, u32 *size)
void *goya_get_events_stat(struct hl_device *hdev, bool aggregate, u32 *size)
{
struct goya_device *goya = hdev->asic_specific;

*size = (u32) sizeof(goya->events_stat);
if (aggregate) {
*size = (u32) sizeof(goya->events_stat_aggregate);
return goya->events_stat_aggregate;
}

*size = (u32) sizeof(goya->events_stat);
return goya->events_stat;
}

@@ -4934,6 +4961,10 @@ int goya_armcp_info_get(struct hl_device *hdev)
prop->dram_end_address = prop->dram_base_address + dram_size;
}

if (!strlen(prop->armcp_info.card_name))
strncpy(prop->armcp_info.card_name, GOYA_DEFAULT_CARD_NAME,
CARD_NAME_MAX_LEN);

return 0;
}

@@ -5047,7 +5078,7 @@ static int goya_get_eeprom_data(struct hl_device *hdev, void *data,

static enum hl_device_hw_state goya_get_hw_state(struct hl_device *hdev)
{
return RREG32(mmPSOC_GLOBAL_CONF_APP_STATUS);
return RREG32(mmHW_STATE);
}

static const struct hl_asic_funcs goya_funcs = {
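The goya.c hunks above swap raw PSOC scratch-pad register numbers for descriptive aliases supplied by the newly included goya_reg_map.h. The exact header is not part of this diff; the sketch below reconstructs the mapping purely from the old/new register pairs visible above:

/* Descriptive aliases for PSOC scratch-pad registers (reconstructed). */
#define mmCPU_PQ_BASE_ADDR_LOW	mmPSOC_GLOBAL_CONF_SCRATCHPAD_0
#define mmCPU_PQ_BASE_ADDR_HIGH	mmPSOC_GLOBAL_CONF_SCRATCHPAD_1
#define mmCPU_EQ_BASE_ADDR_LOW	mmPSOC_GLOBAL_CONF_SCRATCHPAD_2
#define mmCPU_EQ_BASE_ADDR_HIGH	mmPSOC_GLOBAL_CONF_SCRATCHPAD_3
#define mmCPU_EQ_LENGTH		mmPSOC_GLOBAL_CONF_SCRATCHPAD_4
#define mmCPU_PQ_LENGTH		mmPSOC_GLOBAL_CONF_SCRATCHPAD_5
#define mmCPU_EQ_CI		mmPSOC_GLOBAL_CONF_SCRATCHPAD_6
#define mmCPU_PQ_INIT_STATUS	mmPSOC_GLOBAL_CONF_SCRATCHPAD_7
#define mmCPU_CQ_BASE_ADDR_LOW	mmPSOC_GLOBAL_CONF_SCRATCHPAD_8
#define mmCPU_CQ_BASE_ADDR_HIGH	mmPSOC_GLOBAL_CONF_SCRATCHPAD_9
#define mmCPU_CQ_LENGTH		mmPSOC_GLOBAL_CONF_SCRATCHPAD_10
#define mmPREBOOT_VER_OFFSET	mmPSOC_GLOBAL_CONF_SCRATCHPAD_28
#define mmUBOOT_VER_OFFSET	mmPSOC_GLOBAL_CONF_SCRATCHPAD_29
#define mmHW_STATE		mmPSOC_GLOBAL_CONF_APP_STATUS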
@@ -55,6 +55,8 @@

#define DRAM_PHYS_DEFAULT_SIZE 0x100000000ull /* 4GB */

#define GOYA_DEFAULT_CARD_NAME "HL1000"

/* DRAM Memory Map */

#define CPU_FW_IMAGE_SIZE 0x10000000 /* 256MB */

@@ -68,19 +70,19 @@
MMU_PAGE_TABLES_SIZE)
#define MMU_CACHE_MNG_ADDR (MMU_DRAM_DEFAULT_PAGE_ADDR + \
MMU_DRAM_DEFAULT_PAGE_SIZE)
#define DRAM_KMD_END_ADDR (MMU_CACHE_MNG_ADDR + \
#define DRAM_DRIVER_END_ADDR (MMU_CACHE_MNG_ADDR + \
MMU_CACHE_MNG_SIZE)

#define DRAM_BASE_ADDR_USER 0x20000000

#if (DRAM_KMD_END_ADDR > DRAM_BASE_ADDR_USER)
#error "KMD must reserve no more than 512MB"
#if (DRAM_DRIVER_END_ADDR > DRAM_BASE_ADDR_USER)
#error "Driver must reserve no more than 512MB"
#endif

/*
* SRAM Memory Map for KMD
* SRAM Memory Map for Driver
*
* KMD occupies KMD_SRAM_SIZE bytes from the start of SRAM. It is used for
* Driver occupies DRIVER_SRAM_SIZE bytes from the start of SRAM. It is used for
* MME/TPC QMANs
*
*/

@@ -106,10 +108,10 @@
#define TPC7_QMAN_BASE_OFFSET (TPC6_QMAN_BASE_OFFSET + \
(TPC_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE))

#define SRAM_KMD_RES_OFFSET (TPC7_QMAN_BASE_OFFSET + \
#define SRAM_DRIVER_RES_OFFSET (TPC7_QMAN_BASE_OFFSET + \
(TPC_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE))

#if (SRAM_KMD_RES_OFFSET >= GOYA_KMD_SRAM_RESERVED_SIZE_FROM_START)
#if (SRAM_DRIVER_RES_OFFSET >= GOYA_KMD_SRAM_RESERVED_SIZE_FROM_START)
#error "MME/TPC QMANs SRAM space exceeds limit"
#endif

@@ -162,6 +164,7 @@ struct goya_device {

u64 ddr_bar_cur_addr;
u32 events_stat[GOYA_ASYNC_EVENT_ID_SIZE];
u32 events_stat_aggregate[GOYA_ASYNC_EVENT_ID_SIZE];
u32 hw_cap_initialized;
u8 device_cpu_mmu_mappings_done;
};

@@ -215,7 +218,7 @@ int goya_suspend(struct hl_device *hdev);
int goya_resume(struct hl_device *hdev);

void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry);
void *goya_get_events_stat(struct hl_device *hdev, u32 *size);
void *goya_get_events_stat(struct hl_device *hdev, bool aggregate, u32 *size);

void goya_add_end_of_cb_packets(struct hl_device *hdev, u64 kernel_address,
u32 len, u64 cq_addr, u32 cq_val, u32 msix_vec);
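goyaP.h enforces its reservation limits at compile time with #if/#error guards such as the DRAM_DRIVER_END_ADDR check above. Inside a function, the kernel's BUILD_BUG_ON() expresses the same idea; a small sketch with illustrative values:

#include <linux/build_bug.h>

#define RESERVED_END	0x18000000UL	/* illustrative values only */
#define USER_BASE	0x20000000UL

static inline void check_memory_map(void)
{
	/* Fails the build, not the boot, if the map ever overlaps. */
	BUILD_BUG_ON(RESERVED_END > USER_BASE);
}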
@@ -15,6 +15,10 @@

#define GOYA_PLDM_CORESIGHT_TIMEOUT_USEC (CORESIGHT_TIMEOUT_USEC * 100)

#define SPMU_SECTION_SIZE DMA_CH_0_CS_SPMU_MAX_OFFSET
#define SPMU_EVENT_TYPES_OFFSET 0x400
#define SPMU_MAX_COUNTERS 6

static u64 debug_stm_regs[GOYA_STM_LAST + 1] = {
[GOYA_STM_CPU] = mmCPU_STM_BASE,
[GOYA_STM_DMA_CH_0_CS] = mmDMA_CH_0_CS_STM_BASE,

@@ -226,9 +230,16 @@ static int goya_config_stm(struct hl_device *hdev,
struct hl_debug_params *params)
{
struct hl_debug_params_stm *input;
u64 base_reg = debug_stm_regs[params->reg_idx] - CFG_BASE;
u64 base_reg;
int rc;

if (params->reg_idx >= ARRAY_SIZE(debug_stm_regs)) {
dev_err(hdev->dev, "Invalid register index in STM\n");
return -EINVAL;
}

base_reg = debug_stm_regs[params->reg_idx] - CFG_BASE;

WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);

if (params->enable) {

@@ -288,10 +299,17 @@ static int goya_config_etf(struct hl_device *hdev,
struct hl_debug_params *params)
{
struct hl_debug_params_etf *input;
u64 base_reg = debug_etf_regs[params->reg_idx] - CFG_BASE;
u64 base_reg;
u32 val;
int rc;

if (params->reg_idx >= ARRAY_SIZE(debug_etf_regs)) {
dev_err(hdev->dev, "Invalid register index in ETF\n");
return -EINVAL;
}

base_reg = debug_etf_regs[params->reg_idx] - CFG_BASE;

WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);

val = RREG32(base_reg + 0x304);

@@ -445,11 +463,18 @@ static int goya_config_etr(struct hl_device *hdev,
static int goya_config_funnel(struct hl_device *hdev,
struct hl_debug_params *params)
{
WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE + 0xFB0,
CORESIGHT_UNLOCK);
u64 base_reg;

WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE,
params->enable ? 0x33F : 0);
if (params->reg_idx >= ARRAY_SIZE(debug_funnel_regs)) {
dev_err(hdev->dev, "Invalid register index in FUNNEL\n");
return -EINVAL;
}

base_reg = debug_funnel_regs[params->reg_idx] - CFG_BASE;

WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);

WREG32(base_reg, params->enable ? 0x33F : 0);

return 0;
}

@@ -458,9 +483,16 @@ static int goya_config_bmon(struct hl_device *hdev,
struct hl_debug_params *params)
{
struct hl_debug_params_bmon *input;
u64 base_reg = debug_bmon_regs[params->reg_idx] - CFG_BASE;
u64 base_reg;
u32 pcie_base = 0;

if (params->reg_idx >= ARRAY_SIZE(debug_bmon_regs)) {
dev_err(hdev->dev, "Invalid register index in BMON\n");
return -EINVAL;
}

base_reg = debug_bmon_regs[params->reg_idx] - CFG_BASE;

WREG32(base_reg + 0x104, 1);

if (params->enable) {

@@ -522,7 +554,7 @@ static int goya_config_bmon(struct hl_device *hdev,
static int goya_config_spmu(struct hl_device *hdev,
struct hl_debug_params *params)
{
u64 base_reg = debug_spmu_regs[params->reg_idx] - CFG_BASE;
u64 base_reg;
struct hl_debug_params_spmu *input = params->input;
u64 *output;
u32 output_arr_len;

@@ -531,6 +563,13 @@ static int goya_config_spmu(struct hl_device *hdev,
u32 cycle_cnt_idx;
int i;

if (params->reg_idx >= ARRAY_SIZE(debug_spmu_regs)) {
dev_err(hdev->dev, "Invalid register index in SPMU\n");
return -EINVAL;
}

base_reg = debug_spmu_regs[params->reg_idx] - CFG_BASE;

if (params->enable) {
input = params->input;

@@ -539,7 +578,13 @@ static int goya_config_spmu(struct hl_device *hdev,

if (input->event_types_num < 3) {
dev_err(hdev->dev,
"not enough values for SPMU enable\n");
"not enough event types values for SPMU enable\n");
return -EINVAL;
}

if (input->event_types_num > SPMU_MAX_COUNTERS) {
dev_err(hdev->dev,
"too many event types values for SPMU enable\n");
return -EINVAL;
}

@@ -547,7 +592,8 @@ static int goya_config_spmu(struct hl_device *hdev,
WREG32(base_reg + 0xE04, 0x41013040);

for (i = 0 ; i < input->event_types_num ; i++)
WREG32(base_reg + 0x400 + i * 4, input->event_types[i]);
WREG32(base_reg + SPMU_EVENT_TYPES_OFFSET + i * 4,
input->event_types[i]);

WREG32(base_reg + 0xE04, 0x41013041);
WREG32(base_reg + 0xC00, 0x8000003F);

@@ -567,6 +613,12 @@ static int goya_config_spmu(struct hl_device *hdev,
return -EINVAL;
}

if (events_num > SPMU_MAX_COUNTERS) {
dev_err(hdev->dev,
"too many events values for SPMU disable\n");
return -EINVAL;
}

WREG32(base_reg + 0xE04, 0x41013040);

for (i = 0 ; i < events_num ; i++)

@@ -584,24 +636,11 @@ static int goya_config_spmu(struct hl_device *hdev,
return 0;
}

static int goya_config_timestamp(struct hl_device *hdev,
struct hl_debug_params *params)
{
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0);
if (params->enable) {
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0xC, 0);
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0x8, 0);
WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 1);
}

return 0;
}

int goya_debug_coresight(struct hl_device *hdev, void *data)
{
struct hl_debug_params *params = data;
u32 val;
int rc;
int rc = 0;

switch (params->op) {
case HL_DEBUG_OP_STM:

@@ -623,7 +662,7 @@ int goya_debug_coresight(struct hl_device *hdev, void *data)
rc = goya_config_spmu(hdev, params);
break;
case HL_DEBUG_OP_TIMESTAMP:
rc = goya_config_timestamp(hdev, params);
/* Do nothing as this opcode is deprecated */
break;

default:
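Each goya_config_*() helper above now validates the user-supplied reg_idx against ARRAY_SIZE() of its register table before indexing it. A stand-alone sketch of the pattern:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const unsigned long regs[] = { 0x1000, 0x2000, 0x3000 };

/* Reject out-of-range indices before they reach the table. */
static int lookup_reg(unsigned int idx, unsigned long *out)
{
	if (idx >= ARRAY_SIZE(regs))
		return -1;		/* -EINVAL in the driver */

	*out = regs[idx];
	return 0;
}

int main(void)
{
	unsigned long reg;

	printf("idx 1: %d\n", lookup_reg(1, &reg));	/* ok */
	printf("idx 9: %d\n", lookup_reg(9, &reg));	/* rejected */
	return 0;
}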
@@ -230,18 +230,127 @@ static ssize_t ic_clk_curr_show(struct device *dev,
return sprintf(buf, "%lu\n", value);
}

static ssize_t pm_mng_profile_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct hl_device *hdev = dev_get_drvdata(dev);

if (hl_device_disabled_or_in_reset(hdev))
return -ENODEV;

return sprintf(buf, "%s\n",
(hdev->pm_mng_profile == PM_AUTO) ? "auto" :
(hdev->pm_mng_profile == PM_MANUAL) ? "manual" :
"unknown");
}

static ssize_t pm_mng_profile_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct hl_device *hdev = dev_get_drvdata(dev);

if (hl_device_disabled_or_in_reset(hdev)) {
count = -ENODEV;
goto out;
}

mutex_lock(&hdev->fpriv_list_lock);

if (hdev->compute_ctx) {
dev_err(hdev->dev,
"Can't change PM profile while compute context is opened on the device\n");
count = -EPERM;
goto unlock_mutex;
}

if (strncmp("auto", buf, strlen("auto")) == 0) {
/* Make sure we are in LOW PLL when changing modes */
if (hdev->pm_mng_profile == PM_MANUAL) {
hdev->curr_pll_profile = PLL_HIGH;
hl_device_set_frequency(hdev, PLL_LOW);
hdev->pm_mng_profile = PM_AUTO;
}
} else if (strncmp("manual", buf, strlen("manual")) == 0) {
if (hdev->pm_mng_profile == PM_AUTO) {
/* Must release the lock because the work thread also
* takes this lock. But before we release it, set
* the mode to manual so nothing will change if a user
* suddenly opens the device
*/
hdev->pm_mng_profile = PM_MANUAL;

mutex_unlock(&hdev->fpriv_list_lock);

/* Flush the current work so we can return to the user
* knowing that he is the only one changing frequencies
*/
flush_delayed_work(&hdev->work_freq);

return count;
}
} else {
dev_err(hdev->dev, "value should be auto or manual\n");
count = -EINVAL;
}

unlock_mutex:
mutex_unlock(&hdev->fpriv_list_lock);
out:
return count;
}

static ssize_t high_pll_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct hl_device *hdev = dev_get_drvdata(dev);

if (hl_device_disabled_or_in_reset(hdev))
return -ENODEV;

return sprintf(buf, "%u\n", hdev->high_pll);
}

static ssize_t high_pll_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct hl_device *hdev = dev_get_drvdata(dev);
long value;
int rc;

if (hl_device_disabled_or_in_reset(hdev)) {
count = -ENODEV;
goto out;
}

rc = kstrtoul(buf, 0, &value);

if (rc) {
count = -EINVAL;
goto out;
}

hdev->high_pll = value;

out:
return count;
}

static DEVICE_ATTR_RW(high_pll);
static DEVICE_ATTR_RW(ic_clk);
static DEVICE_ATTR_RO(ic_clk_curr);
static DEVICE_ATTR_RW(mme_clk);
static DEVICE_ATTR_RO(mme_clk_curr);
static DEVICE_ATTR_RW(pm_mng_profile);
static DEVICE_ATTR_RW(tpc_clk);
static DEVICE_ATTR_RO(tpc_clk_curr);

static struct attribute *goya_dev_attrs[] = {
&dev_attr_high_pll.attr,
&dev_attr_ic_clk.attr,
&dev_attr_ic_clk_curr.attr,
&dev_attr_mme_clk.attr,
&dev_attr_mme_clk_curr.attr,
&dev_attr_pm_mng_profile.attr,
&dev_attr_tpc_clk.attr,
&dev_attr_tpc_clk_curr.attr,
NULL,
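pm_mng_profile_store() above matches the written value with strncmp(), which also accepts any input that merely begins with "auto" or "manual". The kernel's sysfs_streq() is the usual helper for this, since it tolerates the trailing newline that echo appends; a hedged sketch of the comparison alone, not the driver's actual handler:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative store-side parsing for an "auto"/"manual" attribute. */
static int parse_pm_profile(const char *buf, bool *manual)
{
	if (sysfs_streq(buf, "auto"))
		*manual = false;
	else if (sysfs_streq(buf, "manual"))
		*manual = true;
	else
		return -EINVAL;

	return 0;
}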
@ -36,6 +36,8 @@
|
|||
|
||||
#define HL_PCI_ELBI_TIMEOUT_MSEC 10 /* 10ms */
|
||||
|
||||
#define HL_SIM_MAX_TIMEOUT_US 10000000 /* 10s */
|
||||
|
||||
#define HL_MAX_QUEUES 128
|
||||
|
||||
#define HL_MAX_JOBS_PER_CS 64
|
||||
|
@ -43,6 +45,8 @@
|
|||
/* MUST BE POWER OF 2 and larger than 1 */
|
||||
#define HL_MAX_PENDING_CS 64
|
||||
|
||||
#define HL_IDLE_BUSY_TS_ARR_SIZE 4096
|
||||
|
||||
/* Memory */
|
||||
#define MEM_HASH_TABLE_BITS 7 /* 1 << 7 buckets */
|
||||
|
||||
|
@ -92,12 +96,12 @@ enum hl_queue_type {
|
|||
/**
|
||||
* struct hw_queue_properties - queue information.
|
||||
* @type: queue type.
|
||||
* @kmd_only: true if only KMD is allowed to send a job to this queue, false
|
||||
* otherwise.
|
||||
* @driver_only: true if only the driver is allowed to send a job to this queue,
|
||||
* false otherwise.
|
||||
*/
|
||||
struct hw_queue_properties {
|
||||
enum hl_queue_type type;
|
||||
u8 kmd_only;
|
||||
u8 driver_only;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -320,7 +324,7 @@ struct hl_cs_job;
|
|||
#define HL_EQ_LENGTH 64
|
||||
#define HL_EQ_SIZE_IN_BYTES (HL_EQ_LENGTH * HL_EQ_ENTRY_SIZE)
|
||||
|
||||
/* KMD <-> ArmCP shared memory size */
|
||||
/* Host <-> ArmCP shared memory size */
|
||||
#define HL_CPU_ACCESSIBLE_MEM_SIZE SZ_2M
|
||||
|
||||
/**
|
||||
|
@ -401,7 +405,7 @@ struct hl_cs_parser;
|
|||
|
||||
/**
|
||||
* enum hl_pm_mng_profile - power management profile.
|
||||
* @PM_AUTO: internal clock is set by KMD.
|
||||
* @PM_AUTO: internal clock is set by the Linux driver.
|
||||
* @PM_MANUAL: internal clock is set by the user.
|
||||
* @PM_LAST: last power management type.
|
||||
*/
|
||||
|
@ -554,7 +558,8 @@ struct hl_asic_funcs {
|
|||
struct hl_eq_entry *eq_entry);
|
||||
void (*set_pll_profile)(struct hl_device *hdev,
|
||||
enum hl_pll_frequency freq);
|
||||
void* (*get_events_stat)(struct hl_device *hdev, u32 *size);
|
||||
void* (*get_events_stat)(struct hl_device *hdev, bool aggregate,
|
||||
u32 *size);
|
||||
u64 (*read_pte)(struct hl_device *hdev, u64 addr);
|
||||
void (*write_pte)(struct hl_device *hdev, u64 addr, u64 val);
|
||||
void (*mmu_invalidate_cache)(struct hl_device *hdev, bool is_hard);
|
||||
|
@ -608,7 +613,7 @@ struct hl_va_range {
|
|||
* descriptor (hl_vm_phys_pg_list or hl_userptr).
|
||||
* @mmu_phys_hash: holds a mapping from physical address to pgt_info structure.
|
||||
* @mmu_shadow_hash: holds a mapping from shadow address to pgt_info structure.
|
||||
* @hpriv: pointer to the private (KMD) data of the process (fd).
|
||||
* @hpriv: pointer to the private (Kernel Driver) data of the process (fd).
|
||||
* @hdev: pointer to the device structure.
|
||||
* @refcount: reference counter for the context. Context is released only when
|
||||
* this hits 0l. It is incremented on CS and CS_WAIT.
|
||||
|
@ -634,6 +639,7 @@ struct hl_va_range {
|
|||
* execution phase before the context switch phase
|
||||
* has finished.
|
||||
* @asid: context's unique address space ID in the device's MMU.
|
||||
* @handle: context's opaque handle for user
|
||||
*/
|
||||
struct hl_ctx {
|
||||
DECLARE_HASHTABLE(mem_hash, MEM_HASH_TABLE_BITS);
|
||||
|
@ -655,6 +661,7 @@ struct hl_ctx {
|
|||
atomic_t thread_ctx_switch_token;
|
||||
u32 thread_ctx_switch_wait_token;
|
||||
u32 asid;
|
||||
u32 handle;
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -906,23 +913,27 @@ struct hl_debug_params {
|
|||
* @hdev: habanalabs device structure.
|
||||
* @filp: pointer to the given file structure.
|
||||
* @taskpid: current process ID.
|
||||
* @ctx: current executing context.
|
||||
* @ctx: current executing context. TODO: remove for multiple ctx per process
|
||||
* @ctx_mgr: context manager to handle multiple context for this FD.
|
||||
* @cb_mgr: command buffer manager to handle multiple buffers for this FD.
|
||||
* @debugfs_list: list of relevant ASIC debugfs.
|
||||
* @dev_node: node in the device list of file private data
|
||||
* @refcount: number of related contexts.
|
||||
* @restore_phase_mutex: lock for context switch and restore phase.
|
||||
* @is_control: true for control device, false otherwise
|
||||
*/
|
||||
struct hl_fpriv {
|
||||
struct hl_device *hdev;
|
||||
struct file *filp;
|
||||
struct pid *taskpid;
|
||||
struct hl_ctx *ctx; /* TODO: remove for multiple ctx */
|
||||
struct hl_ctx *ctx;
|
||||
struct hl_ctx_mgr ctx_mgr;
|
||||
struct hl_cb_mgr cb_mgr;
|
||||
struct list_head debugfs_list;
|
||||
struct list_head dev_node;
|
||||
struct kref refcount;
|
||||
struct mutex restore_phase_mutex;
|
||||
u8 is_control;
|
||||
};
|
||||
|
||||
|
||||
|
@@ -1009,7 +1020,7 @@ struct hl_dbg_device_entry {
  */
 
 /* Theoretical limit only. A single host can only contain up to 4 or 8 PCIe
- * x16 cards. In extereme cases, there are hosts that can accommodate 16 cards
+ * x16 cards. In extreme cases, there are hosts that can accommodate 16 cards.
  */
 #define HL_MAX_MINORS	256
 
@@ -1041,14 +1052,18 @@ void hl_wreg(struct hl_device *hdev, u32 reg, u32 val);
 	WREG32(mm##reg, (RREG32(mm##reg) & ~REG_FIELD_MASK(reg, field)) | \
 			(val) << REG_FIELD_SHIFT(reg, field))
 
+/* Timeout should be longer when working with simulator but cap the
+ * increased timeout to some maximum
+ */
 #define hl_poll_timeout(hdev, addr, val, cond, sleep_us, timeout_us) \
 ({ \
 	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
 	if (hdev->pdev) \
 		__timeout = ktime_add_us(ktime_get(), timeout_us); \
 	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
 	might_sleep_if(sleep_us); \
 	for (;;) { \
 		(val) = RREG32(addr); \
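
The three polling macros in this file all gain the same change: the simulator
path still stretches the caller's timeout tenfold, but now clamps it to
HL_SIM_MAX_TIMEOUT_US so a large timeout_us cannot become an effectively
unbounded wait. A standalone sketch of just that computation; the constant's
value below is an assumption, the real one is defined elsewhere in the
driver:

    #include <stdint.h>

    #define HL_SIM_MAX_TIMEOUT_US   (10 * 1000 * 1000ULL)  /* assumed: 10 s */

    /* Effective polling budget in microseconds, mirroring the macros
     * above: real hardware (has_pci_dev) uses the caller's value
     * unchanged, the simulator gets 10x but never more than the cap.
     */
    static uint64_t effective_timeout_us(int has_pci_dev, uint64_t timeout_us)
    {
            uint64_t sim_us = timeout_us * 10;

            if (has_pci_dev)
                    return timeout_us;

            return sim_us < HL_SIM_MAX_TIMEOUT_US ?
                   sim_us : HL_SIM_MAX_TIMEOUT_US;
    }
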
@@ -1080,24 +1095,25 @@ void hl_wreg(struct hl_device *hdev, u32 reg, u32 val);
 				mem_written_by_device) \
 ({ \
 	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
 	if (hdev->pdev) \
 		__timeout = ktime_add_us(ktime_get(), timeout_us); \
 	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
 	might_sleep_if(sleep_us); \
 	for (;;) { \
 		/* Verify we read updates done by other cores or by device */ \
 		mb(); \
 		(val) = *((u32 *) (uintptr_t) (addr)); \
 		if (mem_written_by_device) \
-			(val) = le32_to_cpu(val); \
+			(val) = le32_to_cpu(*(__le32 *) &(val)); \
 		if (cond) \
 			break; \
 		if (timeout_us && ktime_compare(ktime_get(), __timeout) > 0) { \
 			(val) = *((u32 *) (uintptr_t) (addr)); \
 			if (mem_written_by_device) \
-				(val) = le32_to_cpu(val); \
+				(val) = le32_to_cpu(*(__le32 *) &(val)); \
 			break; \
 		} \
 		if (sleep_us) \
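
The le32_to_cpu() change in this hunk is a type-correctness fix:
le32_to_cpu() takes a __le32, so the raw bits are now re-read through a
__le32 pointer instead of passing an already-native u32, which keeps sparse
quiet without changing the generated code on little-endian hosts. The
portable idea behind the conversion, sketched in plain C:

    #include <stdint.h>

    /* Read a little-endian 32-bit value from device-written memory and
     * return it in host byte order. Assembling from bytes is correct on
     * both little- and big-endian hosts; this mirrors what le32_to_cpu()
     * does in the kernel, minus the sparse type annotations.
     */
    static uint32_t le32_to_host(const void *p)
    {
            const uint8_t *b = (const uint8_t *)p;

            return (uint32_t)b[0] |
                   ((uint32_t)b[1] << 8) |
                   ((uint32_t)b[2] << 16) |
                   ((uint32_t)b[3] << 24);
    }
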
@@ -1110,11 +1126,12 @@ void hl_wreg(struct hl_device *hdev, u32 reg, u32 val);
 				timeout_us) \
 ({ \
 	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
 	if (hdev->pdev) \
 		__timeout = ktime_add_us(ktime_get(), timeout_us); \
 	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
 	might_sleep_if(sleep_us); \
 	for (;;) { \
 		(val) = readl(addr); \
@@ -1142,13 +1159,25 @@ struct hl_device_reset_work {
 	struct hl_device *hdev;
 };
 
+/**
+ * struct hl_device_idle_busy_ts - used for calculating device utilization rate.
+ * @idle_to_busy_ts: timestamp where device changed from idle to busy.
+ * @busy_to_idle_ts: timestamp where device changed from busy to idle.
+ */
+struct hl_device_idle_busy_ts {
+	ktime_t				idle_to_busy_ts;
+	ktime_t				busy_to_idle_ts;
+};
+
 /**
  * struct hl_device - habanalabs device structure.
  * @pdev: pointer to PCI device, can be NULL in case of simulator device.
  * @pcie_bar: array of available PCIe bars.
  * @rmmio: configuration area address on SRAM.
  * @cdev: related char device.
- * @dev: realted kernel basic device structure.
+ * @cdev_ctrl: char device for control operations only (INFO IOCTL)
+ * @dev: related kernel basic device structure.
+ * @dev_ctrl: related kernel device structure for the control device
  * @work_freq: delayed work to lower device frequency if possible.
  * @work_heartbeat: delayed work for ArmCP is-alive check.
  * @asic_name: ASIC specific nmae.
@@ -1156,25 +1185,19 @@ struct hl_device_reset_work {
  * @completion_queue: array of hl_cq.
  * @cq_wq: work queue of completion queues for executing work in process context
  * @eq_wq: work queue of event queue for executing work in process context.
- * @kernel_ctx: KMD context structure.
+ * @kernel_ctx: Kernel driver context structure.
  * @kernel_queues: array of hl_hw_queue.
  * @hw_queues_mirror_list: CS mirror list for TDR.
  * @hw_queues_mirror_lock: protects hw_queues_mirror_list.
  * @kernel_cb_mgr: command buffer manager for creating/destroying/handling CGs.
  * @event_queue: event queue for IRQ from ArmCP.
  * @dma_pool: DMA pool for small allocations.
- * @cpu_accessible_dma_mem: KMD <-> ArmCP shared memory CPU address.
- * @cpu_accessible_dma_address: KMD <-> ArmCP shared memory DMA address.
- * @cpu_accessible_dma_pool: KMD <-> ArmCP shared memory pool.
+ * @cpu_accessible_dma_mem: Host <-> ArmCP shared memory CPU address.
+ * @cpu_accessible_dma_address: Host <-> ArmCP shared memory DMA address.
+ * @cpu_accessible_dma_pool: Host <-> ArmCP shared memory pool.
  * @asid_bitmap: holds used/available ASIDs.
  * @asid_mutex: protects asid_bitmap.
- * @fd_open_cnt_lock: lock for updating fd_open_cnt in hl_device_open. Although
- *                    fd_open_cnt is atomic, we need this lock to serialize
- *                    the open function because the driver currently supports
- *                    only a single process at a time. In addition, we need a
- *                    lock here so we can flush user processes which are opening
- *                    the device while we are trying to hard reset it
- * @send_cpu_message_lock: enforces only one message in KMD <-> ArmCP queue.
+ * @send_cpu_message_lock: enforces only one message in Host <-> ArmCP queue.
  * @debug_lock: protects critical section of setting debug mode for device
  * @asic_prop: ASIC specific immutable properties.
  * @asic_funcs: ASIC specific functions.
@@ -1189,22 +1212,28 @@ struct hl_device_reset_work {
  * @hl_debugfs: device's debugfs manager.
  * @cb_pool: list of preallocated CBs.
  * @cb_pool_lock: protects the CB pool.
- * @user_ctx: current user context executing.
+ * @fpriv_list: list of file private data structures. Each structure is created
+ *              when a user opens the device
+ * @fpriv_list_lock: protects the fpriv_list
+ * @compute_ctx: current compute context executing.
+ * @idle_busy_ts_arr: array to hold time stamps of transitions from idle to busy
+ *                    and vice-versa
  * @dram_used_mem: current DRAM memory consumption.
  * @timeout_jiffies: device CS timeout value.
  * @max_power: the max power of the device, as configured by the sysadmin. This
- *             value is saved so in case of hard-reset, KMD will restore this
- *             value and update the F/W after the re-initialization
+ *             value is saved so in case of hard-reset, the driver will restore
+ *             this value and update the F/W after the re-initialization
  * @in_reset: is device in reset flow.
  * @curr_pll_profile: current PLL profile.
- * @fd_open_cnt: number of open user processes.
  * @cs_active_cnt: number of active command submissions on this device (active
  *                 means already in H/W queues)
- * @major: habanalabs KMD major.
+ * @major: habanalabs kernel driver major.
  * @high_pll: high PLL profile frequency.
- * @soft_reset_cnt: number of soft reset since KMD loading.
- * @hard_reset_cnt: number of hard reset since KMD loading.
+ * @soft_reset_cnt: number of soft reset since the driver was loaded.
+ * @hard_reset_cnt: number of hard reset since the driver was loaded.
+ * @idle_busy_ts_idx: index of current entry in idle_busy_ts_arr
  * @id: device minor.
+ * @id_control: minor of the control device
  * @disabled: is device disabled.
  * @late_init_done: is late init stage was done during initialization.
  * @hwmon_initialized: is H/W monitor sensors was initialized.
@@ -1218,15 +1247,18 @@ struct hl_device_reset_work {
  * @mmu_enable: is MMU enabled.
  * @device_cpu_disabled: is the device CPU disabled (due to timeouts)
  * @dma_mask: the dma mask that was set for this device
- * @in_debug: is device under debug. This, together with fd_open_cnt, enforces
+ * @in_debug: is device under debug. This, together with fpriv_list, enforces
  *            that only a single user is configuring the debug infrastructure.
+ * @cdev_sysfs_created: were char devices and sysfs nodes created.
  */
 struct hl_device {
 	struct pci_dev			*pdev;
 	void __iomem			*pcie_bar[6];
 	void __iomem			*rmmio;
 	struct cdev			cdev;
+	struct cdev			cdev_ctrl;
 	struct device			*dev;
+	struct device			*dev_ctrl;
 	struct delayed_work		work_freq;
 	struct delayed_work		work_heartbeat;
 	char				asic_name[16];
@@ -1246,8 +1278,6 @@ struct hl_device {
 	struct gen_pool			*cpu_accessible_dma_pool;
 	unsigned long			*asid_bitmap;
 	struct mutex			asid_mutex;
-	/* TODO: remove fd_open_cnt_lock for multiple process support */
-	struct mutex			fd_open_cnt_lock;
 	struct mutex			send_cpu_message_lock;
 	struct mutex			debug_lock;
 	struct asic_fixed_properties	asic_prop;
@@ -1266,21 +1296,26 @@ struct hl_device {
 	struct list_head		cb_pool;
 	spinlock_t			cb_pool_lock;
 
-	/* TODO: remove user_ctx for multiple process support */
-	struct hl_ctx			*user_ctx;
+	struct list_head		fpriv_list;
+	struct mutex			fpriv_list_lock;
+
+	struct hl_ctx			*compute_ctx;
+
+	struct hl_device_idle_busy_ts	*idle_busy_ts_arr;
 
 	atomic64_t			dram_used_mem;
 	u64				timeout_jiffies;
 	u64				max_power;
 	atomic_t			in_reset;
-	atomic_t			curr_pll_profile;
-	atomic_t			fd_open_cnt;
-	atomic_t			cs_active_cnt;
+	enum hl_pll_frequency		curr_pll_profile;
+	int				cs_active_cnt;
 	u32				major;
 	u32				high_pll;
 	u32				soft_reset_cnt;
 	u32				hard_reset_cnt;
+	u32				idle_busy_ts_idx;
 	u16				id;
+	u16				id_control;
 	u8				disabled;
 	u8				late_init_done;
 	u8				hwmon_initialized;
@@ -1293,6 +1328,7 @@ struct hl_device {
 	u8				device_cpu_disabled;
 	u8				dma_mask;
 	u8				in_debug;
+	u8				cdev_sysfs_created;
 
 	/* Parameters for bring-up */
 	u8				mmu_enable;
@@ -1386,6 +1422,7 @@ static inline bool hl_mem_area_crosses_range(u64 address, u32 size,
 }
 
 int hl_device_open(struct inode *inode, struct file *filp);
+int hl_device_open_ctrl(struct inode *inode, struct file *filp);
 bool hl_device_disabled_or_in_reset(struct hl_device *hdev);
 enum hl_device_status hl_device_status(struct hl_device *hdev);
 int hl_device_set_debug_mode(struct hl_device *hdev, bool enable);
@@ -1439,6 +1476,7 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
 void hl_hpriv_get(struct hl_fpriv *hpriv);
 void hl_hpriv_put(struct hl_fpriv *hpriv);
 int hl_device_set_frequency(struct hl_device *hdev, enum hl_pll_frequency freq);
+uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms);
 
 int hl_build_hwmon_channel_info(struct hl_device *hdev,
 		struct armcp_sensor *sensors_arr);
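
hl_device_utilization() is the consumer of the hl_device_idle_busy_ts array
introduced earlier in this diff: utilization over the last period_ms falls
out of summing the busy intervals that overlap the window. A hedged sketch
of that computation; the real driver walks its ring buffer with its own
edge-case handling, and every name below apart from the concept is
illustrative:

    #include <stdint.h>

    struct idle_busy_ts {
            int64_t idle_to_busy_ns;   /* device went busy */
            int64_t busy_to_idle_ns;   /* device went idle; 0 = still busy */
    };

    /* Percentage of [now - period_ns, now] the device spent busy */
    static uint32_t utilization_pct(const struct idle_busy_ts *arr, int n,
                                    int64_t now_ns, int64_t period_ns)
    {
            int64_t window_start = now_ns - period_ns;
            int64_t busy_ns = 0;
            int i;

            for (i = 0; i < n; i++) {
                    int64_t start = arr[i].idle_to_busy_ns;
                    int64_t end = arr[i].busy_to_idle_ns ?
                                  arr[i].busy_to_idle_ns : now_ns;

                    if (end < window_start)
                            continue;          /* entirely before the window */
                    if (start < window_start)
                            start = window_start;  /* clip to the window */
                    busy_ns += end - start;
            }

            return (uint32_t)((busy_ns * 100) / period_ns);
    }
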
@@ -1625,6 +1663,7 @@ static inline void hl_debugfs_remove_ctx_mem_hash(struct hl_device *hdev,
 
 /* IOCTLs */
 long hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
+long hl_ioctl_control(struct file *filep, unsigned int cmd, unsigned long arg);
 int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data);
 int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data);
 int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data);

diff --git a/drivers/misc/habanalabs/habanalabs_drv.c b/drivers/misc/habanalabs/habanalabs_drv.c
--- a/drivers/misc/habanalabs/habanalabs_drv.c
+++ b/drivers/misc/habanalabs/habanalabs_drv.c
@@ -95,41 +95,9 @@ int hl_device_open(struct inode *inode, struct file *filp)
 		return -ENXIO;
 	}
 
-	mutex_lock(&hdev->fd_open_cnt_lock);
-
-	if (hl_device_disabled_or_in_reset(hdev)) {
-		dev_err_ratelimited(hdev->dev,
-			"Can't open %s because it is disabled or in reset\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EPERM;
-	}
-
-	if (hdev->in_debug) {
-		dev_err_ratelimited(hdev->dev,
-			"Can't open %s because it is being debugged by another user\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EPERM;
-	}
-
-	if (atomic_read(&hdev->fd_open_cnt)) {
-		dev_info_ratelimited(hdev->dev,
-			"Can't open %s because another user is working on it\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EBUSY;
-	}
-
-	atomic_inc(&hdev->fd_open_cnt);
-
-	mutex_unlock(&hdev->fd_open_cnt_lock);
-
 	hpriv = kzalloc(sizeof(*hpriv), GFP_KERNEL);
-	if (!hpriv) {
-		rc = -ENOMEM;
-		goto close_device;
-	}
+	if (!hpriv)
+		return -ENOMEM;
 
 	hpriv->hdev = hdev;
 	filp->private_data = hpriv;
@@ -141,34 +109,113 @@ int hl_device_open(struct inode *inode, struct file *filp)
 	hl_cb_mgr_init(&hpriv->cb_mgr);
 	hl_ctx_mgr_init(&hpriv->ctx_mgr);
 
-	rc = hl_ctx_create(hdev, hpriv);
-	if (rc) {
-		dev_err(hdev->dev, "Failed to open FD (CTX fail)\n");
+	hpriv->taskpid = find_get_pid(current->pid);
+
+	mutex_lock(&hdev->fpriv_list_lock);
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		dev_err_ratelimited(hdev->dev,
+			"Can't open %s because it is disabled or in reset\n",
+			dev_name(hdev->dev));
+		rc = -EPERM;
 		goto out_err;
 	}
 
-	hpriv->taskpid = find_get_pid(current->pid);
+	if (hdev->in_debug) {
+		dev_err_ratelimited(hdev->dev,
+			"Can't open %s because it is being debugged by another user\n",
+			dev_name(hdev->dev));
+		rc = -EPERM;
+		goto out_err;
+	}
 
-	/*
-	 * Device is IDLE at this point so it is legal to change PLLs. There
-	 * is no need to check anything because if the PLL is already HIGH, the
-	 * set function will return without doing anything
+	if (hdev->compute_ctx) {
+		dev_dbg_ratelimited(hdev->dev,
+			"Can't open %s because another user is working on it\n",
+			dev_name(hdev->dev));
+		rc = -EBUSY;
+		goto out_err;
+	}
+
+	rc = hl_ctx_create(hdev, hpriv);
+	if (rc) {
+		dev_err(hdev->dev, "Failed to create context %d\n", rc);
+		goto out_err;
+	}
+
+	/* Device is IDLE at this point so it is legal to change PLLs.
+	 * There is no need to check anything because if the PLL is
+	 * already HIGH, the set function will return without doing
+	 * anything
 	 */
 	hl_device_set_frequency(hdev, PLL_HIGH);
 
+	list_add(&hpriv->dev_node, &hdev->fpriv_list);
+	mutex_unlock(&hdev->fpriv_list_lock);
+
 	hl_debugfs_add_file(hpriv);
 
 	return 0;
 
 out_err:
-	filp->private_data = NULL;
-	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
-	hl_cb_mgr_fini(hpriv->hdev, &hpriv->cb_mgr);
-	mutex_destroy(&hpriv->restore_phase_mutex);
-	kfree(hpriv);
+	mutex_unlock(&hdev->fpriv_list_lock);
 
-close_device:
-	atomic_dec(&hdev->fd_open_cnt);
+	hl_cb_mgr_fini(hpriv->hdev, &hpriv->cb_mgr);
+	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
+	filp->private_data = NULL;
+	mutex_destroy(&hpriv->restore_phase_mutex);
+	put_pid(hpriv->taskpid);
 
+	kfree(hpriv);
 	return rc;
 }
 
+int hl_device_open_ctrl(struct inode *inode, struct file *filp)
+{
+	struct hl_device *hdev;
+	struct hl_fpriv *hpriv;
+	int rc;
+
+	mutex_lock(&hl_devs_idr_lock);
+	hdev = idr_find(&hl_devs_idr, iminor(inode));
+	mutex_unlock(&hl_devs_idr_lock);
+
+	if (!hdev) {
+		pr_err("Couldn't find device %d:%d\n",
+			imajor(inode), iminor(inode));
+		return -ENXIO;
+	}
+
+	hpriv = kzalloc(sizeof(*hpriv), GFP_KERNEL);
+	if (!hpriv)
+		return -ENOMEM;
+
+	mutex_lock(&hdev->fpriv_list_lock);
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		dev_err_ratelimited(hdev->dev_ctrl,
+			"Can't open %s because it is disabled or in reset\n",
+			dev_name(hdev->dev_ctrl));
+		rc = -EPERM;
+		goto out_err;
+	}
+
+	list_add(&hpriv->dev_node, &hdev->fpriv_list);
+	mutex_unlock(&hdev->fpriv_list_lock);
+
+	hpriv->hdev = hdev;
+	filp->private_data = hpriv;
+	hpriv->filp = filp;
+	hpriv->is_control = true;
+	nonseekable_open(inode, filp);
+
+	hpriv->taskpid = find_get_pid(current->pid);
+
+	return 0;
+
+out_err:
+	mutex_unlock(&hdev->fpriv_list_lock);
+	kfree(hpriv);
+	return rc;
+}
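
The reworked open path above replaces the fd_open_cnt_lock/atomic dance with
a single lock covering both the admission checks and the fpriv_list
insertion, which is what lets a hard reset walk the list and flush half-open
users. The shape of that gate, distilled into a runnable userspace sketch;
names are simplified and the error codes stand in for -EPERM/-EBUSY:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct dev_state {
            pthread_mutex_t lock;      /* plays the role of fpriv_list_lock */
            bool disabled_or_in_reset;
            bool in_debug;
            void *compute_ctx;         /* non-NULL: a compute user owns it */
    };

    /* Check admission and claim ownership under one critical section, so
     * no second opener can slip in between the check and the claim.
     */
    static int try_admit(struct dev_state *d, void *new_ctx)
    {
            int rc = 0;

            pthread_mutex_lock(&d->lock);
            if (d->disabled_or_in_reset || d->in_debug)
                    rc = -1;           /* driver returns -EPERM here */
            else if (d->compute_ctx)
                    rc = -2;           /* driver returns -EBUSY here */
            else
                    d->compute_ctx = new_ctx;
            pthread_mutex_unlock(&d->lock);

            return rc;
    }
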
@@ -199,7 +246,7 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
 		enum hl_asic_type asic_type, int minor)
 {
 	struct hl_device *hdev;
-	int rc;
+	int rc, main_id, ctrl_id = 0;
 
 	*dev = NULL;
 
@@ -240,33 +287,34 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
 	mutex_lock(&hl_devs_idr_lock);
 
-	if (minor == -1) {
-		rc = idr_alloc(&hl_devs_idr, hdev, 0, HL_MAX_MINORS,
+	/* Always save 2 numbers, 1 for main device and 1 for control.
+	 * They must be consecutive
+	 */
+	main_id = idr_alloc(&hl_devs_idr, hdev, 0, HL_MAX_MINORS,
 				GFP_KERNEL);
-	} else {
-		void *old_idr = idr_replace(&hl_devs_idr, hdev, minor);
-
-		if (IS_ERR_VALUE(old_idr)) {
-			rc = PTR_ERR(old_idr);
-			pr_err("Error %d when trying to replace minor %d\n",
-				rc, minor);
-			mutex_unlock(&hl_devs_idr_lock);
-			goto free_hdev;
-		}
-		rc = minor;
-	}
+
+	if (main_id >= 0)
+		ctrl_id = idr_alloc(&hl_devs_idr, hdev, main_id + 1,
+					main_id + 2, GFP_KERNEL);
 
 	mutex_unlock(&hl_devs_idr_lock);
 
-	if (rc < 0) {
-		if (rc == -ENOSPC) {
+	if ((main_id < 0) || (ctrl_id < 0)) {
+		if ((main_id == -ENOSPC) || (ctrl_id == -ENOSPC))
 			pr_err("too many devices in the system\n");
-			rc = -EBUSY;
-		}
+
+		if (main_id >= 0) {
+			mutex_lock(&hl_devs_idr_lock);
+			idr_remove(&hl_devs_idr, main_id);
+			mutex_unlock(&hl_devs_idr_lock);
+		}
+
+		rc = -EBUSY;
 		goto free_hdev;
 	}
 
-	hdev->id = rc;
+	hdev->id = main_id;
+	hdev->id_control = ctrl_id;
 
 	*dev = hdev;
 
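
create_hdev() now reserves two consecutive minors inside one critical
section: idr_alloc() allocates from a half-open [start, end) range, so
requesting [main_id + 1, main_id + 2) can only ever yield main_id + 1. A
sketch of that trick in isolation; unlike the real code, which holds
hl_devs_idr_lock across both calls, locking is left to the caller here:

    #include <linux/idr.h>
    #include <linux/gfp.h>

    /* Allocate a main id plus the id directly after it. The caller must
     * hold whatever lock serializes access to @idr, otherwise another
     * allocator could grab main_id + 1 between the two calls.
     */
    static int alloc_minor_pair(struct idr *idr, void *dev, int max_minors)
    {
            int main_id, ctrl_id;

            main_id = idr_alloc(idr, dev, 0, max_minors, GFP_KERNEL);
            if (main_id < 0)
                    return main_id;

            /* [main_id + 1, main_id + 2) has exactly one member */
            ctrl_id = idr_alloc(idr, dev, main_id + 1, main_id + 2,
                                GFP_KERNEL);
            if (ctrl_id < 0) {
                    idr_remove(idr, main_id);  /* roll back on failure */
                    return ctrl_id;
            }

            return main_id;  /* control minor is implicitly main_id + 1 */
    }
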
@@ -288,6 +336,7 @@ void destroy_hdev(struct hl_device *hdev)
 	/* Remove device from the device list */
 	mutex_lock(&hl_devs_idr_lock);
 	idr_remove(&hl_devs_idr, hdev->id);
+	idr_remove(&hl_devs_idr, hdev->id_control);
 	mutex_unlock(&hl_devs_idr_lock);
 
 	kfree(hdev);
@@ -295,8 +344,7 @@ void destroy_hdev(struct hl_device *hdev)
 
 static int hl_pmops_suspend(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct hl_device *hdev = pci_get_drvdata(pdev);
+	struct hl_device *hdev = dev_get_drvdata(dev);
 
 	pr_debug("Going to suspend PCI device\n");
 
@@ -310,8 +358,7 @@ static int hl_pmops_suspend(struct device *dev)
 
 static int hl_pmops_resume(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct hl_device *hdev = pci_get_drvdata(pdev);
+	struct hl_device *hdev = dev_get_drvdata(dev);
 
 	pr_debug("Going to resume PCI device\n");
 
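
The last two hunks are the same cleanup: pci_set_drvdata() stores its pointer
in the PCI device's embedded struct device, so a dev_pm_ops callback, which
is handed that struct device directly, can call dev_get_drvdata() instead of
detouring through to_pci_dev() and pci_get_drvdata(). A minimal sketch of the
idiom; my_dev and my_pmops are illustrative names, not driver symbols:

    #include <linux/device.h>
    #include <linux/pm.h>

    struct my_dev;

    static int my_pmops_suspend(struct device *dev)
    {
            /* Same pointer pci_get_drvdata(to_pci_dev(dev)) would return */
            struct my_dev *mdev = dev_get_drvdata(dev);

            (void)mdev;   /* suspend work would go here */
            return 0;
    }

    static int my_pmops_resume(struct device *dev)
    {
            struct my_dev *mdev = dev_get_drvdata(dev);

            (void)mdev;   /* resume work would go here */
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(my_pmops, my_pmops_suspend, my_pmops_resume);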