Char/Misc driver patches for 4.19-rc1

Here is the big set of char/misc drivers for 4.19-rc1
 
 There is a lot here, much more than normal, seems like everyone is
 writing new driver subsystems these days...  Anyway, major things here
 are:
 	- new FSI driver subsystem, yet-another-powerpc low-level
 	  hardware bus
 	- gnss, finally an in-kernel GPS subsystem to try to tame all of
 	  the crazy out-of-tree drivers that have been floating around
 	  for years, combined with some really hacky userspace
 	  implementations.  This is only for GNSS receivers, but you
 	  have to start somewhere, and this is great to see.
 Other than that, there are new slimbus drivers, new coresight drivers,
 new fpga drivers, and loads of DT bindings for all of these and existing
 drivers.
 
 Full details of everything are in the shortlog.
 
 All of these have been in linux-next for a while with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCW3g7ew8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ykfBgCeOG0RkSI92XVZe0hs/QYFW9kk8JYAnRBf3Qpm
 cvW7a+McOoKz/MGmEKsi
 =TNfn
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-4.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the bit set of char/misc drivers for 4.19-rc1

  There is a lot here, much more than normal, seems like everyone is
  writing new driver subsystems these days... Anyway, major things here
  are:

   - new FSI driver subsystem, yet-another-powerpc low-level hardware
     bus

   - gnss, finally an in-kernel GPS subsystem to try to tame all of the
     crazy out-of-tree drivers that have been floating around for years,
     combined with some really hacky userspace implementations. This is
     only for GNSS receivers, but you have to start somewhere, and this
     is great to see.

  Other than that, there are new slimbus drivers, new coresight drivers,
  new fpga drivers, and loads of DT bindings for all of these and
  existing drivers.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-4.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (255 commits)
  android: binder: Rate-limit debug and userspace triggered err msgs
  fsi: sbefifo: Bump max command length
  fsi: scom: Fix NULL dereference
  misc: mic: SCIF Fix scif_get_new_port() error handling
  misc: cxl: changed asterisk position
  genwqe: card_base: Use true and false for boolean values
  misc: eeprom: assignment outside the if statement
  uio: potential double frees if __uio_register_device() fails
  eeprom: idt_89hpesx: clean up an error pointer vs NULL inconsistency
  misc: ti-st: Fix memory leak in the error path of probe()
  android: binder: Show extra_buffers_size in trace
  firmware: vpd: Fix section enabled flag on vpd_section_destroy
  platform: goldfish: Retire pdev_bus
  goldfish: Use dedicated macros instead of manual bit shifting
  goldfish: Add missing includes to goldfish.h
  mux: adgs1408: new driver for Analog Devices ADGS1408/1409 mux
  dt-bindings: mux: add adi,adgs1408
  Drivers: hv: vmbus: Cleanup synic memory free path
  Drivers: hv: vmbus: Remove use of slow_virt_to_phys()
  Drivers: hv: vmbus: Reset the channel callback in vmbus_onoffer_rescind()
  ...
Linus Torvalds 2018-08-18 11:04:51 -07:00
commit d5acba26bf
302 changed files with 18161 additions and 1578 deletions


@ -42,6 +42,13 @@ Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit vendor ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
What: /sys/bus/vmbus/devices/<UUID>/numa_node
Date: Jul 2018
KernelVersion: 4.19
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: The NUMA node to which the VMBUS device is
attached, or -1 if the node is unknown.
What: /sys/bus/vmbus/devices/<UUID>/channels/<N>
Date: September 2017
KernelVersion: 4.14


@ -83,3 +83,11 @@ KernelVersion: 4.7
Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
Description: (R) Indicates the capabilities of the Coresight TMC.
The value is read directly from the DEVID register, 0xFC8,
What: /sys/bus/coresight/devices/<memory_map>.tmc/buffer_size
Date: December 2018
KernelVersion: 4.19
Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
Description: (RW) Size of the trace buffer for TMC-ETR when used in SYSFS
mode. Writable only for TMC-ETR configurations. The value
should be aligned to the kernel pagesize.


@ -35,3 +35,27 @@ Description: Read fpga manager state as a string.
* write complete = Doing post programming steps
* write complete error = Error while doing post programming
* operating = FPGA is programmed and operating
What: /sys/class/fpga_manager/<fpga>/status
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read fpga manager status as a string.
If FPGA programming operation fails, it could be caused by crc
error or incompatible bitstream image. The intent of this
interface is to provide more detailed information for FPGA
programming errors to userspace. This is a list of strings for
the supported status.
* reconfig operation error - invalid operations detected by
reconfiguration hardware.
e.g. start reconfiguration
with errors not cleared
* reconfig CRC error - CRC error detected by
reconfiguration hardware.
* reconfig incompatible image - reconfiguration image is
incompatible with hardware
* reconfig IP protocol error - protocol errors detected by
reconfiguration hardware
* reconfig fifo overflow error - FIFO overflow detected by
reconfiguration hardware


@ -0,0 +1,9 @@
What: /sys/class/fpga_region/<region>/compat_id
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: FPGA region id for compatibility check, e.g. compatibility
of the FPGA reconfiguration hardware and image. This value
is defined or calculated by the layer that is creating the
FPGA region. This interface returns the compat_id value or
just error code -ENOENT in case compat_id is not used.


@ -0,0 +1,15 @@
What: /sys/class/gnss/gnssN/type
Date: May 2018
KernelVersion: 4.18
Contact: Johan Hovold <johan@kernel.org>
Description:
The GNSS receiver type. The currently identified types reflect
the protocol(s) supported by the receiver:
"NMEA" NMEA 0183
"SiRF" SiRF Binary
"UBX" UBX
Note that non-"NMEA" type receivers also typically support a
subset of NMEA 0183 with vendor extensions (e.g. to allow
switching to a vendor protocol).
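A userspace consumer would typically read this attribute before deciding how
to parse the corresponding character device. A minimal sketch in C (the gnss0
instance and the parser choices are illustrative, not part of the ABI):

    #include <stdio.h>
    #include <string.h>

    /* Pick a protocol parser based on the receiver type reported by sysfs. */
    int main(void)
    {
        char type[16] = "";
        FILE *f = fopen("/sys/class/gnss/gnss0/type", "r");

        if (!f || !fgets(type, sizeof(type), f)) {
            perror("gnss0/type");
            return 1;
        }
        fclose(f);

        if (!strncmp(type, "SiRF", 4))
            printf("use a SiRF binary parser on /dev/gnss0\n");
        else if (!strncmp(type, "UBX", 3))
            printf("use a UBX parser on /dev/gnss0\n");
        else
            printf("fall back to NMEA 0183 on /dev/gnss0\n");
        return 0;
    }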


@ -54,3 +54,14 @@ Description: Configure tx queue limit
Set maximal number of pending writes
per opened session.
What: /sys/class/mei/meiN/fw_ver
Date: May 2018
KernelVersion: 4.18
Contact: Tomas Winkler <tomas.winkler@intel.com>
Description: Display the ME firmware version.
The version of the platform ME firmware is in format:
<platform>:<major>.<minor>.<milestone>.<build_no>.
There can be up to three such blocks for different
FW components.
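For illustration, a userspace sketch that splits one such version block; the
sample string is made up, real values come from the fw_ver attribute:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical block in <platform>:<major>.<minor>.<milestone>.<build_no> form */
        const char *line = "0:8.1.3.1351";
        unsigned int platform, major, minor, milestone, build;

        if (sscanf(line, "%u:%u.%u.%u.%u",
                   &platform, &major, &minor, &milestone, &build) == 5)
            printf("platform %u, FW %u.%u.%u, build %u\n",
                   platform, major, minor, milestone, build);
        return 0;
    }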


@ -0,0 +1,23 @@
What: /sys/bus/platform/devices/dfl-fme.0/ports_num
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. One DFL FPGA device may have more than one
port/Accelerator Function Unit (AFU). It returns the
number of ports on the FPGA device when read.
What: /sys/bus/platform/devices/dfl-fme.0/bitstream_id
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. It returns Bitstream (static FPGA region)
identifier number, which includes the detailed version
and other information of this static FPGA region.
What: /sys/bus/platform/devices/dfl-fme.0/bitstream_metadata
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. It returns Bitstream (static FPGA region) meta
data, which includes the synthesis date, seed and other
information of this static FPGA region.


@ -0,0 +1,16 @@
What: /sys/bus/platform/devices/dfl-port.0/id
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. It returns the id of this port. One DFL FPGA
device may have more than one port. Userspace can use this id to
distinguish different ports under the same FPGA device.
What: /sys/bus/platform/devices/dfl-port.0/afu_id
Date: June 2018
KernelVersion: 4.19
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. User can program different PR bitstreams to the
FPGA Accelerator Function Unit (AFU) for different functions. It
returns the uuid which can be used to identify which PR bitstream
is programmed in this AFU.


@ -39,6 +39,8 @@ its hardware characteristcs.
- System Trace Macrocell:
"arm,coresight-stm", "arm,primecell"; [1]
- Coresight Address Translation Unit (CATU)
"arm,coresight-catu", "arm,primecell";
* reg: physical base address and length of the register
set(s) of the component.
@ -84,8 +86,15 @@ its hardware characteristcs.
* Optional property for TMC:
* arm,buffer-size: size of contiguous buffer space for TMC ETR
(embedded trace router). This property is obsolete. The buffer size
can be configured dynamically via buffer_size property in sysfs.
* arm,scatter-gather: boolean. Indicates that the TMC-ETR can safely
use the SG mode on this system.
* Optional property for CATU :
* interrupts : Exactly one SPI may be listed for reporting the address
error
Example:
@ -118,6 +127,35 @@ Example:
};
};
etr@20070000 {
compatible = "arm,coresight-tmc", "arm,primecell";
reg = <0 0x20070000 0 0x1000>;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
ports {
#address-cells = <1>;
#size-cells = <0>;
/* input port */
port@0 {
reg = <0>;
etr_in_port: endpoint {
slave-mode;
remote-endpoint = <&replicator2_out_port0>;
};
};
/* CATU link represented by output port */
port@1 {
reg = <1>;
etr_out_port: endpoint {
remote-endpoint = <&catu_in_port>;
};
};
};
};
2. Links
replicator {
/* non-configurable replicators don't show up on the
@ -247,5 +285,23 @@ Example:
};
};
5. CATU
catu@207e0000 {
compatible = "arm,coresight-catu", "arm,primecell";
reg = <0 0x207e0000 0 0x1000>;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
port {
catu_in_port: endpoint {
slave-mode;
remote-endpoint = <&etr_out_port>;
};
};
};
[1]. There are currently two versions of STM: STM32 and STM500. Both
have the same HW interface and as such don't need an explicit binding name.


@ -0,0 +1,36 @@
Device-tree bindings for ColdFire offloaded gpio-based FSI master driver
------------------------------------------------------------------------
Required properties:
- compatible =
"aspeed,ast2400-cf-fsi-master" for an AST2400 based system
or
"aspeed,ast2500-cf-fsi-master" for an AST2500 based system
- clock-gpios = <gpio-descriptor>; : GPIO for FSI clock
- data-gpios = <gpio-descriptor>; : GPIO for FSI data signal
- enable-gpios = <gpio-descriptor>; : GPIO for enable signal
- trans-gpios = <gpio-descriptor>; : GPIO for voltage translator enable
- mux-gpios = <gpio-descriptor>; : GPIO for pin multiplexing with other
functions (eg, external FSI masters)
- memory-region = <phandle>; : Reference to the reserved memory for
the ColdFire. Must be 2M aligned on
AST2400 and 1M aligned on AST2500
- aspeed,sram = <phandle>; : Reference to the SRAM node.
- aspeed,cvic = <phandle>; : Reference to the CVIC node.
Examples:
fsi-master {
compatible = "aspeed,ast2500-cf-fsi-master", "fsi-master";
clock-gpios = <&gpio 0>;
data-gpios = <&gpio 1>;
enable-gpios = <&gpio 2>;
trans-gpios = <&gpio 3>;
mux-gpios = <&gpio 4>;
memory-region = <&coldfire_memory>;
aspeed,sram = <&sram>;
aspeed,cvic = <&cvic>;
}


@ -83,6 +83,10 @@ addresses and sizes in the slave address space:
#address-cells = <1>;
#size-cells = <1>;
Optionally, a slave can provide a global unique chip ID which is used to
identify the physical location of the chip in a system specific way
chip-id = <0>;
FSI engines (devices)
---------------------
@ -125,6 +129,7 @@ device tree if no extra platform information is required.
reg = <0 0>;
#address-cells = <1>;
#size-cells = <1>;
chip-id = <0>;
/* FSI engine at 0xc00, using a single page. In this example,
* it's an I2C master controller, so subnodes describe the


@ -0,0 +1,36 @@
GNSS Receiver DT binding
This documents the binding structure and common properties for GNSS receiver
devices.
A GNSS receiver node is a node named "gnss" and typically resides on a serial
bus (e.g. UART, I2C or SPI).
Please refer to the following documents for generic properties:
Documentation/devicetree/bindings/serial/slave-device.txt
Documentation/devicetree/bindings/spi/spi-bus.txt
Required properties:
- compatible : A string reflecting the vendor and specific device the node
represents
Optional properties:
- enable-gpios : GPIO used to enable the device
- timepulse-gpios : Time pulse GPIO
Example:
serial@1234 {
compatible = "ns16550a";
gnss {
compatible = "u-blox,neo-8";
vcc-supply = <&gnss_reg>;
timepulse-gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>;
current-speed = <4800>;
};
};


@ -0,0 +1,45 @@
SiRFstar-based GNSS Receiver DT binding
SiRFstar chipsets are used in GNSS-receiver modules produced by several
vendors and can use UART, SPI or I2C interfaces.
Please see Documentation/devicetree/bindings/gnss/gnss.txt for generic
properties.
Required properties:
- compatible : Must be one of
"fastrax,uc430"
"linx,r4"
"wi2wi,w2sg0008i"
"wi2wi,w2sg0084i"
- vcc-supply : Main voltage regulator (pin name: 3V3_IN, VCC, VDD)
Required properties (I2C):
- reg : I2C slave address
Required properties (SPI):
- reg : SPI chip select address
Optional properties:
- sirf,onoff-gpios : GPIO used to power on and off device (pin name: ON_OFF)
- sirf,wakeup-gpios : GPIO used to determine device power state
(pin name: RFPWRUP, WAKEUP)
- timepulse-gpios : Time pulse GPIO (pin name: 1PPS, TM)
Example:
serial@1234 {
compatible = "ns16550a";
gnss {
compatible = "wi2wi,w2sg0084i";
vcc-supply = <&gnss_reg>;
sirf,onoff-gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>;
sirf,wakeup-gpios = <&gpio0 17 GPIO_ACTIVE_HIGH>;
};
};


@ -0,0 +1,44 @@
u-blox GNSS Receiver DT binding
The u-blox GNSS receivers can use UART, DDC (I2C), SPI and USB interfaces.
Please see Documentation/devicetree/bindings/gnss/gnss.txt for generic
properties.
Required properties:
- compatible : Must be one of
"u-blox,neo-8"
"u-blox,neo-m8"
- vcc-supply : Main voltage regulator
Required properties (DDC):
- reg : DDC (I2C) slave address
Required properties (SPI):
- reg : SPI chip select address
Required properties (USB):
- reg : Number of the USB hub port or the USB host-controller port
to which this device is attached
Optional properties:
- timepulse-gpios : Time pulse GPIO
- u-blox,extint-gpios : GPIO connected to the "external interrupt" input pin
- v-bckp-supply : Backup voltage regulator
Example:
serial@1234 {
compatible = "ns16550a";
gnss {
compatible = "u-blox,neo-8";
v-bckp-supply = <&gnss_v_bckp_reg>;
vcc-supply = <&gnss_vcc_reg>;
};
};


@ -0,0 +1,48 @@
Bindings for Analog Devices ADGS1408/1409 8:1/Dual 4:1 Mux
Required properties:
- compatible : Should be one of
* "adi,adgs1408"
* "adi,adgs1409"
* Standard mux-controller bindings as described in mux-controller.txt
Optional properties for ADGS1408/1409:
- gpio-controller : if present, #gpio-cells is required.
- #gpio-cells : should be <2>
- First cell is the GPO line number, i.e. 0 to 3
for ADGS1408 and 0 to 4 for ADGS1409
- Second cell is used to specify active high (0)
or active low (1)
Optional properties:
- idle-state : if present, the state that the mux controller will have
when idle. The special state MUX_IDLE_AS_IS is the default and
MUX_IDLE_DISCONNECT is also supported.
States 0 through 7 correspond to signals S1 through S8 in the datasheet.
For ADGS1409 only states 0 to 3 are available.
Example:
/*
* One mux controller.
* Mux state set to idle as is (no idle-state declared)
*/
&spi0 {
mux: mux-controller@0 {
compatible = "adi,adgs1408";
reg = <0>;
spi-max-frequency = <1000000>;
#mux-control-cells = <0>;
};
}
adc-mux {
compatible = "io-channel-mux";
io-channels = <&adc 1>;
io-channel-names = "parent";
mux-controls = <&mux>;
channels = "out_a0", "out_a1", "test0", "test1",
"out_b0", "out_b1", "testb0", "testb1";
};


@ -1,7 +1,7 @@
Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings
This binding represents the on-chip eFuse OTP controller found on
i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL and i.MX6SLL SoCs.
Required properties:
- compatible: should be one of
@ -10,6 +10,7 @@ Required properties:
"fsl,imx6sx-ocotp" (i.MX6SX), "fsl,imx6sx-ocotp" (i.MX6SX),
"fsl,imx6ul-ocotp" (i.MX6UL), "fsl,imx6ul-ocotp" (i.MX6UL),
"fsl,imx7d-ocotp" (i.MX7D/S), "fsl,imx7d-ocotp" (i.MX7D/S),
"fsl,imx6sll-ocotp" (i.MX6SLL),
followed by "syscon". followed by "syscon".
- #address-cells : Should be 1 - #address-cells : Should be 1
- #size-cells : Should be 1 - #size-cells : Should be 1


@ -0,0 +1,52 @@
= Spreadtrum SC27XX PMIC eFuse device tree bindings =
Required properties:
- compatible: Should be one of the following.
"sprd,sc2720-efuse"
"sprd,sc2721-efuse"
"sprd,sc2723-efuse"
"sprd,sc2730-efuse"
"sprd,sc2731-efuse"
- reg: Specify the address offset of efuse controller.
- hwlocks: Reference to a phandle of a hwlock provider node.
= Data cells =
Are child nodes of eFuse, bindings of which as described in
bindings/nvmem/nvmem.txt
Example:
sc2731_pmic: pmic@0 {
compatible = "sprd,sc2731";
reg = <0>;
spi-max-frequency = <26000000>;
interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
#address-cells = <1>;
#size-cells = <0>;
efuse@380 {
compatible = "sprd,sc2731-efuse";
reg = <0x380>;
#address-cells = <1>;
#size-cells = <1>;
hwlocks = <&hwlock 12>;
/* Data cells */
thermal_calib: calib@10 {
reg = <0x10 0x2>;
};
};
};
= Data consumers =
Are device nodes which consume nvmem data cells.
Example:
thermal {
...
nvmem-cells = <&thermal_calib>;
nvmem-cell-names = "calibration";
};


@ -0,0 +1,84 @@
Qualcomm SLIMBus Non Generic Device (NGD) Controller binding
SLIMBus NGD controller is a light-weight driver responsible for communicating
with SLIMBus slaves directly over the bus using messaging interface and
communicating with master component residing on ADSP for bandwidth and
data-channel management
Please refer to slimbus/bus.txt for details of the common SLIMBus bindings.
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be "qcom,slim-ngd-v<MAJOR>.<MINOR>.<STEP>"
must be one of the following.
"qcom,slim-ngd-v1.5.0" for MSM8996
"qcom,slim-ngd-v2.1.0" for SDM845
- reg:
Usage: required
Value type: <prop-encoded-array>
Definition: must specify the base address and size of the controller
register space.
- dmas
Usage: required
Value type: <array of phandles>
Definition: List of rx and tx dma channels
- dma-names
Usage: required
Value type: <stringlist>
Definition: must be "rx" and "tx".
- interrupts:
Usage: required
Value type: <prop-encoded-array>
Definition: must list controller IRQ.
#address-cells
Usage: required
Value type: <u32>
Definition: Should be 1, reflecting the instance id of ngd.
#size-cells
Usage: required
Value type: <u32>
Definition: Should be 0
= NGD Devices
Each subnode represents an instance of NGD, must contain the following
properties:
- reg:
Usage: required
Value type: <u32>
Definition: Should be instance id of ngd.
#address-cells
Usage: required
Refer to slimbus/bus.txt for details of the common SLIMBus bindings.
#size-cells
Usage: required
Refer to slimbus/bus.txt for details of the common SLIMBus bindings.
= EXAMPLE
slim@91c0000 {
compatible = "qcom,slim-ngd-v1.5.0";
reg = <0x91c0000 0x2c000>;
interrupts = <0 163 0>;
dmas = <&slimbam 3>, <&slimbam 4>;
dma-names = "rx", "tx";
#address-cells = <1>;
#size-cells = <0>;
ngd@1 {
reg = <1>;
#address-cells = <1>;
#size-cells = <1>;
codec@1 {
compatible = "slim217,1a0";
reg = <1 0>;
};
};
};


@ -129,6 +129,7 @@ excito Excito
ezchip EZchip Semiconductor
fairphone Fairphone B.V.
faraday Faraday Technology Corporation
fastrax Fastrax Oy
fcs Fairchild Semiconductor
firefly Firefly
focaltech FocalTech Systems Co.,Ltd
@ -209,6 +210,7 @@ licheepi Lichee Pi
linaro Linaro Limited
linksys Belkin International, Inc. (Linksys)
linux Linux-specific binding
linx Linx Technologies
lltc Linear Technology Corporation
logicpd Logic PD, Inc.
lsi LSI Corp. (LSI Logic)
@ -390,6 +392,7 @@ tronsmart Tronsmart
truly Truly Semiconductors Limited
tsd Theobroma Systems Design und Consulting GmbH
tyan Tyan Computer Corporation
u-blox u-blox
ucrobotics uCRobotics
ubnt Ubiquiti Networks
udoo Udoo


@ -83,7 +83,7 @@ The programming sequence is::
3. .write_complete
The .write_init function will prepare the FPGA to receive the image data. The
buffer passed into .write_init will be at most .initial_header_size bytes long;
if the whole bitstream is not immediately available then the core code will
buffer up at least this much before starting.
@ -98,9 +98,9 @@ scatter list. This interface is suitable for drivers which use DMA.
The .write_complete function is called after all the image has been written
to put the FPGA into operating mode.
The ops include a .state function which will determine the state the FPGA is in
and return a code of type enum fpga_mgr_states. It doesn't result in a change
in state.
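To make the sequence concrete, here is a minimal sketch of the ops a low level
driver might register, assuming the 4.19-era callback signatures from
include/linux/fpga/fpga-mgr.h (the skel_* names are hypothetical)::

    #include <linux/fpga/fpga-mgr.h>

    /* Hypothetical skeleton: report state and accept the image in three steps. */
    static enum fpga_mgr_states skel_state(struct fpga_manager *mgr)
    {
        return FPGA_MGR_STATE_UNKNOWN;  /* query the hardware here */
    }

    static int skel_write_init(struct fpga_manager *mgr,
                               struct fpga_image_info *info,
                               const char *buf, size_t count)
    {
        return 0;  /* parse the header, put the device into programming mode */
    }

    static int skel_write(struct fpga_manager *mgr, const char *buf, size_t count)
    {
        return 0;  /* push bitstream data to the device */
    }

    static int skel_write_complete(struct fpga_manager *mgr,
                                   struct fpga_image_info *info)
    {
        return 0;  /* wait for config-done, switch to operating mode */
    }

    static const struct fpga_manager_ops skel_ops = {
        .initial_header_size = 64,
        .state = skel_state,
        .write_init = skel_write_init,
        .write = skel_write,
        .write_complete = skel_write_complete,
    };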
How to write an image buffer to a supported FPGA
------------------------------------------------
@ -181,8 +181,8 @@ API for implementing a new FPGA Manager driver
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_unregister
API for programming an FPGA
---------------------------
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_image_info


@ -4,7 +4,7 @@ FPGA Region
Overview
--------
This document is meant to be a brief overview of the FPGA region API usage. A
more conceptual look at regions can be found in the Device Tree binding
document [#f1]_.
@ -31,11 +31,11 @@ fpga_image_info including:
* pointers to the image as either a scatter-gather buffer, a contiguous
buffer, or the name of firmware file
* flags indicating specifics such as whether the image is for partial
reconfiguration.
How to program an FPGA using a region
-------------------------------------
First, allocate the info struct::
@ -77,8 +77,8 @@ An example of usage can be seen in the probe function of [#f2]_.
.. [#f1] ../devicetree/bindings/fpga/fpga-region.txt
.. [#f2] ../../drivers/fpga/of-fpga-region.c
API to program an FPGA
-----------------------
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_program_fpga


@ -12,18 +12,18 @@ Linux. Some of the core intentions of the FPGA subsystems are:
* Code should not be shared between upper and lower layers. This
should go without saying. If that seems necessary, there's probably
framework functionality that can be added that will benefit
other users. Write the linux-fpga mailing list and maintainers and
seek out a solution that expands the framework for broad reuse.
* Generally, when adding code, think of the future. Plan for reuse.
The framework in the kernel is divided into:
FPGA Manager
------------
If you are adding a new FPGA or a new method of programming an FPGA,
this is the subsystem for you. Low level FPGA manager drivers contain
the knowledge of how to program a specific device. This subsystem
includes the framework in fpga-mgr.c and the low level drivers that
@ -32,10 +32,10 @@ are registered with it.
FPGA Bridge
-----------
FPGA Bridges prevent spurious signals from going out of an FPGA or a
region of an FPGA during programming. They are disabled before
programming begins and re-enabled afterwards. An FPGA bridge may be
actual hard hardware that gates a bus to a CPU or a soft ("freeze")
bridge in FPGA fabric that surrounds a partial reconfiguration region
of an FPGA. This subsystem includes fpga-bridge.c and the low level
drivers that are registered with it.
@ -44,7 +44,7 @@ FPGA Region
-----------
If you are adding a new interface to the FPGA framework, add it on top
of an FPGA region to allow the most reuse of your interface.
The FPGA Region framework (fpga-region.c) associates managers and
bridges as reconfigurable regions. A region may refer to the whole


@ -125,3 +125,8 @@ Messaging APIs:
~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/slimbus/messaging.c
:export:
Streaming APIs:
~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/slimbus/stream.c
:export:

Documentation/fpga/dfl.txt (new file, 285 lines)

@ -0,0 +1,285 @@
===============================================================================
FPGA Device Feature List (DFL) Framework Overview
-------------------------------------------------------------------------------
Enno Luebbers <enno.luebbers@intel.com>
Xiao Guangrong <guangrong.xiao@linux.intel.com>
Wu Hao <hao.wu@intel.com>
The Device Feature List (DFL) FPGA framework (and drivers written to this
framework) hides the details of the low-level hardware and provides
unified interfaces to userspace. Applications can use these interfaces to
configure, enumerate, open and access FPGA accelerators on platforms which
implement the DFL in the device memory. Besides this, the DFL framework
enables system level management functions such as FPGA reconfiguration.
Device Feature List (DFL) Overview
==================================
Device Feature List (DFL) defines a linked list of feature headers within the
device MMIO space to provide an extensible way of adding features. Software can
walk through these predefined data structures to enumerate FPGA features:
FPGA Interface Unit (FIU), Accelerated Function Unit (AFU) and Private Features,
as illustrated below:
Header Header Header Header
+----------+ +-->+----------+ +-->+----------+ +-->+----------+
| Type | | | Type | | | Type | | | Type |
| FIU | | | Private | | | Private | | | Private |
+----------+ | | Feature | | | Feature | | | Feature |
| Next_DFH |--+ +----------+ | +----------+ | +----------+
+----------+ | Next_DFH |--+ | Next_DFH |--+ | Next_DFH |--> NULL
| ID | +----------+ +----------+ +----------+
+----------+ | ID | | ID | | ID |
| Next_AFU |--+ +----------+ +----------+ +----------+
+----------+ | | Feature | | Feature | | Feature |
| Header | | | Register | | Register | | Register |
| Register | | | Set | | Set | | Set |
| Set | | +----------+ +----------+ +----------+
+----------+ | Header
+-->+----------+
| Type |
| AFU |
+----------+
| Next_DFH |--> NULL
+----------+
| GUID |
+----------+
| Header |
| Register |
| Set |
+----------+
FPGA Interface Unit (FIU) represents a standalone functional unit for the
interface to FPGA, e.g. the FPGA Management Engine (FME) and Port (more
descriptions on FME and Port in later sections).
Accelerated Function Unit (AFU) represents a FPGA programmable region and
always connects to a FIU (e.g. a Port) as its child as illustrated above.
Private Features represent sub features of the FIU and AFU. They could be
various function blocks with different IDs, but all private features which
belong to the same FIU or AFU, must be linked to one list via the Next Device
Feature Header (Next_DFH) pointer.
Each FIU, AFU and Private Feature could implement its own functional registers.
The functional register set for FIU and AFU, is named as Header Register Set,
e.g. FME Header Register Set, and the one for Private Feature, is named as
Feature Register Set, e.g. FME Partial Reconfiguration Feature Register Set.
This Device Feature List provides a way of linking features together; it is
convenient for software to locate each feature by walking through this list,
and it can be implemented in register regions of any FPGA device.
FIU - FME (FPGA Management Engine)
==================================
The FPGA Management Engine performs reconfiguration and other infrastructure
functions. Each FPGA device only has one FME.
User-space applications can acquire exclusive access to the FME using open(),
and release it using close().
The following functions are exposed through ioctls:
Get driver API version (DFL_FPGA_GET_API_VERSION)
Check for extensions (DFL_FPGA_CHECK_EXTENSION)
Program bitstream (DFL_FPGA_FME_PORT_PR)
More functions are exposed through sysfs
(/sys/class/fpga_region/regionX/dfl-fme.n/):
Read bitstream ID (bitstream_id)
bitstream_id indicates version of the static FPGA region.
Read bitstream metadata (bitstream_metadata)
bitstream_metadata includes detailed information of static FPGA region,
e.g. synthesis date and seed.
Read number of ports (ports_num)
one FPGA device may have more than one port, this sysfs interface indicates
how many ports the FPGA device has.
FIU - PORT
==========
A port represents the interface between the static FPGA fabric and a partially
reconfigurable region containing an AFU. It controls the communication from SW
to the accelerator and exposes features such as reset and debug. Each FPGA
device may have more than one port, but always one AFU per port.
AFU
===
An AFU is attached to a port FIU and exposes a fixed length MMIO region to be
used for accelerator-specific control registers.
User-space applications can acquire exclusive access to an AFU attached to a
port by using open() on the port device node and release it using close().
The following functions are exposed through ioctls:
Get driver API version (DFL_FPGA_GET_API_VERSION)
Check for extensions (DFL_FPGA_CHECK_EXTENSION)
Get port info (DFL_FPGA_PORT_GET_INFO)
Get MMIO region info (DFL_FPGA_PORT_GET_REGION_INFO)
Map DMA buffer (DFL_FPGA_PORT_DMA_MAP)
Unmap DMA buffer (DFL_FPGA_PORT_DMA_UNMAP)
Reset AFU (*DFL_FPGA_PORT_RESET)
*DFL_FPGA_PORT_RESET: reset the FPGA Port and its AFU. Userspace can do Port
reset at any time, e.g. during DMA or Partial Reconfiguration. It should
never cause any system level issue; at worst it causes a functional failure
(e.g. a DMA or PR operation failure) that is recoverable.
User-space applications can also mmap() accelerator MMIO regions.
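For illustration, a minimal userspace sketch of this flow, assuming the ioctl
definitions from include/uapi/linux/fpga-dfl.h and a hypothetical
/dev/dfl-port.0 node (error handling trimmed):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/fpga-dfl.h>

    int main(void)
    {
        struct dfl_fpga_port_region_info rinfo = {
            .argsz = sizeof(rinfo),
            .index = DFL_PORT_REGION_INDEX_AFU,
        };
        int fd = open("/dev/dfl-port.0", O_RDWR);  /* exclusive access */
        void *mmio;

        if (fd < 0)
            return 1;
        printf("DFL API version %d\n", ioctl(fd, DFL_FPGA_GET_API_VERSION));

        if (!ioctl(fd, DFL_FPGA_PORT_GET_REGION_INFO, &rinfo)) {
            /* map the AFU MMIO region exposed by this port */
            mmio = mmap(NULL, rinfo.size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, rinfo.offset);
            if (mmio != MAP_FAILED) {
                printf("AFU MMIO mapped, %llu bytes\n",
                       (unsigned long long)rinfo.size);
                munmap(mmio, rinfo.size);
            }
        }
        ioctl(fd, DFL_FPGA_PORT_RESET);  /* reset port and AFU */
        close(fd);
        return 0;
    }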
More functions are exposed through sysfs:
(/sys/class/fpga_region/<regionX>/<dfl-port.m>/):
Read Accelerator GUID (afu_id)
afu_id indicates which PR bitstream is programmed to this AFU.
DFL Framework Overview
======================
+----------+ +--------+ +--------+ +--------+
| FME | | AFU | | AFU | | AFU |
| Module | | Module | | Module | | Module |
+----------+ +--------+ +--------+ +--------+
+-----------------------+
| FPGA Container Device | Device Feature List
| (FPGA Base Region) | Framework
+-----------------------+
--------------------------------------------------------------------
+----------------------------+
| FPGA DFL Device Module |
| (e.g. PCIE/Platform Device)|
+----------------------------+
+------------------------+
| FPGA Hardware Device |
+------------------------+
The DFL framework in the kernel provides common interfaces to create the
container device (FPGA base region), discover feature devices and their
private features from the given Device Feature Lists, and create platform
devices for the feature devices (e.g. FME, Port and AFU) with related
resources under the container device. It also abstracts operations for the
private features and exposes common ops to feature device drivers.
The FPGA DFL device can be various kinds of hardware, e.g. a PCIe device or a
platform device. Its driver module is always loaded first once the device is
created by the system. This driver plays an infrastructural role in the
driver architecture: it locates the DFLs in the device memory and hands them,
together with the related resources, over to the common DFL framework
interfaces for enumeration (please refer to drivers/fpga/dfl.c for the
detailed enumeration APIs).
The FPGA Management Engine (FME) driver is a platform driver which is loaded
automatically after FME platform device creation from the DFL device module. It
provides the key features for FPGA management, including:
a) Expose static FPGA region information, e.g. version and metadata.
Users can read related information via sysfs interfaces exposed
by FME driver.
b) Partial Reconfiguration. The FME driver creates FPGA manager, FPGA
bridges and FPGA regions during PR sub feature initialization. Once
it receives a DFL_FPGA_FME_PORT_PR ioctl from user, it invokes the
common interface function from FPGA Region to complete the partial
reconfiguration of the PR bitstream to the given port.
Similar to the FME driver, the FPGA Accelerated Function Unit (AFU) driver is
probed once the AFU platform device is created. The main function of this module
is to provide an interface for userspace applications to access the individual
accelerators, including basic reset control on port, AFU MMIO region export, dma
buffer mapping service functions.
After the feature platform devices are created, matching platform drivers are
loaded automatically to handle the different functionalities. Please refer to
the next sections for detailed information on the functional units that have
already been implemented under this DFL framework.
Partial Reconfiguration
=======================
As mentioned above, accelerators can be reconfigured through partial
reconfiguration of a PR bitstream file. The PR bitstream file must have been
generated for the exact static FPGA region and targeted reconfigurable region
(port) of the FPGA, otherwise, the reconfiguration operation will fail and
possibly cause system instability. This compatibility can be checked by
comparing the compatibility ID noted in the header of PR bitstream file against
the compat_id exposed by the target FPGA region. This check is usually done by
userspace before calling the reconfiguration IOCTL.
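For illustration, a hedged userspace sketch of that check followed by the PR
request, assuming the DFL_FPGA_FME_PORT_PR layout from
include/uapi/linux/fpga-dfl.h; the device/sysfs paths and the way the expected
compat_id string and bitstream buffer are obtained are illustrative only:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fpga-dfl.h>

    static int pr_port0(const char *expected_compat, void *buf, size_t len)
    {
        char compat[64] = "";
        FILE *f = fopen("/sys/class/fpga_region/region0/compat_id", "r");
        struct dfl_fpga_fme_port_pr pr = {
            .argsz = sizeof(pr),
            .port_id = 0,
            .buffer_size = len,
            .buffer_address = (unsigned long long)(unsigned long)buf,
        };
        int fd, ret;

        if (!f)
            return -1;
        if (!fgets(compat, sizeof(compat), f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        if (strncmp(compat, expected_compat, strlen(expected_compat)))
            return -1;  /* image was built for a different static region */

        fd = open("/dev/dfl-fme.0", O_RDWR);
        if (fd < 0)
            return -1;
        ret = ioctl(fd, DFL_FPGA_FME_PORT_PR, &pr);  /* program the AFU in port 0 */
        close(fd);
        return ret;
    }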
Device enumeration
==================
This section introduces how applications enumerate the fpga device from
the sysfs hierarchy under /sys/class/fpga_region.
In the example below, two DFL based FPGA devices are installed in the host. Each
fpga device has one FME and two ports (AFUs).
FPGA regions are created under /sys/class/fpga_region/
/sys/class/fpga_region/region0
/sys/class/fpga_region/region1
/sys/class/fpga_region/region2
...
The application needs to search each regionX folder; if a feature device is
found there (e.g. "dfl-port.n" or "dfl-fme.m"), then it is a base
fpga region which represents an FPGA device.
Each base region has one FME and two ports (AFUs) as child devices:
/sys/class/fpga_region/region0/dfl-fme.0
/sys/class/fpga_region/region0/dfl-port.0
/sys/class/fpga_region/region0/dfl-port.1
...
/sys/class/fpga_region/region3/dfl-fme.1
/sys/class/fpga_region/region3/dfl-port.2
/sys/class/fpga_region/region3/dfl-port.3
...
In general, the FME/AFU sysfs interfaces are named as follows:
/sys/class/fpga_region/<regionX>/<dfl-fme.n>/
/sys/class/fpga_region/<regionX>/<dfl-port.m>/
with 'n' consecutively numbering all FMEs and 'm' consecutively numbering all
ports.
The device nodes used for ioctl() or mmap() can be referenced through:
/sys/class/fpga_region/<regionX>/<dfl-fme.n>/dev
/sys/class/fpga_region/<regionX>/<dfl-port.n>/dev
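A small userspace sketch of that enumeration (directory names follow the
layout above; error handling trimmed):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    /* Report which fpga_region entries are DFL base regions, i.e. which
     * ones have a dfl-fme.n child device.
     */
    int main(void)
    {
        char path[256];
        DIR *top = opendir("/sys/class/fpga_region");
        struct dirent *region, *child;

        if (!top)
            return 1;
        while ((region = readdir(top))) {
            if (strncmp(region->d_name, "region", 6))
                continue;
            snprintf(path, sizeof(path), "/sys/class/fpga_region/%s",
                     region->d_name);
            DIR *d = opendir(path);
            if (!d)
                continue;
            while ((child = readdir(d)))
                if (!strncmp(child->d_name, "dfl-fme.", 8))
                    printf("%s is a DFL FPGA device (FME: %s)\n",
                           region->d_name, child->d_name);
            closedir(d);
        }
        closedir(top);
        return 0;
    }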
Add new FIUs support
====================
Developers may create new function blocks (FIUs) under this DFL framework.
A new platform device driver then needs to be developed for the new feature
device (FIU), in the same way as the existing feature device drivers
(e.g. the FME and Port/AFU platform device drivers). Besides that, the DFL
framework enumeration code also needs to be modified for new FIU type
detection and the creation of the related platform devices.
Add new private features support
================================
In some cases, we may need to add some new private features to existing FIUs
(e.g. FME or Port). Developers don't need to touch enumeration code in DFL
framework, as each private feature will be parsed automatically and related
mmio resources can be found under FIU platform device created by DFL framework.
Developer only needs to provide a sub feature driver with matched feature id.
FME Partial Reconfiguration Sub Feature driver (see drivers/fpga/dfl-fme-pr.c)
could be a reference.
Open discussion
===============
FME driver exports one ioctl (DFL_FPGA_FME_PORT_PR) for partial reconfiguration
to user now. In the future, if unified user interfaces for reconfiguration are
added, FME driver should switch to them from ioctl interface.


@ -324,6 +324,7 @@ Code Seq#(hex) Include File Comments
0xB3 00 linux/mmc/ioctl.h
0xB4 00-0F linux/gpio.h <mailto:linux-gpio@vger.kernel.org>
0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org>
0xB6 all linux/fpga-dfl.h
0xC0 00-0F linux/usb/iowarrior.h
0xCA 00-0F uapi/misc/cxl.h
0xCA 10-2F uapi/misc/ocxl.h


@ -39,6 +39,7 @@ show up in /proc/sys/kernel:
- hung_task_check_count
- hung_task_timeout_secs
- hung_task_warnings
- hyperv_record_panic_msg
- kexec_load_disabled
- kptr_restrict
- l2cr [ PPC only ]
@ -374,6 +375,16 @@ This file shows up if CONFIG_DETECT_HUNG_TASK is enabled.
==============================================================
hyperv_record_panic_msg:
Controls whether the panic kmsg data should be reported to Hyper-V.
0: do not report panic kmsg data.
1: report the panic kmsg data. This is the default behavior.
==============================================================
kexec_load_disabled:
A toggle indicating if the kexec_load syscall has been disabled. This


vad: general purpose A/D input (VAD)
vdd: battery input (VDD)
After the voltage conversion the value is returned as decimal ASCII.
Note: To get volts, the value has to be divided by 100.


@ -836,6 +836,12 @@ L: linux-media@vger.kernel.org
S: Maintained
F: drivers/media/i2c/ad9389b*
ANALOG DEVICES INC ADGS1408 DRIVER
M: Mircea Caprioru <mircea.caprioru@analog.com>
S: Supported
F: drivers/mux/adgs1408.c
F: Documentation/devicetree/bindings/mux/adgs1408.txt
ANALOG DEVICES INC ADV7180 DRIVER
M: Lars-Peter Clausen <lars@metafoo.de>
L: linux-media@vger.kernel.org
@ -5714,6 +5720,14 @@ F: drivers/fpga/
F: include/linux/fpga/
W: http://www.rocketboards.org
FPGA DFL DRIVERS
M: Wu Hao <hao.wu@intel.com>
L: linux-fpga@vger.kernel.org
S: Maintained
F: Documentation/fpga/dfl.txt
F: include/uapi/linux/fpga-dfl.h
F: drivers/fpga/dfl*
FPU EMULATOR
M: Bill Metzenthen <billm@melbpc.org.au>
W: http://floatingpoint.sourceforge.net/emulator/index.html
@ -6130,6 +6144,14 @@ F: Documentation/isdn/README.gigaset
F: drivers/isdn/gigaset/
F: include/uapi/linux/gigaset_dev.h
GNSS SUBSYSTEM
M: Johan Hovold <johan@kernel.org>
S: Maintained
F: Documentation/ABI/testing/sysfs-class-gnss
F: Documentation/devicetree/bindings/gnss/
F: drivers/gnss/
F: include/linux/gnss.h
GO7007 MPEG CODEC
M: Hans Verkuil <hans.verkuil@cisco.com>
L: linux-media@vger.kernel.org
@ -15523,7 +15545,7 @@ F: include/linux/vme*
VMWARE BALLOON DRIVER
M: Xavier Deguillard <xdeguillard@vmware.com>
M: Philip Moltmann <moltmann@vmware.com> M: Nadav Amit <namit@vmware.com>
M: "VMware, Inc." <pv-drivers@vmware.com> M: "VMware, Inc." <pv-drivers@vmware.com>
L: linux-kernel@vger.kernel.org L: linux-kernel@vger.kernel.org
S: Maintained S: Maintained
@ -15615,6 +15637,7 @@ F: drivers/mmc/host/vub300.c
W1 DALLAS'S 1-WIRE BUS
M: Evgeniy Polyakov <zbr@ioremap.net>
S: Maintained
F: Documentation/devicetree/bindings/w1/
F: Documentation/w1/
F: drivers/w1/
F: include/linux/w1.h


@ -52,6 +52,7 @@
compatible = "fsi-master-gpio", "fsi-master"; compatible = "fsi-master-gpio", "fsi-master";
#address-cells = <2>; #address-cells = <2>;
#size-cells = <0>; #size-cells = <0>;
no-gpio-delays;
clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>; clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>;
data-gpios = <&gpio ASPEED_GPIO(AA, 2) GPIO_ACTIVE_HIGH>; data-gpios = <&gpio ASPEED_GPIO(AA, 2) GPIO_ACTIVE_HIGH>;


@ -153,6 +153,7 @@
compatible = "fsi-master-gpio", "fsi-master"; compatible = "fsi-master-gpio", "fsi-master";
#address-cells = <2>; #address-cells = <2>;
#size-cells = <0>; #size-cells = <0>;
no-gpio-delays;
clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>; clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>;
data-gpios = <&gpio ASPEED_GPIO(E, 0) GPIO_ACTIVE_HIGH>; data-gpios = <&gpio ASPEED_GPIO(E, 0) GPIO_ACTIVE_HIGH>;


@ -91,6 +91,7 @@
compatible = "fsi-master-gpio", "fsi-master"; compatible = "fsi-master-gpio", "fsi-master";
#address-cells = <2>; #address-cells = <2>;
#size-cells = <0>; #size-cells = <0>;
no-gpio-delays;
trans-gpios = <&gpio ASPEED_GPIO(O, 6) GPIO_ACTIVE_HIGH>; trans-gpios = <&gpio ASPEED_GPIO(O, 6) GPIO_ACTIVE_HIGH>;
enable-gpios = <&gpio ASPEED_GPIO(D, 0) GPIO_ACTIVE_HIGH>; enable-gpios = <&gpio ASPEED_GPIO(D, 0) GPIO_ACTIVE_HIGH>;


@ -15,6 +15,7 @@
*/
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/io.h>


@ -8,6 +8,7 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
#include <linux/sched.h>
#include <linux/list.h>


@ -333,7 +333,7 @@ void __init hyperv_init(void)
* Register Hyper-V specific clocksource.
*/
#ifdef CONFIG_HYPERV_TSCPAGE
if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) { if (ms_hyperv.features & HV_MSR_REFERENCE_TSC_AVAILABLE) {
union hv_x64_msr_hypercall_contents tsc_msr;
tsc_pg = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL);
@ -362,7 +362,7 @@ register_msr_cs:
*/
hyperv_cs = &hyperv_cs_msr;
if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE) if (ms_hyperv.features & HV_MSR_TIME_REF_COUNT_AVAILABLE)
clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100);
return;
@ -426,6 +426,33 @@ void hyperv_report_panic(struct pt_regs *regs, long err)
}
EXPORT_SYMBOL_GPL(hyperv_report_panic);
/**
* hyperv_report_panic_msg - report panic message to Hyper-V
* @pa: physical address of the panic page containing the message
* @size: size of the message in the page
*/
void hyperv_report_panic_msg(phys_addr_t pa, size_t size)
{
/*
* P3 to contain the physical address of the panic page & P4 to
* contain the size of the panic data in that page. Rest of the
* registers are no-op when the NOTIFY_MSG flag is set.
*/
wrmsrl(HV_X64_MSR_CRASH_P0, 0);
wrmsrl(HV_X64_MSR_CRASH_P1, 0);
wrmsrl(HV_X64_MSR_CRASH_P2, 0);
wrmsrl(HV_X64_MSR_CRASH_P3, pa);
wrmsrl(HV_X64_MSR_CRASH_P4, size);
/*
* Let Hyper-V know there is crash data available along with
* the panic message.
*/
wrmsrl(HV_X64_MSR_CRASH_CTL,
(HV_CRASH_CTL_CRASH_NOTIFY | HV_CRASH_CTL_CRASH_NOTIFY_MSG));
}
EXPORT_SYMBOL_GPL(hyperv_report_panic_msg);
bool hv_is_hyperv_initialized(void)
{
union hv_x64_msr_hypercall_contents hypercall_msr;


@ -35,9 +35,9 @@
/* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */
#define HV_X64_MSR_VP_RUNTIME_AVAILABLE (1 << 0)
/* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/
#define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1) #define HV_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1)
/* Partition reference TSC MSR is available */
#define HV_X64_MSR_REFERENCE_TSC_AVAILABLE (1 << 9) #define HV_MSR_REFERENCE_TSC_AVAILABLE (1 << 9)
/* A partition's reference time stamp counter (TSC) page */
#define HV_X64_MSR_REFERENCE_TSC 0x40000021
@ -60,7 +60,7 @@
* Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through
* HV_X64_MSR_STIMER3_COUNT) available
*/
#define HV_X64_MSR_SYNTIMER_AVAILABLE (1 << 3) #define HV_MSR_SYNTIMER_AVAILABLE (1 << 3)
/*
* APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
* are available
@ -86,7 +86,7 @@
#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
/* stimer Direct Mode is available */
#define HV_X64_STIMER_DIRECT_MODE_AVAILABLE (1 << 19) #define HV_STIMER_DIRECT_MODE_AVAILABLE (1 << 19)
/*
* Feature identification: EBX indicates which flags were specified at
@ -160,9 +160,9 @@
#define HV_X64_RELAXED_TIMING_RECOMMENDED (1 << 5)
/*
* Virtual APIC support * Recommend not using Auto End-Of-Interrupt feature
*/
#define HV_X64_DEPRECATING_AEOI_RECOMMENDED (1 << 9) #define HV_DEPRECATING_AEOI_RECOMMENDED (1 << 9)
/*
* Recommend using cluster IPI hypercalls.
@ -176,9 +176,10 @@
#define HV_X64_ENLIGHTENED_VMCS_RECOMMENDED (1 << 14)
/*
* Crash notification flag. * Crash notification flags.
*/
#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63) #define HV_CRASH_CTL_CRASH_NOTIFY_MSG BIT_ULL(62)
#define HV_CRASH_CTL_CRASH_NOTIFY BIT_ULL(63)
/* MSR used to identify the guest OS. */
#define HV_X64_MSR_GUEST_OS_ID 0x40000000


@ -76,8 +76,10 @@ static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
}
}
#define hv_init_timer(timer, tick) wrmsrl(timer, tick) #define hv_init_timer(timer, tick) \
#define hv_init_timer_config(config, val) wrmsrl(config, val) wrmsrl(HV_X64_MSR_STIMER0_COUNT + (2*timer), tick)
#define hv_init_timer_config(timer, val) \
wrmsrl(HV_X64_MSR_STIMER0_CONFIG + (2*timer), val)
#define hv_get_simp(val) rdmsrl(HV_X64_MSR_SIMP, val)
#define hv_set_simp(val) wrmsrl(HV_X64_MSR_SIMP, val)
@ -90,8 +92,13 @@ static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
#define hv_get_vp_index(index) rdmsrl(HV_X64_MSR_VP_INDEX, index)
#define hv_get_synint_state(int_num, val) rdmsrl(int_num, val) #define hv_get_synint_state(int_num, val) \
#define hv_set_synint_state(int_num, val) wrmsrl(int_num, val) rdmsrl(HV_X64_MSR_SINT0 + int_num, val)
#define hv_set_synint_state(int_num, val) \
wrmsrl(HV_X64_MSR_SINT0 + int_num, val)
#define hv_get_crash_ctl(val) \
rdmsrl(HV_X64_MSR_CRASH_CTL, val)
void hyperv_callback_vector(void);
void hyperv_reenlightenment_vector(void);
@ -332,6 +339,7 @@ static inline int cpumask_to_vpset(struct hv_vpset *vpset,
void __init hyperv_init(void);
void hyperv_setup_mmu_ops(void);
void hyperv_report_panic(struct pt_regs *regs, long err);
void hyperv_report_panic_msg(phys_addr_t pa, size_t size);
bool hv_is_hyperv_initialized(void);
void hyperv_cleanup(void);


@ -41,7 +41,7 @@ static void (*hv_stimer0_handler)(void);
static void (*hv_kexec_handler)(void);
static void (*hv_crash_handler)(struct pt_regs *regs);
void hyperv_vector_handler(struct pt_regs *regs) __visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs)
{
struct pt_regs *old_regs = set_irq_regs(regs);
@ -50,7 +50,7 @@ void hyperv_vector_handler(struct pt_regs *regs)
if (vmbus_handler)
vmbus_handler();
if (ms_hyperv.hints & HV_X64_DEPRECATING_AEOI_RECOMMENDED) if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED)
ack_APIC_irq();
exiting_irq();
@ -300,7 +300,7 @@ static void __init ms_hyperv_init_platform(void)
hyperv_reenlightenment_vector);
/* Setup the IDT for stimer0 */
if (ms_hyperv.misc_features & HV_X64_STIMER_DIRECT_MODE_AVAILABLE) if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
alloc_intr_gate(HYPERV_STIMER0_VECTOR,
hv_stimer0_callback_vector);
#endif


@ -9,6 +9,8 @@ source "drivers/bus/Kconfig"
source "drivers/connector/Kconfig" source "drivers/connector/Kconfig"
source "drivers/gnss/Kconfig"
source "drivers/mtd/Kconfig" source "drivers/mtd/Kconfig"
source "drivers/of/Kconfig" source "drivers/of/Kconfig"


@ -185,3 +185,4 @@ obj-$(CONFIG_TEE) += tee/
obj-$(CONFIG_MULTIPLEXER) += mux/
obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/
obj-$(CONFIG_SIOX) += siox/
obj-$(CONFIG_GNSS) += gnss/


@ -10,7 +10,7 @@ if ANDROID
config ANDROID_BINDER_IPC
bool "Android Binder IPC Driver"
depends on MMU && !M68K depends on MMU
default n
---help---
Binder is used in Android for both communication between processes,


@ -51,7 +51,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/cacheflush.h>
#include <linux/fdtable.h>
#include <linux/file.h>
#include <linux/freezer.h>
@ -71,8 +70,12 @@
#include <linux/pid_namespace.h> #include <linux/pid_namespace.h>
#include <linux/security.h> #include <linux/security.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
#include <linux/ratelimit.h>
#include <uapi/linux/android/binder.h> #include <uapi/linux/android/binder.h>
#include <asm/cacheflush.h>
#include "binder_alloc.h" #include "binder_alloc.h"
#include "binder_trace.h" #include "binder_trace.h"
@ -161,13 +164,13 @@ module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
#define binder_debug(mask, x...) \ #define binder_debug(mask, x...) \
do { \ do { \
if (binder_debug_mask & mask) \ if (binder_debug_mask & mask) \
pr_info(x); \ pr_info_ratelimited(x); \
} while (0) } while (0)
#define binder_user_error(x...) \ #define binder_user_error(x...) \
do { \ do { \
if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) \ if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) \
pr_info(x); \ pr_info_ratelimited(x); \
if (binder_stop_on_user_error) \ if (binder_stop_on_user_error) \
binder_stop_on_user_error = 2; \ binder_stop_on_user_error = 2; \
} while (0) } while (0)


@ -17,7 +17,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/cacheflush.h>
#include <linux/list.h> #include <linux/list.h>
#include <linux/sched/mm.h> #include <linux/sched/mm.h>
#include <linux/module.h> #include <linux/module.h>
@ -28,6 +27,8 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/list_lru.h> #include <linux/list_lru.h>
#include <linux/ratelimit.h>
#include <asm/cacheflush.h>
#include "binder_alloc.h" #include "binder_alloc.h"
#include "binder_trace.h" #include "binder_trace.h"
@ -36,11 +37,12 @@ struct list_lru binder_alloc_lru;
static DEFINE_MUTEX(binder_alloc_mmap_lock); static DEFINE_MUTEX(binder_alloc_mmap_lock);
enum { enum {
BINDER_DEBUG_USER_ERROR = 1U << 0,
BINDER_DEBUG_OPEN_CLOSE = 1U << 1, BINDER_DEBUG_OPEN_CLOSE = 1U << 1,
BINDER_DEBUG_BUFFER_ALLOC = 1U << 2, BINDER_DEBUG_BUFFER_ALLOC = 1U << 2,
BINDER_DEBUG_BUFFER_ALLOC_ASYNC = 1U << 3, BINDER_DEBUG_BUFFER_ALLOC_ASYNC = 1U << 3,
}; };
static uint32_t binder_alloc_debug_mask; static uint32_t binder_alloc_debug_mask = BINDER_DEBUG_USER_ERROR;
module_param_named(debug_mask, binder_alloc_debug_mask, module_param_named(debug_mask, binder_alloc_debug_mask,
uint, 0644); uint, 0644);
@ -48,7 +50,7 @@ module_param_named(debug_mask, binder_alloc_debug_mask,
#define binder_alloc_debug(mask, x...) \ #define binder_alloc_debug(mask, x...) \
do { \ do { \
if (binder_alloc_debug_mask & mask) \ if (binder_alloc_debug_mask & mask) \
pr_info(x); \ pr_info_ratelimited(x); \
} while (0) } while (0)
static struct binder_buffer *binder_buffer_next(struct binder_buffer *buffer) static struct binder_buffer *binder_buffer_next(struct binder_buffer *buffer)
@ -152,8 +154,10 @@ static struct binder_buffer *binder_alloc_prepare_to_free_locked(
* free the buffer twice * free the buffer twice
*/ */
if (buffer->free_in_progress) { if (buffer->free_in_progress) {
pr_err("%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
alloc->pid, current->pid, (u64)user_ptr); "%d:%d FREE_BUFFER u%016llx user freed buffer twice\n",
alloc->pid, current->pid,
(u64)user_ptr);
return NULL; return NULL;
} }
buffer->free_in_progress = 1; buffer->free_in_progress = 1;
@ -224,8 +228,9 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
} }
if (!vma && need_mm) { if (!vma && need_mm) {
pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n", binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
alloc->pid); "%d: binder_alloc_buf failed to map pages in userspace, no vma\n",
alloc->pid);
goto err_no_vma; goto err_no_vma;
} }
@ -344,8 +349,9 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
int ret; int ret;
if (alloc->vma == NULL) { if (alloc->vma == NULL) {
pr_err("%d: binder_alloc_buf, no vma\n", binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
alloc->pid); "%d: binder_alloc_buf, no vma\n",
alloc->pid);
return ERR_PTR(-ESRCH); return ERR_PTR(-ESRCH);
} }
@ -417,11 +423,14 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
if (buffer_size > largest_free_size) if (buffer_size > largest_free_size)
largest_free_size = buffer_size; largest_free_size = buffer_size;
} }
pr_err("%d: binder_alloc_buf size %zd failed, no address space\n", binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
alloc->pid, size); "%d: binder_alloc_buf size %zd failed, no address space\n",
pr_err("allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n", alloc->pid, size);
total_alloc_size, allocated_buffers, largest_alloc_size, binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
total_free_size, free_buffers, largest_free_size); "allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n",
total_alloc_size, allocated_buffers,
largest_alloc_size, total_free_size,
free_buffers, largest_free_size);
return ERR_PTR(-ENOSPC); return ERR_PTR(-ENOSPC);
} }
if (n == NULL) { if (n == NULL) {
@ -731,8 +740,10 @@ err_alloc_pages_failed:
err_get_vm_area_failed: err_get_vm_area_failed:
err_already_mapped: err_already_mapped:
mutex_unlock(&binder_alloc_mmap_lock); mutex_unlock(&binder_alloc_mmap_lock);
pr_err("%s: %d %lx-%lx %s failed %d\n", __func__, binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret); "%s: %d %lx-%lx %s failed %d\n", __func__,
alloc->pid, vma->vm_start, vma->vm_end,
failure_string, ret);
return ret; return ret;
} }


@ -248,14 +248,17 @@ DECLARE_EVENT_CLASS(binder_buffer_class,
__field(int, debug_id) __field(int, debug_id)
__field(size_t, data_size) __field(size_t, data_size)
__field(size_t, offsets_size) __field(size_t, offsets_size)
__field(size_t, extra_buffers_size)
), ),
TP_fast_assign( TP_fast_assign(
__entry->debug_id = buf->debug_id; __entry->debug_id = buf->debug_id;
__entry->data_size = buf->data_size; __entry->data_size = buf->data_size;
__entry->offsets_size = buf->offsets_size; __entry->offsets_size = buf->offsets_size;
__entry->extra_buffers_size = buf->extra_buffers_size;
), ),
TP_printk("transaction=%d data_size=%zd offsets_size=%zd", TP_printk("transaction=%d data_size=%zd offsets_size=%zd extra_buffers_size=%zd",
__entry->debug_id, __entry->data_size, __entry->offsets_size) __entry->debug_id, __entry->data_size, __entry->offsets_size,
__entry->extra_buffers_size)
); );
DEFINE_EVENT(binder_buffer_class, binder_transaction_alloc_buf, DEFINE_EVENT(binder_buffer_class, binder_transaction_alloc_buf,


@ -17,6 +17,7 @@
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/libata.h> #include <linux/libata.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#define DRV_NAME "pata_imx" #define DRV_NAME "pata_imx"


@ -17,6 +17,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/libata.h> #include <linux/libata.h>


@ -9,6 +9,7 @@
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/gpio/consumer.h> #include <linux/gpio/consumer.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/property.h> #include <linux/property.h>
#include <linux/slab.h> #include <linux/slab.h>


@ -579,7 +579,6 @@ hpet_ioctl_common(struct hpet_dev *devp, unsigned int cmd, unsigned long arg,
struct hpet_info *info) struct hpet_info *info)
{ {
struct hpet_timer __iomem *timer; struct hpet_timer __iomem *timer;
struct hpet __iomem *hpet;
struct hpets *hpetp; struct hpets *hpetp;
int err; int err;
unsigned long v; unsigned long v;
@ -591,7 +590,6 @@ hpet_ioctl_common(struct hpet_dev *devp, unsigned int cmd, unsigned long arg,
case HPET_DPI: case HPET_DPI:
case HPET_IRQFREQ: case HPET_IRQFREQ:
timer = devp->hd_timer; timer = devp->hd_timer;
hpet = devp->hd_hpet;
hpetp = devp->hd_hpets; hpetp = devp->hd_hpets;
break; break;
case HPET_IE_ON: case HPET_IE_ON:


@ -8,6 +8,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/clk.h> #include <linux/clk.h>


@ -19,6 +19,7 @@
#include <linux/iopoll.h> #include <linux/iopoll.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>


@ -13,6 +13,7 @@
*/ */
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/clk.h> #include <linux/clk.h>


@ -10,6 +10,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/random.h> #include <linux/random.h>


@ -766,6 +766,7 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
switch (orig) { switch (orig) {
case SEEK_CUR: case SEEK_CUR:
offset += file->f_pos; offset += file->f_pos;
/* fall through */
case SEEK_SET: case SEEK_SET:
/* to avoid userland mistaking f_pos=-9 as -EBADF=-9 */ /* to avoid userland mistaking f_pos=-9 as -EBADF=-9 */
if ((unsigned long long)offset >= -MAX_ERRNO) { if ((unsigned long long)offset >= -MAX_ERRNO) {


@ -1748,8 +1748,6 @@ static int cm4000_config_check(struct pcmcia_device *p_dev, void *priv_data)
static int cm4000_config(struct pcmcia_device * link, int devno) static int cm4000_config(struct pcmcia_device * link, int devno)
{ {
struct cm4000_dev *dev;
link->config_flags |= CONF_AUTO_SET_IO; link->config_flags |= CONF_AUTO_SET_IO;
/* read the config-tuples */ /* read the config-tuples */
@ -1759,8 +1757,6 @@ static int cm4000_config(struct pcmcia_device * link, int devno)
if (pcmcia_enable_device(link)) if (pcmcia_enable_device(link))
goto cs_release; goto cs_release;
dev = link->priv;
return 0; return 0;
cs_release: cs_release:


@ -1309,51 +1309,35 @@ static const struct attribute_group port_attribute_group = {
.attrs = port_sysfs_entries, .attrs = port_sysfs_entries,
}; };
static ssize_t debugfs_read(struct file *filp, char __user *ubuf, static int debugfs_show(struct seq_file *s, void *data)
size_t count, loff_t *offp)
{ {
struct port *port; struct port *port = s->private;
char *buf;
ssize_t ret, out_offset, out_count;
out_count = 1024; seq_printf(s, "name: %s\n", port->name ? port->name : "");
buf = kmalloc(out_count, GFP_KERNEL); seq_printf(s, "guest_connected: %d\n", port->guest_connected);
if (!buf) seq_printf(s, "host_connected: %d\n", port->host_connected);
return -ENOMEM; seq_printf(s, "outvq_full: %d\n", port->outvq_full);
seq_printf(s, "bytes_sent: %lu\n", port->stats.bytes_sent);
seq_printf(s, "bytes_received: %lu\n", port->stats.bytes_received);
seq_printf(s, "bytes_discarded: %lu\n", port->stats.bytes_discarded);
seq_printf(s, "is_console: %s\n",
is_console_port(port) ? "yes" : "no");
seq_printf(s, "console_vtermno: %u\n", port->cons.vtermno);
port = filp->private_data; return 0;
out_offset = 0; }
out_offset += snprintf(buf + out_offset, out_count,
"name: %s\n", port->name ? port->name : "");
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"guest_connected: %d\n", port->guest_connected);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"host_connected: %d\n", port->host_connected);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"outvq_full: %d\n", port->outvq_full);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"bytes_sent: %lu\n", port->stats.bytes_sent);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"bytes_received: %lu\n",
port->stats.bytes_received);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"bytes_discarded: %lu\n",
port->stats.bytes_discarded);
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"is_console: %s\n",
is_console_port(port) ? "yes" : "no");
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"console_vtermno: %u\n", port->cons.vtermno);
ret = simple_read_from_buffer(ubuf, count, offp, buf, out_offset); static int debugfs_open(struct inode *inode, struct file *file)
kfree(buf); {
return ret; return single_open(file, debugfs_show, inode->i_private);
} }
static const struct file_operations port_debugfs_ops = { static const struct file_operations port_debugfs_ops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.open = simple_open, .open = debugfs_open,
.read = debugfs_read, .read = seq_read,
.llseek = seq_lseek,
.release = single_release,
}; };
static void set_console_size(struct port *port, u16 rows, u16 cols) static void set_console_size(struct port *port, u16 rows, u16 cols)


@ -13,6 +13,7 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include "mtk-platform.h" #include "mtk-platform.h"


@ -14,6 +14,7 @@
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
#include <linux/types.h> #include <linux/types.h>


@ -8,6 +8,7 @@
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/crc32poly.h> #include <linux/crc32poly.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>


@ -20,6 +20,7 @@
#include <linux/irqreturn.h> #include <linux/irqreturn.h>
#include <linux/klist.h> #include <linux/klist.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regulator/consumer.h> #include <linux/regulator/consumer.h>
#include <linux/semaphore.h> #include <linux/semaphore.h>


@ -21,6 +21,7 @@
#include <linux/klist.h> #include <linux/klist.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/crypto.h> #include <linux/crypto.h>


@ -24,6 +24,7 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/pm_opp.h> #include <linux/pm_opp.h>
#include <linux/reset.h> #include <linux/reset.h>


@ -23,6 +23,7 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/dmaengine.h> #include <linux/dmaengine.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/slab.h> #include <linux/slab.h>


@ -35,6 +35,7 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/platform_data/dma-s3c24xx.h> #include <linux/platform_data/dma-s3c24xx.h>


@ -20,6 +20,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/mfd/intel_soc_pmic.h> #include <linux/mfd/intel_soc_pmic.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regmap.h> #include <linux/regmap.h>
#include <linux/slab.h> #include <linux/slab.h>


@ -20,7 +20,7 @@
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/extcon-provider.h> #include <linux/extcon-provider.h>
#include <linux/gpio.h> #include <linux/gpio/consumer.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>


@ -14,6 +14,7 @@
#include <linux/gpio/consumer.h> #include <linux/gpio/consumer.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
struct max3355_data { struct max3355_data {


@ -20,6 +20,7 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>


@ -1,18 +1,8 @@
/** // SPDX-License-Identifier: GPL-2.0
* drivers/extcon/extcon-usbc-cros-ec - ChromeOS Embedded Controller extcon // ChromeOS Embedded Controller extcon
* //
* Copyright (C) 2017 Google, Inc // Copyright (C) 2017 Google, Inc.
* Author: Benson Leung <bleung@chromium.org> // Author: Benson Leung <bleung@chromium.org>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/extcon-provider.h> #include <linux/extcon-provider.h>
#include <linux/kernel.h> #include <linux/kernel.h>
@ -548,4 +538,4 @@ module_platform_driver(extcon_cros_ec_driver);
MODULE_DESCRIPTION("ChromeOS Embedded Controller extcon driver"); MODULE_DESCRIPTION("ChromeOS Embedded Controller extcon driver");
MODULE_AUTHOR("Benson Leung <bleung@chromium.org>"); MODULE_AUTHOR("Benson Leung <bleung@chromium.org>");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL v2");


@ -433,8 +433,8 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
return index; return index;
spin_lock_irqsave(&edev->lock, flags); spin_lock_irqsave(&edev->lock, flags);
state = !!(edev->state & BIT(index)); state = !!(edev->state & BIT(index));
spin_unlock_irqrestore(&edev->lock, flags);
/* /*
* Call functions in a raw notifier chain for the specific one * Call functions in a raw notifier chain for the specific one
@ -448,6 +448,7 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
*/ */
raw_notifier_call_chain(&edev->nh_all, state, edev); raw_notifier_call_chain(&edev->nh_all, state, edev);
spin_lock_irqsave(&edev->lock, flags);
/* This could be in interrupt handler */ /* This could be in interrupt handler */
prop_buf = (char *)get_zeroed_page(GFP_ATOMIC); prop_buf = (char *)get_zeroed_page(GFP_ATOMIC);
if (!prop_buf) { if (!prop_buf) {


@ -246,6 +246,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr); sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
kfree(sec->raw_name); kfree(sec->raw_name);
memunmap(sec->baseaddr); memunmap(sec->baseaddr);
sec->enabled = false;
} }
return 0; return 0;
@ -279,8 +280,10 @@ static int vpd_sections_init(phys_addr_t physaddr)
ret = vpd_section_init("rw", &rw_vpd, ret = vpd_section_init("rw", &rw_vpd,
physaddr + sizeof(struct vpd_cbmem) + physaddr + sizeof(struct vpd_cbmem) +
header.ro_size, header.rw_size); header.ro_size, header.rw_size);
if (ret) if (ret) {
vpd_section_destroy(&ro_vpd);
return ret; return ret;
}
} }
return 0; return 0;


@ -130,4 +130,72 @@ config OF_FPGA_REGION
Support for loading FPGA images by applying a Device Tree Support for loading FPGA images by applying a Device Tree
overlay. overlay.
config FPGA_DFL
tristate "FPGA Device Feature List (DFL) support"
select FPGA_BRIDGE
select FPGA_REGION
help
Device Feature List (DFL) defines a feature list structure that
creates a linked list of feature headers within the MMIO space
to provide an extensible way of adding features for FPGA.
The driver can walk through the feature headers to enumerate feature
devices (e.g. FPGA Management Engine, Port and Accelerator
Function Unit) and their private features for target FPGA devices.
Select this option to enable common support for Field-Programmable
Gate Array (FPGA) solutions which implement Device Feature List.
It provides enumeration APIs and feature device infrastructure.
config FPGA_DFL_FME
tristate "FPGA DFL FME Driver"
depends on FPGA_DFL
help
The FPGA Management Engine (FME) is a feature device implemented
under Device Feature List (DFL) framework. Select this option to
enable the platform device driver for FME which implements all
FPGA platform level management features. There shall be one FME
per DFL based FPGA device.
config FPGA_DFL_FME_MGR
tristate "FPGA DFL FME Manager Driver"
depends on FPGA_DFL_FME && HAS_IOMEM
help
Say Y to enable FPGA Manager driver for FPGA Management Engine.
config FPGA_DFL_FME_BRIDGE
tristate "FPGA DFL FME Bridge Driver"
depends on FPGA_DFL_FME && HAS_IOMEM
help
Say Y to enable FPGA Bridge driver for FPGA Management Engine.
config FPGA_DFL_FME_REGION
tristate "FPGA DFL FME Region Driver"
depends on FPGA_DFL_FME && HAS_IOMEM
help
Say Y to enable FPGA Region driver for FPGA Management Engine.
config FPGA_DFL_AFU
tristate "FPGA DFL AFU Driver"
depends on FPGA_DFL
help
This is the driver for FPGA Accelerated Function Unit (AFU) which
implements AFU and Port management features. A User AFU connects
to the FPGA infrastructure via a Port. There may be more than one
Port/AFU per DFL based FPGA device.
config FPGA_DFL_PCI
tristate "FPGA DFL PCIe Device Driver"
depends on PCI && FPGA_DFL
help
Select this option to enable PCIe driver for PCIe-based
Field-Programmable Gate Array (FPGA) solutions which implement
the Device Feature List (DFL). This driver provides interfaces
for userspace applications to configure, enumerate, open and access
FPGA accelerators on the FPGA DFL devices, enables system level
management functions such as FPGA partial reconfiguration, power
management and virtualization with DFL framework and DFL feature
device drivers.
To compile this as a module, choose M here.
endif # FPGA endif # FPGA
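
A minimal, self-contained sketch of the feature-header walk that the FPGA_DFL help text above describes: each header carries an offset to the next one, and enumeration stops when that offset is zero. This is illustrative only; the header layout and field names below are simplified assumptions, not the layout defined by the DFL framework (dfl.h) added in this series.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dfl_feature_header {        /* hypothetical, simplified layout */
	uint32_t id;               /* feature id */
	uint32_t next_offset;      /* byte offset to the next header, 0 == end of list */
};

static void walk_feature_list(const uint8_t *mmio_base)
{
	uint32_t offset = 0;

	for (;;) {
		struct dfl_feature_header hdr;

		/* a real driver reads ioremapped MMIO instead of plain memory */
		memcpy(&hdr, mmio_base + offset, sizeof(hdr));
		printf("feature 0x%x at offset 0x%x\n", hdr.id, offset);
		if (!hdr.next_offset)
			break;
		offset += hdr.next_offset;
	}
}

int main(void)
{
	uint8_t buf[64] = { 0 };
	struct dfl_feature_header fme = { .id = 0x0, .next_offset = 16 };
	struct dfl_feature_header port = { .id = 0x1, .next_offset = 0 };

	memcpy(buf, &fme, sizeof(fme));
	memcpy(buf + 16, &port, sizeof(port));
	walk_feature_list(buf);
	return 0;
}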


@ -28,3 +28,17 @@ obj-$(CONFIG_XILINX_PR_DECOUPLER) += xilinx-pr-decoupler.o
# High Level Interfaces # High Level Interfaces
obj-$(CONFIG_FPGA_REGION) += fpga-region.o obj-$(CONFIG_FPGA_REGION) += fpga-region.o
obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o
# FPGA Device Feature List Support
obj-$(CONFIG_FPGA_DFL) += dfl.o
obj-$(CONFIG_FPGA_DFL_FME) += dfl-fme.o
obj-$(CONFIG_FPGA_DFL_FME_MGR) += dfl-fme-mgr.o
obj-$(CONFIG_FPGA_DFL_FME_BRIDGE) += dfl-fme-br.o
obj-$(CONFIG_FPGA_DFL_FME_REGION) += dfl-fme-region.o
obj-$(CONFIG_FPGA_DFL_AFU) += dfl-afu.o
dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o
dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o
# Drivers for FPGAs which implement DFL
obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o


@ -0,0 +1,463 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Accelerated Function Unit (AFU) DMA Region Management
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
*/
#include <linux/dma-mapping.h>
#include <linux/sched/signal.h>
#include <linux/uaccess.h>
#include "dfl-afu.h"
static void put_all_pages(struct page **pages, int npages)
{
int i;
for (i = 0; i < npages; i++)
if (pages[i])
put_page(pages[i]);
}
void afu_dma_region_init(struct dfl_feature_platform_data *pdata)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
afu->dma_regions = RB_ROOT;
}
/**
* afu_dma_adjust_locked_vm - adjust locked memory
* @dev: port device
* @npages: number of pages
* @incr: increase or decrease locked memory
*
* Increase or decrease the locked memory size with npages input.
*
* Return 0 on success.
* Return -ENOMEM if locked memory size is over the limit and no CAP_IPC_LOCK.
*/
static int afu_dma_adjust_locked_vm(struct device *dev, long npages, bool incr)
{
unsigned long locked, lock_limit;
int ret = 0;
/* the task is exiting. */
if (!current->mm)
return 0;
down_write(&current->mm->mmap_sem);
if (incr) {
locked = current->mm->locked_vm + npages;
lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
if (locked > lock_limit && !capable(CAP_IPC_LOCK))
ret = -ENOMEM;
else
current->mm->locked_vm += npages;
} else {
if (WARN_ON_ONCE(npages > current->mm->locked_vm))
npages = current->mm->locked_vm;
current->mm->locked_vm -= npages;
}
dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid,
incr ? '+' : '-', npages << PAGE_SHIFT,
current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK),
ret ? "- execeeded" : "");
up_write(&current->mm->mmap_sem);
return ret;
}
/**
* afu_dma_pin_pages - pin pages of given dma memory region
* @pdata: feature device platform data
* @region: dma memory region to be pinned
*
* Pin all the pages of given dfl_afu_dma_region.
* Return 0 for success or negative error code.
*/
static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata,
struct dfl_afu_dma_region *region)
{
int npages = region->length >> PAGE_SHIFT;
struct device *dev = &pdata->dev->dev;
int ret, pinned;
ret = afu_dma_adjust_locked_vm(dev, npages, true);
if (ret)
return ret;
region->pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL);
if (!region->pages) {
ret = -ENOMEM;
goto unlock_vm;
}
pinned = get_user_pages_fast(region->user_addr, npages, 1,
region->pages);
if (pinned < 0) {
ret = pinned;
goto put_pages;
} else if (pinned != npages) {
ret = -EFAULT;
goto free_pages;
}
dev_dbg(dev, "%d pages pinned\n", pinned);
return 0;
put_pages:
put_all_pages(region->pages, pinned);
free_pages:
kfree(region->pages);
unlock_vm:
afu_dma_adjust_locked_vm(dev, npages, false);
return ret;
}
/**
* afu_dma_unpin_pages - unpin pages of given dma memory region
* @pdata: feature device platform data
* @region: dma memory region to be unpinned
*
* Unpin all the pages of given dfl_afu_dma_region.
*/
static void afu_dma_unpin_pages(struct dfl_feature_platform_data *pdata,
struct dfl_afu_dma_region *region)
{
long npages = region->length >> PAGE_SHIFT;
struct device *dev = &pdata->dev->dev;
put_all_pages(region->pages, npages);
kfree(region->pages);
afu_dma_adjust_locked_vm(dev, npages, false);
dev_dbg(dev, "%ld pages unpinned\n", npages);
}
/**
* afu_dma_check_continuous_pages - check if pages are continuous
* @region: dma memory region
*
* Return true if pages of given dma memory region have continuous physical
* address, otherwise return false.
*/
static bool afu_dma_check_continuous_pages(struct dfl_afu_dma_region *region)
{
int npages = region->length >> PAGE_SHIFT;
int i;
for (i = 0; i < npages - 1; i++)
if (page_to_pfn(region->pages[i]) + 1 !=
page_to_pfn(region->pages[i + 1]))
return false;
return true;
}
/**
* dma_region_check_iova - check if memory area is fully contained in the region
* @region: dma memory region
* @iova: address of the dma memory area
* @size: size of the dma memory area
*
* Compare the dma memory area defined by @iova and @size with given dma region.
* Return true if memory area is fully contained in the region, otherwise false.
*/
static bool dma_region_check_iova(struct dfl_afu_dma_region *region,
u64 iova, u64 size)
{
if (!size && region->iova != iova)
return false;
return (region->iova <= iova) &&
(region->length + region->iova >= iova + size);
}
/**
* afu_dma_region_add - add given dma region to rbtree
* @pdata: feature device platform data
* @region: dma region to be added
*
* Return 0 for success, -EEXIST if dma region has already been added.
*
* Needs to be called with pdata->lock held.
*/
static int afu_dma_region_add(struct dfl_feature_platform_data *pdata,
struct dfl_afu_dma_region *region)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
struct rb_node **new, *parent = NULL;
dev_dbg(&pdata->dev->dev, "add region (iova = %llx)\n",
(unsigned long long)region->iova);
new = &afu->dma_regions.rb_node;
while (*new) {
struct dfl_afu_dma_region *this;
this = container_of(*new, struct dfl_afu_dma_region, node);
parent = *new;
if (dma_region_check_iova(this, region->iova, region->length))
return -EEXIST;
if (region->iova < this->iova)
new = &((*new)->rb_left);
else if (region->iova > this->iova)
new = &((*new)->rb_right);
else
return -EEXIST;
}
rb_link_node(&region->node, parent, new);
rb_insert_color(&region->node, &afu->dma_regions);
return 0;
}
/**
* afu_dma_region_remove - remove given dma region from rbtree
* @pdata: feature device platform data
* @region: dma region to be removed
*
* Needs to be called with pdata->lock held.
*/
static void afu_dma_region_remove(struct dfl_feature_platform_data *pdata,
struct dfl_afu_dma_region *region)
{
struct dfl_afu *afu;
dev_dbg(&pdata->dev->dev, "del region (iova = %llx)\n",
(unsigned long long)region->iova);
afu = dfl_fpga_pdata_get_private(pdata);
rb_erase(&region->node, &afu->dma_regions);
}
/**
* afu_dma_region_destroy - destroy all regions in rbtree
* @pdata: feature device platform data
*
* Needs to be called with pdata->lock held.
*/
void afu_dma_region_destroy(struct dfl_feature_platform_data *pdata)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
struct rb_node *node = rb_first(&afu->dma_regions);
struct dfl_afu_dma_region *region;
while (node) {
region = container_of(node, struct dfl_afu_dma_region, node);
dev_dbg(&pdata->dev->dev, "del region (iova = %llx)\n",
(unsigned long long)region->iova);
rb_erase(node, &afu->dma_regions);
if (region->iova)
dma_unmap_page(dfl_fpga_pdata_to_parent(pdata),
region->iova, region->length,
DMA_BIDIRECTIONAL);
if (region->pages)
afu_dma_unpin_pages(pdata, region);
node = rb_next(node);
kfree(region);
}
}
/**
* afu_dma_region_find - find the dma region from rbtree based on iova and size
* @pdata: feature device platform data
* @iova: address of the dma memory area
* @size: size of the dma memory area
*
* It finds the dma region from the rbtree based on @iova and @size:
* - if @size == 0, it finds the dma region which starts from @iova
* - otherwise, it finds the dma region which fully contains
* [@iova, @iova+size)
* If nothing is matched, it returns NULL.
*
* Needs to be called with pdata->lock held.
*/
struct dfl_afu_dma_region *
afu_dma_region_find(struct dfl_feature_platform_data *pdata, u64 iova, u64 size)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
struct rb_node *node = afu->dma_regions.rb_node;
struct device *dev = &pdata->dev->dev;
while (node) {
struct dfl_afu_dma_region *region;
region = container_of(node, struct dfl_afu_dma_region, node);
if (dma_region_check_iova(region, iova, size)) {
dev_dbg(dev, "find region (iova = %llx)\n",
(unsigned long long)region->iova);
return region;
}
if (iova < region->iova)
node = node->rb_left;
else if (iova > region->iova)
node = node->rb_right;
else
/* the iova region is not fully covered. */
break;
}
dev_dbg(dev, "region with iova %llx and size %llx is not found\n",
(unsigned long long)iova, (unsigned long long)size);
return NULL;
}
/**
* afu_dma_region_find_iova - find the dma region from rbtree by iova
* @pdata: feature device platform data
* @iova: address of the dma region
*
* Needs to be called with pdata->lock held.
*/
static struct dfl_afu_dma_region *
afu_dma_region_find_iova(struct dfl_feature_platform_data *pdata, u64 iova)
{
return afu_dma_region_find(pdata, iova, 0);
}
/**
* afu_dma_map_region - map memory region for dma
* @pdata: feature device platform data
* @user_addr: address of the memory region
* @length: size of the memory region
* @iova: pointer of iova address
*
* Map memory region defined by @user_addr and @length, and return dma address
* of the memory region via @iova.
* Return 0 for success, otherwise error code.
*/
int afu_dma_map_region(struct dfl_feature_platform_data *pdata,
u64 user_addr, u64 length, u64 *iova)
{
struct dfl_afu_dma_region *region;
int ret;
/*
* Check Inputs, only accept page-aligned user memory region with
* valid length.
*/
if (!PAGE_ALIGNED(user_addr) || !PAGE_ALIGNED(length) || !length)
return -EINVAL;
/* Check overflow */
if (user_addr + length < user_addr)
return -EINVAL;
if (!access_ok(VERIFY_WRITE, (void __user *)(unsigned long)user_addr,
length))
return -EINVAL;
region = kzalloc(sizeof(*region), GFP_KERNEL);
if (!region)
return -ENOMEM;
region->user_addr = user_addr;
region->length = length;
/* Pin the user memory region */
ret = afu_dma_pin_pages(pdata, region);
if (ret) {
dev_err(&pdata->dev->dev, "failed to pin memory region\n");
goto free_region;
}
/* Only accept continuous pages; otherwise return an error */
if (!afu_dma_check_continuous_pages(region)) {
dev_err(&pdata->dev->dev, "pages are not continuous\n");
ret = -EINVAL;
goto unpin_pages;
}
/* As pages are continuous then start to do DMA mapping */
region->iova = dma_map_page(dfl_fpga_pdata_to_parent(pdata),
region->pages[0], 0,
region->length,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(&pdata->dev->dev, region->iova)) {
dev_err(&pdata->dev->dev, "failed to map for dma\n");
ret = -EFAULT;
goto unpin_pages;
}
*iova = region->iova;
mutex_lock(&pdata->lock);
ret = afu_dma_region_add(pdata, region);
mutex_unlock(&pdata->lock);
if (ret) {
dev_err(&pdata->dev->dev, "failed to add dma region\n");
goto unmap_dma;
}
return 0;
unmap_dma:
dma_unmap_page(dfl_fpga_pdata_to_parent(pdata),
region->iova, region->length, DMA_BIDIRECTIONAL);
unpin_pages:
afu_dma_unpin_pages(pdata, region);
free_region:
kfree(region);
return ret;
}
/**
* afu_dma_unmap_region - unmap dma memory region
* @pdata: feature device platform data
* @iova: dma address of the region
*
* Unmap dma memory region based on @iova.
* Return 0 for success, otherwise error code.
*/
int afu_dma_unmap_region(struct dfl_feature_platform_data *pdata, u64 iova)
{
struct dfl_afu_dma_region *region;
mutex_lock(&pdata->lock);
region = afu_dma_region_find_iova(pdata, iova);
if (!region) {
mutex_unlock(&pdata->lock);
return -EINVAL;
}
if (region->in_use) {
mutex_unlock(&pdata->lock);
return -EBUSY;
}
afu_dma_region_remove(pdata, region);
mutex_unlock(&pdata->lock);
dma_unmap_page(dfl_fpga_pdata_to_parent(pdata),
region->iova, region->length, DMA_BIDIRECTIONAL);
afu_dma_unpin_pages(pdata, region);
kfree(region);
return 0;
}
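
The containment rule used by dma_region_check_iova() above, including the size == 0 convention documented for afu_dma_region_find(), can be exercised on its own. The sketch below is illustrative only (plain userspace C, not part of this file); it restates the same check with concrete numbers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* stand-alone restatement of the check in dma_region_check_iova() */
static bool region_check_iova(uint64_t region_iova, uint64_t region_length,
			      uint64_t iova, uint64_t size)
{
	if (!size && region_iova != iova)	/* size == 0: must start exactly at iova */
		return false;

	return (region_iova <= iova) &&
	       (region_length + region_iova >= iova + size);
}

int main(void)
{
	/* a region mapped at iova 0x1000 with length 0x4000 (ends at 0x5000) */
	printf("%d\n", region_check_iova(0x1000, 0x4000, 0x2000, 0x1000)); /* 1: fully inside */
	printf("%d\n", region_check_iova(0x1000, 0x4000, 0x4800, 0x1000)); /* 0: crosses the end */
	printf("%d\n", region_check_iova(0x1000, 0x4000, 0x1000, 0));      /* 1: starts at region start */
	printf("%d\n", region_check_iova(0x1000, 0x4000, 0x2000, 0));      /* 0: size 0 needs exact start */
	return 0;
}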

drivers/fpga/dfl-afu-main.c (new file, 636 lines)

@ -0,0 +1,636 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Accelerated Function Unit (AFU)
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/fpga-dfl.h>
#include "dfl-afu.h"
/**
* port_enable - enable a port
* @pdev: port platform device.
*
* Enable Port by clearing the port soft reset bit, which is set by default.
* The AFU is unable to respond to any MMIO access while in reset.
* port_enable() should only be used after port_disable().
*/
static void port_enable(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
void __iomem *base;
u64 v;
WARN_ON(!pdata->disable_count);
if (--pdata->disable_count != 0)
return;
base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER);
/* Clear port soft reset */
v = readq(base + PORT_HDR_CTRL);
v &= ~PORT_CTRL_SFTRST;
writeq(v, base + PORT_HDR_CTRL);
}
#define RST_POLL_INVL 10 /* us */
#define RST_POLL_TIMEOUT 1000 /* us */
/**
* port_disable - disable a port
* @pdev: port platform device.
*
* Disable Port by setting the port soft reset bit, which puts the port into
* reset.
*/
static int port_disable(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
void __iomem *base;
u64 v;
if (pdata->disable_count++ != 0)
return 0;
base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER);
/* Set port soft reset */
v = readq(base + PORT_HDR_CTRL);
v |= PORT_CTRL_SFTRST;
writeq(v, base + PORT_HDR_CTRL);
/*
* HW sets ack bit to 1 when all outstanding requests have been drained
* on this port and minimum soft reset pulse width has elapsed.
* Driver polls port_soft_reset_ack to determine if reset done by HW.
*/
if (readq_poll_timeout(base + PORT_HDR_CTRL, v, v & PORT_CTRL_SFTRST,
RST_POLL_INVL, RST_POLL_TIMEOUT)) {
dev_err(&pdev->dev, "timeout, fail to reset device\n");
return -ETIMEDOUT;
}
return 0;
}
/*
* This function resets the FPGA Port and its accelerator (AFU) by calling
* port_disable() and port_enable() (set the port soft reset bit and then
* clear it). Userspace can trigger a Port reset at any time, e.g. during DMA
* or Partial Reconfiguration. It should never cause any system level issue,
* only a functional failure (e.g. a DMA or PR operation failure) that is
* recoverable.
*
* Note: the accelerator (AFU) is not accessible when its port is in reset
* (disabled). Any attempt at MMIO access to the AFU while in reset will
* result in errors reported via the port error reporting sub feature (if
* present).
*/
static int __port_reset(struct platform_device *pdev)
{
int ret;
ret = port_disable(pdev);
if (!ret)
port_enable(pdev);
return ret;
}
static int port_reset(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
int ret;
mutex_lock(&pdata->lock);
ret = __port_reset(pdev);
mutex_unlock(&pdata->lock);
return ret;
}
static int port_get_id(struct platform_device *pdev)
{
void __iomem *base;
base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER);
return FIELD_GET(PORT_CAP_PORT_NUM, readq(base + PORT_HDR_CAP));
}
static ssize_t
id_show(struct device *dev, struct device_attribute *attr, char *buf)
{
int id = port_get_id(to_platform_device(dev));
return scnprintf(buf, PAGE_SIZE, "%d\n", id);
}
static DEVICE_ATTR_RO(id);
static const struct attribute *port_hdr_attrs[] = {
&dev_attr_id.attr,
NULL,
};
static int port_hdr_init(struct platform_device *pdev,
struct dfl_feature *feature)
{
dev_dbg(&pdev->dev, "PORT HDR Init.\n");
port_reset(pdev);
return sysfs_create_files(&pdev->dev.kobj, port_hdr_attrs);
}
static void port_hdr_uinit(struct platform_device *pdev,
struct dfl_feature *feature)
{
dev_dbg(&pdev->dev, "PORT HDR UInit.\n");
sysfs_remove_files(&pdev->dev.kobj, port_hdr_attrs);
}
static long
port_hdr_ioctl(struct platform_device *pdev, struct dfl_feature *feature,
unsigned int cmd, unsigned long arg)
{
long ret;
switch (cmd) {
case DFL_FPGA_PORT_RESET:
if (!arg)
ret = port_reset(pdev);
else
ret = -EINVAL;
break;
default:
dev_dbg(&pdev->dev, "%x cmd not handled", cmd);
ret = -ENODEV;
}
return ret;
}
static const struct dfl_feature_ops port_hdr_ops = {
.init = port_hdr_init,
.uinit = port_hdr_uinit,
.ioctl = port_hdr_ioctl,
};
static ssize_t
afu_id_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
void __iomem *base;
u64 guidl, guidh;
base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_AFU);
mutex_lock(&pdata->lock);
if (pdata->disable_count) {
mutex_unlock(&pdata->lock);
return -EBUSY;
}
guidl = readq(base + GUID_L);
guidh = readq(base + GUID_H);
mutex_unlock(&pdata->lock);
return scnprintf(buf, PAGE_SIZE, "%016llx%016llx\n", guidh, guidl);
}
static DEVICE_ATTR_RO(afu_id);
static const struct attribute *port_afu_attrs[] = {
&dev_attr_afu_id.attr,
NULL
};
static int port_afu_init(struct platform_device *pdev,
struct dfl_feature *feature)
{
struct resource *res = &pdev->resource[feature->resource_index];
int ret;
dev_dbg(&pdev->dev, "PORT AFU Init.\n");
ret = afu_mmio_region_add(dev_get_platdata(&pdev->dev),
DFL_PORT_REGION_INDEX_AFU, resource_size(res),
res->start, DFL_PORT_REGION_READ |
DFL_PORT_REGION_WRITE | DFL_PORT_REGION_MMAP);
if (ret)
return ret;
return sysfs_create_files(&pdev->dev.kobj, port_afu_attrs);
}
static void port_afu_uinit(struct platform_device *pdev,
struct dfl_feature *feature)
{
dev_dbg(&pdev->dev, "PORT AFU UInit.\n");
sysfs_remove_files(&pdev->dev.kobj, port_afu_attrs);
}
static const struct dfl_feature_ops port_afu_ops = {
.init = port_afu_init,
.uinit = port_afu_uinit,
};
static struct dfl_feature_driver port_feature_drvs[] = {
{
.id = PORT_FEATURE_ID_HEADER,
.ops = &port_hdr_ops,
},
{
.id = PORT_FEATURE_ID_AFU,
.ops = &port_afu_ops,
},
{
.ops = NULL,
}
};
static int afu_open(struct inode *inode, struct file *filp)
{
struct platform_device *fdev = dfl_fpga_inode_to_feature_dev(inode);
struct dfl_feature_platform_data *pdata;
int ret;
pdata = dev_get_platdata(&fdev->dev);
if (WARN_ON(!pdata))
return -ENODEV;
ret = dfl_feature_dev_use_begin(pdata);
if (ret)
return ret;
dev_dbg(&fdev->dev, "Device File Open\n");
filp->private_data = fdev;
return 0;
}
static int afu_release(struct inode *inode, struct file *filp)
{
struct platform_device *pdev = filp->private_data;
struct dfl_feature_platform_data *pdata;
dev_dbg(&pdev->dev, "Device File Release\n");
pdata = dev_get_platdata(&pdev->dev);
mutex_lock(&pdata->lock);
__port_reset(pdev);
afu_dma_region_destroy(pdata);
mutex_unlock(&pdata->lock);
dfl_feature_dev_use_end(pdata);
return 0;
}
static long afu_ioctl_check_extension(struct dfl_feature_platform_data *pdata,
unsigned long arg)
{
/* No extension support for now */
return 0;
}
static long
afu_ioctl_get_info(struct dfl_feature_platform_data *pdata, void __user *arg)
{
struct dfl_fpga_port_info info;
struct dfl_afu *afu;
unsigned long minsz;
minsz = offsetofend(struct dfl_fpga_port_info, num_umsgs);
if (copy_from_user(&info, arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
return -EINVAL;
mutex_lock(&pdata->lock);
afu = dfl_fpga_pdata_get_private(pdata);
info.flags = 0;
info.num_regions = afu->num_regions;
info.num_umsgs = afu->num_umsgs;
mutex_unlock(&pdata->lock);
if (copy_to_user(arg, &info, sizeof(info)))
return -EFAULT;
return 0;
}
static long afu_ioctl_get_region_info(struct dfl_feature_platform_data *pdata,
void __user *arg)
{
struct dfl_fpga_port_region_info rinfo;
struct dfl_afu_mmio_region region;
unsigned long minsz;
long ret;
minsz = offsetofend(struct dfl_fpga_port_region_info, offset);
if (copy_from_user(&rinfo, arg, minsz))
return -EFAULT;
if (rinfo.argsz < minsz || rinfo.padding)
return -EINVAL;
ret = afu_mmio_region_get_by_index(pdata, rinfo.index, &region);
if (ret)
return ret;
rinfo.flags = region.flags;
rinfo.size = region.size;
rinfo.offset = region.offset;
if (copy_to_user(arg, &rinfo, sizeof(rinfo)))
return -EFAULT;
return 0;
}
static long
afu_ioctl_dma_map(struct dfl_feature_platform_data *pdata, void __user *arg)
{
struct dfl_fpga_port_dma_map map;
unsigned long minsz;
long ret;
minsz = offsetofend(struct dfl_fpga_port_dma_map, iova);
if (copy_from_user(&map, arg, minsz))
return -EFAULT;
if (map.argsz < minsz || map.flags)
return -EINVAL;
ret = afu_dma_map_region(pdata, map.user_addr, map.length, &map.iova);
if (ret)
return ret;
if (copy_to_user(arg, &map, sizeof(map))) {
afu_dma_unmap_region(pdata, map.iova);
return -EFAULT;
}
dev_dbg(&pdata->dev->dev, "dma map: ua=%llx, len=%llx, iova=%llx\n",
(unsigned long long)map.user_addr,
(unsigned long long)map.length,
(unsigned long long)map.iova);
return 0;
}
static long
afu_ioctl_dma_unmap(struct dfl_feature_platform_data *pdata, void __user *arg)
{
struct dfl_fpga_port_dma_unmap unmap;
unsigned long minsz;
minsz = offsetofend(struct dfl_fpga_port_dma_unmap, iova);
if (copy_from_user(&unmap, arg, minsz))
return -EFAULT;
if (unmap.argsz < minsz || unmap.flags)
return -EINVAL;
return afu_dma_unmap_region(pdata, unmap.iova);
}
static long afu_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct platform_device *pdev = filp->private_data;
struct dfl_feature_platform_data *pdata;
struct dfl_feature *f;
long ret;
dev_dbg(&pdev->dev, "%s cmd 0x%x\n", __func__, cmd);
pdata = dev_get_platdata(&pdev->dev);
switch (cmd) {
case DFL_FPGA_GET_API_VERSION:
return DFL_FPGA_API_VERSION;
case DFL_FPGA_CHECK_EXTENSION:
return afu_ioctl_check_extension(pdata, arg);
case DFL_FPGA_PORT_GET_INFO:
return afu_ioctl_get_info(pdata, (void __user *)arg);
case DFL_FPGA_PORT_GET_REGION_INFO:
return afu_ioctl_get_region_info(pdata, (void __user *)arg);
case DFL_FPGA_PORT_DMA_MAP:
return afu_ioctl_dma_map(pdata, (void __user *)arg);
case DFL_FPGA_PORT_DMA_UNMAP:
return afu_ioctl_dma_unmap(pdata, (void __user *)arg);
default:
/*
* Let the sub-feature's ioctl function handle the cmd.
* A sub-feature's ioctl returns -ENODEV when the cmd is not
* handled by that sub-feature, and returns 0 or another
* error code once the cmd is handled.
*/
dfl_fpga_dev_for_each_feature(pdata, f)
if (f->ops && f->ops->ioctl) {
ret = f->ops->ioctl(pdev, f, cmd, arg);
if (ret != -ENODEV)
return ret;
}
}
return -EINVAL;
}
static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct platform_device *pdev = filp->private_data;
struct dfl_feature_platform_data *pdata;
u64 size = vma->vm_end - vma->vm_start;
struct dfl_afu_mmio_region region;
u64 offset;
int ret;
if (!(vma->vm_flags & VM_SHARED))
return -EINVAL;
pdata = dev_get_platdata(&pdev->dev);
offset = vma->vm_pgoff << PAGE_SHIFT;
ret = afu_mmio_region_get_by_offset(pdata, offset, size, &region);
if (ret)
return ret;
if (!(region.flags & DFL_PORT_REGION_MMAP))
return -EINVAL;
if ((vma->vm_flags & VM_READ) && !(region.flags & DFL_PORT_REGION_READ))
return -EPERM;
if ((vma->vm_flags & VM_WRITE) &&
!(region.flags & DFL_PORT_REGION_WRITE))
return -EPERM;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return remap_pfn_range(vma, vma->vm_start,
(region.phys + (offset - region.offset)) >> PAGE_SHIFT,
size, vma->vm_page_prot);
}
static const struct file_operations afu_fops = {
.owner = THIS_MODULE,
.open = afu_open,
.release = afu_release,
.unlocked_ioctl = afu_ioctl,
.mmap = afu_mmap,
};
static int afu_dev_init(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_afu *afu;
afu = devm_kzalloc(&pdev->dev, sizeof(*afu), GFP_KERNEL);
if (!afu)
return -ENOMEM;
afu->pdata = pdata;
mutex_lock(&pdata->lock);
dfl_fpga_pdata_set_private(pdata, afu);
afu_mmio_region_init(pdata);
afu_dma_region_init(pdata);
mutex_unlock(&pdata->lock);
return 0;
}
static int afu_dev_destroy(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_afu *afu;
mutex_lock(&pdata->lock);
afu = dfl_fpga_pdata_get_private(pdata);
afu_mmio_region_destroy(pdata);
afu_dma_region_destroy(pdata);
dfl_fpga_pdata_set_private(pdata, NULL);
mutex_unlock(&pdata->lock);
return 0;
}
static int port_enable_set(struct platform_device *pdev, bool enable)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
int ret = 0;
mutex_lock(&pdata->lock);
if (enable)
port_enable(pdev);
else
ret = port_disable(pdev);
mutex_unlock(&pdata->lock);
return ret;
}
static struct dfl_fpga_port_ops afu_port_ops = {
.name = DFL_FPGA_FEATURE_DEV_PORT,
.owner = THIS_MODULE,
.get_id = port_get_id,
.enable_set = port_enable_set,
};
static int afu_probe(struct platform_device *pdev)
{
int ret;
dev_dbg(&pdev->dev, "%s\n", __func__);
ret = afu_dev_init(pdev);
if (ret)
goto exit;
ret = dfl_fpga_dev_feature_init(pdev, port_feature_drvs);
if (ret)
goto dev_destroy;
ret = dfl_fpga_dev_ops_register(pdev, &afu_fops, THIS_MODULE);
if (ret) {
dfl_fpga_dev_feature_uinit(pdev);
goto dev_destroy;
}
return 0;
dev_destroy:
afu_dev_destroy(pdev);
exit:
return ret;
}
static int afu_remove(struct platform_device *pdev)
{
dev_dbg(&pdev->dev, "%s\n", __func__);
dfl_fpga_dev_ops_unregister(pdev);
dfl_fpga_dev_feature_uinit(pdev);
afu_dev_destroy(pdev);
return 0;
}
static struct platform_driver afu_driver = {
.driver = {
.name = DFL_FPGA_FEATURE_DEV_PORT,
},
.probe = afu_probe,
.remove = afu_remove,
};
static int __init afu_init(void)
{
int ret;
dfl_fpga_port_ops_add(&afu_port_ops);
ret = platform_driver_register(&afu_driver);
if (ret)
dfl_fpga_port_ops_del(&afu_port_ops);
return ret;
}
static void __exit afu_exit(void)
{
platform_driver_unregister(&afu_driver);
dfl_fpga_port_ops_del(&afu_port_ops);
}
module_init(afu_init);
module_exit(afu_exit);
MODULE_DESCRIPTION("FPGA Accelerated Function Unit driver");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-port");


@ -0,0 +1,166 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Accelerated Function Unit (AFU) MMIO Region Management
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
*/
#include "dfl-afu.h"
/**
* afu_mmio_region_init - init function for afu mmio region support
* @pdata: afu platform device's pdata.
*/
void afu_mmio_region_init(struct dfl_feature_platform_data *pdata)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
INIT_LIST_HEAD(&afu->regions);
}
#define for_each_region(region, afu) \
list_for_each_entry((region), &(afu)->regions, node)
static struct dfl_afu_mmio_region *get_region_by_index(struct dfl_afu *afu,
u32 region_index)
{
struct dfl_afu_mmio_region *region;
for_each_region(region, afu)
if (region->index == region_index)
return region;
return NULL;
}
/**
* afu_mmio_region_add - add a mmio region to given feature dev.
*
* @pdata: afu platform device's pdata.
* @region_index: region index.
* @region_size: region size.
* @phys: region's physical address.
* @flags: region flags (access permission).
*
* Return: 0 on success, negative error code otherwise.
*/
int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
u32 region_index, u64 region_size, u64 phys, u32 flags)
{
struct dfl_afu_mmio_region *region;
struct dfl_afu *afu;
int ret = 0;
region = devm_kzalloc(&pdata->dev->dev, sizeof(*region), GFP_KERNEL);
if (!region)
return -ENOMEM;
region->index = region_index;
region->size = region_size;
region->phys = phys;
region->flags = flags;
mutex_lock(&pdata->lock);
afu = dfl_fpga_pdata_get_private(pdata);
/* check if @index already exists */
if (get_region_by_index(afu, region_index)) {
mutex_unlock(&pdata->lock);
ret = -EEXIST;
goto exit;
}
region_size = PAGE_ALIGN(region_size);
region->offset = afu->region_cur_offset;
list_add(&region->node, &afu->regions);
afu->region_cur_offset += region_size;
afu->num_regions++;
mutex_unlock(&pdata->lock);
return 0;
exit:
devm_kfree(&pdata->dev->dev, region);
return ret;
}
/**
* afu_mmio_region_destroy - destroy all mmio regions under given feature dev.
* @pdata: afu platform device's pdata.
*/
void afu_mmio_region_destroy(struct dfl_feature_platform_data *pdata)
{
struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
struct dfl_afu_mmio_region *tmp, *region;
list_for_each_entry_safe(region, tmp, &afu->regions, node)
devm_kfree(&pdata->dev->dev, region);
}
/**
* afu_mmio_region_get_by_index - find an afu region by index.
* @pdata: afu platform device's pdata.
* @region_index: region index.
* @pregion: ptr to region for result.
*
* Return: 0 on success, negative error code otherwise.
*/
int afu_mmio_region_get_by_index(struct dfl_feature_platform_data *pdata,
u32 region_index,
struct dfl_afu_mmio_region *pregion)
{
struct dfl_afu_mmio_region *region;
struct dfl_afu *afu;
int ret = 0;
mutex_lock(&pdata->lock);
afu = dfl_fpga_pdata_get_private(pdata);
region = get_region_by_index(afu, region_index);
if (!region) {
ret = -EINVAL;
goto exit;
}
*pregion = *region;
exit:
mutex_unlock(&pdata->lock);
return ret;
}
/**
* afu_mmio_region_get_by_offset - find an afu mmio region by offset and size
*
* @pdata: afu platform device's pdata.
* @offset: region offset from start of the device fd.
* @size: region size.
* @pregion: ptr to region for result.
*
* Find the region which fully contains the region described by input
* parameters (offset and size) from the feature dev's region linked list.
*
* Return: 0 on success, negative error code otherwise.
*/
int afu_mmio_region_get_by_offset(struct dfl_feature_platform_data *pdata,
u64 offset, u64 size,
struct dfl_afu_mmio_region *pregion)
{
struct dfl_afu_mmio_region *region;
struct dfl_afu *afu;
int ret = 0;
mutex_lock(&pdata->lock);
afu = dfl_fpga_pdata_get_private(pdata);
for_each_region(region, afu)
if (region->offset <= offset &&
region->offset + region->size >= offset + size) {
*pregion = *region;
goto exit;
}
ret = -EINVAL;
exit:
mutex_unlock(&pdata->lock);
return ret;
}
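
For reference, the offset bookkeeping in afu_mmio_region_add() above hands each new region the current cursor (region_cur_offset) and then advances the cursor by the page-aligned region size, which keeps mmap offsets page-granular. A stand-alone illustration, assuming 4 KiB pages (illustrative only, outside the kernel):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	uint64_t sizes[] = { 0x100, 0x2000, 0x40 };	/* example region sizes */
	uint64_t cur_offset = 0;

	for (int i = 0; i < 3; i++) {
		/* same bookkeeping as afu_mmio_region_add() */
		printf("region %d: size 0x%llx -> offset 0x%llx\n", i,
		       (unsigned long long)sizes[i],
		       (unsigned long long)cur_offset);
		cur_offset += PAGE_ALIGN(sizes[i]);
	}
	return 0;
}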

drivers/fpga/dfl-afu.h (new file, 100 lines)

@ -0,0 +1,100 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Header file for FPGA Accelerated Function Unit (AFU) Driver
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#ifndef __DFL_AFU_H
#define __DFL_AFU_H
#include <linux/mm.h>
#include "dfl.h"
/**
* struct dfl_afu_mmio_region - afu mmio region data structure
*
* @index: region index.
* @flags: region flags (access permission).
* @size: region size.
* @offset: region offset from start of the device fd.
* @phys: region's physical address.
* @node: node to add to afu feature dev's region list.
*/
struct dfl_afu_mmio_region {
u32 index;
u32 flags;
u64 size;
u64 offset;
u64 phys;
struct list_head node;
};
/**
* struct dfl_afu_dma_region - afu DMA region data structure
*
* @user_addr: region userspace virtual address.
* @length: region length.
* @iova: region IO virtual address.
* @pages: ptr to pages of this region.
* @node: rb tree node.
* @in_use: flag to indicate if this region is in_use.
*/
struct dfl_afu_dma_region {
u64 user_addr;
u64 length;
u64 iova;
struct page **pages;
struct rb_node node;
bool in_use;
};
/**
* struct dfl_afu - afu device data structure
*
* @region_cur_offset: current region offset from start of the device fd.
* @num_regions: num of mmio regions.
* @regions: the mmio region linked list of this afu feature device.
* @dma_regions: root of dma regions rb tree.
* @num_umsgs: num of umsgs.
* @pdata: afu platform device's pdata.
*/
struct dfl_afu {
u64 region_cur_offset;
int num_regions;
u8 num_umsgs;
struct list_head regions;
struct rb_root dma_regions;
struct dfl_feature_platform_data *pdata;
};
void afu_mmio_region_init(struct dfl_feature_platform_data *pdata);
int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
u32 region_index, u64 region_size, u64 phys, u32 flags);
void afu_mmio_region_destroy(struct dfl_feature_platform_data *pdata);
int afu_mmio_region_get_by_index(struct dfl_feature_platform_data *pdata,
u32 region_index,
struct dfl_afu_mmio_region *pregion);
int afu_mmio_region_get_by_offset(struct dfl_feature_platform_data *pdata,
u64 offset, u64 size,
struct dfl_afu_mmio_region *pregion);
void afu_dma_region_init(struct dfl_feature_platform_data *pdata);
void afu_dma_region_destroy(struct dfl_feature_platform_data *pdata);
int afu_dma_map_region(struct dfl_feature_platform_data *pdata,
u64 user_addr, u64 length, u64 *iova);
int afu_dma_unmap_region(struct dfl_feature_platform_data *pdata, u64 iova);
struct dfl_afu_dma_region *
afu_dma_region_find(struct dfl_feature_platform_data *pdata,
u64 iova, u64 size);
#endif /* __DFL_AFU_H */

drivers/fpga/dfl-fme-br.c (new file)

@@ -0,0 +1,114 @@
// SPDX-License-Identifier: GPL-2.0
/*
* FPGA Bridge Driver for FPGA Management Engine (FME)
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/module.h>
#include <linux/fpga/fpga-bridge.h>
#include "dfl.h"
#include "dfl-fme-pr.h"
struct fme_br_priv {
struct dfl_fme_br_pdata *pdata;
struct dfl_fpga_port_ops *port_ops;
struct platform_device *port_pdev;
};
static int fme_bridge_enable_set(struct fpga_bridge *bridge, bool enable)
{
struct fme_br_priv *priv = bridge->priv;
struct platform_device *port_pdev;
struct dfl_fpga_port_ops *ops;
if (!priv->port_pdev) {
port_pdev = dfl_fpga_cdev_find_port(priv->pdata->cdev,
&priv->pdata->port_id,
dfl_fpga_check_port_id);
if (!port_pdev)
return -ENODEV;
priv->port_pdev = port_pdev;
}
if (priv->port_pdev && !priv->port_ops) {
ops = dfl_fpga_port_ops_get(priv->port_pdev);
if (!ops || !ops->enable_set)
return -ENOENT;
priv->port_ops = ops;
}
return priv->port_ops->enable_set(priv->port_pdev, enable);
}
static const struct fpga_bridge_ops fme_bridge_ops = {
.enable_set = fme_bridge_enable_set,
};
static int fme_br_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct fme_br_priv *priv;
struct fpga_bridge *br;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
priv->pdata = dev_get_platdata(dev);
br = fpga_bridge_create(dev, "DFL FPGA FME Bridge",
&fme_bridge_ops, priv);
if (!br)
return -ENOMEM;
platform_set_drvdata(pdev, br);
ret = fpga_bridge_register(br);
if (ret)
fpga_bridge_free(br);
return ret;
}
static int fme_br_remove(struct platform_device *pdev)
{
struct fpga_bridge *br = platform_get_drvdata(pdev);
struct fme_br_priv *priv = br->priv;
fpga_bridge_unregister(br);
if (priv->port_pdev)
put_device(&priv->port_pdev->dev);
if (priv->port_ops)
dfl_fpga_port_ops_put(priv->port_ops);
return 0;
}
static struct platform_driver fme_br_driver = {
.driver = {
.name = DFL_FPGA_FME_BRIDGE,
},
.probe = fme_br_probe,
.remove = fme_br_remove,
};
module_platform_driver(fme_br_driver);
MODULE_DESCRIPTION("FPGA Bridge for DFL FPGA Management Engine");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme-bridge");

drivers/fpga/dfl-fme-main.c (new file)

@@ -0,0 +1,279 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Management Engine (FME)
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fpga-dfl.h>
#include "dfl.h"
#include "dfl-fme.h"
static ssize_t ports_num_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
void __iomem *base;
u64 v;
base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
v = readq(base + FME_HDR_CAP);
return scnprintf(buf, PAGE_SIZE, "%u\n",
(unsigned int)FIELD_GET(FME_CAP_NUM_PORTS, v));
}
static DEVICE_ATTR_RO(ports_num);
/*
* Bitstream (static FPGA region) identifier number. It contains the
* detailed version and other information of this static FPGA region.
*/
static ssize_t bitstream_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
void __iomem *base;
u64 v;
base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
v = readq(base + FME_HDR_BITSTREAM_ID);
return scnprintf(buf, PAGE_SIZE, "0x%llx\n", (unsigned long long)v);
}
static DEVICE_ATTR_RO(bitstream_id);
/*
* Bitstream (static FPGA region) meta data. It contains the synthesis
* date, seed and other information of this static FPGA region.
*/
static ssize_t bitstream_metadata_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
void __iomem *base;
u64 v;
base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);
v = readq(base + FME_HDR_BITSTREAM_MD);
return scnprintf(buf, PAGE_SIZE, "0x%llx\n", (unsigned long long)v);
}
static DEVICE_ATTR_RO(bitstream_metadata);
static const struct attribute *fme_hdr_attrs[] = {
&dev_attr_ports_num.attr,
&dev_attr_bitstream_id.attr,
&dev_attr_bitstream_metadata.attr,
NULL,
};
static int fme_hdr_init(struct platform_device *pdev,
struct dfl_feature *feature)
{
void __iomem *base = feature->ioaddr;
int ret;
dev_dbg(&pdev->dev, "FME HDR Init.\n");
dev_dbg(&pdev->dev, "FME cap %llx.\n",
(unsigned long long)readq(base + FME_HDR_CAP));
ret = sysfs_create_files(&pdev->dev.kobj, fme_hdr_attrs);
if (ret)
return ret;
return 0;
}
static void fme_hdr_uinit(struct platform_device *pdev,
struct dfl_feature *feature)
{
dev_dbg(&pdev->dev, "FME HDR UInit.\n");
sysfs_remove_files(&pdev->dev.kobj, fme_hdr_attrs);
}
static const struct dfl_feature_ops fme_hdr_ops = {
.init = fme_hdr_init,
.uinit = fme_hdr_uinit,
};
static struct dfl_feature_driver fme_feature_drvs[] = {
{
.id = FME_FEATURE_ID_HEADER,
.ops = &fme_hdr_ops,
},
{
.id = FME_FEATURE_ID_PR_MGMT,
.ops = &pr_mgmt_ops,
},
{
.ops = NULL,
},
};
static long fme_ioctl_check_extension(struct dfl_feature_platform_data *pdata,
unsigned long arg)
{
/* No extension support for now */
return 0;
}
static int fme_open(struct inode *inode, struct file *filp)
{
struct platform_device *fdev = dfl_fpga_inode_to_feature_dev(inode);
struct dfl_feature_platform_data *pdata = dev_get_platdata(&fdev->dev);
int ret;
if (WARN_ON(!pdata))
return -ENODEV;
ret = dfl_feature_dev_use_begin(pdata);
if (ret)
return ret;
dev_dbg(&fdev->dev, "Device File Open\n");
filp->private_data = pdata;
return 0;
}
static int fme_release(struct inode *inode, struct file *filp)
{
struct dfl_feature_platform_data *pdata = filp->private_data;
struct platform_device *pdev = pdata->dev;
dev_dbg(&pdev->dev, "Device File Release\n");
dfl_feature_dev_use_end(pdata);
return 0;
}
static long fme_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct dfl_feature_platform_data *pdata = filp->private_data;
struct platform_device *pdev = pdata->dev;
struct dfl_feature *f;
long ret;
dev_dbg(&pdev->dev, "%s cmd 0x%x\n", __func__, cmd);
switch (cmd) {
case DFL_FPGA_GET_API_VERSION:
return DFL_FPGA_API_VERSION;
case DFL_FPGA_CHECK_EXTENSION:
return fme_ioctl_check_extension(pdata, arg);
default:
/*
* Let sub-feature's ioctl function to handle the cmd.
* Sub-feature's ioctl returns -ENODEV when cmd is not
* handled in this sub feature, and returns 0 or other
* error code if cmd is handled.
*/
dfl_fpga_dev_for_each_feature(pdata, f) {
if (f->ops && f->ops->ioctl) {
ret = f->ops->ioctl(pdev, f, cmd, arg);
if (ret != -ENODEV)
return ret;
}
}
}
return -EINVAL;
}
static int fme_dev_init(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_fme *fme;
fme = devm_kzalloc(&pdev->dev, sizeof(*fme), GFP_KERNEL);
if (!fme)
return -ENOMEM;
fme->pdata = pdata;
mutex_lock(&pdata->lock);
dfl_fpga_pdata_set_private(pdata, fme);
mutex_unlock(&pdata->lock);
return 0;
}
static void fme_dev_destroy(struct platform_device *pdev)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_fme *fme;
mutex_lock(&pdata->lock);
fme = dfl_fpga_pdata_get_private(pdata);
dfl_fpga_pdata_set_private(pdata, NULL);
mutex_unlock(&pdata->lock);
}
static const struct file_operations fme_fops = {
.owner = THIS_MODULE,
.open = fme_open,
.release = fme_release,
.unlocked_ioctl = fme_ioctl,
};
static int fme_probe(struct platform_device *pdev)
{
int ret;
ret = fme_dev_init(pdev);
if (ret)
goto exit;
ret = dfl_fpga_dev_feature_init(pdev, fme_feature_drvs);
if (ret)
goto dev_destroy;
ret = dfl_fpga_dev_ops_register(pdev, &fme_fops, THIS_MODULE);
if (ret)
goto feature_uinit;
return 0;
feature_uinit:
dfl_fpga_dev_feature_uinit(pdev);
dev_destroy:
fme_dev_destroy(pdev);
exit:
return ret;
}
static int fme_remove(struct platform_device *pdev)
{
dfl_fpga_dev_ops_unregister(pdev);
dfl_fpga_dev_feature_uinit(pdev);
fme_dev_destroy(pdev);
return 0;
}
static struct platform_driver fme_driver = {
.driver = {
.name = DFL_FPGA_FEATURE_DEV_FME,
},
.probe = fme_probe,
.remove = fme_remove,
};
module_platform_driver(fme_driver);
MODULE_DESCRIPTION("FPGA Management Engine driver");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme");

drivers/fpga/dfl-fme-mgr.c (new file)

@@ -0,0 +1,349 @@
// SPDX-License-Identifier: GPL-2.0
/*
* FPGA Manager Driver for FPGA Management Engine (FME)
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Christopher Rauer <christopher.rauer@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/bitfield.h>
#include <linux/module.h>
#include <linux/iopoll.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/fpga/fpga-mgr.h>
#include "dfl-fme-pr.h"
/* FME Partial Reconfiguration Sub Feature Register Set */
#define FME_PR_DFH 0x0
#define FME_PR_CTRL 0x8
#define FME_PR_STS 0x10
#define FME_PR_DATA 0x18
#define FME_PR_ERR 0x20
#define FME_PR_INTFC_ID_H 0xA8
#define FME_PR_INTFC_ID_L 0xB0
/* FME PR Control Register Bitfield */
#define FME_PR_CTRL_PR_RST BIT_ULL(0) /* Reset PR engine */
#define FME_PR_CTRL_PR_RSTACK BIT_ULL(4) /* Ack for PR engine reset */
#define FME_PR_CTRL_PR_RGN_ID GENMASK_ULL(9, 7) /* PR Region ID */
#define FME_PR_CTRL_PR_START BIT_ULL(12) /* Start to request PR service */
#define FME_PR_CTRL_PR_COMPLETE BIT_ULL(13) /* PR data push completion */
/* FME PR Status Register Bitfield */
/* Number of available entries in HW queue inside the PR engine. */
#define FME_PR_STS_PR_CREDIT GENMASK_ULL(8, 0)
#define FME_PR_STS_PR_STS BIT_ULL(16) /* PR operation status */
#define FME_PR_STS_PR_STS_IDLE 0
#define FME_PR_STS_PR_CTRLR_STS GENMASK_ULL(22, 20) /* Controller status */
#define FME_PR_STS_PR_HOST_STS GENMASK_ULL(27, 24) /* PR host status */
/* FME PR Data Register Bitfield */
/* PR data from the raw-binary file. */
#define FME_PR_DATA_PR_DATA_RAW GENMASK_ULL(32, 0)
/* FME PR Error Register */
/* PR Operation errors detected. */
#define FME_PR_ERR_OPERATION_ERR BIT_ULL(0)
/* CRC error detected. */
#define FME_PR_ERR_CRC_ERR BIT_ULL(1)
/* Incompatible PR bitstream detected. */
#define FME_PR_ERR_INCOMPATIBLE_BS BIT_ULL(2)
/* PR data push protocol violated. */
#define FME_PR_ERR_PROTOCOL_ERR BIT_ULL(3)
/* PR data fifo overflow error detected */
#define FME_PR_ERR_FIFO_OVERFLOW BIT_ULL(4)
#define PR_WAIT_TIMEOUT 8000000
#define PR_HOST_STATUS_IDLE 0
struct fme_mgr_priv {
void __iomem *ioaddr;
u64 pr_error;
};
static u64 pr_error_to_mgr_status(u64 err)
{
u64 status = 0;
if (err & FME_PR_ERR_OPERATION_ERR)
status |= FPGA_MGR_STATUS_OPERATION_ERR;
if (err & FME_PR_ERR_CRC_ERR)
status |= FPGA_MGR_STATUS_CRC_ERR;
if (err & FME_PR_ERR_INCOMPATIBLE_BS)
status |= FPGA_MGR_STATUS_INCOMPATIBLE_IMAGE_ERR;
if (err & FME_PR_ERR_PROTOCOL_ERR)
status |= FPGA_MGR_STATUS_IP_PROTOCOL_ERR;
if (err & FME_PR_ERR_FIFO_OVERFLOW)
status |= FPGA_MGR_STATUS_FIFO_OVERFLOW_ERR;
return status;
}
static u64 fme_mgr_pr_error_handle(void __iomem *fme_pr)
{
u64 pr_status, pr_error;
pr_status = readq(fme_pr + FME_PR_STS);
if (!(pr_status & FME_PR_STS_PR_STS))
return 0;
pr_error = readq(fme_pr + FME_PR_ERR);
writeq(pr_error, fme_pr + FME_PR_ERR);
return pr_error;
}
static int fme_mgr_write_init(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *buf, size_t count)
{
struct device *dev = &mgr->dev;
struct fme_mgr_priv *priv = mgr->priv;
void __iomem *fme_pr = priv->ioaddr;
u64 pr_ctrl, pr_status;
if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
dev_err(dev, "only supports partial reconfiguration.\n");
return -EINVAL;
}
dev_dbg(dev, "resetting PR before initiated PR\n");
pr_ctrl = readq(fme_pr + FME_PR_CTRL);
pr_ctrl |= FME_PR_CTRL_PR_RST;
writeq(pr_ctrl, fme_pr + FME_PR_CTRL);
if (readq_poll_timeout(fme_pr + FME_PR_CTRL, pr_ctrl,
pr_ctrl & FME_PR_CTRL_PR_RSTACK, 1,
PR_WAIT_TIMEOUT)) {
dev_err(dev, "PR Reset ACK timeout\n");
return -ETIMEDOUT;
}
pr_ctrl = readq(fme_pr + FME_PR_CTRL);
pr_ctrl &= ~FME_PR_CTRL_PR_RST;
writeq(pr_ctrl, fme_pr + FME_PR_CTRL);
dev_dbg(dev,
"waiting for PR resource in HW to be initialized and ready\n");
if (readq_poll_timeout(fme_pr + FME_PR_STS, pr_status,
(pr_status & FME_PR_STS_PR_STS) ==
FME_PR_STS_PR_STS_IDLE, 1, PR_WAIT_TIMEOUT)) {
dev_err(dev, "PR Status timeout\n");
priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
return -ETIMEDOUT;
}
dev_dbg(dev, "check and clear previous PR error\n");
priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
if (priv->pr_error)
dev_dbg(dev, "previous PR error detected %llx\n",
(unsigned long long)priv->pr_error);
dev_dbg(dev, "set PR port ID\n");
pr_ctrl = readq(fme_pr + FME_PR_CTRL);
pr_ctrl &= ~FME_PR_CTRL_PR_RGN_ID;
pr_ctrl |= FIELD_PREP(FME_PR_CTRL_PR_RGN_ID, info->region_id);
writeq(pr_ctrl, fme_pr + FME_PR_CTRL);
return 0;
}
static int fme_mgr_write(struct fpga_manager *mgr,
const char *buf, size_t count)
{
struct device *dev = &mgr->dev;
struct fme_mgr_priv *priv = mgr->priv;
void __iomem *fme_pr = priv->ioaddr;
u64 pr_ctrl, pr_status, pr_data;
int delay = 0, pr_credit, i = 0;
dev_dbg(dev, "start request\n");
pr_ctrl = readq(fme_pr + FME_PR_CTRL);
pr_ctrl |= FME_PR_CTRL_PR_START;
writeq(pr_ctrl, fme_pr + FME_PR_CTRL);
dev_dbg(dev, "pushing data from bitstream to HW\n");
/*
* The driver can push data to the PR hardware through the PR_DATA register
* once the HW has enough pr_credit (> 1); pr_credit is reduced by one for
* every 32-bit write of PR data to PR_DATA. If pr_credit <= 1, the driver
* must poll until the hardware reports enough credit again.
*/
pr_status = readq(fme_pr + FME_PR_STS);
pr_credit = FIELD_GET(FME_PR_STS_PR_CREDIT, pr_status);
while (count > 0) {
while (pr_credit <= 1) {
if (delay++ > PR_WAIT_TIMEOUT) {
dev_err(dev, "PR_CREDIT timeout\n");
return -ETIMEDOUT;
}
udelay(1);
pr_status = readq(fme_pr + FME_PR_STS);
pr_credit = FIELD_GET(FME_PR_STS_PR_CREDIT, pr_status);
}
if (count < 4) {
dev_err(dev, "Invaild PR bitstream size\n");
return -EINVAL;
}
pr_data = 0;
pr_data |= FIELD_PREP(FME_PR_DATA_PR_DATA_RAW,
*(((u32 *)buf) + i));
writeq(pr_data, fme_pr + FME_PR_DATA);
count -= 4;
pr_credit--;
i++;
}
return 0;
}
static int fme_mgr_write_complete(struct fpga_manager *mgr,
struct fpga_image_info *info)
{
struct device *dev = &mgr->dev;
struct fme_mgr_priv *priv = mgr->priv;
void __iomem *fme_pr = priv->ioaddr;
u64 pr_ctrl;
pr_ctrl = readq(fme_pr + FME_PR_CTRL);
pr_ctrl |= FME_PR_CTRL_PR_COMPLETE;
writeq(pr_ctrl, fme_pr + FME_PR_CTRL);
dev_dbg(dev, "green bitstream push complete\n");
dev_dbg(dev, "waiting for HW to release PR resource\n");
if (readq_poll_timeout(fme_pr + FME_PR_CTRL, pr_ctrl,
!(pr_ctrl & FME_PR_CTRL_PR_START), 1,
PR_WAIT_TIMEOUT)) {
dev_err(dev, "PR Completion ACK timeout.\n");
return -ETIMEDOUT;
}
dev_dbg(dev, "PR operation complete, checking status\n");
priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
if (priv->pr_error) {
dev_dbg(dev, "PR error detected %llx\n",
(unsigned long long)priv->pr_error);
return -EIO;
}
dev_dbg(dev, "PR done successfully\n");
return 0;
}
static enum fpga_mgr_states fme_mgr_state(struct fpga_manager *mgr)
{
return FPGA_MGR_STATE_UNKNOWN;
}
static u64 fme_mgr_status(struct fpga_manager *mgr)
{
struct fme_mgr_priv *priv = mgr->priv;
return pr_error_to_mgr_status(priv->pr_error);
}
static const struct fpga_manager_ops fme_mgr_ops = {
.write_init = fme_mgr_write_init,
.write = fme_mgr_write,
.write_complete = fme_mgr_write_complete,
.state = fme_mgr_state,
.status = fme_mgr_status,
};
static void fme_mgr_get_compat_id(void __iomem *fme_pr,
struct fpga_compat_id *id)
{
id->id_l = readq(fme_pr + FME_PR_INTFC_ID_L);
id->id_h = readq(fme_pr + FME_PR_INTFC_ID_H);
}
static int fme_mgr_probe(struct platform_device *pdev)
{
struct dfl_fme_mgr_pdata *pdata = dev_get_platdata(&pdev->dev);
struct fpga_compat_id *compat_id;
struct device *dev = &pdev->dev;
struct fme_mgr_priv *priv;
struct fpga_manager *mgr;
struct resource *res;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
if (pdata->ioaddr)
priv->ioaddr = pdata->ioaddr;
if (!priv->ioaddr) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
priv->ioaddr = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->ioaddr))
return PTR_ERR(priv->ioaddr);
}
compat_id = devm_kzalloc(dev, sizeof(*compat_id), GFP_KERNEL);
if (!compat_id)
return -ENOMEM;
fme_mgr_get_compat_id(priv->ioaddr, compat_id);
mgr = fpga_mgr_create(dev, "DFL FME FPGA Manager",
&fme_mgr_ops, priv);
if (!mgr)
return -ENOMEM;
mgr->compat_id = compat_id;
platform_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
}
static int fme_mgr_remove(struct platform_device *pdev)
{
struct fpga_manager *mgr = platform_get_drvdata(pdev);
fpga_mgr_unregister(mgr);
return 0;
}
static struct platform_driver fme_mgr_driver = {
.driver = {
.name = DFL_FPGA_FME_MGR,
},
.probe = fme_mgr_probe,
.remove = fme_mgr_remove,
};
module_platform_driver(fme_mgr_driver);
MODULE_DESCRIPTION("FPGA Manager for DFL FPGA Management Engine");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme-mgr");

drivers/fpga/dfl-fme-pr.c (new file)

@@ -0,0 +1,479 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Management Engine (FME) Partial Reconfiguration
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Christopher Rauer <christopher.rauer@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/types.h>
#include <linux/device.h>
#include <linux/vmalloc.h>
#include <linux/uaccess.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-bridge.h>
#include <linux/fpga/fpga-region.h>
#include <linux/fpga-dfl.h>
#include "dfl.h"
#include "dfl-fme.h"
#include "dfl-fme-pr.h"
static struct dfl_fme_region *
dfl_fme_region_find_by_port_id(struct dfl_fme *fme, int port_id)
{
struct dfl_fme_region *fme_region;
list_for_each_entry(fme_region, &fme->region_list, node)
if (fme_region->port_id == port_id)
return fme_region;
return NULL;
}
static int dfl_fme_region_match(struct device *dev, const void *data)
{
return dev->parent == data;
}
static struct fpga_region *dfl_fme_region_find(struct dfl_fme *fme, int port_id)
{
struct dfl_fme_region *fme_region;
struct fpga_region *region;
fme_region = dfl_fme_region_find_by_port_id(fme, port_id);
if (!fme_region)
return NULL;
region = fpga_region_class_find(NULL, &fme_region->region->dev,
dfl_fme_region_match);
if (!region)
return NULL;
return region;
}
static int fme_pr(struct platform_device *pdev, unsigned long arg)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
void __user *argp = (void __user *)arg;
struct dfl_fpga_fme_port_pr port_pr;
struct fpga_image_info *info;
struct fpga_region *region;
void __iomem *fme_hdr;
struct dfl_fme *fme;
unsigned long minsz;
void *buf = NULL;
int ret = 0;
u64 v;
minsz = offsetofend(struct dfl_fpga_fme_port_pr, buffer_address);
if (copy_from_user(&port_pr, argp, minsz))
return -EFAULT;
if (port_pr.argsz < minsz || port_pr.flags)
return -EINVAL;
if (!IS_ALIGNED(port_pr.buffer_size, 4))
return -EINVAL;
/* get fme header region */
fme_hdr = dfl_get_feature_ioaddr_by_id(&pdev->dev,
FME_FEATURE_ID_HEADER);
/* check port id */
v = readq(fme_hdr + FME_HDR_CAP);
if (port_pr.port_id >= FIELD_GET(FME_CAP_NUM_PORTS, v)) {
dev_dbg(&pdev->dev, "port number more than maximum\n");
return -EINVAL;
}
if (!access_ok(VERIFY_READ,
(void __user *)(unsigned long)port_pr.buffer_address,
port_pr.buffer_size))
return -EFAULT;
buf = vmalloc(port_pr.buffer_size);
if (!buf)
return -ENOMEM;
if (copy_from_user(buf,
(void __user *)(unsigned long)port_pr.buffer_address,
port_pr.buffer_size)) {
ret = -EFAULT;
goto free_exit;
}
/* prepare fpga_image_info for PR */
info = fpga_image_info_alloc(&pdev->dev);
if (!info) {
ret = -ENOMEM;
goto free_exit;
}
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
mutex_lock(&pdata->lock);
fme = dfl_fpga_pdata_get_private(pdata);
/* fme device has been unregistered. */
if (!fme) {
ret = -EINVAL;
goto unlock_exit;
}
region = dfl_fme_region_find(fme, port_pr.port_id);
if (!region) {
ret = -EINVAL;
goto unlock_exit;
}
fpga_image_info_free(region->info);
info->buf = buf;
info->count = port_pr.buffer_size;
info->region_id = port_pr.port_id;
region->info = info;
ret = fpga_region_program_fpga(region);
/*
* This allows userspace to reset the PR region's logic by disabling and
* re-enabling the bridge to clear things out between acceleration runs,
* so there is no need to hold the bridges after partial reconfiguration.
*/
if (region->get_bridges)
fpga_bridges_put(&region->bridge_list);
put_device(&region->dev);
unlock_exit:
mutex_unlock(&pdata->lock);
free_exit:
vfree(buf);
if (copy_to_user((void __user *)arg, &port_pr, minsz))
return -EFAULT;
return ret;
}
/**
* dfl_fme_create_mgr - create fpga mgr platform device as child device
*
* @pdata: fme platform_device's pdata
* @feature: sub feature whose mapped registers back the fpga manager
*
* Return: mgr platform device if successful, and error code otherwise.
*/
static struct platform_device *
dfl_fme_create_mgr(struct dfl_feature_platform_data *pdata,
struct dfl_feature *feature)
{
struct platform_device *mgr, *fme = pdata->dev;
struct dfl_fme_mgr_pdata mgr_pdata;
int ret = -ENOMEM;
if (!feature->ioaddr)
return ERR_PTR(-ENODEV);
mgr_pdata.ioaddr = feature->ioaddr;
/*
* Each FME has only one fpga-mgr, so allocate platform device using
* the same FME platform device id.
*/
mgr = platform_device_alloc(DFL_FPGA_FME_MGR, fme->id);
if (!mgr)
return ERR_PTR(ret);
mgr->dev.parent = &fme->dev;
ret = platform_device_add_data(mgr, &mgr_pdata, sizeof(mgr_pdata));
if (ret)
goto create_mgr_err;
ret = platform_device_add(mgr);
if (ret)
goto create_mgr_err;
return mgr;
create_mgr_err:
platform_device_put(mgr);
return ERR_PTR(ret);
}
/**
* dfl_fme_destroy_mgr - destroy fpga mgr platform device
* @pdata: fme platform device's pdata
*/
static void dfl_fme_destroy_mgr(struct dfl_feature_platform_data *pdata)
{
struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
platform_device_unregister(priv->mgr);
}
/**
* dfl_fme_create_bridge - create fme fpga bridge platform device as child
*
* @pdata: fme platform device's pdata
* @port_id: port id for the bridge to be created.
*
* Return: bridge platform device if successful, and error code otherwise.
*/
static struct dfl_fme_bridge *
dfl_fme_create_bridge(struct dfl_feature_platform_data *pdata, int port_id)
{
struct device *dev = &pdata->dev->dev;
struct dfl_fme_br_pdata br_pdata;
struct dfl_fme_bridge *fme_br;
int ret = -ENOMEM;
fme_br = devm_kzalloc(dev, sizeof(*fme_br), GFP_KERNEL);
if (!fme_br)
return ERR_PTR(ret);
br_pdata.cdev = pdata->dfl_cdev;
br_pdata.port_id = port_id;
fme_br->br = platform_device_alloc(DFL_FPGA_FME_BRIDGE,
PLATFORM_DEVID_AUTO);
if (!fme_br->br)
return ERR_PTR(ret);
fme_br->br->dev.parent = dev;
ret = platform_device_add_data(fme_br->br, &br_pdata, sizeof(br_pdata));
if (ret)
goto create_br_err;
ret = platform_device_add(fme_br->br);
if (ret)
goto create_br_err;
return fme_br;
create_br_err:
platform_device_put(fme_br->br);
return ERR_PTR(ret);
}
/**
* dfl_fme_destroy_bridge - destroy fpga bridge platform device
* @fme_br: fme bridge to destroy
*/
static void dfl_fme_destroy_bridge(struct dfl_fme_bridge *fme_br)
{
platform_device_unregister(fme_br->br);
}
/**
* dfl_fme_destroy_bridges - destroy all fpga bridge platform devices
* @pdata: fme platform device's pdata
*/
static void dfl_fme_destroy_bridges(struct dfl_feature_platform_data *pdata)
{
struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
struct dfl_fme_bridge *fbridge, *tmp;
list_for_each_entry_safe(fbridge, tmp, &priv->bridge_list, node) {
list_del(&fbridge->node);
dfl_fme_destroy_bridge(fbridge);
}
}
/**
* dfl_fme_create_region - create fpga region platform device as child
*
* @pdata: fme platform device's pdata
* @mgr: mgr platform device needed for region
* @br: br platform device needed for region
* @port_id: port id
*
* Return: fme region if successful, and error code otherwise.
*/
static struct dfl_fme_region *
dfl_fme_create_region(struct dfl_feature_platform_data *pdata,
struct platform_device *mgr,
struct platform_device *br, int port_id)
{
struct dfl_fme_region_pdata region_pdata;
struct device *dev = &pdata->dev->dev;
struct dfl_fme_region *fme_region;
int ret = -ENOMEM;
fme_region = devm_kzalloc(dev, sizeof(*fme_region), GFP_KERNEL);
if (!fme_region)
return ERR_PTR(ret);
region_pdata.mgr = mgr;
region_pdata.br = br;
/*
* Each FPGA device may have more than one port, so allocate platform
* device using the same port platform device id.
*/
fme_region->region = platform_device_alloc(DFL_FPGA_FME_REGION, br->id);
if (!fme_region->region)
return ERR_PTR(ret);
fme_region->region->dev.parent = dev;
ret = platform_device_add_data(fme_region->region, &region_pdata,
sizeof(region_pdata));
if (ret)
goto create_region_err;
ret = platform_device_add(fme_region->region);
if (ret)
goto create_region_err;
fme_region->port_id = port_id;
return fme_region;
create_region_err:
platform_device_put(fme_region->region);
return ERR_PTR(ret);
}
/**
* dfl_fme_destroy_region - destroy fme region
* @fme_region: fme region to destroy
*/
static void dfl_fme_destroy_region(struct dfl_fme_region *fme_region)
{
platform_device_unregister(fme_region->region);
}
/**
* dfl_fme_destroy_regions - destroy all fme regions
* @pdata: fme platform device's pdata
*/
static void dfl_fme_destroy_regions(struct dfl_feature_platform_data *pdata)
{
struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
struct dfl_fme_region *fme_region, *tmp;
list_for_each_entry_safe(fme_region, tmp, &priv->region_list, node) {
list_del(&fme_region->node);
dfl_fme_destroy_region(fme_region);
}
}
static int pr_mgmt_init(struct platform_device *pdev,
struct dfl_feature *feature)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_fme_region *fme_region;
struct dfl_fme_bridge *fme_br;
struct platform_device *mgr;
struct dfl_fme *priv;
void __iomem *fme_hdr;
int ret = -ENODEV, i = 0;
u64 fme_cap, port_offset;
fme_hdr = dfl_get_feature_ioaddr_by_id(&pdev->dev,
FME_FEATURE_ID_HEADER);
mutex_lock(&pdata->lock);
priv = dfl_fpga_pdata_get_private(pdata);
/* Initialize the region and bridge sub device list */
INIT_LIST_HEAD(&priv->region_list);
INIT_LIST_HEAD(&priv->bridge_list);
/* Create fpga mgr platform device */
mgr = dfl_fme_create_mgr(pdata, feature);
if (IS_ERR(mgr)) {
dev_err(&pdev->dev, "fail to create fpga mgr pdev\n");
goto unlock;
}
priv->mgr = mgr;
/* Read capability register to check number of regions and bridges */
fme_cap = readq(fme_hdr + FME_HDR_CAP);
for (; i < FIELD_GET(FME_CAP_NUM_PORTS, fme_cap); i++) {
port_offset = readq(fme_hdr + FME_HDR_PORT_OFST(i));
if (!(port_offset & FME_PORT_OFST_IMP))
continue;
/* Create bridge for each port */
fme_br = dfl_fme_create_bridge(pdata, i);
if (IS_ERR(fme_br)) {
ret = PTR_ERR(fme_br);
goto destroy_region;
}
list_add(&fme_br->node, &priv->bridge_list);
/* Create region for each port */
fme_region = dfl_fme_create_region(pdata, mgr,
fme_br->br, i);
if (IS_ERR(fme_region)) {
ret = PTR_ERR(fme_region);
goto destroy_region;
}
list_add(&fme_region->node, &priv->region_list);
}
mutex_unlock(&pdata->lock);
return 0;
destroy_region:
dfl_fme_destroy_regions(pdata);
dfl_fme_destroy_bridges(pdata);
dfl_fme_destroy_mgr(pdata);
unlock:
mutex_unlock(&pdata->lock);
return ret;
}
static void pr_mgmt_uinit(struct platform_device *pdev,
struct dfl_feature *feature)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_fme *priv;
mutex_lock(&pdata->lock);
priv = dfl_fpga_pdata_get_private(pdata);
dfl_fme_destroy_regions(pdata);
dfl_fme_destroy_bridges(pdata);
dfl_fme_destroy_mgr(pdata);
mutex_unlock(&pdata->lock);
}
static long fme_pr_ioctl(struct platform_device *pdev,
struct dfl_feature *feature,
unsigned int cmd, unsigned long arg)
{
long ret;
switch (cmd) {
case DFL_FPGA_FME_PORT_PR:
ret = fme_pr(pdev, arg);
break;
default:
ret = -ENODEV;
}
return ret;
}
const struct dfl_feature_ops pr_mgmt_ops = {
.init = pr_mgmt_init,
.uinit = pr_mgmt_uinit,
.ioctl = fme_pr_ioctl,
};
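Note: the DFL_FPGA_FME_PORT_PR ioctl handled above is the userspace entry point for partial reconfiguration. A minimal caller might look like the sketch below; the "/dev/dfl-fme.0" path and the bare-bones error handling are illustrative assumptions.

/* Userspace sketch: reconfigure port 0 with an in-memory bitstream. */
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fpga-dfl.h>

static int program_port0(const void *bitstream, uint32_t size)
{
	struct dfl_fpga_fme_port_pr pr;
	int fd, ret;

	fd = open("/dev/dfl-fme.0", O_RDWR);
	if (fd < 0)
		return -1;

	memset(&pr, 0, sizeof(pr));
	pr.argsz = sizeof(pr);
	pr.port_id = 0;
	pr.buffer_size = size;		/* must be a multiple of 4 bytes */
	pr.buffer_address = (uint64_t)(uintptr_t)bitstream;

	ret = ioctl(fd, DFL_FPGA_FME_PORT_PR, &pr);
	close(fd);
	return ret;
}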

drivers/fpga/dfl-fme-pr.h (new file)

@@ -0,0 +1,84 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Header file for FPGA Management Engine (FME) Partial Reconfiguration Driver
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#ifndef __DFL_FME_PR_H
#define __DFL_FME_PR_H
#include <linux/platform_device.h>
/**
* struct dfl_fme_region - FME fpga region data structure
*
* @region: platform device of the FPGA region.
* @node: used to link fme_region to a list.
* @port_id: indicate which port this region connected to.
*/
struct dfl_fme_region {
struct platform_device *region;
struct list_head node;
int port_id;
};
/**
* struct dfl_fme_region_pdata - platform data for FME region platform device.
*
* @mgr: platform device of the FPGA manager.
* @br: platform device of the FPGA bridge.
* @region_id: region id (same as port_id).
*/
struct dfl_fme_region_pdata {
struct platform_device *mgr;
struct platform_device *br;
int region_id;
};
/**
* struct dfl_fme_bridge - FME fpga bridge data structure
*
* @br: platform device of the FPGA bridge.
* @node: used to link fme_bridge to a list.
*/
struct dfl_fme_bridge {
struct platform_device *br;
struct list_head node;
};
/**
* struct dfl_fme_bridge_pdata - platform data for FME bridge platform device.
*
* @cdev: container device.
* @port_id: port id.
*/
struct dfl_fme_br_pdata {
struct dfl_fpga_cdev *cdev;
int port_id;
};
/**
* struct dfl_fme_mgr_pdata - platform data for FME manager platform device.
*
* @ioaddr: mapped io address for FME manager platform device.
*/
struct dfl_fme_mgr_pdata {
void __iomem *ioaddr;
};
#define DFL_FPGA_FME_MGR "dfl-fme-mgr"
#define DFL_FPGA_FME_BRIDGE "dfl-fme-bridge"
#define DFL_FPGA_FME_REGION "dfl-fme-region"
#endif /* __DFL_FME_PR_H */

drivers/fpga/dfl-fme-region.c (new file)

@@ -0,0 +1,89 @@
// SPDX-License-Identifier: GPL-2.0
/*
* FPGA Region Driver for FPGA Management Engine (FME)
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/module.h>
#include <linux/fpga/fpga-region.h>
#include "dfl-fme-pr.h"
static int fme_region_get_bridges(struct fpga_region *region)
{
struct dfl_fme_region_pdata *pdata = region->priv;
struct device *dev = &pdata->br->dev;
return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
}
static int fme_region_probe(struct platform_device *pdev)
{
struct dfl_fme_region_pdata *pdata = dev_get_platdata(&pdev->dev);
struct device *dev = &pdev->dev;
struct fpga_region *region;
struct fpga_manager *mgr;
int ret;
mgr = fpga_mgr_get(&pdata->mgr->dev);
if (IS_ERR(mgr))
return -EPROBE_DEFER;
region = fpga_region_create(dev, mgr, fme_region_get_bridges);
if (!region) {
ret = -ENOMEM;
goto eprobe_mgr_put;
}
region->priv = pdata;
region->compat_id = mgr->compat_id;
platform_set_drvdata(pdev, region);
ret = fpga_region_register(region);
if (ret)
goto region_free;
dev_dbg(dev, "DFL FME FPGA Region probed\n");
return 0;
region_free:
fpga_region_free(region);
eprobe_mgr_put:
fpga_mgr_put(mgr);
return ret;
}
static int fme_region_remove(struct platform_device *pdev)
{
struct fpga_region *region = dev_get_drvdata(&pdev->dev);
fpga_region_unregister(region);
fpga_mgr_put(region->mgr);
return 0;
}
static struct platform_driver fme_region_driver = {
.driver = {
.name = DFL_FPGA_FME_REGION,
},
.probe = fme_region_probe,
.remove = fme_region_remove,
};
module_platform_driver(fme_region_driver);
MODULE_DESCRIPTION("FPGA Region for DFL FPGA Management Engine");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme-region");

drivers/fpga/dfl-fme.h (new file)

@@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Header file for FPGA Management Engine (FME) Driver
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Wu Hao <hao.wu@intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#ifndef __DFL_FME_H
#define __DFL_FME_H
/**
* struct dfl_fme - dfl fme private data
*
* @mgr: FME's FPGA manager platform device.
* @region_list: linked list of FME's FPGA regions.
* @bridge_list: linked list of FME's FPGA bridges.
* @pdata: fme platform device's pdata.
*/
struct dfl_fme {
struct platform_device *mgr;
struct list_head region_list;
struct list_head bridge_list;
struct dfl_feature_platform_data *pdata;
};
extern const struct dfl_feature_ops pr_mgmt_ops;
#endif /* __DFL_FME_H */

drivers/fpga/dfl-pci.c (new file)

@@ -0,0 +1,243 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for FPGA Device Feature List (DFL) PCIe device
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Zhang Yi <Yi.Z.Zhang@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
* Joseph Grecco <joe.grecco@intel.com>
* Enno Luebbers <enno.luebbers@intel.com>
* Tim Whisonant <tim.whisonant@intel.com>
* Ananda Ravuri <ananda.ravuri@intel.com>
* Henry Mitchel <henry.mitchel@intel.com>
*/
#include <linux/pci.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/stddef.h>
#include <linux/errno.h>
#include <linux/aer.h>
#include "dfl.h"
#define DRV_VERSION "0.8"
#define DRV_NAME "dfl-pci"
struct cci_drvdata {
struct dfl_fpga_cdev *cdev; /* container device */
};
static void __iomem *cci_pci_ioremap_bar(struct pci_dev *pcidev, int bar)
{
if (pcim_iomap_regions(pcidev, BIT(bar), DRV_NAME))
return NULL;
return pcim_iomap_table(pcidev)[bar];
}
/* PCI Device ID */
#define PCIE_DEVICE_ID_PF_INT_5_X 0xBCBD
#define PCIE_DEVICE_ID_PF_INT_6_X 0xBCC0
#define PCIE_DEVICE_ID_PF_DSC_1_X 0x09C4
/* VF Device */
#define PCIE_DEVICE_ID_VF_INT_5_X 0xBCBF
#define PCIE_DEVICE_ID_VF_INT_6_X 0xBCC1
#define PCIE_DEVICE_ID_VF_DSC_1_X 0x09C5
static struct pci_device_id cci_pcie_id_tbl[] = {
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_5_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_6_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_6_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X),},
{0,}
};
MODULE_DEVICE_TABLE(pci, cci_pcie_id_tbl);
static int cci_init_drvdata(struct pci_dev *pcidev)
{
struct cci_drvdata *drvdata;
drvdata = devm_kzalloc(&pcidev->dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
pci_set_drvdata(pcidev, drvdata);
return 0;
}
static void cci_remove_feature_devs(struct pci_dev *pcidev)
{
struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
/* remove all children feature devices */
dfl_fpga_feature_devs_remove(drvdata->cdev);
}
/* enumerate feature devices under pci device */
static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
{
struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
struct dfl_fpga_enum_info *info;
struct dfl_fpga_cdev *cdev;
resource_size_t start, len;
int port_num, bar, i, ret = 0;
void __iomem *base;
u32 offset;
u64 v;
/* allocate enumeration info via pci_dev */
info = dfl_fpga_enum_info_alloc(&pcidev->dev);
if (!info)
return -ENOMEM;
/* start to find Device Feature List from Bar 0 */
base = cci_pci_ioremap_bar(pcidev, 0);
if (!base) {
ret = -ENOMEM;
goto enum_info_free_exit;
}
/*
* PF device has FME and Ports/AFUs, and VF device only has one
* Port/AFU. Check them and add related "Device Feature List" info
* for the next step enumeration.
*/
if (dfl_feature_is_fme(base)) {
start = pci_resource_start(pcidev, 0);
len = pci_resource_len(pcidev, 0);
dfl_fpga_enum_info_add_dfl(info, start, len, base);
/*
* find more Device Feature Lists (e.g. Ports) per information
* indicated by FME module.
*/
v = readq(base + FME_HDR_CAP);
port_num = FIELD_GET(FME_CAP_NUM_PORTS, v);
WARN_ON(port_num > MAX_DFL_FPGA_PORT_NUM);
for (i = 0; i < port_num; i++) {
v = readq(base + FME_HDR_PORT_OFST(i));
/* skip ports which are not implemented. */
if (!(v & FME_PORT_OFST_IMP))
continue;
/*
* add Port's Device Feature List information for next
* step enumeration.
*/
bar = FIELD_GET(FME_PORT_OFST_BAR_ID, v);
offset = FIELD_GET(FME_PORT_OFST_DFH_OFST, v);
base = cci_pci_ioremap_bar(pcidev, bar);
if (!base)
continue;
start = pci_resource_start(pcidev, bar) + offset;
len = pci_resource_len(pcidev, bar) - offset;
dfl_fpga_enum_info_add_dfl(info, start, len,
base + offset);
}
} else if (dfl_feature_is_port(base)) {
start = pci_resource_start(pcidev, 0);
len = pci_resource_len(pcidev, 0);
dfl_fpga_enum_info_add_dfl(info, start, len, base);
} else {
ret = -ENODEV;
goto enum_info_free_exit;
}
/* start enumeration with prepared enumeration information */
cdev = dfl_fpga_feature_devs_enumerate(info);
if (IS_ERR(cdev)) {
dev_err(&pcidev->dev, "Enumeration failure\n");
ret = PTR_ERR(cdev);
goto enum_info_free_exit;
}
drvdata->cdev = cdev;
enum_info_free_exit:
dfl_fpga_enum_info_free(info);
return ret;
}
static
int cci_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *pcidevid)
{
int ret;
ret = pcim_enable_device(pcidev);
if (ret < 0) {
dev_err(&pcidev->dev, "Failed to enable device %d.\n", ret);
return ret;
}
ret = pci_enable_pcie_error_reporting(pcidev);
if (ret && ret != -EINVAL)
dev_info(&pcidev->dev, "PCIE AER unavailable %d.\n", ret);
pci_set_master(pcidev);
if (!pci_set_dma_mask(pcidev, DMA_BIT_MASK(64))) {
ret = pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(64));
if (ret)
goto disable_error_report_exit;
} else if (!pci_set_dma_mask(pcidev, DMA_BIT_MASK(32))) {
ret = pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(32));
if (ret)
goto disable_error_report_exit;
} else {
ret = -EIO;
dev_err(&pcidev->dev, "No suitable DMA support available.\n");
goto disable_error_report_exit;
}
ret = cci_init_drvdata(pcidev);
if (ret) {
dev_err(&pcidev->dev, "Fail to init drvdata %d.\n", ret);
goto disable_error_report_exit;
}
ret = cci_enumerate_feature_devs(pcidev);
if (ret) {
dev_err(&pcidev->dev, "enumeration failure %d.\n", ret);
goto disable_error_report_exit;
}
return ret;
disable_error_report_exit:
pci_disable_pcie_error_reporting(pcidev);
return ret;
}
static void cci_pci_remove(struct pci_dev *pcidev)
{
cci_remove_feature_devs(pcidev);
pci_disable_pcie_error_reporting(pcidev);
}
static struct pci_driver cci_pci_driver = {
.name = DRV_NAME,
.id_table = cci_pcie_id_tbl,
.probe = cci_pci_probe,
.remove = cci_pci_remove,
};
module_pci_driver(cci_pci_driver);
MODULE_DESCRIPTION("FPGA DFL PCIe Device Driver");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");

drivers/fpga/dfl.c (new file, 1044 lines; diff too large to show)

drivers/fpga/dfl.h (new file)

@@ -0,0 +1,410 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Driver Header File for FPGA Device Feature List (DFL) Support
*
* Copyright (C) 2017-2018 Intel Corporation, Inc.
*
* Authors:
* Kang Luwei <luwei.kang@intel.com>
* Zhang Yi <yi.z.zhang@intel.com>
* Wu Hao <hao.wu@intel.com>
* Xiao Guangrong <guangrong.xiao@linux.intel.com>
*/
#ifndef __FPGA_DFL_H
#define __FPGA_DFL_H
#include <linux/bitfield.h>
#include <linux/cdev.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/iopoll.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include <linux/fpga/fpga-region.h>
/* maximum supported number of ports */
#define MAX_DFL_FPGA_PORT_NUM 4
/* plus one for fme device */
#define MAX_DFL_FEATURE_DEV_NUM (MAX_DFL_FPGA_PORT_NUM + 1)
/* Reserved 0x0 for Header Group Register and 0xff for AFU */
#define FEATURE_ID_FIU_HEADER 0x0
#define FEATURE_ID_AFU 0xff
#define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
#define FME_FEATURE_ID_THERMAL_MGMT 0x1
#define FME_FEATURE_ID_POWER_MGMT 0x2
#define FME_FEATURE_ID_GLOBAL_IPERF 0x3
#define FME_FEATURE_ID_GLOBAL_ERR 0x4
#define FME_FEATURE_ID_PR_MGMT 0x5
#define FME_FEATURE_ID_HSSI 0x6
#define FME_FEATURE_ID_GLOBAL_DPERF 0x7
#define PORT_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER
#define PORT_FEATURE_ID_AFU FEATURE_ID_AFU
#define PORT_FEATURE_ID_ERROR 0x10
#define PORT_FEATURE_ID_UMSG 0x11
#define PORT_FEATURE_ID_UINT 0x12
#define PORT_FEATURE_ID_STP 0x13
/*
* Device Feature Header Register Set
*
* For FIUs, they all have DFH + GUID + NEXT_AFU as common header registers.
* For AFUs, they have DFH + GUID as common header registers.
* For private features, they only have DFH register as common header.
*/
#define DFH 0x0
#define GUID_L 0x8
#define GUID_H 0x10
#define NEXT_AFU 0x18
#define DFH_SIZE 0x8
/* Device Feature Header Register Bitfield */
#define DFH_ID GENMASK_ULL(11, 0) /* Feature ID */
#define DFH_ID_FIU_FME 0
#define DFH_ID_FIU_PORT 1
#define DFH_REVISION GENMASK_ULL(15, 12) /* Feature revision */
#define DFH_NEXT_HDR_OFST GENMASK_ULL(39, 16) /* Offset to next DFH */
#define DFH_EOL BIT_ULL(40) /* End of list */
#define DFH_TYPE GENMASK_ULL(63, 60) /* Feature type */
#define DFH_TYPE_AFU 1
#define DFH_TYPE_PRIVATE 3
#define DFH_TYPE_FIU 4
/* Next AFU Register Bitfield */
#define NEXT_AFU_NEXT_DFH_OFST GENMASK_ULL(23, 0) /* Offset to next AFU */
/* FME Header Register Set */
#define FME_HDR_DFH DFH
#define FME_HDR_GUID_L GUID_L
#define FME_HDR_GUID_H GUID_H
#define FME_HDR_NEXT_AFU NEXT_AFU
#define FME_HDR_CAP 0x30
#define FME_HDR_PORT_OFST(n) (0x38 + ((n) * 0x8))
#define FME_HDR_BITSTREAM_ID 0x60
#define FME_HDR_BITSTREAM_MD 0x68
/* FME Fab Capability Register Bitfield */
#define FME_CAP_FABRIC_VERID GENMASK_ULL(7, 0) /* Fabric version ID */
#define FME_CAP_SOCKET_ID BIT_ULL(8) /* Socket ID */
#define FME_CAP_PCIE0_LINK_AVL BIT_ULL(12) /* PCIE0 Link */
#define FME_CAP_PCIE1_LINK_AVL BIT_ULL(13) /* PCIE1 Link */
#define FME_CAP_COHR_LINK_AVL BIT_ULL(14) /* Coherent Link */
#define FME_CAP_IOMMU_AVL BIT_ULL(16) /* IOMMU available */
#define FME_CAP_NUM_PORTS GENMASK_ULL(19, 17) /* Number of ports */
#define FME_CAP_ADDR_WIDTH GENMASK_ULL(29, 24) /* Address bus width */
#define FME_CAP_CACHE_SIZE GENMASK_ULL(43, 32) /* cache size in KB */
#define FME_CAP_CACHE_ASSOC GENMASK_ULL(47, 44) /* Associativity */
/* FME Port Offset Register Bitfield */
/* Offset to port device feature header */
#define FME_PORT_OFST_DFH_OFST GENMASK_ULL(23, 0)
/* PCI Bar ID for this port */
#define FME_PORT_OFST_BAR_ID GENMASK_ULL(34, 32)
/* AFU MMIO access permission. 1 - VF, 0 - PF. */
#define FME_PORT_OFST_ACC_CTRL BIT_ULL(55)
#define FME_PORT_OFST_ACC_PF 0
#define FME_PORT_OFST_ACC_VF 1
#define FME_PORT_OFST_IMP BIT_ULL(60)
/* PORT Header Register Set */
#define PORT_HDR_DFH DFH
#define PORT_HDR_GUID_L GUID_L
#define PORT_HDR_GUID_H GUID_H
#define PORT_HDR_NEXT_AFU NEXT_AFU
#define PORT_HDR_CAP 0x30
#define PORT_HDR_CTRL 0x38
/* Port Capability Register Bitfield */
#define PORT_CAP_PORT_NUM GENMASK_ULL(1, 0) /* ID of this port */
#define PORT_CAP_MMIO_SIZE GENMASK_ULL(23, 8) /* MMIO size in KB */
#define PORT_CAP_SUPP_INT_NUM GENMASK_ULL(35, 32) /* Interrupts num */
/* Port Control Register Bitfield */
#define PORT_CTRL_SFTRST BIT_ULL(0) /* Port soft reset */
/* Latency tolerance reporting. '1' >= 40us, '0' < 40us.*/
#define PORT_CTRL_LATENCY BIT_ULL(2)
#define PORT_CTRL_SFTRST_ACK BIT_ULL(4) /* HW ack for reset */
/**
* struct dfl_fpga_port_ops - port ops
*
* @name: name of this port ops, to match with port platform device.
* @owner: pointer to the module which owns this port ops.
* @node: node to link port ops to global list.
* @get_id: get port id from hardware.
* @enable_set: enable/disable the port.
*/
struct dfl_fpga_port_ops {
const char *name;
struct module *owner;
struct list_head node;
int (*get_id)(struct platform_device *pdev);
int (*enable_set)(struct platform_device *pdev, bool enable);
};
void dfl_fpga_port_ops_add(struct dfl_fpga_port_ops *ops);
void dfl_fpga_port_ops_del(struct dfl_fpga_port_ops *ops);
struct dfl_fpga_port_ops *dfl_fpga_port_ops_get(struct platform_device *pdev);
void dfl_fpga_port_ops_put(struct dfl_fpga_port_ops *ops);
int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id);
/**
* struct dfl_feature_driver - sub feature's driver
*
* @id: sub feature id.
* @ops: ops of this sub feature.
*/
struct dfl_feature_driver {
u64 id;
const struct dfl_feature_ops *ops;
};
/**
* struct dfl_feature - sub feature of the feature devices
*
* @id: sub feature id.
* @resource_index: each sub feature has one mmio resource for its registers.
* this index is used to find its mmio resource from the
* feature dev (platform device)'s resources.
* @ioaddr: mapped mmio resource address.
* @ops: ops of this sub feature.
*/
struct dfl_feature {
u64 id;
int resource_index;
void __iomem *ioaddr;
const struct dfl_feature_ops *ops;
};
#define DEV_STATUS_IN_USE 0
/**
* struct dfl_feature_platform_data - platform data for feature devices
*
* @node: node to link feature devs to container device's port_dev_list.
* @lock: mutex to protect platform data.
* @cdev: cdev of feature dev.
* @dev: ptr to platform device linked with this platform data.
* @dfl_cdev: ptr to container device.
* @disable_count: count for port disable.
* @num: number for sub features.
* @dev_status: dev status (e.g. DEV_STATUS_IN_USE).
* @private: ptr to feature dev private data.
* @features: sub features of this feature dev.
*/
struct dfl_feature_platform_data {
struct list_head node;
struct mutex lock;
struct cdev cdev;
struct platform_device *dev;
struct dfl_fpga_cdev *dfl_cdev;
unsigned int disable_count;
unsigned long dev_status;
void *private;
int num;
struct dfl_feature features[0];
};
static inline
int dfl_feature_dev_use_begin(struct dfl_feature_platform_data *pdata)
{
/* Test and set IN_USE flags to ensure file is exclusively used */
if (test_and_set_bit_lock(DEV_STATUS_IN_USE, &pdata->dev_status))
return -EBUSY;
return 0;
}
static inline
void dfl_feature_dev_use_end(struct dfl_feature_platform_data *pdata)
{
clear_bit_unlock(DEV_STATUS_IN_USE, &pdata->dev_status);
}
static inline
void dfl_fpga_pdata_set_private(struct dfl_feature_platform_data *pdata,
void *private)
{
pdata->private = private;
}
static inline
void *dfl_fpga_pdata_get_private(struct dfl_feature_platform_data *pdata)
{
return pdata->private;
}
struct dfl_feature_ops {
int (*init)(struct platform_device *pdev, struct dfl_feature *feature);
void (*uinit)(struct platform_device *pdev,
struct dfl_feature *feature);
long (*ioctl)(struct platform_device *pdev, struct dfl_feature *feature,
unsigned int cmd, unsigned long arg);
};
#define DFL_FPGA_FEATURE_DEV_FME "dfl-fme"
#define DFL_FPGA_FEATURE_DEV_PORT "dfl-port"
static inline int dfl_feature_platform_data_size(const int num)
{
return sizeof(struct dfl_feature_platform_data) +
num * sizeof(struct dfl_feature);
}
void dfl_fpga_dev_feature_uinit(struct platform_device *pdev);
int dfl_fpga_dev_feature_init(struct platform_device *pdev,
struct dfl_feature_driver *feature_drvs);
int dfl_fpga_dev_ops_register(struct platform_device *pdev,
const struct file_operations *fops,
struct module *owner);
void dfl_fpga_dev_ops_unregister(struct platform_device *pdev);
static inline
struct platform_device *dfl_fpga_inode_to_feature_dev(struct inode *inode)
{
struct dfl_feature_platform_data *pdata;
pdata = container_of(inode->i_cdev, struct dfl_feature_platform_data,
cdev);
return pdata->dev;
}
#define dfl_fpga_dev_for_each_feature(pdata, feature) \
for ((feature) = (pdata)->features; \
(feature) < (pdata)->features + (pdata)->num; (feature)++)
static inline
struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
struct dfl_feature *feature;
dfl_fpga_dev_for_each_feature(pdata, feature)
if (feature->id == id)
return feature;
return NULL;
}
static inline
void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
{
struct dfl_feature *feature = dfl_get_feature_by_id(dev, id);
if (feature && feature->ioaddr)
return feature->ioaddr;
WARN_ON(1);
return NULL;
}
static inline bool is_dfl_feature_present(struct device *dev, u64 id)
{
return !!dfl_get_feature_ioaddr_by_id(dev, id);
}
static inline
struct device *dfl_fpga_pdata_to_parent(struct dfl_feature_platform_data *pdata)
{
return pdata->dev->dev.parent->parent;
}
static inline bool dfl_feature_is_fme(void __iomem *base)
{
u64 v = readq(base + DFH);
return (FIELD_GET(DFH_TYPE, v) == DFH_TYPE_FIU) &&
(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_FME);
}
static inline bool dfl_feature_is_port(void __iomem *base)
{
u64 v = readq(base + DFH);
return (FIELD_GET(DFH_TYPE, v) == DFH_TYPE_FIU) &&
(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_PORT);
}
/**
* struct dfl_fpga_enum_info - DFL FPGA enumeration information
*
* @dev: parent device.
* @dfls: list of device feature lists.
*/
struct dfl_fpga_enum_info {
struct device *dev;
struct list_head dfls;
};
/**
* struct dfl_fpga_enum_dfl - DFL FPGA enumeration device feature list info
*
* @start: base address of this device feature list.
* @len: size of this device feature list.
* @ioaddr: mapped base address of this device feature list.
* @node: node in list of device feature lists.
*/
struct dfl_fpga_enum_dfl {
resource_size_t start;
resource_size_t len;
void __iomem *ioaddr;
struct list_head node;
};
struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev);
int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
resource_size_t start, resource_size_t len,
void __iomem *ioaddr);
void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info);
/**
* struct dfl_fpga_cdev - container device of DFL based FPGA
*
* @parent: parent device of this container device.
* @region: base fpga region.
* @fme_dev: FME feature device under this container device.
* @lock: mutex lock to protect the port device list.
* @port_dev_list: list of all port feature devices under this container device.
*/
struct dfl_fpga_cdev {
struct device *parent;
struct fpga_region *region;
struct device *fme_dev;
struct mutex lock;
struct list_head port_dev_list;
};
struct dfl_fpga_cdev *
dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info);
void dfl_fpga_feature_devs_remove(struct dfl_fpga_cdev *cdev);
/*
* The device reference must be dropped with put_device() after using the
* port platform device returned by the __dfl_fpga_cdev_find_port() and
* dfl_fpga_cdev_find_port() functions.
*/
struct platform_device *
__dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data,
int (*match)(struct platform_device *, void *));
static inline struct platform_device *
dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data,
int (*match)(struct platform_device *, void *))
{
struct platform_device *pdev;
mutex_lock(&cdev->lock);
pdev = __dfl_fpga_cdev_find_port(cdev, data, match);
mutex_unlock(&cdev->lock);
return pdev;
}
#endif /* __FPGA_DFL_H */
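Note: sub-feature drivers consume this header mainly through dfl_fpga_dev_for_each_feature(), dfl_get_feature_ioaddr_by_id() and the pdata private-data helpers. The sketch below shows a typical init callback shape; the EXAMPLE_REG offset and the choice of FME_FEATURE_ID_THERMAL_MGMT are made up for illustration.

/* Illustrative sub-feature init; only the helpers come from this header. */
#define EXAMPLE_REG 0x8		/* hypothetical register offset */

static int example_feature_init(struct platform_device *pdev,
				struct dfl_feature *feature)
{
	void __iomem *base;
	u64 v;

	/*
	 * This sub feature's own registers are already mapped in
	 * feature->ioaddr; another feature's registers can be looked up
	 * by ID as shown here.
	 */
	base = dfl_get_feature_ioaddr_by_id(&pdev->dev, FME_FEATURE_ID_THERMAL_MGMT);
	if (!base)
		return -ENODEV;

	v = readq(base + EXAMPLE_REG);
	dev_dbg(&pdev->dev, "example feature reg: 0x%llx\n", (unsigned long long)v);
	return 0;
}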

drivers/fpga/fpga-mgr.c

@@ -406,12 +406,40 @@ static ssize_t state_show(struct device *dev,
return sprintf(buf, "%s\n", state_str[mgr->state]);
}
static ssize_t status_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fpga_manager *mgr = to_fpga_manager(dev);
u64 status;
int len = 0;
if (!mgr->mops->status)
return -ENOENT;
status = mgr->mops->status(mgr);
if (status & FPGA_MGR_STATUS_OPERATION_ERR)
len += sprintf(buf + len, "reconfig operation error\n");
if (status & FPGA_MGR_STATUS_CRC_ERR)
len += sprintf(buf + len, "reconfig CRC error\n");
if (status & FPGA_MGR_STATUS_INCOMPATIBLE_IMAGE_ERR)
len += sprintf(buf + len, "reconfig incompatible image\n");
if (status & FPGA_MGR_STATUS_IP_PROTOCOL_ERR)
len += sprintf(buf + len, "reconfig IP protocol error\n");
if (status & FPGA_MGR_STATUS_FIFO_OVERFLOW_ERR)
len += sprintf(buf + len, "reconfig fifo overflow error\n");
return len;
}
static DEVICE_ATTR_RO(name);
static DEVICE_ATTR_RO(state);
static DEVICE_ATTR_RO(status);
static struct attribute *fpga_mgr_attrs[] = {
&dev_attr_name.attr,
&dev_attr_state.attr,
&dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(fpga_mgr);

drivers/fpga/fpga-region.c

@@ -158,6 +158,27 @@ err_put_region:
}
EXPORT_SYMBOL_GPL(fpga_region_program_fpga);
static ssize_t compat_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fpga_region *region = to_fpga_region(dev);
if (!region->compat_id)
return -ENOENT;
return sprintf(buf, "%016llx%016llx\n",
(unsigned long long)region->compat_id->id_h,
(unsigned long long)region->compat_id->id_l);
}
static DEVICE_ATTR_RO(compat_id);
static struct attribute *fpga_region_attrs[] = {
&dev_attr_compat_id.attr,
NULL,
};
ATTRIBUTE_GROUPS(fpga_region);
/**
* fpga_region_create - alloc and init a struct fpga_region
* @dev: device parent
@@ -258,6 +279,7 @@ static int __init fpga_region_init(void)
if (IS_ERR(fpga_region_class))
return PTR_ERR(fpga_region_class);
fpga_region_class->dev_groups = fpga_region_groups;
fpga_region_class->dev_release = fpga_region_dev_release;
return 0;

drivers/fsi/Kconfig

@@ -12,6 +12,21 @@ menuconfig FSI
if FSI
config FSI_NEW_DEV_NODE
bool "Create '/dev/fsi' directory for char devices"
default n
---help---
This option causes char devices created for FSI devices to be
located under a common /dev/fsi/ directory. Set to N unless your
userspace has been updated to handle the new location.
Additionally, it causes the char device names to be offset
by one so that chip 0 will have /dev/scom1 and chip 1 /dev/scom2,
to match old userspace expectations.
New userspace will use udev rules to generate predictable access
symlinks in /dev/fsi/by-path when this option is enabled.
config FSI_MASTER_GPIO
tristate "GPIO-based FSI master"
depends on GPIOLIB
@@ -27,9 +42,26 @@ config FSI_MASTER_HUB
allow chaining of FSI links to an arbitrary depth. This allows for
a high target device fanout.
config FSI_MASTER_AST_CF
tristate "FSI master based on Aspeed ColdFire coprocessor"
depends on GPIOLIB
depends on GPIO_ASPEED
---help---
This option enables a FSI master using the AST2400 and AST2500 GPIO
lines driven by the internal ColdFire coprocessor. This requires
the corresponding machine specific ColdFire firmware to be available.
config FSI_SCOM
tristate "SCOM FSI client device driver"
---help---
This option enables an FSI based SCOM device driver.
config FSI_SBEFIFO
tristate "SBEFIFO FSI client device driver"
depends on OF_ADDRESS
---help---
This option enables an FSI based SBEFIFO device driver. The SBEFIFO is
a pipe-like FSI device for communicating with the self boot engine
(SBE) on POWER processors.
endif

drivers/fsi/Makefile

@@ -2,4 +2,6 @@
obj-$(CONFIG_FSI) += fsi-core.o
obj-$(CONFIG_FSI_MASTER_HUB) += fsi-master-hub.o
obj-$(CONFIG_FSI_MASTER_GPIO) += fsi-master-gpio.o
obj-$(CONFIG_FSI_MASTER_AST_CF) += fsi-master-ast-cf.o
obj-$(CONFIG_FSI_SCOM) += fsi-scom.o
obj-$(CONFIG_FSI_SBEFIFO) += fsi-sbefifo.o

drivers/fsi/cf-fsi-fw.h (new file, 157 lines)

@ -0,0 +1,157 @@
// SPDX-License-Identifier: GPL-2.0+
#ifndef __CF_FSI_FW_H
#define __CF_FSI_FW_H
/*
* uCode file layout
*
* 0000...03ff : m68k exception vectors
* 0400...04ff : Header info & boot config block
* 0500....... : Code & stack
*/
/*
* Header info & boot config area
*
* The Header info is built into the ucode and provides version and
* platform information.
*
* The Boot config needs to be adjusted by the ARM prior to starting
* the ucode if the Command/Status area isn't at 0x320000 in CF space
* (i.e. the beginning of SRAM).
*/
#define HDR_OFFSET 0x400
/* Info: Signature & version */
#define HDR_SYS_SIG 0x00 /* 2 bytes system signature */
#define SYS_SIG_SHARED 0x5348
#define SYS_SIG_SPLIT 0x5350
#define HDR_FW_VERS 0x02 /* 2 bytes Major.Minor */
#define HDR_API_VERS 0x04 /* 2 bytes Major.Minor */
#define API_VERSION_MAJ 2 /* Current version */
#define API_VERSION_MIN 1
#define HDR_FW_OPTIONS 0x08 /* 4 bytes option flags */
#define FW_OPTION_TRACE_EN 0x00000001 /* FW tracing enabled */
#define FW_OPTION_CONT_CLOCK 0x00000002 /* Continuous clocking supported */
#define HDR_FW_SIZE 0x10 /* 4 bytes size for combo image */
/* Boot Config: Address of Command/Status area */
#define HDR_CMD_STAT_AREA 0x80 /* 4 bytes CF address */
#define HDR_FW_CONTROL 0x84 /* 4 bytes control flags */
#define FW_CONTROL_CONT_CLOCK 0x00000002 /* Continuous clocking enabled */
#define FW_CONTROL_DUMMY_RD 0x00000004 /* Extra dummy read (AST2400) */
#define FW_CONTROL_USE_STOP 0x00000008 /* Use STOP instructions */
#define HDR_CLOCK_GPIO_VADDR 0x90 /* 2 bytes offset from GPIO base */
#define HDR_CLOCK_GPIO_DADDR 0x92 /* 2 bytes offset from GPIO base */
#define HDR_DATA_GPIO_VADDR 0x94 /* 2 bytes offset from GPIO base */
#define HDR_DATA_GPIO_DADDR 0x96 /* 2 bytes offset from GPIO base */
#define HDR_TRANS_GPIO_VADDR 0x98 /* 2 bytes offset from GPIO base */
#define HDR_TRANS_GPIO_DADDR 0x9a /* 2 bytes offset from GPIO base */
#define HDR_CLOCK_GPIO_BIT 0x9c /* 1 byte bit number */
#define HDR_DATA_GPIO_BIT 0x9d /* 1 byte bit number */
#define HDR_TRANS_GPIO_BIT 0x9e /* 1 byte bit number */
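To make the boot-config block concrete, here is a hypothetical sketch of how the ARM side might patch those words in its local copy of the ucode image before upload; cf_fw_patch_boot_config() is an invented name, and the fsi-master-ast-cf driver added in this series may handle this differently.

#include <asm/unaligned.h>
#include <linux/types.h>

/* Hypothetical sketch: adjust the boot-config words in a local copy of the
 * firmware image. The ColdFire is big-endian, hence the be32 stores. */
static void cf_fw_patch_boot_config(u8 *fw_image, u32 cmd_stat_addr, u32 control)
{
    put_unaligned_be32(cmd_stat_addr, fw_image + HDR_OFFSET + HDR_CMD_STAT_AREA);
    put_unaligned_be32(control, fw_image + HDR_OFFSET + HDR_FW_CONTROL);
}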
/*
* Command/Status area layout: Main part
*/
/* Command/Status register:
*
* +---------------------------+
* | STAT | RLEN | CLEN | CMD |
* | 8 | 8 | 8 | 8 |
* +---------------------------+
* | | | |
* status | | |
* Response len | |
* (in bits) | |
* | |
* Command len |
* (in bits) |
* |
* Command code
*
* Due to the big endian layout, that means that a byte read will
* return the status byte
*/
#define CMD_STAT_REG 0x00
#define CMD_REG_CMD_MASK 0x000000ff
#define CMD_REG_CMD_SHIFT 0
#define CMD_NONE 0x00
#define CMD_COMMAND 0x01
#define CMD_BREAK 0x02
#define CMD_IDLE_CLOCKS 0x03 /* clen = #clocks */
#define CMD_INVALID 0xff
#define CMD_REG_CLEN_MASK 0x0000ff00
#define CMD_REG_CLEN_SHIFT 8
#define CMD_REG_RLEN_MASK 0x00ff0000
#define CMD_REG_RLEN_SHIFT 16
#define CMD_REG_STAT_MASK 0xff000000
#define CMD_REG_STAT_SHIFT 24
#define STAT_WORKING 0x00
#define STAT_COMPLETE 0x01
#define STAT_ERR_INVAL_CMD 0x80
#define STAT_ERR_INVAL_IRQ 0x81
#define STAT_ERR_MTOE 0x82
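A small sketch of how the mask/shift macros above compose a command word and pull the status byte back out; the helper names are illustrative, not part of this commit.

#include <linux/types.h>

/* Illustrative helpers for the command/status register layout above. */
static inline u32 cf_cmd_encode(u8 cmd, u8 clen_bits, u8 rlen_bits)
{
    return ((u32)cmd << CMD_REG_CMD_SHIFT) |
           ((u32)clen_bits << CMD_REG_CLEN_SHIFT) |
           ((u32)rlen_bits << CMD_REG_RLEN_SHIFT);
}

static inline u8 cf_cmd_status(u32 reg)
{
    return (reg & CMD_REG_STAT_MASK) >> CMD_REG_STAT_SHIFT;
}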
/* Response tag & CRC */
#define STAT_RTAG 0x04
/* Response CRC */
#define STAT_RCRC 0x05
/* Echo and Send delay */
#define ECHO_DLY_REG 0x08
#define SEND_DLY_REG 0x09
/* Command data area
*
* Last byte of message must be left aligned
*/
#define CMD_DATA 0x10 /* 64 bit of data */
/* Response data area, right aligned, unused top bits are 1 */
#define RSP_DATA 0x20 /* 32 bit of data */
/* Misc */
#define INT_CNT 0x30 /* 32-bit interrupt count */
#define BAD_INT_VEC 0x34 /* 32-bit bad interrupt vector # */
#define CF_STARTED 0x38 /* byte, set to -1 when copro started */
#define CLK_CNT 0x3c /* 32-bit, clock count (debug only) */
/*
* SRAM layout: GPIO arbitration part
*/
#define ARB_REG 0x40
#define ARB_ARM_REQ 0x01
#define ARB_ARM_ACK 0x02
/* Misc2 */
#define CF_RESET_D0 0x50
#define CF_RESET_D1 0x54
#define BAD_INT_S0 0x58
#define BAD_INT_S1 0x5c
#define STOP_CNT 0x60
/* Internal */
/*
* SRAM layout: Trace buffer (debug builds only)
*/
#define TRACEBUF 0x100
#define TR_CLKOBIT0 0xc0
#define TR_CLKOBIT1 0xc1
#define TR_CLKOSTART 0x82
#define TR_OLEN 0x83 /* + len */
#define TR_CLKZ 0x84 /* + count */
#define TR_CLKWSTART 0x85
#define TR_CLKTAG 0x86 /* + tag */
#define TR_CLKDATA 0x87 /* + len */
#define TR_CLKCRC 0x88 /* + raw crc */
#define TR_CLKIBIT0 0x90
#define TR_CLKIBIT1 0x91
#define TR_END 0xff
#endif /* __CF_FSI_FW_H */


@ -11,6 +11,11 @@
* but WITHOUT ANY WARRANTY; without even the implied warranty of * but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details. * GNU General Public License for more details.
*
* TODO:
* - Rework topology
* - s/chip_id/chip_loc
* - s/cfam/chip (cfam_id -> chip_id etc...)
*/ */
#include <linux/crc4.h> #include <linux/crc4.h>
@ -21,6 +26,9 @@
#include <linux/of.h> #include <linux/of.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/bitops.h> #include <linux/bitops.h>
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include "fsi-master.h" #include "fsi-master.h"
@ -78,9 +86,15 @@ static DEFINE_IDA(master_ida);
struct fsi_slave { struct fsi_slave {
struct device dev; struct device dev;
struct fsi_master *master; struct fsi_master *master;
int id; struct cdev cdev;
int link; int cdev_idx;
int id; /* FSI address */
int link; /* FSI link# */
u32 cfam_id;
int chip_id;
uint32_t size; /* size of slave address space */ uint32_t size; /* size of slave address space */
u8 t_send_delay;
u8 t_echo_delay;
}; };
#define to_fsi_master(d) container_of(d, struct fsi_master, dev) #define to_fsi_master(d) container_of(d, struct fsi_master, dev)
@ -89,6 +103,13 @@ struct fsi_slave {
static const int slave_retries = 2; static const int slave_retries = 2;
static int discard_errors; static int discard_errors;
static dev_t fsi_base_dev;
static DEFINE_IDA(fsi_minor_ida);
#define FSI_CHAR_MAX_DEVICES 0x1000
/* Legacy /dev numbering: 4 devices per chip, 16 chips */
#define FSI_CHAR_LEGACY_TOP 64
static int fsi_master_read(struct fsi_master *master, int link, static int fsi_master_read(struct fsi_master *master, int link,
uint8_t slave_id, uint32_t addr, void *val, size_t size); uint8_t slave_id, uint32_t addr, void *val, size_t size);
static int fsi_master_write(struct fsi_master *master, int link, static int fsi_master_write(struct fsi_master *master, int link,
@ -190,7 +211,7 @@ static int fsi_slave_calc_addr(struct fsi_slave *slave, uint32_t *addrp,
static int fsi_slave_report_and_clear_errors(struct fsi_slave *slave) static int fsi_slave_report_and_clear_errors(struct fsi_slave *slave)
{ {
struct fsi_master *master = slave->master; struct fsi_master *master = slave->master;
uint32_t irq, stat; __be32 irq, stat;
int rc, link; int rc, link;
uint8_t id; uint8_t id;
@ -215,7 +236,53 @@ static int fsi_slave_report_and_clear_errors(struct fsi_slave *slave)
&irq, sizeof(irq)); &irq, sizeof(irq));
} }
static int fsi_slave_set_smode(struct fsi_master *master, int link, int id); /* Encode slave local bus echo delay */
static inline uint32_t fsi_smode_echodly(int x)
{
return (x & FSI_SMODE_ED_MASK) << FSI_SMODE_ED_SHIFT;
}
/* Encode slave local bus send delay */
static inline uint32_t fsi_smode_senddly(int x)
{
return (x & FSI_SMODE_SD_MASK) << FSI_SMODE_SD_SHIFT;
}
/* Encode slave local bus clock rate ratio */
static inline uint32_t fsi_smode_lbcrr(int x)
{
return (x & FSI_SMODE_LBCRR_MASK) << FSI_SMODE_LBCRR_SHIFT;
}
/* Encode slave ID */
static inline uint32_t fsi_smode_sid(int x)
{
return (x & FSI_SMODE_SID_MASK) << FSI_SMODE_SID_SHIFT;
}
static uint32_t fsi_slave_smode(int id, u8 t_senddly, u8 t_echodly)
{
return FSI_SMODE_WSC | FSI_SMODE_ECRC
| fsi_smode_sid(id)
| fsi_smode_echodly(t_echodly - 1) | fsi_smode_senddly(t_senddly - 1)
| fsi_smode_lbcrr(0x8);
}
static int fsi_slave_set_smode(struct fsi_slave *slave)
{
uint32_t smode;
__be32 data;
/* set our smode register with the slave ID field to 0; this enables
* extended slave addressing
*/
smode = fsi_slave_smode(slave->id, slave->t_send_delay, slave->t_echo_delay);
data = cpu_to_be32(smode);
return fsi_master_write(slave->master, slave->link, slave->id,
FSI_SLAVE_BASE + FSI_SMODE,
&data, sizeof(data));
}
static int fsi_slave_handle_error(struct fsi_slave *slave, bool write, static int fsi_slave_handle_error(struct fsi_slave *slave, bool write,
uint32_t addr, size_t size) uint32_t addr, size_t size)
@ -223,7 +290,7 @@ static int fsi_slave_handle_error(struct fsi_slave *slave, bool write,
struct fsi_master *master = slave->master; struct fsi_master *master = slave->master;
int rc, link; int rc, link;
uint32_t reg; uint32_t reg;
uint8_t id; uint8_t id, send_delay, echo_delay;
if (discard_errors) if (discard_errors)
return -1; return -1;
@ -254,15 +321,26 @@ static int fsi_slave_handle_error(struct fsi_slave *slave, bool write,
} }
} }
send_delay = slave->t_send_delay;
echo_delay = slave->t_echo_delay;
/* getting serious, reset the slave via BREAK */ /* getting serious, reset the slave via BREAK */
rc = fsi_master_break(master, link); rc = fsi_master_break(master, link);
if (rc) if (rc)
return rc; return rc;
rc = fsi_slave_set_smode(master, link, id); slave->t_send_delay = send_delay;
slave->t_echo_delay = echo_delay;
rc = fsi_slave_set_smode(slave);
if (rc) if (rc)
return rc; return rc;
if (master->link_config)
master->link_config(master, link,
slave->t_send_delay,
slave->t_echo_delay);
return fsi_slave_report_and_clear_errors(slave); return fsi_slave_report_and_clear_errors(slave);
} }
@ -390,7 +468,6 @@ static struct device_node *fsi_device_find_of_node(struct fsi_device *dev)
static int fsi_slave_scan(struct fsi_slave *slave) static int fsi_slave_scan(struct fsi_slave *slave)
{ {
uint32_t engine_addr; uint32_t engine_addr;
uint32_t conf;
int rc, i; int rc, i;
/* /*
@ -404,15 +481,17 @@ static int fsi_slave_scan(struct fsi_slave *slave)
for (i = 2; i < engine_page_size / sizeof(uint32_t); i++) { for (i = 2; i < engine_page_size / sizeof(uint32_t); i++) {
uint8_t slots, version, type, crc; uint8_t slots, version, type, crc;
struct fsi_device *dev; struct fsi_device *dev;
uint32_t conf;
__be32 data;
rc = fsi_slave_read(slave, (i + 1) * sizeof(conf), rc = fsi_slave_read(slave, (i + 1) * sizeof(data),
&conf, sizeof(conf)); &data, sizeof(data));
if (rc) { if (rc) {
dev_warn(&slave->dev, dev_warn(&slave->dev,
"error reading slave registers\n"); "error reading slave registers\n");
return -1; return -1;
} }
conf = be32_to_cpu(conf); conf = be32_to_cpu(data);
crc = crc4(0, conf, 32); crc = crc4(0, conf, 32);
if (crc) { if (crc) {
@ -539,79 +618,11 @@ static const struct bin_attribute fsi_slave_raw_attr = {
.write = fsi_slave_sysfs_raw_write, .write = fsi_slave_sysfs_raw_write,
}; };
static ssize_t fsi_slave_sysfs_term_write(struct file *file,
struct kobject *kobj, struct bin_attribute *attr,
char *buf, loff_t off, size_t count)
{
struct fsi_slave *slave = to_fsi_slave(kobj_to_dev(kobj));
struct fsi_master *master = slave->master;
if (!master->term)
return -ENODEV;
master->term(master, slave->link, slave->id);
return count;
}
static const struct bin_attribute fsi_slave_term_attr = {
.attr = {
.name = "term",
.mode = 0200,
},
.size = 0,
.write = fsi_slave_sysfs_term_write,
};
/* Encode slave local bus echo delay */
static inline uint32_t fsi_smode_echodly(int x)
{
return (x & FSI_SMODE_ED_MASK) << FSI_SMODE_ED_SHIFT;
}
/* Encode slave local bus send delay */
static inline uint32_t fsi_smode_senddly(int x)
{
return (x & FSI_SMODE_SD_MASK) << FSI_SMODE_SD_SHIFT;
}
/* Encode slave local bus clock rate ratio */
static inline uint32_t fsi_smode_lbcrr(int x)
{
return (x & FSI_SMODE_LBCRR_MASK) << FSI_SMODE_LBCRR_SHIFT;
}
/* Encode slave ID */
static inline uint32_t fsi_smode_sid(int x)
{
return (x & FSI_SMODE_SID_MASK) << FSI_SMODE_SID_SHIFT;
}
static uint32_t fsi_slave_smode(int id)
{
return FSI_SMODE_WSC | FSI_SMODE_ECRC
| fsi_smode_sid(id)
| fsi_smode_echodly(0xf) | fsi_smode_senddly(0xf)
| fsi_smode_lbcrr(0x8);
}
static int fsi_slave_set_smode(struct fsi_master *master, int link, int id)
{
uint32_t smode;
/* set our smode register with the slave ID field to 0; this enables
* extended slave addressing
*/
smode = fsi_slave_smode(id);
smode = cpu_to_be32(smode);
return fsi_master_write(master, link, id, FSI_SLAVE_BASE + FSI_SMODE,
&smode, sizeof(smode));
}
static void fsi_slave_release(struct device *dev) static void fsi_slave_release(struct device *dev)
{ {
struct fsi_slave *slave = to_fsi_slave(dev); struct fsi_slave *slave = to_fsi_slave(dev);
fsi_free_minor(slave->dev.devt);
of_node_put(dev->of_node); of_node_put(dev->of_node);
kfree(slave); kfree(slave);
} }
@ -659,11 +670,303 @@ static struct device_node *fsi_slave_find_of_node(struct fsi_master *master,
return NULL; return NULL;
} }
static ssize_t cfam_read(struct file *filep, char __user *buf, size_t count,
loff_t *offset)
{
struct fsi_slave *slave = filep->private_data;
size_t total_len, read_len;
loff_t off = *offset;
ssize_t rc;
if (off < 0)
return -EINVAL;
if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
return -EINVAL;
for (total_len = 0; total_len < count; total_len += read_len) {
__be32 data;
read_len = min_t(size_t, count, 4);
read_len -= off & 0x3;
rc = fsi_slave_read(slave, off, &data, read_len);
if (rc)
goto fail;
rc = copy_to_user(buf + total_len, &data, read_len);
if (rc) {
rc = -EFAULT;
goto fail;
}
off += read_len;
}
rc = count;
fail:
*offset = off;
return count;
}
static ssize_t cfam_write(struct file *filep, const char __user *buf,
size_t count, loff_t *offset)
{
struct fsi_slave *slave = filep->private_data;
size_t total_len, write_len;
loff_t off = *offset;
ssize_t rc;
if (off < 0)
return -EINVAL;
if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
return -EINVAL;
for (total_len = 0; total_len < count; total_len += write_len) {
__be32 data;
write_len = min_t(size_t, count, 4);
write_len -= off & 0x3;
rc = copy_from_user(&data, buf + total_len, write_len);
if (rc) {
rc = -EFAULT;
goto fail;
}
rc = fsi_slave_write(slave, off, &data, write_len);
if (rc)
goto fail;
off += write_len;
}
rc = count;
fail:
*offset = off;
return count;
}
static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
{
switch (whence) {
case SEEK_CUR:
break;
case SEEK_SET:
file->f_pos = offset;
break;
default:
return -EINVAL;
}
return offset;
}
static int cfam_open(struct inode *inode, struct file *file)
{
struct fsi_slave *slave = container_of(inode->i_cdev, struct fsi_slave, cdev);
file->private_data = slave;
return 0;
}
static const struct file_operations cfam_fops = {
.owner = THIS_MODULE,
.open = cfam_open,
.llseek = cfam_llseek,
.read = cfam_read,
.write = cfam_write,
};
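For a sense of how the new char device is meant to be used, a minimal userspace sketch follows; the /dev/fsi/cfam0 path assumes CONFIG_FSI_NEW_DEV_NODE, and the word comes back big-endian as stored by fsi_slave_read() above.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative only: read the first CFAM config word through the cfam
 * char device; the byte swap assumes a little-endian host. */
int main(void)
{
    uint32_t id;
    int fd = open("/dev/fsi/cfam0", O_RDONLY);

    if (fd < 0)
        return 1;
    if (pread(fd, &id, sizeof(id), 0) != (ssize_t)sizeof(id)) {
        close(fd);
        return 1;
    }
    printf("cfam id: %08x\n", __builtin_bswap32(id));
    close(fd);
    return 0;
}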
static ssize_t send_term_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct fsi_slave *slave = to_fsi_slave(dev);
struct fsi_master *master = slave->master;
if (!master->term)
return -ENODEV;
master->term(master, slave->link, slave->id);
return count;
}
static DEVICE_ATTR_WO(send_term);
static ssize_t slave_send_echo_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fsi_slave *slave = to_fsi_slave(dev);
return sprintf(buf, "%u\n", slave->t_send_delay);
}
static ssize_t slave_send_echo_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct fsi_slave *slave = to_fsi_slave(dev);
struct fsi_master *master = slave->master;
unsigned long val;
int rc;
if (kstrtoul(buf, 0, &val) < 0)
return -EINVAL;
if (val < 1 || val > 16)
return -EINVAL;
if (!master->link_config)
return -ENXIO;
/* Current HW mandates that send and echo delay are identical */
slave->t_send_delay = val;
slave->t_echo_delay = val;
rc = fsi_slave_set_smode(slave);
if (rc < 0)
return rc;
if (master->link_config)
master->link_config(master, slave->link,
slave->t_send_delay,
slave->t_echo_delay);
return count;
}
static DEVICE_ATTR(send_echo_delays, 0600,
slave_send_echo_show, slave_send_echo_store);
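A throwaway userspace sketch for exercising this attribute; the sysfs location of the slave device depends on the platform topology, so the path is taken from the command line rather than assumed.

#include <stdio.h>

/* Illustrative only: write a new send/echo delay (valid range 1..16 clocks),
 * e.g.  ./set_delays <slave-sysfs-dir>/send_echo_delays 16 */
int main(int argc, char **argv)
{
    FILE *f;

    if (argc < 3)
        return 1;
    f = fopen(argv[1], "w");
    if (!f)
        return 1;
    fprintf(f, "%s\n", argv[2]);
    return fclose(f) ? 1 : 0;
}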
static ssize_t chip_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fsi_slave *slave = to_fsi_slave(dev);
return sprintf(buf, "%d\n", slave->chip_id);
}
static DEVICE_ATTR_RO(chip_id);
static ssize_t cfam_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fsi_slave *slave = to_fsi_slave(dev);
return sprintf(buf, "0x%x\n", slave->cfam_id);
}
static DEVICE_ATTR_RO(cfam_id);
static struct attribute *cfam_attr[] = {
&dev_attr_send_echo_delays.attr,
&dev_attr_chip_id.attr,
&dev_attr_cfam_id.attr,
&dev_attr_send_term.attr,
NULL,
};
static const struct attribute_group cfam_attr_group = {
.attrs = cfam_attr,
};
static const struct attribute_group *cfam_attr_groups[] = {
&cfam_attr_group,
NULL,
};
static char *cfam_devnode(struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
struct fsi_slave *slave = to_fsi_slave(dev);
#ifdef CONFIG_FSI_NEW_DEV_NODE
return kasprintf(GFP_KERNEL, "fsi/cfam%d", slave->cdev_idx);
#else
return kasprintf(GFP_KERNEL, "cfam%d", slave->cdev_idx);
#endif
}
static const struct device_type cfam_type = {
.name = "cfam",
.devnode = cfam_devnode,
.groups = cfam_attr_groups
};
static char *fsi_cdev_devnode(struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
#ifdef CONFIG_FSI_NEW_DEV_NODE
return kasprintf(GFP_KERNEL, "fsi/%s", dev_name(dev));
#else
return kasprintf(GFP_KERNEL, "%s", dev_name(dev));
#endif
}
const struct device_type fsi_cdev_type = {
.name = "fsi-cdev",
.devnode = fsi_cdev_devnode,
};
EXPORT_SYMBOL_GPL(fsi_cdev_type);
/* Backward compatible /dev/ numbering in "old style" mode */
static int fsi_adjust_index(int index)
{
#ifdef CONFIG_FSI_NEW_DEV_NODE
return index;
#else
return index + 1;
#endif
}
static int __fsi_get_new_minor(struct fsi_slave *slave, enum fsi_dev_type type,
dev_t *out_dev, int *out_index)
{
int cid = slave->chip_id;
int id;
/* Check if we qualify for legacy numbering */
if (cid >= 0 && cid < 16 && type < 4) {
/* Try reserving the legacy number */
id = (cid << 4) | type;
id = ida_simple_get(&fsi_minor_ida, id, id + 1, GFP_KERNEL);
if (id >= 0) {
*out_index = fsi_adjust_index(cid);
*out_dev = fsi_base_dev + id;
return 0;
}
/* Other failure */
if (id != -ENOSPC)
return id;
/* Fallback to non-legacy allocation */
}
id = ida_simple_get(&fsi_minor_ida, FSI_CHAR_LEGACY_TOP,
FSI_CHAR_MAX_DEVICES, GFP_KERNEL);
if (id < 0)
return id;
*out_index = fsi_adjust_index(id);
*out_dev = fsi_base_dev + id;
return 0;
}
int fsi_get_new_minor(struct fsi_device *fdev, enum fsi_dev_type type,
dev_t *out_dev, int *out_index)
{
return __fsi_get_new_minor(fdev->slave, type, out_dev, out_index);
}
EXPORT_SYMBOL_GPL(fsi_get_new_minor);
void fsi_free_minor(dev_t dev)
{
ida_simple_remove(&fsi_minor_ida, MINOR(dev));
}
EXPORT_SYMBOL_GPL(fsi_free_minor);
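As a sketch of how a client engine driver might consume these helpers; the probe body and names are hypothetical, and the real sbefifo/scom drivers in this series differ in detail.

#include <linux/device.h>
#include <linux/fsi.h>

/* Hypothetical client sketch: reserve a char-dev minor for an FSI engine.
 * fsi_dev_scom is one of the fsi_dev_type values; cdev setup is elided. */
static int example_engine_probe(struct device *dev)
{
    struct fsi_device *fsi_dev = to_fsi_dev(dev);
    dev_t devt;
    int index, rc;

    rc = fsi_get_new_minor(fsi_dev, fsi_dev_scom, &devt, &index);
    if (rc)
        return rc;
    /* cdev_init() + cdev_device_add() would follow, using devt and index */
    return 0;
}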
static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id) static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
{ {
uint32_t chip_id, llmode; uint32_t cfam_id;
struct fsi_slave *slave; struct fsi_slave *slave;
uint8_t crc; uint8_t crc;
__be32 data, llmode;
int rc; int rc;
/* Currently, we only support single slaves on a link, and use the /* Currently, we only support single slaves on a link, and use the
@ -672,31 +975,23 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
if (id != 0) if (id != 0)
return -EINVAL; return -EINVAL;
rc = fsi_master_read(master, link, id, 0, &chip_id, sizeof(chip_id)); rc = fsi_master_read(master, link, id, 0, &data, sizeof(data));
if (rc) { if (rc) {
dev_dbg(&master->dev, "can't read slave %02x:%02x %d\n", dev_dbg(&master->dev, "can't read slave %02x:%02x %d\n",
link, id, rc); link, id, rc);
return -ENODEV; return -ENODEV;
} }
chip_id = be32_to_cpu(chip_id); cfam_id = be32_to_cpu(data);
crc = crc4(0, chip_id, 32); crc = crc4(0, cfam_id, 32);
if (crc) { if (crc) {
dev_warn(&master->dev, "slave %02x:%02x invalid chip id CRC!\n", dev_warn(&master->dev, "slave %02x:%02x invalid cfam id CRC!\n",
link, id); link, id);
return -EIO; return -EIO;
} }
dev_dbg(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n", dev_dbg(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n",
chip_id, master->idx, link, id); cfam_id, master->idx, link, id);
rc = fsi_slave_set_smode(master, link, id);
if (rc) {
dev_warn(&master->dev,
"can't set smode on slave:%02x:%02x %d\n",
link, id, rc);
return -ENODEV;
}
/* If we're behind a master that doesn't provide a self-running bus /* If we're behind a master that doesn't provide a self-running bus
* clock, put the slave into async mode * clock, put the slave into async mode
@ -719,30 +1014,61 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
if (!slave) if (!slave)
return -ENOMEM; return -ENOMEM;
slave->master = master; dev_set_name(&slave->dev, "slave@%02x:%02x", link, id);
slave->dev.type = &cfam_type;
slave->dev.parent = &master->dev; slave->dev.parent = &master->dev;
slave->dev.of_node = fsi_slave_find_of_node(master, link, id); slave->dev.of_node = fsi_slave_find_of_node(master, link, id);
slave->dev.release = fsi_slave_release; slave->dev.release = fsi_slave_release;
device_initialize(&slave->dev);
slave->cfam_id = cfam_id;
slave->master = master;
slave->link = link; slave->link = link;
slave->id = id; slave->id = id;
slave->size = FSI_SLAVE_SIZE_23b; slave->size = FSI_SLAVE_SIZE_23b;
slave->t_send_delay = 16;
slave->t_echo_delay = 16;
/* Get chip ID if any */
slave->chip_id = -1;
if (slave->dev.of_node) {
uint32_t prop;
if (!of_property_read_u32(slave->dev.of_node, "chip-id", &prop))
slave->chip_id = prop;
dev_set_name(&slave->dev, "slave@%02x:%02x", link, id);
rc = device_register(&slave->dev);
if (rc < 0) {
dev_warn(&master->dev, "failed to create slave device: %d\n",
rc);
put_device(&slave->dev);
return rc;
} }
/* Allocate a minor in the FSI space */
rc = __fsi_get_new_minor(slave, fsi_dev_cfam, &slave->dev.devt,
&slave->cdev_idx);
if (rc)
goto err_free;
/* Create chardev for userspace access */
cdev_init(&slave->cdev, &cfam_fops);
rc = cdev_device_add(&slave->cdev, &slave->dev);
if (rc) {
dev_err(&slave->dev, "Error %d creating slave device\n", rc);
goto err_free;
}
rc = fsi_slave_set_smode(slave);
if (rc) {
dev_warn(&master->dev,
"can't set smode on slave:%02x:%02x %d\n",
link, id, rc);
kfree(slave);
return -ENODEV;
}
if (master->link_config)
master->link_config(master, link,
slave->t_send_delay,
slave->t_echo_delay);
/* Legacy raw file -> to be removed */
rc = device_create_bin_file(&slave->dev, &fsi_slave_raw_attr); rc = device_create_bin_file(&slave->dev, &fsi_slave_raw_attr);
if (rc) if (rc)
dev_warn(&slave->dev, "failed to create raw attr: %d\n", rc); dev_warn(&slave->dev, "failed to create raw attr: %d\n", rc);
rc = device_create_bin_file(&slave->dev, &fsi_slave_term_attr);
if (rc)
dev_warn(&slave->dev, "failed to create term attr: %d\n", rc);
rc = fsi_slave_scan(slave); rc = fsi_slave_scan(slave);
if (rc) if (rc)
@ -750,6 +1076,10 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
rc); rc);
return rc; return rc;
err_free:
put_device(&slave->dev);
return rc;
} }
/* FSI master support */ /* FSI master support */
@ -814,12 +1144,16 @@ static int fsi_master_link_enable(struct fsi_master *master, int link)
*/ */
static int fsi_master_break(struct fsi_master *master, int link) static int fsi_master_break(struct fsi_master *master, int link)
{ {
int rc = 0;
trace_fsi_master_break(master, link); trace_fsi_master_break(master, link);
if (master->send_break) if (master->send_break)
return master->send_break(master, link); rc = master->send_break(master, link);
if (master->link_config)
master->link_config(master, link, 16, 16);
return 0; return rc;
} }
static int fsi_master_scan(struct fsi_master *master) static int fsi_master_scan(struct fsi_master *master)
@ -854,8 +1188,11 @@ static int fsi_slave_remove_device(struct device *dev, void *arg)
static int fsi_master_remove_slave(struct device *dev, void *arg) static int fsi_master_remove_slave(struct device *dev, void *arg)
{ {
struct fsi_slave *slave = to_fsi_slave(dev);
device_for_each_child(dev, NULL, fsi_slave_remove_device); device_for_each_child(dev, NULL, fsi_slave_remove_device);
device_unregister(dev); cdev_device_del(&slave->cdev, &slave->dev);
put_device(dev);
return 0; return 0;
} }
@ -866,8 +1203,14 @@ static void fsi_master_unscan(struct fsi_master *master)
int fsi_master_rescan(struct fsi_master *master) int fsi_master_rescan(struct fsi_master *master)
{ {
int rc;
mutex_lock(&master->scan_lock);
fsi_master_unscan(master); fsi_master_unscan(master);
return fsi_master_scan(master); rc = fsi_master_scan(master);
mutex_unlock(&master->scan_lock);
return rc;
} }
EXPORT_SYMBOL_GPL(fsi_master_rescan); EXPORT_SYMBOL_GPL(fsi_master_rescan);
@ -903,9 +1246,7 @@ int fsi_master_register(struct fsi_master *master)
int rc; int rc;
struct device_node *np; struct device_node *np;
if (!master) mutex_init(&master->scan_lock);
return -EINVAL;
master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL); master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL);
dev_set_name(&master->dev, "fsi%d", master->idx); dev_set_name(&master->dev, "fsi%d", master->idx);
@ -917,21 +1258,24 @@ int fsi_master_register(struct fsi_master *master)
rc = device_create_file(&master->dev, &dev_attr_rescan); rc = device_create_file(&master->dev, &dev_attr_rescan);
if (rc) { if (rc) {
device_unregister(&master->dev); device_del(&master->dev);
ida_simple_remove(&master_ida, master->idx); ida_simple_remove(&master_ida, master->idx);
return rc; return rc;
} }
rc = device_create_file(&master->dev, &dev_attr_break); rc = device_create_file(&master->dev, &dev_attr_break);
if (rc) { if (rc) {
device_unregister(&master->dev); device_del(&master->dev);
ida_simple_remove(&master_ida, master->idx); ida_simple_remove(&master_ida, master->idx);
return rc; return rc;
} }
np = dev_of_node(&master->dev); np = dev_of_node(&master->dev);
if (!of_property_read_bool(np, "no-scan-on-init")) if (!of_property_read_bool(np, "no-scan-on-init")) {
mutex_lock(&master->scan_lock);
fsi_master_scan(master); fsi_master_scan(master);
mutex_unlock(&master->scan_lock);
}
return 0; return 0;
} }
@ -944,7 +1288,9 @@ void fsi_master_unregister(struct fsi_master *master)
master->idx = -1; master->idx = -1;
} }
mutex_lock(&master->scan_lock);
fsi_master_unscan(master); fsi_master_unscan(master);
mutex_unlock(&master->scan_lock);
device_unregister(&master->dev); device_unregister(&master->dev);
} }
EXPORT_SYMBOL_GPL(fsi_master_unregister); EXPORT_SYMBOL_GPL(fsi_master_unregister);
@ -996,13 +1342,27 @@ EXPORT_SYMBOL_GPL(fsi_bus_type);
static int __init fsi_init(void) static int __init fsi_init(void)
{ {
return bus_register(&fsi_bus_type); int rc;
rc = alloc_chrdev_region(&fsi_base_dev, 0, FSI_CHAR_MAX_DEVICES, "fsi");
if (rc)
return rc;
rc = bus_register(&fsi_bus_type);
if (rc)
goto fail_bus;
return 0;
fail_bus:
unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES);
return rc;
} }
postcore_initcall(fsi_init); postcore_initcall(fsi_init);
static void fsi_exit(void) static void fsi_exit(void)
{ {
bus_unregister(&fsi_bus_type); bus_unregister(&fsi_bus_type);
unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES);
ida_destroy(&fsi_minor_ida);
} }
module_exit(fsi_exit); module_exit(fsi_exit);
module_param(discard_errors, int, 0664); module_param(discard_errors, int, 0664);

File diff suppressed because it is too large


@ -8,59 +8,31 @@
#include <linux/fsi.h> #include <linux/fsi.h>
#include <linux/gpio/consumer.h> #include <linux/gpio/consumer.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/irqflags.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/spinlock.h>
#include "fsi-master.h" #include "fsi-master.h"
#define FSI_GPIO_STD_DLY 1 /* Standard pin delay in nS */ #define FSI_GPIO_STD_DLY 1 /* Standard pin delay in nS */
#define FSI_ECHO_DELAY_CLOCKS 16 /* Number clocks for echo delay */ #define LAST_ADDR_INVALID 0x1
#define FSI_PRE_BREAK_CLOCKS 50 /* Number clocks to prep for break */
#define FSI_BREAK_CLOCKS 256 /* Number of clocks to issue break */
#define FSI_POST_BREAK_CLOCKS 16000 /* Number clocks to set up cfam */
#define FSI_INIT_CLOCKS 5000 /* Clock out any old data */
#define FSI_GPIO_STD_DELAY 10 /* Standard GPIO delay in nS */
/* todo: adjust down as low as */
/* possible or eliminate */
#define FSI_GPIO_CMD_DPOLL 0x2
#define FSI_GPIO_CMD_TERM 0x3f
#define FSI_GPIO_CMD_ABS_AR 0x4
#define FSI_GPIO_DPOLL_CLOCKS 100 /* < 21 will cause slave to hang */
/* Bus errors */
#define FSI_GPIO_ERR_BUSY 1 /* Slave stuck in busy state */
#define FSI_GPIO_RESP_ERRA 2 /* Any (misc) Error */
#define FSI_GPIO_RESP_ERRC 3 /* Slave reports master CRC error */
#define FSI_GPIO_MTOE 4 /* Master time out error */
#define FSI_GPIO_CRC_INVAL 5 /* Master reports slave CRC error */
/* Normal slave responses */
#define FSI_GPIO_RESP_BUSY 1
#define FSI_GPIO_RESP_ACK 0
#define FSI_GPIO_RESP_ACKD 4
#define FSI_GPIO_MAX_BUSY 100
#define FSI_GPIO_MTOE_COUNT 1000
#define FSI_GPIO_DRAIN_BITS 20
#define FSI_GPIO_CRC_SIZE 4
#define FSI_GPIO_MSG_ID_SIZE 2
#define FSI_GPIO_MSG_RESPID_SIZE 2
#define FSI_GPIO_PRIME_SLAVE_CLOCKS 100
struct fsi_master_gpio { struct fsi_master_gpio {
struct fsi_master master; struct fsi_master master;
struct device *dev; struct device *dev;
spinlock_t cmd_lock; /* Lock for commands */ struct mutex cmd_lock; /* mutex for command ordering */
struct gpio_desc *gpio_clk; struct gpio_desc *gpio_clk;
struct gpio_desc *gpio_data; struct gpio_desc *gpio_data;
struct gpio_desc *gpio_trans; /* Voltage translator */ struct gpio_desc *gpio_trans; /* Voltage translator */
struct gpio_desc *gpio_enable; /* FSI enable */ struct gpio_desc *gpio_enable; /* FSI enable */
struct gpio_desc *gpio_mux; /* Mux control */ struct gpio_desc *gpio_mux; /* Mux control */
bool external_mode; bool external_mode;
bool no_delays;
uint32_t last_addr;
uint8_t t_send_delay;
uint8_t t_echo_delay;
}; };
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
@ -78,19 +50,31 @@ static void clock_toggle(struct fsi_master_gpio *master, int count)
int i; int i;
for (i = 0; i < count; i++) { for (i = 0; i < count; i++) {
ndelay(FSI_GPIO_STD_DLY); if (!master->no_delays)
ndelay(FSI_GPIO_STD_DLY);
gpiod_set_value(master->gpio_clk, 0); gpiod_set_value(master->gpio_clk, 0);
ndelay(FSI_GPIO_STD_DLY); if (!master->no_delays)
ndelay(FSI_GPIO_STD_DLY);
gpiod_set_value(master->gpio_clk, 1); gpiod_set_value(master->gpio_clk, 1);
} }
} }
static int sda_in(struct fsi_master_gpio *master) static int sda_clock_in(struct fsi_master_gpio *master)
{ {
int in; int in;
ndelay(FSI_GPIO_STD_DLY); if (!master->no_delays)
ndelay(FSI_GPIO_STD_DLY);
gpiod_set_value(master->gpio_clk, 0);
/* Dummy read to feed the synchronizers */
gpiod_get_value(master->gpio_data);
/* Actual data read */
in = gpiod_get_value(master->gpio_data); in = gpiod_get_value(master->gpio_data);
if (!master->no_delays)
ndelay(FSI_GPIO_STD_DLY);
gpiod_set_value(master->gpio_clk, 1);
return in ? 1 : 0; return in ? 1 : 0;
} }
@ -113,10 +97,17 @@ static void set_sda_output(struct fsi_master_gpio *master, int value)
static void clock_zeros(struct fsi_master_gpio *master, int count) static void clock_zeros(struct fsi_master_gpio *master, int count)
{ {
trace_fsi_master_gpio_clock_zeros(master, count);
set_sda_output(master, 1); set_sda_output(master, 1);
clock_toggle(master, count); clock_toggle(master, count);
} }
static void echo_delay(struct fsi_master_gpio *master)
{
clock_zeros(master, master->t_echo_delay);
}
static void serial_in(struct fsi_master_gpio *master, struct fsi_gpio_msg *msg, static void serial_in(struct fsi_master_gpio *master, struct fsi_gpio_msg *msg,
uint8_t num_bits) uint8_t num_bits)
{ {
@ -125,8 +116,7 @@ static void serial_in(struct fsi_master_gpio *master, struct fsi_gpio_msg *msg,
set_sda_input(master); set_sda_input(master);
for (bit = 0; bit < num_bits; bit++) { for (bit = 0; bit < num_bits; bit++) {
clock_toggle(master, 1); in_bit = sda_clock_in(master);
in_bit = sda_in(master);
msg->msg <<= 1; msg->msg <<= 1;
msg->msg |= ~in_bit & 0x1; /* Data is active low */ msg->msg |= ~in_bit & 0x1; /* Data is active low */
} }
@ -191,22 +181,92 @@ static void msg_push_crc(struct fsi_gpio_msg *msg)
msg_push_bits(msg, crc, 4); msg_push_bits(msg, crc, 4);
} }
/* static bool check_same_address(struct fsi_master_gpio *master, int id,
* Encode an Absolute Address command uint32_t addr)
*/
static void build_abs_ar_command(struct fsi_gpio_msg *cmd,
uint8_t id, uint32_t addr, size_t size, const void *data)
{ {
/* this will also handle LAST_ADDR_INVALID */
return master->last_addr == (((id & 0x3) << 21) | (addr & ~0x3));
}
static bool check_relative_address(struct fsi_master_gpio *master, int id,
uint32_t addr, uint32_t *rel_addrp)
{
uint32_t last_addr = master->last_addr;
int32_t rel_addr;
if (last_addr == LAST_ADDR_INVALID)
return false;
/* We may be in 23-bit addressing mode, which uses the id as the
* top two address bits. So, if we're referencing a different ID,
* use absolute addresses.
*/
if (((last_addr >> 21) & 0x3) != id)
return false;
/* remove the top two bits from any 23-bit addressing */
last_addr &= (1 << 21) - 1;
/* We know that the addresses are limited to 21 bits, so this won't
* overflow the signed rel_addr */
rel_addr = addr - last_addr;
if (rel_addr > 255 || rel_addr < -256)
return false;
*rel_addrp = (uint32_t)rel_addr;
return true;
}
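To put numbers on that window: with a last address of 0x1028 for the same slave ID, an access to 0x1030 gives rel_addr = +8 and can use the shorter relative command, while 0x2000 gives +4056 and falls back to absolute addressing. A standalone restatement of the check, for illustration only:

#include <stdbool.h>
#include <stdint.h>

/* Illustration, not part of the driver: the signed 9-bit window that decides
 * between relative and absolute addressing. */
static bool fits_rel_window(uint32_t last_addr, uint32_t addr)
{
    int32_t rel = (int32_t)(addr - last_addr);

    return rel >= -256 && rel <= 255;
}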
static void last_address_update(struct fsi_master_gpio *master,
int id, bool valid, uint32_t addr)
{
if (!valid)
master->last_addr = LAST_ADDR_INVALID;
else
master->last_addr = ((id & 0x3) << 21) | (addr & ~0x3);
}
/*
* Encode an Absolute/Relative/Same Address command
*/
static void build_ar_command(struct fsi_master_gpio *master,
struct fsi_gpio_msg *cmd, uint8_t id,
uint32_t addr, size_t size, const void *data)
{
int i, addr_bits, opcode_bits;
bool write = !!data; bool write = !!data;
uint8_t ds; uint8_t ds, opcode;
int i; uint32_t rel_addr;
cmd->bits = 0; cmd->bits = 0;
cmd->msg = 0; cmd->msg = 0;
msg_push_bits(cmd, id, 2); /* we have 21 bits of address max */
msg_push_bits(cmd, FSI_GPIO_CMD_ABS_AR, 3); addr &= ((1 << 21) - 1);
msg_push_bits(cmd, write ? 0 : 1, 1);
/* cmd opcodes are variable length - SAME_AR is only two bits */
opcode_bits = 3;
if (check_same_address(master, id, addr)) {
/* we still address the byte offset within the word */
addr_bits = 2;
opcode_bits = 2;
opcode = FSI_CMD_SAME_AR;
trace_fsi_master_gpio_cmd_same_addr(master);
} else if (check_relative_address(master, id, addr, &rel_addr)) {
/* 8 bits plus sign */
addr_bits = 9;
addr = rel_addr;
opcode = FSI_CMD_REL_AR;
trace_fsi_master_gpio_cmd_rel_addr(master, rel_addr);
} else {
addr_bits = 21;
opcode = FSI_CMD_ABS_AR;
trace_fsi_master_gpio_cmd_abs_addr(master, addr);
}
/* /*
* The read/write size is encoded in the lower bits of the address * The read/write size is encoded in the lower bits of the address
@ -223,7 +283,10 @@ static void build_abs_ar_command(struct fsi_gpio_msg *cmd,
if (size == 4) if (size == 4)
addr |= 1; addr |= 1;
msg_push_bits(cmd, addr & ((1 << 21) - 1), 21); msg_push_bits(cmd, id, 2);
msg_push_bits(cmd, opcode, opcode_bits);
msg_push_bits(cmd, write ? 0 : 1, 1);
msg_push_bits(cmd, addr, addr_bits);
msg_push_bits(cmd, ds, 1); msg_push_bits(cmd, ds, 1);
for (i = 0; write && i < size; i++) for (i = 0; write && i < size; i++)
msg_push_bits(cmd, ((uint8_t *)data)[i], 8); msg_push_bits(cmd, ((uint8_t *)data)[i], 8);
@ -237,14 +300,18 @@ static void build_dpoll_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
cmd->msg = 0; cmd->msg = 0;
msg_push_bits(cmd, slave_id, 2); msg_push_bits(cmd, slave_id, 2);
msg_push_bits(cmd, FSI_GPIO_CMD_DPOLL, 3); msg_push_bits(cmd, FSI_CMD_DPOLL, 3);
msg_push_crc(cmd); msg_push_crc(cmd);
} }
static void echo_delay(struct fsi_master_gpio *master) static void build_epoll_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
{ {
set_sda_output(master, 1); cmd->bits = 0;
clock_toggle(master, FSI_ECHO_DELAY_CLOCKS); cmd->msg = 0;
msg_push_bits(cmd, slave_id, 2);
msg_push_bits(cmd, FSI_CMD_EPOLL, 3);
msg_push_crc(cmd);
} }
static void build_term_command(struct fsi_gpio_msg *cmd, uint8_t slave_id) static void build_term_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
@ -253,40 +320,40 @@ static void build_term_command(struct fsi_gpio_msg *cmd, uint8_t slave_id)
cmd->msg = 0; cmd->msg = 0;
msg_push_bits(cmd, slave_id, 2); msg_push_bits(cmd, slave_id, 2);
msg_push_bits(cmd, FSI_GPIO_CMD_TERM, 6); msg_push_bits(cmd, FSI_CMD_TERM, 6);
msg_push_crc(cmd); msg_push_crc(cmd);
} }
/* /*
* Store information on master errors so handler can detect and clean * Note: callers rely specifically on this returning -EAGAIN for
* up the bus * a CRC error detected in the response. Use other error code
* for other situations. It will be converted to something else
* higher up the stack before it reaches userspace.
*/ */
static void fsi_master_gpio_error(struct fsi_master_gpio *master, int error)
{
}
static int read_one_response(struct fsi_master_gpio *master, static int read_one_response(struct fsi_master_gpio *master,
uint8_t data_size, struct fsi_gpio_msg *msgp, uint8_t *tagp) uint8_t data_size, struct fsi_gpio_msg *msgp, uint8_t *tagp)
{ {
struct fsi_gpio_msg msg; struct fsi_gpio_msg msg;
uint8_t id, tag; unsigned long flags;
uint32_t crc; uint32_t crc;
uint8_t tag;
int i; int i;
local_irq_save(flags);
/* wait for the start bit */ /* wait for the start bit */
for (i = 0; i < FSI_GPIO_MTOE_COUNT; i++) { for (i = 0; i < FSI_MASTER_MTOE_COUNT; i++) {
msg.bits = 0; msg.bits = 0;
msg.msg = 0; msg.msg = 0;
serial_in(master, &msg, 1); serial_in(master, &msg, 1);
if (msg.msg) if (msg.msg)
break; break;
} }
if (i == FSI_GPIO_MTOE_COUNT) { if (i == FSI_MASTER_MTOE_COUNT) {
dev_dbg(master->dev, dev_dbg(master->dev,
"Master time out waiting for response\n"); "Master time out waiting for response\n");
fsi_master_gpio_error(master, FSI_GPIO_MTOE); local_irq_restore(flags);
return -EIO; return -ETIMEDOUT;
} }
msg.bits = 0; msg.bits = 0;
@ -295,23 +362,27 @@ static int read_one_response(struct fsi_master_gpio *master,
/* Read slave ID & response tag */ /* Read slave ID & response tag */
serial_in(master, &msg, 4); serial_in(master, &msg, 4);
id = (msg.msg >> FSI_GPIO_MSG_RESPID_SIZE) & 0x3;
tag = msg.msg & 0x3; tag = msg.msg & 0x3;
/* If we have an ACK and we're expecting data, clock the data in too */ /* If we have an ACK and we're expecting data, clock the data in too */
if (tag == FSI_GPIO_RESP_ACK && data_size) if (tag == FSI_RESP_ACK && data_size)
serial_in(master, &msg, data_size * 8); serial_in(master, &msg, data_size * 8);
/* read CRC */ /* read CRC */
serial_in(master, &msg, FSI_GPIO_CRC_SIZE); serial_in(master, &msg, FSI_CRC_SIZE);
local_irq_restore(flags);
/* we have a whole message now; check CRC */ /* we have a whole message now; check CRC */
crc = crc4(0, 1, 1); crc = crc4(0, 1, 1);
crc = crc4(crc, msg.msg, msg.bits); crc = crc4(crc, msg.msg, msg.bits);
if (crc) { if (crc) {
dev_dbg(master->dev, "ERR response CRC\n"); /* Check if it's all 1's, that probably means the host is off */
fsi_master_gpio_error(master, FSI_GPIO_CRC_INVAL); if (((~msg.msg) & ((1ull << msg.bits) - 1)) == 0)
return -EIO; return -ENODEV;
dev_dbg(master->dev, "ERR response CRC msg: 0x%016llx (%d bits)\n",
msg.msg, msg.bits);
return -EAGAIN;
} }
if (msgp) if (msgp)
@ -325,19 +396,23 @@ static int read_one_response(struct fsi_master_gpio *master,
static int issue_term(struct fsi_master_gpio *master, uint8_t slave) static int issue_term(struct fsi_master_gpio *master, uint8_t slave)
{ {
struct fsi_gpio_msg cmd; struct fsi_gpio_msg cmd;
unsigned long flags;
uint8_t tag; uint8_t tag;
int rc; int rc;
build_term_command(&cmd, slave); build_term_command(&cmd, slave);
local_irq_save(flags);
serial_out(master, &cmd); serial_out(master, &cmd);
echo_delay(master); echo_delay(master);
local_irq_restore(flags);
rc = read_one_response(master, 0, NULL, &tag); rc = read_one_response(master, 0, NULL, &tag);
if (rc < 0) { if (rc < 0) {
dev_err(master->dev, dev_err(master->dev,
"TERM failed; lost communication with slave\n"); "TERM failed; lost communication with slave\n");
return -EIO; return -EIO;
} else if (tag != FSI_GPIO_RESP_ACK) { } else if (tag != FSI_RESP_ACK) {
dev_err(master->dev, "TERM failed; response %d\n", tag); dev_err(master->dev, "TERM failed; response %d\n", tag);
return -EIO; return -EIO;
} }
@ -350,16 +425,39 @@ static int poll_for_response(struct fsi_master_gpio *master,
{ {
struct fsi_gpio_msg response, cmd; struct fsi_gpio_msg response, cmd;
int busy_count = 0, rc, i; int busy_count = 0, rc, i;
unsigned long flags;
uint8_t tag; uint8_t tag;
uint8_t *data_byte = data; uint8_t *data_byte = data;
int crc_err_retries = 0;
retry: retry:
rc = read_one_response(master, size, &response, &tag); rc = read_one_response(master, size, &response, &tag);
if (rc)
return rc; /* Handle retries on CRC errors */
if (rc == -EAGAIN) {
/* Too many retries ? */
if (crc_err_retries++ > FSI_CRC_ERR_RETRIES) {
/*
* Pass it up as a -EIO otherwise upper level will retry
* the whole command which isn't what we want here.
*/
rc = -EIO;
goto fail;
}
dev_dbg(master->dev,
"CRC error retry %d\n", crc_err_retries);
trace_fsi_master_gpio_crc_rsp_error(master);
build_epoll_command(&cmd, slave);
local_irq_save(flags);
clock_zeros(master, FSI_MASTER_EPOLL_CLOCKS);
serial_out(master, &cmd);
echo_delay(master);
local_irq_restore(flags);
goto retry;
} else if (rc)
goto fail;
switch (tag) { switch (tag) {
case FSI_GPIO_RESP_ACK: case FSI_RESP_ACK:
if (size && data) { if (size && data) {
uint64_t val = response.msg; uint64_t val = response.msg;
/* clear crc & mask */ /* clear crc & mask */
@ -372,58 +470,90 @@ retry:
} }
} }
break; break;
case FSI_GPIO_RESP_BUSY: case FSI_RESP_BUSY:
/* /*
* Its necessary to clock slave before issuing * Its necessary to clock slave before issuing
* d-poll, not indicated in the hardware protocol * d-poll, not indicated in the hardware protocol
* spec. < 20 clocks causes slave to hang, 21 ok. * spec. < 20 clocks causes slave to hang, 21 ok.
*/ */
clock_zeros(master, FSI_GPIO_DPOLL_CLOCKS); if (busy_count++ < FSI_MASTER_MAX_BUSY) {
if (busy_count++ < FSI_GPIO_MAX_BUSY) {
build_dpoll_command(&cmd, slave); build_dpoll_command(&cmd, slave);
local_irq_save(flags);
clock_zeros(master, FSI_MASTER_DPOLL_CLOCKS);
serial_out(master, &cmd); serial_out(master, &cmd);
echo_delay(master); echo_delay(master);
local_irq_restore(flags);
goto retry; goto retry;
} }
dev_warn(master->dev, dev_warn(master->dev,
"ERR slave is stuck in busy state, issuing TERM\n"); "ERR slave is stuck in busy state, issuing TERM\n");
local_irq_save(flags);
clock_zeros(master, FSI_MASTER_DPOLL_CLOCKS);
local_irq_restore(flags);
issue_term(master, slave); issue_term(master, slave);
rc = -EIO; rc = -EIO;
break; break;
case FSI_GPIO_RESP_ERRA: case FSI_RESP_ERRA:
case FSI_GPIO_RESP_ERRC: dev_dbg(master->dev, "ERRA received: 0x%x\n", (int)response.msg);
dev_dbg(master->dev, "ERR%c received: 0x%x\n",
tag == FSI_GPIO_RESP_ERRA ? 'A' : 'C',
(int)response.msg);
fsi_master_gpio_error(master, response.msg);
rc = -EIO; rc = -EIO;
break; break;
case FSI_RESP_ERRC:
dev_dbg(master->dev, "ERRC received: 0x%x\n", (int)response.msg);
trace_fsi_master_gpio_crc_cmd_error(master);
rc = -EAGAIN;
break;
} }
/* Clock the slave enough to be ready for next operation */ if (busy_count > 0)
clock_zeros(master, FSI_GPIO_PRIME_SLAVE_CLOCKS); trace_fsi_master_gpio_poll_response_busy(master, busy_count);
fail:
/*
* tSendDelay clocks, avoids signal reflections when switching
* from receive of response back to send of data.
*/
local_irq_save(flags);
clock_zeros(master, master->t_send_delay);
local_irq_restore(flags);
return rc; return rc;
} }
static int send_request(struct fsi_master_gpio *master,
struct fsi_gpio_msg *cmd)
{
unsigned long flags;
if (master->external_mode)
return -EBUSY;
local_irq_save(flags);
serial_out(master, cmd);
echo_delay(master);
local_irq_restore(flags);
return 0;
}
static int fsi_master_gpio_xfer(struct fsi_master_gpio *master, uint8_t slave, static int fsi_master_gpio_xfer(struct fsi_master_gpio *master, uint8_t slave,
struct fsi_gpio_msg *cmd, size_t resp_len, void *resp) struct fsi_gpio_msg *cmd, size_t resp_len, void *resp)
{ {
unsigned long flags; int rc = -EAGAIN, retries = 0;
int rc;
spin_lock_irqsave(&master->cmd_lock, flags); while ((retries++) < FSI_CRC_ERR_RETRIES) {
rc = send_request(master, cmd);
if (rc)
break;
rc = poll_for_response(master, slave, resp_len, resp);
if (rc != -EAGAIN)
break;
rc = -EIO;
dev_warn(master->dev, "ECRC retry %d\n", retries);
if (master->external_mode) { /* Pace it a bit before retry */
spin_unlock_irqrestore(&master->cmd_lock, flags); msleep(1);
return -EBUSY;
} }
serial_out(master, cmd);
echo_delay(master);
rc = poll_for_response(master, slave, resp_len, resp);
spin_unlock_irqrestore(&master->cmd_lock, flags);
return rc; return rc;
} }
@ -432,12 +562,18 @@ static int fsi_master_gpio_read(struct fsi_master *_master, int link,
{ {
struct fsi_master_gpio *master = to_fsi_master_gpio(_master); struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
struct fsi_gpio_msg cmd; struct fsi_gpio_msg cmd;
int rc;
if (link != 0) if (link != 0)
return -ENODEV; return -ENODEV;
build_abs_ar_command(&cmd, id, addr, size, NULL); mutex_lock(&master->cmd_lock);
return fsi_master_gpio_xfer(master, id, &cmd, size, val); build_ar_command(master, &cmd, id, addr, size, NULL);
rc = fsi_master_gpio_xfer(master, id, &cmd, size, val);
last_address_update(master, id, rc == 0, addr);
mutex_unlock(&master->cmd_lock);
return rc;
} }
static int fsi_master_gpio_write(struct fsi_master *_master, int link, static int fsi_master_gpio_write(struct fsi_master *_master, int link,
@ -445,12 +581,18 @@ static int fsi_master_gpio_write(struct fsi_master *_master, int link,
{ {
struct fsi_master_gpio *master = to_fsi_master_gpio(_master); struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
struct fsi_gpio_msg cmd; struct fsi_gpio_msg cmd;
int rc;
if (link != 0) if (link != 0)
return -ENODEV; return -ENODEV;
build_abs_ar_command(&cmd, id, addr, size, val); mutex_lock(&master->cmd_lock);
return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); build_ar_command(master, &cmd, id, addr, size, val);
rc = fsi_master_gpio_xfer(master, id, &cmd, 0, NULL);
last_address_update(master, id, rc == 0, addr);
mutex_unlock(&master->cmd_lock);
return rc;
} }
static int fsi_master_gpio_term(struct fsi_master *_master, static int fsi_master_gpio_term(struct fsi_master *_master,
@ -458,12 +600,18 @@ static int fsi_master_gpio_term(struct fsi_master *_master,
{ {
struct fsi_master_gpio *master = to_fsi_master_gpio(_master); struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
struct fsi_gpio_msg cmd; struct fsi_gpio_msg cmd;
int rc;
if (link != 0) if (link != 0)
return -ENODEV; return -ENODEV;
mutex_lock(&master->cmd_lock);
build_term_command(&cmd, id); build_term_command(&cmd, id);
return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); rc = fsi_master_gpio_xfer(master, id, &cmd, 0, NULL);
last_address_update(master, id, false, 0);
mutex_unlock(&master->cmd_lock);
return rc;
} }
static int fsi_master_gpio_break(struct fsi_master *_master, int link) static int fsi_master_gpio_break(struct fsi_master *_master, int link)
@ -476,11 +624,14 @@ static int fsi_master_gpio_break(struct fsi_master *_master, int link)
trace_fsi_master_gpio_break(master); trace_fsi_master_gpio_break(master);
spin_lock_irqsave(&master->cmd_lock, flags); mutex_lock(&master->cmd_lock);
if (master->external_mode) { if (master->external_mode) {
spin_unlock_irqrestore(&master->cmd_lock, flags); mutex_unlock(&master->cmd_lock);
return -EBUSY; return -EBUSY;
} }
local_irq_save(flags);
set_sda_output(master, 1); set_sda_output(master, 1);
sda_out(master, 1); sda_out(master, 1);
clock_toggle(master, FSI_PRE_BREAK_CLOCKS); clock_toggle(master, FSI_PRE_BREAK_CLOCKS);
@ -489,7 +640,11 @@ static int fsi_master_gpio_break(struct fsi_master *_master, int link)
echo_delay(master); echo_delay(master);
sda_out(master, 1); sda_out(master, 1);
clock_toggle(master, FSI_POST_BREAK_CLOCKS); clock_toggle(master, FSI_POST_BREAK_CLOCKS);
spin_unlock_irqrestore(&master->cmd_lock, flags);
local_irq_restore(flags);
last_address_update(master, 0, false, 0);
mutex_unlock(&master->cmd_lock);
/* Wait for logic reset to take effect */ /* Wait for logic reset to take effect */
udelay(200); udelay(200);
@ -499,6 +654,8 @@ static int fsi_master_gpio_break(struct fsi_master *_master, int link)
static void fsi_master_gpio_init(struct fsi_master_gpio *master) static void fsi_master_gpio_init(struct fsi_master_gpio *master)
{ {
unsigned long flags;
gpiod_direction_output(master->gpio_mux, 1); gpiod_direction_output(master->gpio_mux, 1);
gpiod_direction_output(master->gpio_trans, 1); gpiod_direction_output(master->gpio_trans, 1);
gpiod_direction_output(master->gpio_enable, 1); gpiod_direction_output(master->gpio_enable, 1);
@ -506,7 +663,9 @@ static void fsi_master_gpio_init(struct fsi_master_gpio *master)
gpiod_direction_output(master->gpio_data, 1); gpiod_direction_output(master->gpio_data, 1);
/* todo: evaluate if clocks can be reduced */ /* todo: evaluate if clocks can be reduced */
local_irq_save(flags);
clock_zeros(master, FSI_INIT_CLOCKS); clock_zeros(master, FSI_INIT_CLOCKS);
local_irq_restore(flags);
} }
static void fsi_master_gpio_init_external(struct fsi_master_gpio *master) static void fsi_master_gpio_init_external(struct fsi_master_gpio *master)
@ -521,22 +680,37 @@ static void fsi_master_gpio_init_external(struct fsi_master_gpio *master)
static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link) static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link)
{ {
struct fsi_master_gpio *master = to_fsi_master_gpio(_master); struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
unsigned long flags;
int rc = -EBUSY; int rc = -EBUSY;
if (link != 0) if (link != 0)
return -ENODEV; return -ENODEV;
spin_lock_irqsave(&master->cmd_lock, flags); mutex_lock(&master->cmd_lock);
if (!master->external_mode) { if (!master->external_mode) {
gpiod_set_value(master->gpio_enable, 1); gpiod_set_value(master->gpio_enable, 1);
rc = 0; rc = 0;
} }
spin_unlock_irqrestore(&master->cmd_lock, flags); mutex_unlock(&master->cmd_lock);
return rc; return rc;
} }
static int fsi_master_gpio_link_config(struct fsi_master *_master, int link,
u8 t_send_delay, u8 t_echo_delay)
{
struct fsi_master_gpio *master = to_fsi_master_gpio(_master);
if (link != 0)
return -ENODEV;
mutex_lock(&master->cmd_lock);
master->t_send_delay = t_send_delay;
master->t_echo_delay = t_echo_delay;
mutex_unlock(&master->cmd_lock);
return 0;
}
static ssize_t external_mode_show(struct device *dev, static ssize_t external_mode_show(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
@ -550,7 +724,7 @@ static ssize_t external_mode_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count) struct device_attribute *attr, const char *buf, size_t count)
{ {
struct fsi_master_gpio *master = dev_get_drvdata(dev); struct fsi_master_gpio *master = dev_get_drvdata(dev);
unsigned long flags, val; unsigned long val;
bool external_mode; bool external_mode;
int err; int err;
@ -560,10 +734,10 @@ static ssize_t external_mode_store(struct device *dev,
external_mode = !!val; external_mode = !!val;
spin_lock_irqsave(&master->cmd_lock, flags); mutex_lock(&master->cmd_lock);
if (external_mode == master->external_mode) { if (external_mode == master->external_mode) {
spin_unlock_irqrestore(&master->cmd_lock, flags); mutex_unlock(&master->cmd_lock);
return count; return count;
} }
@ -572,7 +746,8 @@ static ssize_t external_mode_store(struct device *dev,
fsi_master_gpio_init_external(master); fsi_master_gpio_init_external(master);
else else
fsi_master_gpio_init(master); fsi_master_gpio_init(master);
spin_unlock_irqrestore(&master->cmd_lock, flags);
mutex_unlock(&master->cmd_lock);
fsi_master_rescan(&master->master); fsi_master_rescan(&master->master);
@ -582,31 +757,44 @@ static ssize_t external_mode_store(struct device *dev,
static DEVICE_ATTR(external_mode, 0664, static DEVICE_ATTR(external_mode, 0664,
external_mode_show, external_mode_store); external_mode_show, external_mode_store);
static void fsi_master_gpio_release(struct device *dev)
{
struct fsi_master_gpio *master = to_fsi_master_gpio(dev_to_fsi_master(dev));
of_node_put(dev_of_node(master->dev));
kfree(master);
}
static int fsi_master_gpio_probe(struct platform_device *pdev) static int fsi_master_gpio_probe(struct platform_device *pdev)
{ {
struct fsi_master_gpio *master; struct fsi_master_gpio *master;
struct gpio_desc *gpio; struct gpio_desc *gpio;
int rc; int rc;
master = devm_kzalloc(&pdev->dev, sizeof(*master), GFP_KERNEL); master = kzalloc(sizeof(*master), GFP_KERNEL);
if (!master) if (!master)
return -ENOMEM; return -ENOMEM;
master->dev = &pdev->dev; master->dev = &pdev->dev;
master->master.dev.parent = master->dev; master->master.dev.parent = master->dev;
master->master.dev.of_node = of_node_get(dev_of_node(master->dev)); master->master.dev.of_node = of_node_get(dev_of_node(master->dev));
master->master.dev.release = fsi_master_gpio_release;
master->last_addr = LAST_ADDR_INVALID;
gpio = devm_gpiod_get(&pdev->dev, "clock", 0); gpio = devm_gpiod_get(&pdev->dev, "clock", 0);
if (IS_ERR(gpio)) { if (IS_ERR(gpio)) {
dev_err(&pdev->dev, "failed to get clock gpio\n"); dev_err(&pdev->dev, "failed to get clock gpio\n");
return PTR_ERR(gpio); rc = PTR_ERR(gpio);
goto err_free;
} }
master->gpio_clk = gpio; master->gpio_clk = gpio;
gpio = devm_gpiod_get(&pdev->dev, "data", 0); gpio = devm_gpiod_get(&pdev->dev, "data", 0);
if (IS_ERR(gpio)) { if (IS_ERR(gpio)) {
dev_err(&pdev->dev, "failed to get data gpio\n"); dev_err(&pdev->dev, "failed to get data gpio\n");
return PTR_ERR(gpio); rc = PTR_ERR(gpio);
goto err_free;
} }
master->gpio_data = gpio; master->gpio_data = gpio;
@ -614,24 +802,38 @@ static int fsi_master_gpio_probe(struct platform_device *pdev)
gpio = devm_gpiod_get_optional(&pdev->dev, "trans", 0); gpio = devm_gpiod_get_optional(&pdev->dev, "trans", 0);
if (IS_ERR(gpio)) { if (IS_ERR(gpio)) {
dev_err(&pdev->dev, "failed to get trans gpio\n"); dev_err(&pdev->dev, "failed to get trans gpio\n");
return PTR_ERR(gpio); rc = PTR_ERR(gpio);
goto err_free;
} }
master->gpio_trans = gpio; master->gpio_trans = gpio;
gpio = devm_gpiod_get_optional(&pdev->dev, "enable", 0); gpio = devm_gpiod_get_optional(&pdev->dev, "enable", 0);
if (IS_ERR(gpio)) { if (IS_ERR(gpio)) {
dev_err(&pdev->dev, "failed to get enable gpio\n"); dev_err(&pdev->dev, "failed to get enable gpio\n");
return PTR_ERR(gpio); rc = PTR_ERR(gpio);
goto err_free;
} }
master->gpio_enable = gpio; master->gpio_enable = gpio;
gpio = devm_gpiod_get_optional(&pdev->dev, "mux", 0); gpio = devm_gpiod_get_optional(&pdev->dev, "mux", 0);
if (IS_ERR(gpio)) { if (IS_ERR(gpio)) {
dev_err(&pdev->dev, "failed to get mux gpio\n"); dev_err(&pdev->dev, "failed to get mux gpio\n");
return PTR_ERR(gpio); rc = PTR_ERR(gpio);
goto err_free;
} }
master->gpio_mux = gpio; master->gpio_mux = gpio;
/*
* Check if GPIO block is slow enough that no extra delays
* are necessary. This improves performance on ast2500 by
* an order of magnitude.
*/
master->no_delays = device_property_present(&pdev->dev, "no-gpio-delays");
/* Default FSI command delays */
master->t_send_delay = FSI_SEND_DELAY_CLOCKS;
master->t_echo_delay = FSI_ECHO_DELAY_CLOCKS;
master->master.n_links = 1; master->master.n_links = 1;
master->master.flags = FSI_MASTER_FLAG_SWCLOCK; master->master.flags = FSI_MASTER_FLAG_SWCLOCK;
master->master.read = fsi_master_gpio_read; master->master.read = fsi_master_gpio_read;
@ -639,34 +841,37 @@ static int fsi_master_gpio_probe(struct platform_device *pdev)
master->master.term = fsi_master_gpio_term; master->master.term = fsi_master_gpio_term;
master->master.send_break = fsi_master_gpio_break; master->master.send_break = fsi_master_gpio_break;
master->master.link_enable = fsi_master_gpio_link_enable; master->master.link_enable = fsi_master_gpio_link_enable;
master->master.link_config = fsi_master_gpio_link_config;
platform_set_drvdata(pdev, master); platform_set_drvdata(pdev, master);
spin_lock_init(&master->cmd_lock); mutex_init(&master->cmd_lock);
fsi_master_gpio_init(master); fsi_master_gpio_init(master);
rc = device_create_file(&pdev->dev, &dev_attr_external_mode); rc = device_create_file(&pdev->dev, &dev_attr_external_mode);
if (rc) if (rc)
return rc; goto err_free;
return fsi_master_register(&master->master); rc = fsi_master_register(&master->master);
if (rc) {
device_remove_file(&pdev->dev, &dev_attr_external_mode);
put_device(&master->master.dev);
return rc;
}
return 0;
err_free:
kfree(master);
return rc;
} }
static int fsi_master_gpio_remove(struct platform_device *pdev) static int fsi_master_gpio_remove(struct platform_device *pdev)
{ {
struct fsi_master_gpio *master = platform_get_drvdata(pdev); struct fsi_master_gpio *master = platform_get_drvdata(pdev);
devm_gpiod_put(&pdev->dev, master->gpio_clk); device_remove_file(&pdev->dev, &dev_attr_external_mode);
devm_gpiod_put(&pdev->dev, master->gpio_data);
if (master->gpio_trans)
devm_gpiod_put(&pdev->dev, master->gpio_trans);
if (master->gpio_enable)
devm_gpiod_put(&pdev->dev, master->gpio_enable);
if (master->gpio_mux)
devm_gpiod_put(&pdev->dev, master->gpio_mux);
fsi_master_unregister(&master->master);
of_node_put(master->master.dev.of_node); fsi_master_unregister(&master->master);
return 0; return 0;
} }

drivers/fsi/fsi-master-hub.c

@@ -122,7 +122,8 @@ static int hub_master_write(struct fsi_master *master, int link,
static int hub_master_break(struct fsi_master *master, int link)
{
- uint32_t addr, cmd;
uint32_t addr;
__be32 cmd;
addr = 0x4;
cmd = cpu_to_be32(0xc0de0000);
@@ -205,7 +206,7 @@ static int hub_master_init(struct fsi_master_hub *hub)
if (rc)
return rc;
- reg = ~0;
reg = cpu_to_be32(~0);
rc = fsi_device_write(dev, FSI_MSENP0, &reg, sizeof(reg));
if (rc)
return rc;

drivers/fsi/fsi-master.h

@@ -18,7 +18,41 @@
#define DRIVERS_FSI_MASTER_H
#include <linux/device.h>
#include <linux/mutex.h>
/* Various protocol delays */
#define FSI_ECHO_DELAY_CLOCKS 16 /* Number clocks for echo delay */
#define FSI_SEND_DELAY_CLOCKS 16 /* Number clocks for send delay */
#define FSI_PRE_BREAK_CLOCKS 50 /* Number clocks to prep for break */
#define FSI_BREAK_CLOCKS 256 /* Number of clocks to issue break */
#define FSI_POST_BREAK_CLOCKS 16000 /* Number clocks to set up cfam */
#define FSI_INIT_CLOCKS 5000 /* Clock out any old data */
#define FSI_MASTER_DPOLL_CLOCKS 50 /* < 21 will cause slave to hang */
#define FSI_MASTER_EPOLL_CLOCKS 50 /* Number of clocks for E_POLL retry */
/* Various retry maximums */
#define FSI_CRC_ERR_RETRIES 10
#define FSI_MASTER_MAX_BUSY 200
#define FSI_MASTER_MTOE_COUNT 1000
/* Command encodings */
#define FSI_CMD_DPOLL 0x2
#define FSI_CMD_EPOLL 0x3
#define FSI_CMD_TERM 0x3f
#define FSI_CMD_ABS_AR 0x4
#define FSI_CMD_REL_AR 0x5
#define FSI_CMD_SAME_AR 0x3 /* but only a 2-bit opcode... */
/* Slave responses */
#define FSI_RESP_ACK 0 /* Success */
#define FSI_RESP_BUSY 1 /* Slave busy */
#define FSI_RESP_ERRA 2 /* Any (misc) Error */
#define FSI_RESP_ERRC 3 /* Slave reports master CRC error */
/* Misc */
#define FSI_CRC_SIZE 4
/* fsi-master definition and flags */
#define FSI_MASTER_FLAG_SWCLOCK 0x1
struct fsi_master {
@@ -26,6 +60,7 @@ struct fsi_master {
int idx;
int n_links;
int flags;
struct mutex scan_lock;
int (*read)(struct fsi_master *, int link, uint8_t id,
uint32_t addr, void *val, size_t size);
int (*write)(struct fsi_master *, int link, uint8_t id,
@@ -33,6 +68,8 @@ struct fsi_master {
int (*term)(struct fsi_master *, int link, uint8_t id);
int (*send_break)(struct fsi_master *, int link);
int (*link_enable)(struct fsi_master *, int link);
int (*link_config)(struct fsi_master *, int link,
u8 t_send_delay, u8 t_echo_delay);
};
#define dev_to_fsi_master(d) container_of(d, struct fsi_master, dev)

drivers/fsi/fsi-sbefifo.c (new file, 1066 lines; diff suppressed because it is too large)

drivers/fsi/fsi-scom.c

@@ -20,42 +20,73 @@
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
- #include <linux/miscdevice.h>
#include <linux/cdev.h>
#include <linux/list.h>
- #include <linux/idr.h>
#include <uapi/linux/fsi.h>
#define FSI_ENGID_SCOM 0x5
#define SCOM_FSI2PIB_DELAY 50
/* SCOM engine register set */
#define SCOM_DATA0_REG 0x00
#define SCOM_DATA1_REG 0x04
#define SCOM_CMD_REG 0x08
- #define SCOM_RESET_REG 0x1C
#define SCOM_FSI2PIB_RESET_REG 0x18
#define SCOM_STATUS_REG 0x1C /* Read */
#define SCOM_PIB_RESET_REG 0x1C /* Write */
- #define SCOM_RESET_CMD 0x80000000
/* Command register */
#define SCOM_WRITE_CMD 0x80000000
#define SCOM_READ_CMD 0x00000000
/* Status register bits */
#define SCOM_STATUS_ERR_SUMMARY 0x80000000
#define SCOM_STATUS_PROTECTION 0x01000000
#define SCOM_STATUS_PARITY 0x04000000
#define SCOM_STATUS_PIB_ABORT 0x00100000
#define SCOM_STATUS_PIB_RESP_MASK 0x00007000
#define SCOM_STATUS_PIB_RESP_SHIFT 12
#define SCOM_STATUS_ANY_ERR (SCOM_STATUS_ERR_SUMMARY | \
SCOM_STATUS_PROTECTION | \
SCOM_STATUS_PARITY | \
SCOM_STATUS_PIB_ABORT | \
SCOM_STATUS_PIB_RESP_MASK)
/* SCOM address encodings */
#define XSCOM_ADDR_IND_FLAG BIT_ULL(63)
#define XSCOM_ADDR_INF_FORM1 BIT_ULL(60)
/* SCOM indirect stuff */
#define XSCOM_ADDR_DIRECT_PART 0x7fffffffull
#define XSCOM_ADDR_INDIRECT_PART 0x000fffff00000000ull
#define XSCOM_DATA_IND_READ BIT_ULL(63)
#define XSCOM_DATA_IND_COMPLETE BIT_ULL(31)
#define XSCOM_DATA_IND_ERR_MASK 0x70000000ull
#define XSCOM_DATA_IND_ERR_SHIFT 28
#define XSCOM_DATA_IND_DATA 0x0000ffffull
#define XSCOM_DATA_IND_FORM1_DATA 0x000fffffffffffffull
#define XSCOM_ADDR_FORM1_LOW 0x000ffffffffull
#define XSCOM_ADDR_FORM1_HI 0xfff00000000ull
#define XSCOM_ADDR_FORM1_HI_SHIFT 20
/* Retries */
#define SCOM_MAX_RETRIES 100 /* Retries on busy */
#define SCOM_MAX_IND_RETRIES 10 /* Retries indirect not ready */
struct scom_device {
struct list_head link;
struct fsi_device *fsi_dev;
- struct miscdevice mdev;
struct device dev;
- char name[32];
struct cdev cdev;
- int idx;
struct mutex lock;
bool dead;
};
- #define to_scom_dev(x) container_of((x), struct scom_device, mdev)
- static struct list_head scom_devices;
- static DEFINE_IDA(scom_ida);
- static int put_scom(struct scom_device *scom_dev, uint64_t value,
- uint32_t addr)
static int __put_scom(struct scom_device *scom_dev, uint64_t value,
uint32_t addr, uint32_t *status)
{
__be32 data, raw_status;
int rc;
- uint32_t data;
data = cpu_to_be32((value >> 32) & 0xffffffff);
rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA0_REG, &data,
@@ -70,53 +101,286 @@ static int put_scom(struct scom_device *scom_dev, uint64_t value,
return rc;
data = cpu_to_be32(SCOM_WRITE_CMD | addr);
- return fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data,
- sizeof(uint32_t));
- }
- static int get_scom(struct scom_device *scom_dev, uint64_t *value,
- uint32_t addr)
- {
- uint32_t result, data;
- int rc;
- *value = 0ULL;
- data = cpu_to_be32(addr);
rc = fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data,
sizeof(uint32_t));
if (rc)
return rc;
- rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA0_REG, &result,
- sizeof(uint32_t));
rc = fsi_device_read(scom_dev->fsi_dev, SCOM_STATUS_REG, &raw_status,
sizeof(uint32_t));
if (rc)
return rc;
*status = be32_to_cpu(raw_status);
- *value |= (uint64_t)cpu_to_be32(result) << 32;
- rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA1_REG, &result,
- sizeof(uint32_t));
- if (rc)
- return rc;
- *value |= cpu_to_be32(result);
return 0;
}
- static ssize_t scom_read(struct file *filep, char __user *buf, size_t len,
- loff_t *offset)
static int __get_scom(struct scom_device *scom_dev, uint64_t *value,
uint32_t addr, uint32_t *status)
{
__be32 data, raw_status;
int rc;
- struct miscdevice *mdev =
- (struct miscdevice *)filep->private_data;
- struct scom_device *scom = to_scom_dev(mdev);
*value = 0ULL;
data = cpu_to_be32(SCOM_READ_CMD | addr);
rc = fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data,
sizeof(uint32_t));
if (rc)
return rc;
rc = fsi_device_read(scom_dev->fsi_dev, SCOM_STATUS_REG, &raw_status,
sizeof(uint32_t));
if (rc)
return rc;
/*
* Read the data registers even on error, so we don't have
* to interpret the status register here.
*/
rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA0_REG, &data,
sizeof(uint32_t));
if (rc)
return rc;
*value |= (uint64_t)be32_to_cpu(data) << 32;
rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA1_REG, &data,
sizeof(uint32_t));
if (rc)
return rc;
*value |= be32_to_cpu(data);
*status = be32_to_cpu(raw_status);
return rc;
}
static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value,
uint64_t addr, uint32_t *status)
{
uint64_t ind_data, ind_addr;
int rc, retries, err = 0;
if (value & ~XSCOM_DATA_IND_DATA)
return -EINVAL;
ind_addr = addr & XSCOM_ADDR_DIRECT_PART;
ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | value;
rc = __put_scom(scom, ind_data, ind_addr, status);
if (rc || (*status & SCOM_STATUS_ANY_ERR))
return rc;
for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
rc = __get_scom(scom, &ind_data, addr, status);
if (rc || (*status & SCOM_STATUS_ANY_ERR))
return rc;
err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
return 0;
msleep(1);
}
return rc;
}
static int put_indirect_scom_form1(struct scom_device *scom, uint64_t value,
uint64_t addr, uint32_t *status)
{
uint64_t ind_data, ind_addr;
if (value & ~XSCOM_DATA_IND_FORM1_DATA)
return -EINVAL;
ind_addr = addr & XSCOM_ADDR_FORM1_LOW;
ind_data = value | (addr & XSCOM_ADDR_FORM1_HI) << XSCOM_ADDR_FORM1_HI_SHIFT;
return __put_scom(scom, ind_data, ind_addr, status);
}
static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value,
uint64_t addr, uint32_t *status)
{
uint64_t ind_data, ind_addr;
int rc, retries, err = 0;
ind_addr = addr & XSCOM_ADDR_DIRECT_PART;
ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | XSCOM_DATA_IND_READ;
rc = __put_scom(scom, ind_data, ind_addr, status);
if (rc || (*status & SCOM_STATUS_ANY_ERR))
return rc;
for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
rc = __get_scom(scom, &ind_data, addr, status);
if (rc || (*status & SCOM_STATUS_ANY_ERR))
return rc;
err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
*status = err << SCOM_STATUS_PIB_RESP_SHIFT;
*value = ind_data & XSCOM_DATA_IND_DATA;
if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
return 0;
msleep(1);
}
return rc;
}
static int raw_put_scom(struct scom_device *scom, uint64_t value,
uint64_t addr, uint32_t *status)
{
if (addr & XSCOM_ADDR_IND_FLAG) {
if (addr & XSCOM_ADDR_INF_FORM1)
return put_indirect_scom_form1(scom, value, addr, status);
else
return put_indirect_scom_form0(scom, value, addr, status);
} else
return __put_scom(scom, value, addr, status);
}
static int raw_get_scom(struct scom_device *scom, uint64_t *value,
uint64_t addr, uint32_t *status)
{
if (addr & XSCOM_ADDR_IND_FLAG) {
if (addr & XSCOM_ADDR_INF_FORM1)
return -ENXIO;
return get_indirect_scom_form0(scom, value, addr, status);
} else
return __get_scom(scom, value, addr, status);
}
static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status)
{
uint32_t dummy = -1;
if (status & SCOM_STATUS_PROTECTION)
return -EPERM;
if (status & SCOM_STATUS_PARITY) {
fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
sizeof(uint32_t));
return -EIO;
}
/* Return -EBUSY on PIB abort to force a retry */
if (status & SCOM_STATUS_PIB_ABORT)
return -EBUSY;
if (status & SCOM_STATUS_ERR_SUMMARY) {
fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
sizeof(uint32_t));
return -EIO;
}
return 0;
}
static int handle_pib_status(struct scom_device *scom, uint8_t status)
{
uint32_t dummy = -1;
if (status == SCOM_PIB_SUCCESS)
return 0;
if (status == SCOM_PIB_BLOCKED)
return -EBUSY;
/* Reset the bridge */
fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
sizeof(uint32_t));
switch(status) {
case SCOM_PIB_OFFLINE:
return -ENODEV;
case SCOM_PIB_BAD_ADDR:
return -ENXIO;
case SCOM_PIB_TIMEOUT:
return -ETIMEDOUT;
case SCOM_PIB_PARTIAL:
case SCOM_PIB_CLK_ERR:
case SCOM_PIB_PARITY_ERR:
default:
return -EIO;
}
}
static int put_scom(struct scom_device *scom, uint64_t value,
uint64_t addr)
{
uint32_t status, dummy = -1;
int rc, retries;
for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) {
rc = raw_put_scom(scom, value, addr, &status);
if (rc) {
/* Try resetting the bridge if FSI fails */
if (rc != -ENODEV && retries == 0) {
fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG,
&dummy, sizeof(uint32_t));
rc = -EBUSY;
} else
return rc;
} else
rc = handle_fsi2pib_status(scom, status);
if (rc && rc != -EBUSY)
break;
if (rc == 0) {
rc = handle_pib_status(scom,
(status & SCOM_STATUS_PIB_RESP_MASK)
>> SCOM_STATUS_PIB_RESP_SHIFT);
if (rc && rc != -EBUSY)
break;
}
if (rc == 0)
break;
msleep(1);
}
return rc;
}
static int get_scom(struct scom_device *scom, uint64_t *value,
uint64_t addr)
{
uint32_t status, dummy = -1;
int rc, retries;
for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) {
rc = raw_get_scom(scom, value, addr, &status);
if (rc) {
/* Try resetting the bridge if FSI fails */
if (rc != -ENODEV && retries == 0) {
fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG,
&dummy, sizeof(uint32_t));
rc = -EBUSY;
} else
return rc;
} else
rc = handle_fsi2pib_status(scom, status);
if (rc && rc != -EBUSY)
break;
if (rc == 0) {
rc = handle_pib_status(scom,
(status & SCOM_STATUS_PIB_RESP_MASK)
>> SCOM_STATUS_PIB_RESP_SHIFT);
if (rc && rc != -EBUSY)
break;
}
if (rc == 0)
break;
msleep(1);
}
return rc;
}
static ssize_t scom_read(struct file *filep, char __user *buf, size_t len,
loff_t *offset)
{
struct scom_device *scom = filep->private_data;
struct device *dev = &scom->fsi_dev->dev;
uint64_t val;
int rc;
if (len != sizeof(uint64_t))
return -EINVAL;
- rc = get_scom(scom, &val, *offset);
mutex_lock(&scom->lock);
if (scom->dead)
rc = -ENODEV;
else
rc = get_scom(scom, &val, *offset);
mutex_unlock(&scom->lock);
if (rc) {
dev_dbg(dev, "get_scom fail:%d\n", rc);
return rc;
@@ -130,11 +394,10 @@ static ssize_t scom_read(struct file *filep, char __user *buf, size_t len,
}
static ssize_t scom_write(struct file *filep, const char __user *buf,
size_t len, loff_t *offset)
{
int rc;
- struct miscdevice *mdev = filep->private_data;
- struct scom_device *scom = to_scom_dev(mdev);
struct scom_device *scom = filep->private_data;
struct device *dev = &scom->fsi_dev->dev;
uint64_t val;
@@ -147,7 +410,12 @@ static ssize_t scom_write(struct file *filep, const char __user *buf,
return -EINVAL;
}
- rc = put_scom(scom, val, *offset);
mutex_lock(&scom->lock);
if (scom->dead)
rc = -ENODEV;
else
rc = put_scom(scom, val, *offset);
mutex_unlock(&scom->lock);
if (rc) {
dev_dbg(dev, "put_scom failed with:%d\n", rc);
return rc;
@@ -171,50 +439,205 @@ static loff_t scom_llseek(struct file *file, loff_t offset, int whence)
return offset;
}
static void raw_convert_status(struct scom_access *acc, uint32_t status)
{
acc->pib_status = (status & SCOM_STATUS_PIB_RESP_MASK) >>
SCOM_STATUS_PIB_RESP_SHIFT;
acc->intf_errors = 0;
if (status & SCOM_STATUS_PROTECTION)
acc->intf_errors |= SCOM_INTF_ERR_PROTECTION;
else if (status & SCOM_STATUS_PARITY)
acc->intf_errors |= SCOM_INTF_ERR_PARITY;
else if (status & SCOM_STATUS_PIB_ABORT)
acc->intf_errors |= SCOM_INTF_ERR_ABORT;
else if (status & SCOM_STATUS_ERR_SUMMARY)
acc->intf_errors |= SCOM_INTF_ERR_UNKNOWN;
}
static int scom_raw_read(struct scom_device *scom, void __user *argp)
{
struct scom_access acc;
uint32_t status;
int rc;
if (copy_from_user(&acc, argp, sizeof(struct scom_access)))
return -EFAULT;
rc = raw_get_scom(scom, &acc.data, acc.addr, &status);
if (rc)
return rc;
raw_convert_status(&acc, status);
if (copy_to_user(argp, &acc, sizeof(struct scom_access)))
return -EFAULT;
return 0;
}
static int scom_raw_write(struct scom_device *scom, void __user *argp)
{
u64 prev_data, mask, data;
struct scom_access acc;
uint32_t status;
int rc;
if (copy_from_user(&acc, argp, sizeof(struct scom_access)))
return -EFAULT;
if (acc.mask) {
rc = raw_get_scom(scom, &prev_data, acc.addr, &status);
if (rc)
return rc;
if (status & SCOM_STATUS_ANY_ERR)
goto fail;
mask = acc.mask;
} else {
prev_data = mask = -1ull;
}
data = (prev_data & ~mask) | (acc.data & mask);
rc = raw_put_scom(scom, data, acc.addr, &status);
if (rc)
return rc;
fail:
raw_convert_status(&acc, status);
if (copy_to_user(argp, &acc, sizeof(struct scom_access)))
return -EFAULT;
return 0;
}
static int scom_reset(struct scom_device *scom, void __user *argp)
{
uint32_t flags, dummy = -1;
int rc = 0;
if (get_user(flags, (__u32 __user *)argp))
return -EFAULT;
if (flags & SCOM_RESET_PIB)
rc = fsi_device_write(scom->fsi_dev, SCOM_PIB_RESET_REG, &dummy,
sizeof(uint32_t));
if (!rc && (flags & (SCOM_RESET_PIB | SCOM_RESET_INTF)))
rc = fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
sizeof(uint32_t));
return rc;
}
static int scom_check(struct scom_device *scom, void __user *argp)
{
/* Still need to find out how to get "protected" */
return put_user(SCOM_CHECK_SUPPORTED, (__u32 __user *)argp);
}
static long scom_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct scom_device *scom = file->private_data;
void __user *argp = (void __user *)arg;
int rc = -ENOTTY;
mutex_lock(&scom->lock);
if (scom->dead) {
mutex_unlock(&scom->lock);
return -ENODEV;
}
switch(cmd) {
case FSI_SCOM_CHECK:
rc = scom_check(scom, argp);
break;
case FSI_SCOM_READ:
rc = scom_raw_read(scom, argp);
break;
case FSI_SCOM_WRITE:
rc = scom_raw_write(scom, argp);
break;
case FSI_SCOM_RESET:
rc = scom_reset(scom, argp);
break;
}
mutex_unlock(&scom->lock);
return rc;
}
static int scom_open(struct inode *inode, struct file *file)
{
struct scom_device *scom = container_of(inode->i_cdev, struct scom_device, cdev);
file->private_data = scom;
return 0;
}
static const struct file_operations scom_fops = {
.owner = THIS_MODULE,
- .llseek = scom_llseek,
- .read = scom_read,
- .write = scom_write,
.open = scom_open,
.llseek = scom_llseek,
.read = scom_read,
.write = scom_write,
.unlocked_ioctl = scom_ioctl,
};
static void scom_free(struct device *dev)
{
struct scom_device *scom = container_of(dev, struct scom_device, dev);
put_device(&scom->fsi_dev->dev);
kfree(scom);
}
static int scom_probe(struct device *dev)
{
- uint32_t data;
struct fsi_device *fsi_dev = to_fsi_dev(dev);
struct scom_device *scom;
int rc, didx;
- scom = devm_kzalloc(dev, sizeof(*scom), GFP_KERNEL);
scom = kzalloc(sizeof(*scom), GFP_KERNEL);
if (!scom)
return -ENOMEM;
dev_set_drvdata(dev, scom);
mutex_init(&scom->lock);
- scom->idx = ida_simple_get(&scom_ida, 1, INT_MAX, GFP_KERNEL);
- snprintf(scom->name, sizeof(scom->name), "scom%d", scom->idx);
/* Grab a reference to the device (parent of our cdev), we'll drop it later */
if (!get_device(dev)) {
kfree(scom);
return -ENODEV;
}
scom->fsi_dev = fsi_dev;
- scom->mdev.minor = MISC_DYNAMIC_MINOR;
- scom->mdev.fops = &scom_fops;
- scom->mdev.name = scom->name;
- scom->mdev.parent = dev;
- list_add(&scom->link, &scom_devices);
- data = cpu_to_be32(SCOM_RESET_CMD);
- fsi_device_write(fsi_dev, SCOM_RESET_REG, &data, sizeof(uint32_t));
- return misc_register(&scom->mdev);
/* Create chardev for userspace access */
scom->dev.type = &fsi_cdev_type;
scom->dev.parent = dev;
scom->dev.release = scom_free;
device_initialize(&scom->dev);
/* Allocate a minor in the FSI space */
rc = fsi_get_new_minor(fsi_dev, fsi_dev_scom, &scom->dev.devt, &didx);
if (rc)
goto err;
dev_set_name(&scom->dev, "scom%d", didx);
cdev_init(&scom->cdev, &scom_fops);
rc = cdev_device_add(&scom->cdev, &scom->dev);
if (rc) {
dev_err(dev, "Error %d creating char device %s\n",
rc, dev_name(&scom->dev));
goto err_free_minor;
}
return 0;
err_free_minor:
fsi_free_minor(scom->dev.devt);
err:
put_device(&scom->dev);
return rc;
}
static int scom_remove(struct device *dev)
{
- struct scom_device *scom, *scom_tmp;
- struct fsi_device *fsi_dev = to_fsi_dev(dev);
struct scom_device *scom = dev_get_drvdata(dev);
- list_for_each_entry_safe(scom, scom_tmp, &scom_devices, link) {
- if (scom->fsi_dev == fsi_dev) {
- list_del(&scom->link);
- ida_simple_remove(&scom_ida, scom->idx);
- misc_deregister(&scom->mdev);
- }
- }
mutex_lock(&scom->lock);
scom->dead = true;
mutex_unlock(&scom->lock);
cdev_device_del(&scom->cdev, &scom->dev);
fsi_free_minor(scom->dev.devt);
put_device(&scom->dev);
return 0;
}
@@ -239,20 +662,11 @@ static struct fsi_driver scom_drv = {
static int scom_init(void)
{
- INIT_LIST_HEAD(&scom_devices);
return fsi_driver_register(&scom_drv);
}
static void scom_exit(void)
{
- struct list_head *pos;
- struct scom_device *scom;
- list_for_each(pos, &scom_devices) {
- scom = list_entry(pos, struct scom_device, link);
- misc_deregister(&scom->mdev);
- devm_kfree(&scom->fsi_dev->dev, scom);
- }
fsi_driver_unregister(&scom_drv);
}
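
For orientation, the following is a rough userspace sketch of the rewritten SCOM chardev interface, not part of the patch itself. The /dev/scom1 node name and the 0xA8 address are made-up examples; the only behaviour relied on is what is visible above: reads and writes must be exactly eight bytes and the file offset selects the SCOM address (scom_llseek()/scom_read()/scom_write()).

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint64_t val;
        off_t addr = 0xA8;                      /* example SCOM address */
        int fd = open("/dev/scom1", O_RDWR);    /* node name is hypothetical */

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* 8-byte read; the file offset is interpreted as the SCOM address */
        if (pread(fd, &val, sizeof(val), addr) != sizeof(val)) {
                perror("pread");
                close(fd);
                return 1;
        }
        printf("SCOM 0x%llx = 0x%016llx\n",
               (unsigned long long)addr, (unsigned long long)val);

        /* 8-byte write back to the same address */
        val |= 1;
        if (pwrite(fd, &val, sizeof(val), addr) != sizeof(val))
                perror("pwrite");

        close(fd);
        return 0;
}

The FSI_SCOM_READ/FSI_SCOM_WRITE/FSI_SCOM_RESET ioctls added alongside this (struct scom_access, declared in the new include/uapi/linux/fsi.h) additionally expose the PIB response code and interface error bits, which a plain read()/write() can only fold into an errno.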

drivers/gnss/Kconfig (new file, 43 lines)

@@ -0,0 +1,43 @@
#
# GNSS receiver configuration
#
menuconfig GNSS
tristate "GNSS receiver support"
---help---
Say Y here if you have a GNSS receiver (e.g. a GPS receiver).
To compile this driver as a module, choose M here: the module will
be called gnss.
if GNSS
config GNSS_SERIAL
tristate
config GNSS_SIRF_SERIAL
tristate "SiRFstar GNSS receiver support"
depends on SERIAL_DEV_BUS
---help---
Say Y here if you have a SiRFstar-based GNSS receiver which uses a
serial interface.
To compile this driver as a module, choose M here: the module will
be called gnss-sirf.
If unsure, say N.
config GNSS_UBX_SERIAL
tristate "u-blox GNSS receiver support"
depends on SERIAL_DEV_BUS
select GNSS_SERIAL
---help---
Say Y here if you have a u-blox GNSS receiver which uses a serial
interface.
To compile this driver as a module, choose M here: the module will
be called gnss-ubx.
If unsure, say N.
endif # GNSS

drivers/gnss/Makefile (new file, 16 lines)

@@ -0,0 +1,16 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the GNSS subsystem.
#
obj-$(CONFIG_GNSS) += gnss.o
gnss-y := core.o
obj-$(CONFIG_GNSS_SERIAL) += gnss-serial.o
gnss-serial-y := serial.o
obj-$(CONFIG_GNSS_SIRF_SERIAL) += gnss-sirf.o
gnss-sirf-y := sirf.o
obj-$(CONFIG_GNSS_UBX_SERIAL) += gnss-ubx.o
gnss-ubx-y := ubx.o

drivers/gnss/core.c (new file, 420 lines)

@@ -0,0 +1,420 @@
// SPDX-License-Identifier: GPL-2.0
/*
* GNSS receiver core
*
* Copyright (C) 2018 Johan Hovold <johan@kernel.org>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/cdev.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/gnss.h>
#include <linux/idr.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/wait.h>
#define GNSS_FLAG_HAS_WRITE_RAW BIT(0)
#define GNSS_MINORS 16
static DEFINE_IDA(gnss_minors);
static dev_t gnss_first;
/* FIFO size must be a power of two */
#define GNSS_READ_FIFO_SIZE 4096
#define GNSS_WRITE_BUF_SIZE 1024
#define to_gnss_device(d) container_of((d), struct gnss_device, dev)
static int gnss_open(struct inode *inode, struct file *file)
{
struct gnss_device *gdev;
int ret = 0;
gdev = container_of(inode->i_cdev, struct gnss_device, cdev);
get_device(&gdev->dev);
nonseekable_open(inode, file);
file->private_data = gdev;
down_write(&gdev->rwsem);
if (gdev->disconnected) {
ret = -ENODEV;
goto unlock;
}
if (gdev->count++ == 0) {
ret = gdev->ops->open(gdev);
if (ret)
gdev->count--;
}
unlock:
up_write(&gdev->rwsem);
if (ret)
put_device(&gdev->dev);
return ret;
}
static int gnss_release(struct inode *inode, struct file *file)
{
struct gnss_device *gdev = file->private_data;
down_write(&gdev->rwsem);
if (gdev->disconnected)
goto unlock;
if (--gdev->count == 0) {
gdev->ops->close(gdev);
kfifo_reset(&gdev->read_fifo);
}
unlock:
up_write(&gdev->rwsem);
put_device(&gdev->dev);
return 0;
}
static ssize_t gnss_read(struct file *file, char __user *buf,
size_t count, loff_t *pos)
{
struct gnss_device *gdev = file->private_data;
unsigned int copied;
int ret;
mutex_lock(&gdev->read_mutex);
while (kfifo_is_empty(&gdev->read_fifo)) {
mutex_unlock(&gdev->read_mutex);
if (gdev->disconnected)
return 0;
if (file->f_flags & O_NONBLOCK)
return -EAGAIN;
ret = wait_event_interruptible(gdev->read_queue,
gdev->disconnected ||
!kfifo_is_empty(&gdev->read_fifo));
if (ret)
return -ERESTARTSYS;
mutex_lock(&gdev->read_mutex);
}
ret = kfifo_to_user(&gdev->read_fifo, buf, count, &copied);
if (ret == 0)
ret = copied;
mutex_unlock(&gdev->read_mutex);
return ret;
}
static ssize_t gnss_write(struct file *file, const char __user *buf,
size_t count, loff_t *pos)
{
struct gnss_device *gdev = file->private_data;
size_t written = 0;
int ret;
if (gdev->disconnected)
return -EIO;
if (!count)
return 0;
if (!(gdev->flags & GNSS_FLAG_HAS_WRITE_RAW))
return -EIO;
/* Ignoring O_NONBLOCK, write_raw() is synchronous. */
ret = mutex_lock_interruptible(&gdev->write_mutex);
if (ret)
return -ERESTARTSYS;
for (;;) {
size_t n = count - written;
if (n > GNSS_WRITE_BUF_SIZE)
n = GNSS_WRITE_BUF_SIZE;
if (copy_from_user(gdev->write_buf, buf, n)) {
ret = -EFAULT;
goto out_unlock;
}
/*
* Assumes write_raw can always accept GNSS_WRITE_BUF_SIZE
* bytes.
*
* FIXME: revisit
*/
down_read(&gdev->rwsem);
if (!gdev->disconnected)
ret = gdev->ops->write_raw(gdev, gdev->write_buf, n);
else
ret = -EIO;
up_read(&gdev->rwsem);
if (ret < 0)
break;
written += ret;
buf += ret;
if (written == count)
break;
}
if (written)
ret = written;
out_unlock:
mutex_unlock(&gdev->write_mutex);
return ret;
}
static __poll_t gnss_poll(struct file *file, poll_table *wait)
{
struct gnss_device *gdev = file->private_data;
__poll_t mask = 0;
poll_wait(file, &gdev->read_queue, wait);
if (!kfifo_is_empty(&gdev->read_fifo))
mask |= EPOLLIN | EPOLLRDNORM;
if (gdev->disconnected)
mask |= EPOLLHUP;
return mask;
}
static const struct file_operations gnss_fops = {
.owner = THIS_MODULE,
.open = gnss_open,
.release = gnss_release,
.read = gnss_read,
.write = gnss_write,
.poll = gnss_poll,
.llseek = no_llseek,
};
static struct class *gnss_class;
static void gnss_device_release(struct device *dev)
{
struct gnss_device *gdev = to_gnss_device(dev);
kfree(gdev->write_buf);
kfifo_free(&gdev->read_fifo);
ida_simple_remove(&gnss_minors, gdev->id);
kfree(gdev);
}
struct gnss_device *gnss_allocate_device(struct device *parent)
{
struct gnss_device *gdev;
struct device *dev;
int id;
int ret;
gdev = kzalloc(sizeof(*gdev), GFP_KERNEL);
if (!gdev)
return NULL;
id = ida_simple_get(&gnss_minors, 0, GNSS_MINORS, GFP_KERNEL);
if (id < 0) {
kfree(gdev);
return NULL;
}
gdev->id = id;
dev = &gdev->dev;
device_initialize(dev);
dev->devt = gnss_first + id;
dev->class = gnss_class;
dev->parent = parent;
dev->release = gnss_device_release;
dev_set_drvdata(dev, gdev);
dev_set_name(dev, "gnss%d", id);
init_rwsem(&gdev->rwsem);
mutex_init(&gdev->read_mutex);
mutex_init(&gdev->write_mutex);
init_waitqueue_head(&gdev->read_queue);
ret = kfifo_alloc(&gdev->read_fifo, GNSS_READ_FIFO_SIZE, GFP_KERNEL);
if (ret)
goto err_put_device;
gdev->write_buf = kzalloc(GNSS_WRITE_BUF_SIZE, GFP_KERNEL);
if (!gdev->write_buf)
goto err_put_device;
cdev_init(&gdev->cdev, &gnss_fops);
gdev->cdev.owner = THIS_MODULE;
return gdev;
err_put_device:
put_device(dev);
return NULL;
}
EXPORT_SYMBOL_GPL(gnss_allocate_device);
void gnss_put_device(struct gnss_device *gdev)
{
put_device(&gdev->dev);
}
EXPORT_SYMBOL_GPL(gnss_put_device);
int gnss_register_device(struct gnss_device *gdev)
{
int ret;
/* Set a flag which can be accessed without holding the rwsem. */
if (gdev->ops->write_raw != NULL)
gdev->flags |= GNSS_FLAG_HAS_WRITE_RAW;
ret = cdev_device_add(&gdev->cdev, &gdev->dev);
if (ret) {
dev_err(&gdev->dev, "failed to add device: %d\n", ret);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(gnss_register_device);
void gnss_deregister_device(struct gnss_device *gdev)
{
down_write(&gdev->rwsem);
gdev->disconnected = true;
if (gdev->count) {
wake_up_interruptible(&gdev->read_queue);
gdev->ops->close(gdev);
}
up_write(&gdev->rwsem);
cdev_device_del(&gdev->cdev, &gdev->dev);
}
EXPORT_SYMBOL_GPL(gnss_deregister_device);
/*
* Caller guarantees serialisation.
*
* Must not be called for a closed device.
*/
int gnss_insert_raw(struct gnss_device *gdev, const unsigned char *buf,
size_t count)
{
int ret;
ret = kfifo_in(&gdev->read_fifo, buf, count);
wake_up_interruptible(&gdev->read_queue);
return ret;
}
EXPORT_SYMBOL_GPL(gnss_insert_raw);
static const char * const gnss_type_names[GNSS_TYPE_COUNT] = {
[GNSS_TYPE_NMEA] = "NMEA",
[GNSS_TYPE_SIRF] = "SiRF",
[GNSS_TYPE_UBX] = "UBX",
};
static const char *gnss_type_name(struct gnss_device *gdev)
{
const char *name = NULL;
if (gdev->type < GNSS_TYPE_COUNT)
name = gnss_type_names[gdev->type];
if (!name)
dev_WARN(&gdev->dev, "type name not defined\n");
return name;
}
static ssize_t type_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct gnss_device *gdev = to_gnss_device(dev);
return sprintf(buf, "%s\n", gnss_type_name(gdev));
}
static DEVICE_ATTR_RO(type);
static struct attribute *gnss_attrs[] = {
&dev_attr_type.attr,
NULL,
};
ATTRIBUTE_GROUPS(gnss);
static int gnss_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct gnss_device *gdev = to_gnss_device(dev);
int ret;
ret = add_uevent_var(env, "GNSS_TYPE=%s", gnss_type_name(gdev));
if (ret)
return ret;
return 0;
}
static int __init gnss_module_init(void)
{
int ret;
ret = alloc_chrdev_region(&gnss_first, 0, GNSS_MINORS, "gnss");
if (ret < 0) {
pr_err("failed to allocate device numbers: %d\n", ret);
return ret;
}
gnss_class = class_create(THIS_MODULE, "gnss");
if (IS_ERR(gnss_class)) {
ret = PTR_ERR(gnss_class);
pr_err("failed to create class: %d\n", ret);
goto err_unregister_chrdev;
}
gnss_class->dev_groups = gnss_groups;
gnss_class->dev_uevent = gnss_uevent;
pr_info("GNSS driver registered with major %d\n", MAJOR(gnss_first));
return 0;
err_unregister_chrdev:
unregister_chrdev_region(gnss_first, GNSS_MINORS);
return ret;
}
module_init(gnss_module_init);
static void __exit gnss_module_exit(void)
{
class_destroy(gnss_class);
unregister_chrdev_region(gnss_first, GNSS_MINORS);
ida_destroy(&gnss_minors);
}
module_exit(gnss_module_exit);
MODULE_AUTHOR("Johan Hovold <johan@kernel.org>");
MODULE_DESCRIPTION("GNSS receiver core");
MODULE_LICENSE("GPL v2");
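
To make the shape of the new subsystem concrete, a minimal receiver driver would look roughly like the sketch below. It only uses the interfaces exported above (gnss_allocate_device(), gnss_register_device(), gnss_insert_raw(), gnss_deregister_device(), gnss_put_device()); the struct gnss_operations name and member layout are assumed from include/linux/gnss.h, which is not part of this excerpt, and all demo_* names are invented.

#include <linux/errno.h>
#include <linux/gnss.h>
#include <linux/types.h>

/* Per-receiver callbacks the core invokes on first open / last close / write. */
static int demo_gnss_open(struct gnss_device *gdev)
{
        /* Power up the receiver and start the underlying transport. */
        return 0;
}

static void demo_gnss_close(struct gnss_device *gdev)
{
        /* Stop the transport / power the receiver down. */
}

static int demo_gnss_write_raw(struct gnss_device *gdev,
                               const unsigned char *buf, size_t count)
{
        /* Push bytes towards the receiver; return how many were taken. */
        return count;
}

static const struct gnss_operations demo_gnss_ops = {
        .open      = demo_gnss_open,
        .close     = demo_gnss_close,
        .write_raw = demo_gnss_write_raw,
};

static struct gnss_device *demo_gdev;

static int demo_attach(struct device *parent)
{
        int ret;

        demo_gdev = gnss_allocate_device(parent);
        if (!demo_gdev)
                return -ENOMEM;

        demo_gdev->type = GNSS_TYPE_NMEA;
        demo_gdev->ops = &demo_gnss_ops;

        ret = gnss_register_device(demo_gdev);
        if (ret) {
                gnss_put_device(demo_gdev);
                return ret;
        }
        return 0;
}

/* Receive path: feed raw bytes from the transport into the core's kfifo. */
static void demo_receive(const unsigned char *buf, size_t count)
{
        gnss_insert_raw(demo_gdev, buf, count);
}

static void demo_detach(void)
{
        gnss_deregister_device(demo_gdev);
        gnss_put_device(demo_gdev);
}

Registration creates /dev/gnss<N> with the open/read/write/poll semantics shown in core.c, plus a sysfs "type" attribute and a GNSS_TYPE= uevent variable so userspace can tell NMEA, SiRF and UBX receivers apart.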

Some files were not shown because too many files have changed in this diff.