ARM: SoC driver updates for 4.16


Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Arnd Bergmann:
 "A number of new drivers get added this time, along with many
  low-priority bugfixes. The most interesting changes by subsystem are:

  bus drivers:
   - Updates to the Broadcom bus interface driver to support newer SoC
     types
   - The TI OMAP sysc driver now supports updated DT bindings

  memory controllers:
   - A new driver for Tegra186 gets added
   - A new driver for the ti-emif sram, to allow relocating
     suspend/resume handlers there

  SoC specific:
   - A new driver for Qualcomm QMI, the interface to the modem on MSM
     SoCs
   - A new driver for power domains on the Actions Semi S700 SoC
   - A driver for the Xilinx Zynq VCU logicoreIP

  reset controllers:
   - A new driver for Amlogic Meson-AXG
   - various bug fixes

  tee subsystem:
   - A new user interface got added to enable asynchronous communication
     with the TEE supplicant.
   - A new method of using user space memory for communication with the
     TEE is added"
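
As a rough illustration of the last tee item, the new shared-memory path is driven from user space through the TEE character device. The sketch below is illustrative only, not code from this merge; it assumes the TEE_IOC_SHM_REGISTER ioctl and struct tee_ioctl_shm_register_data from the 4.16 <linux/tee.h> UAPI header, plus a hypothetical /dev/tee0 device node:

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/tee.h>

int main(void)
{
	struct tee_ioctl_shm_register_data data;
	long page = sysconf(_SC_PAGESIZE);
	void *buf;
	int fd, shm_fd;

	fd = open("/dev/tee0", O_RDWR);
	if (fd < 0)
		err(1, "open /dev/tee0");

	/* Any page-aligned user buffer can now back a TEE shared-memory object. */
	buf = aligned_alloc(page, page);
	if (!buf)
		err(1, "aligned_alloc");

	memset(&data, 0, sizeof(data));
	data.addr = (uintptr_t)buf;
	data.length = page;

	/* On success the ioctl fills in data.id and returns a file
	 * descriptor representing the shared-memory object. */
	shm_fd = ioctl(fd, TEE_IOC_SHM_REGISTER, &data);
	if (shm_fd < 0)
		err(1, "TEE_IOC_SHM_REGISTER");

	close(shm_fd);
	close(fd);
	free(buf);
	return 0;
}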

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (84 commits)
  of: platform: fix OF node refcount leak
  soc: fsl: guts: Add a NULL check for devm_kasprintf()
  bus: ti-sysc: Fix smartreflex sysc mask
  psci: add CPU_IDLE dependency
  soc: xilinx: Fix Kconfig alignment
  soc: xilinx: xlnx_vcu: Use bitwise & rather than logical && on clkoutdiv
  soc: xilinx: xlnx_vcu: Depends on HAS_IOMEM for xlnx_vcu
  soc: bcm: brcmstb: Be multi-platform compatible
  soc: brcmstb: biuctrl: exit without warning on non brcmstb platforms
  Revert "soc: brcmstb: Only register SoC device on STB platforms"
  bus: omap: add MODULE_LICENSE tags
  soc: brcmstb: Only register SoC device on STB platforms
  tee: shm: Potential NULL dereference calling tee_shm_register()
  soc: xilinx: xlnx_vcu: Add Xilinx ZYNQMP VCU logicoreIP init driver
  dt-bindings: soc: xilinx: Add DT bindings to xlnx_vcu driver
  soc: xilinx: Create folder structure for soc specific drivers
  of: platform: populate /firmware/ node from of_platform_default_populate_init()
  soc: samsung: Add SPDX license identifiers
  soc: qcom: smp2p: Use common error handling code in qcom_smp2p_probe()
  tee: shm: don't put_page on null shm->pages
  ...
Merged by Linus Torvalds on 2018-02-01 16:35:31 -08:00 in commit fe53d1443a.
93 changed files with 6914 additions and 740 deletions


@ -17,21 +17,23 @@ Further, syscon nodes that map platform-specific registers used for general
system control are required:
- compatible: "brcm,bcm<chip_id>-sun-top-ctrl", "syscon"
- compatible: "brcm,bcm<chip_id>-hif-cpubiuctrl", "syscon"
- compatible: "brcm,bcm<chip_id>-cpu-biu-ctrl",
"brcm,brcmstb-cpu-biu-ctrl",
"syscon"
- compatible: "brcm,bcm<chip_id>-hif-continuation", "syscon"
hif-cpubiuctrl node
cpu-biu-ctrl node
-------------------
SoCs with Broadcom Brahma15 ARM-based CPUs have a specific Bus Interface Unit
(BIU) block which controls and interfaces the CPU complex to the different
Memory Controller Ports (MCP), one per memory controller (MEMC). This BIU block
offers a feature called Write Pairing which consists in collapsing two adjacent
cache lines into a single (bursted) write transaction towards the memory
controller (MEMC) to maximize write bandwidth.
SoCs with Broadcom Brahma15 ARM-based and Brahma53 ARM64-based CPUs have a
specific Bus Interface Unit (BIU) block which controls and interfaces the CPU
complex to the different Memory Controller Ports (MCP), one per memory
controller (MEMC). This BIU block offers a feature called Write Pairing which
consists in collapsing two adjacent cache lines into a single (bursted) write
transaction towards the memory controller (MEMC) to maximize write bandwidth.
Required properties:
- compatible: must be "brcm,bcm7445-hif-cpubiuctrl", "syscon"
- compatible: must be "brcm,bcm7445-cpu-biu-ctrl", "brcm,brcmstb-cpu-biu-ctrl", "syscon"
Optional properties:
@ -52,7 +54,7 @@ example:
};
hif_cpubiuctrl: syscon@3e2400 {
compatible = "brcm,bcm7445-hif-cpubiuctrl", "syscon";
compatible = "brcm,bcm7445-cpu-biu-ctrl", "brcm,brcmstb-cpu-biu-ctrl", "syscon";
reg = <0x3e2400 0x5b4>;
brcm,write-pairing;
};


@ -169,6 +169,7 @@ described below.
"arm,cortex-r5"
"arm,cortex-r7"
"brcm,brahma-b15"
"brcm,brahma-b53"
"brcm,vulcan"
"cavium,thunder"
"cavium,thunder2"


@ -19,6 +19,7 @@ Required standard properties:
- compatible shall be one of the following generic types:
"ti,sysc"
"ti,sysc-omap2"
"ti,sysc-omap4"
"ti,sysc-omap4-simple"


@ -23,6 +23,13 @@ Required properties:
the value shall be "emif<n>" where <n> is the number of the EMIF
instance with base 1.
Required only for "ti,emif-am3352" and "ti,emif-am4372":
- sram : phandles for generic SRAM driver nodes:
the first should be of type 'protect-exec' for the driver to use to copy
and run PM functions, the second should be a regular pool to be used as
the data region for that code. See Documentation/devicetree/bindings/sram/sram.txt
for more details.
Optional properties:
- cs1-used : Have this property if CS1 of this EMIF
instance has a memory part attached to it. If there is a memory
@ -44,7 +51,7 @@ Optional properties:
- hw-caps-temp-alert : Have this property if the controller
has capability for generating SDRAM temperature alerts
Example:
Examples:
emif1: emif@4c000000 {
compatible = "ti,emif-4d";
@ -56,3 +63,11 @@ emif1: emif@4c000000 {
hw-caps-ll-interface;
hw-caps-temp-alert;
};
/* From am33xx.dtsi */
emif: emif@4c000000 {
compatible = "ti,emif-am3352";
reg = <0x4C000000 0x1000>;
sram = <&pm_sram_code
&pm_sram_data>;
};


@ -9,6 +9,7 @@ Required properties:
- fsl,imx6q-gpc
- fsl,imx6qp-gpc
- fsl,imx6sl-gpc
- fsl,imx6sx-gpc
- reg: should be register base and length as documented in the
datasheet
- interrupts: Should contain one interrupt specifier for the GPC interrupt
@ -29,6 +30,8 @@ Required properties:
PU_DOMAIN 1
The following additional DOMAIN_INDEX value is valid for i.MX6SL:
DISPLAY_DOMAIN 2
The following additional DOMAIN_INDEX value is valid for i.MX6SX:
PCI_DOMAIN 3
- #power-domain-cells: Should be 0


@ -5,7 +5,8 @@ Please also refer to reset.txt in this directory for common reset
controller binding usage.
Required properties:
- compatible: Should be "amlogic,meson8b-reset" or "amlogic,meson-gxbb-reset"
- compatible: Should be "amlogic,meson8b-reset", "amlogic,meson-gxbb-reset" or
"amlogic,meson-axg-reset".
- reg: should contain the register address base
- #reset-cells: 1, see below


@ -17,9 +17,15 @@ processor ID) and a string identifier.
Value type: <prop-encoded-array>
Definition: one entry specifying the smp2p notification interrupt
- qcom,ipc:
- mboxes:
Usage: required
Value type: <prop-encoded-array>
Definition: reference to the associated doorbell in APCS, as described
in mailbox/mailbox.txt
- qcom,ipc:
Usage: required, unless mboxes is specified
Value type: <prop-encoded-array>
Definition: three entries specifying the outgoing ipc bit used for
signaling the remote end of the smp2p edge:
- phandle to a syscon node representing the apcs registers


@ -0,0 +1,31 @@
LogicoreIP designed to be compatible with the Xilinx ZYNQ family.
-------------------------------------------------------
General concept
---------------
The LogicoreIP is designed to provide isolation between the processing system
and the programmable logic. It also provides a set of registers to configure
the frequency.
Required properties:
- compatible: shall be one of:
"xlnx,vcu"
"xlnx,vcu-logicoreip-1.0"
- reg, reg-names: Two register sets need to be provided:
1. vcu slcr
2. Logicore
reg-names should contain a name for each register set.
- clocks: phandles for the aclk and pll_ref clock sources
- clock-names: The identification string "aclk" is always required for
the AXI clock. "pll_ref" is required for the PLL.
Example:
xlnx_vcu: vcu@a0040000 {
compatible = "xlnx,vcu-logicoreip-1.0";
reg = <0x0 0xa0040000 0x0 0x1000>,
<0x0 0xa0041000 0x0 0x1000>;
reg-names = "vcu_slcr", "logicore";
clocks = <&si570_1>, <&clkc 71>;
clock-names = "pll_ref", "aclk";
};


@ -1523,7 +1523,7 @@
};
target-module@4a0dd000 {
compatible = "ti,sysc-omap4-sr";
compatible = "ti,sysc-omap4-sr", "ti,sysc";
ti,hwmods = "smartreflex_core";
reg = <0x4a0dd038 0x4>;
reg-names = "sysc";
@ -1542,7 +1542,7 @@
};
target-module@4a0d9000 {
compatible = "ti,sysc-omap4-sr";
compatible = "ti,sysc-omap4-sr", "ti,sysc";
ti,hwmods = "smartreflex_mpu";
reg = <0x4a0d9038 0x4>;
reg-names = "sysc";


@ -395,7 +395,7 @@
};
target-module@48076000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "slimbus2";
reg = <0x48076000 0x4>,
<0x48076010 0x4>;
@ -475,7 +475,7 @@
};
target-module@4a0db000 {
compatible = "ti,sysc-sr";
compatible = "ti,sysc-omap4-sr", "ti,sysc";
ti,hwmods = "smartreflex_iva";
reg = <0x4a0db038 0x4>;
reg-names = "sysc";
@ -498,7 +498,7 @@
};
target-module@4a0dd000 {
compatible = "ti,sysc-sr";
compatible = "ti,sysc-omap4-sr", "ti,sysc";
ti,hwmods = "smartreflex_core";
reg = <0x4a0dd038 0x4>;
reg-names = "sysc";
@ -521,7 +521,7 @@
};
target-module@4a0d9000 {
compatible = "ti,sysc-sr";
compatible = "ti,sysc-omap4-sr", "ti,sysc";
ti,hwmods = "smartreflex_mpu";
reg = <0x4a0d9038 0x4>;
reg-names = "sysc";
@ -747,7 +747,7 @@
};
target-module@52000000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "iss";
reg = <0x52000000 0x4>,
<0x52000010 0x4>;
@ -866,7 +866,7 @@
};
target-module@40128000 {
compatible = "ti,sysc-mcasp";
compatible = "ti,sysc-mcasp", "ti,sysc";
ti,hwmods = "mcasp";
reg = <0x40128000 0x4>,
<0x40128004 0x4>;
@ -891,7 +891,7 @@
};
target-module@4012c000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "slimbus1";
reg = <0x4012c000 0x4>,
<0x4012c010 0x4>;
@ -912,7 +912,7 @@
};
target-module@401f1000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "aess";
reg = <0x401f1000 0x4>,
<0x401f1010 0x4>;
@ -1027,7 +1027,7 @@
};
target-module@4a10a000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "fdif";
reg = <0x4a10a000 0x4>,
<0x4a10a010 0x4>;
@ -1266,7 +1266,7 @@
};
target-module@56000000 {
compatible = "ti,sysc-omap4";
compatible = "ti,sysc-omap4", "ti,sysc";
ti,hwmods = "gpu";
reg = <0x5601fc00 0x4>,
<0x5601fc10 0x4>;


@ -14,7 +14,6 @@
#include <linux/init.h>
#include <linux/irqchip.h>
#include <linux/of_platform.h>
#include <linux/soc/brcmstb/brcmstb.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>
@ -38,7 +37,6 @@ u32 brcmstb_uart_config[3] = {
static void __init brcmstb_init_irq(void)
{
irqchip_init();
brcmstb_biuctrl_init();
}
static const char *const brcmstb_match[] __initconst = {


@ -143,6 +143,8 @@
#include <linux/of_address.h>
#include <linux/bootmem.h>
#include <linux/platform_data/ti-sysc.h>
#include <asm/system_misc.h>
#include "clock.h"


@ -37,9 +37,15 @@
struct omap_device;
extern struct omap_hwmod_sysc_fields omap_hwmod_sysc_type1;
extern struct omap_hwmod_sysc_fields omap_hwmod_sysc_type2;
extern struct omap_hwmod_sysc_fields omap_hwmod_sysc_type3;
extern struct sysc_regbits omap_hwmod_sysc_type1;
extern struct sysc_regbits omap_hwmod_sysc_type2;
extern struct sysc_regbits omap_hwmod_sysc_type3;
extern struct sysc_regbits omap34xx_sr_sysc_fields;
extern struct sysc_regbits omap36xx_sr_sysc_fields;
extern struct sysc_regbits omap3_sham_sysc_fields;
extern struct sysc_regbits omap3xxx_aes_sysc_fields;
extern struct sysc_regbits omap_hwmod_sysc_type_mcasp;
extern struct sysc_regbits omap_hwmod_sysc_type_usb_host_fs;
/*
* OCP SYSCONFIG bit shifts/masks TYPE1. These are for IPs compliant
@ -284,26 +290,6 @@ struct omap_hwmod_ocp_if {
#define CLOCKACT_TEST_ICLK 0x2
#define CLOCKACT_TEST_NONE 0x3
/**
* struct omap_hwmod_sysc_fields - hwmod OCP_SYSCONFIG register field offsets.
* @midle_shift: Offset of the midle bit
* @clkact_shift: Offset of the clockactivity bit
* @sidle_shift: Offset of the sidle bit
* @enwkup_shift: Offset of the enawakeup bit
* @srst_shift: Offset of the softreset bit
* @autoidle_shift: Offset of the autoidle bit
* @dmadisable_shift: Offset of the dmadisable bit
*/
struct omap_hwmod_sysc_fields {
u8 midle_shift;
u8 clkact_shift;
u8 sidle_shift;
u8 enwkup_shift;
u8 srst_shift;
u8 autoidle_shift;
u8 dmadisable_shift;
};
/**
* struct omap_hwmod_class_sysconfig - hwmod class OCP_SYS* data
* @rev_offs: IP block revision register offset (from module base addr)
@ -335,7 +321,7 @@ struct omap_hwmod_class_sysconfig {
u32 sysc_offs;
u32 syss_offs;
u16 sysc_flags;
struct omap_hwmod_sysc_fields *sysc_fields;
struct sysc_regbits *sysc_fields;
u8 srst_udelay;
u8 idlemodes;
};


@ -1108,10 +1108,6 @@ static struct omap_hwmod omap3xxx_mcbsp3_sidetone_hwmod = {
};
/* SR common */
static struct omap_hwmod_sysc_fields omap34xx_sr_sysc_fields = {
.clkact_shift = 20,
};
static struct omap_hwmod_class_sysconfig omap34xx_sr_sysc = {
.sysc_offs = 0x24,
.sysc_flags = (SYSC_HAS_CLOCKACTIVITY | SYSC_NO_CACHE),
@ -1124,11 +1120,6 @@ static struct omap_hwmod_class omap34xx_smartreflex_hwmod_class = {
.rev = 1,
};
static struct omap_hwmod_sysc_fields omap36xx_sr_sysc_fields = {
.sidle_shift = 24,
.enwkup_shift = 26,
};
static struct omap_hwmod_class_sysconfig omap36xx_sr_sysc = {
.sysc_offs = 0x38,
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
@ -2607,12 +2598,6 @@ static struct omap_hwmod_ocp_if omap3xxx_l3_main__gpmc = {
};
/* l4_core -> SHAM2 (SHA1/MD5) (similar to omap24xx) */
static struct omap_hwmod_sysc_fields omap3_sham_sysc_fields = {
.sidle_shift = 4,
.srst_shift = 1,
.autoidle_shift = 0,
};
static struct omap_hwmod_class_sysconfig omap3_sham_sysc = {
.rev_offs = 0x5c,
.sysc_offs = 0x60,
@ -2651,12 +2636,6 @@ static struct omap_hwmod_ocp_if omap3xxx_l4_core__sham = {
};
/* l4_core -> AES */
static struct omap_hwmod_sysc_fields omap3xxx_aes_sysc_fields = {
.sidle_shift = 6,
.srst_shift = 1,
.autoidle_shift = 0,
};
static struct omap_hwmod_class_sysconfig omap3_aes_sysc = {
.rev_offs = 0x44,
.sysc_offs = 0x48,


@ -1658,10 +1658,6 @@ static struct omap_hwmod omap44xx_mailbox_hwmod = {
*/
/* The IP is not compliant to type1 / type2 scheme */
static struct omap_hwmod_sysc_fields omap_hwmod_sysc_type_mcasp = {
.sidle_shift = 0,
};
static struct omap_hwmod_class_sysconfig omap44xx_mcasp_sysc = {
.sysc_offs = 0x0004,
.sysc_flags = SYSC_HAS_SIDLEMODE,
@ -2403,17 +2399,12 @@ static struct omap_hwmod omap44xx_slimbus2_hwmod = {
*/
/* The IP is not compliant to type1 / type2 scheme */
static struct omap_hwmod_sysc_fields omap_hwmod_sysc_type_smartreflex = {
.sidle_shift = 24,
.enwkup_shift = 26,
};
static struct omap_hwmod_class_sysconfig omap44xx_smartreflex_sysc = {
.sysc_offs = 0x0038,
.sysc_flags = (SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP),
.sysc_fields = &omap_hwmod_sysc_type_smartreflex,
.sysc_fields = &omap36xx_sr_sysc_fields,
};
static struct omap_hwmod_class omap44xx_smartreflex_hwmod_class = {
@ -2844,12 +2835,6 @@ static struct omap_hwmod omap44xx_uart4_hwmod = {
*/
/* The IP is not compliant to type1 / type2 scheme */
static struct omap_hwmod_sysc_fields omap_hwmod_sysc_type_usb_host_fs = {
.midle_shift = 4,
.sidle_shift = 2,
.srst_shift = 1,
};
static struct omap_hwmod_class_sysconfig omap44xx_usb_host_fs_sysc = {
.rev_offs = 0x0000,
.sysc_offs = 0x0210,


@ -2055,17 +2055,12 @@ static struct omap_hwmod dra7xx_sata_hwmod = {
*/
/* The IP is not compliant to type1 / type2 scheme */
static struct omap_hwmod_sysc_fields omap_hwmod_sysc_type_smartreflex = {
.sidle_shift = 24,
.enwkup_shift = 26,
};
static struct omap_hwmod_class_sysconfig dra7xx_smartreflex_sysc = {
.sysc_offs = 0x0038,
.sysc_flags = (SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE),
.idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
SIDLE_SMART_WKUP),
.sysc_fields = &omap_hwmod_sysc_type_smartreflex,
.sysc_fields = &omap36xx_sr_sysc_fields,
};
static struct omap_hwmod_class dra7xx_smartreflex_hwmod_class = {


@ -16,6 +16,9 @@
* data and their integration with other OMAP modules and Linux.
*/
#include <linux/types.h>
#include <linux/platform_data/ti-sysc.h>
#include "omap_hwmod.h"
#include "omap_hwmod_common_data.h"
@ -27,7 +30,7 @@
* if the device ip is compliant with the original PRCM protocol
* defined for OMAP2420.
*/
struct omap_hwmod_sysc_fields omap_hwmod_sysc_type1 = {
struct sysc_regbits omap_hwmod_sysc_type1 = {
.midle_shift = SYSC_TYPE1_MIDLEMODE_SHIFT,
.clkact_shift = SYSC_TYPE1_CLOCKACTIVITY_SHIFT,
.sidle_shift = SYSC_TYPE1_SIDLEMODE_SHIFT,
@ -43,7 +46,7 @@ struct omap_hwmod_sysc_fields omap_hwmod_sysc_type1 = {
* device ip is compliant with the new PRCM protocol defined for new
* OMAP4 IPs.
*/
struct omap_hwmod_sysc_fields omap_hwmod_sysc_type2 = {
struct sysc_regbits omap_hwmod_sysc_type2 = {
.midle_shift = SYSC_TYPE2_MIDLEMODE_SHIFT,
.sidle_shift = SYSC_TYPE2_SIDLEMODE_SHIFT,
.srst_shift = SYSC_TYPE2_SOFTRESET_SHIFT,
@ -54,7 +57,7 @@ struct omap_hwmod_sysc_fields omap_hwmod_sysc_type2 = {
* struct omap_hwmod_sysc_type3 - TYPE3 sysconfig scheme.
* Used by some IPs on AM33xx
*/
struct omap_hwmod_sysc_fields omap_hwmod_sysc_type3 = {
struct sysc_regbits omap_hwmod_sysc_type3 = {
.midle_shift = SYSC_TYPE3_MIDLEMODE_SHIFT,
.sidle_shift = SYSC_TYPE3_SIDLEMODE_SHIFT,
};
@ -63,3 +66,34 @@ struct omap_dss_dispc_dev_attr omap2_3_dss_dispc_dev_attr = {
.manager_count = 2,
.has_framedonetv_irq = 0
};
struct sysc_regbits omap34xx_sr_sysc_fields = {
.clkact_shift = 20,
};
struct sysc_regbits omap36xx_sr_sysc_fields = {
.sidle_shift = 24,
.enwkup_shift = 26,
};
struct sysc_regbits omap3_sham_sysc_fields = {
.sidle_shift = 4,
.srst_shift = 1,
.autoidle_shift = 0,
};
struct sysc_regbits omap3xxx_aes_sysc_fields = {
.sidle_shift = 6,
.srst_shift = 1,
.autoidle_shift = 0,
};
struct sysc_regbits omap_hwmod_sysc_type_mcasp = {
.sidle_shift = 0,
};
struct sysc_regbits omap_hwmod_sysc_type_usb_host_fs = {
.midle_shift = 4,
.sidle_shift = 2,
.srst_shift = 1,
};


@ -375,3 +375,8 @@ static void __exit omap_l3_exit(void)
platform_driver_unregister(&omap_l3_driver);
}
module_exit(omap_l3_exit);
MODULE_AUTHOR("Santosh Shilimkar");
MODULE_AUTHOR("Sricharan R");
MODULE_DESCRIPTION("OMAP L3 Interconnect error handling driver");
MODULE_LICENSE("GPL v2");


@ -309,3 +309,9 @@ static void __exit omap3_l3_exit(void)
platform_driver_unregister(&omap3_l3_driver);
}
module_exit(omap3_l3_exit);
MODULE_AUTHOR("Felipe Balbi");
MODULE_AUTHOR("Santosh Shilimkar");
MODULE_AUTHOR("Sricharan R");
MODULE_DESCRIPTION("OMAP3XXX L3 Interconnect Driver");
MODULE_LICENSE("GPL");


@ -18,6 +18,9 @@
#include <linux/pm_runtime.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_data/ti-sysc.h>
#include <dt-bindings/bus/ti-sysc.h>
enum sysc_registers {
SYSC_REVISION,
@ -36,6 +39,9 @@ enum sysc_clocks {
static const char * const clock_names[] = { "fck", "ick", };
#define SYSC_IDLEMODE_MASK 3
#define SYSC_CLOCKACTIVITY_MASK 3
/**
* struct sysc - TI sysc interconnect target module registers and capabilities
* @dev: struct device pointer
@ -45,6 +51,10 @@ static const char * const clock_names[] = { "fck", "ick", };
* @offsets: register offsets from module base
* @clocks: clocks used by the interconnect target module
* @legacy_mode: configured for legacy mode if set
* @cap: interconnect target module capabilities
* @cfg: interconnect target module configuration
* @name: name if available
* @revision: interconnect target module revision
*/
struct sysc {
struct device *dev;
@ -54,12 +64,34 @@ struct sysc {
int offsets[SYSC_MAX_REGS];
struct clk *clocks[SYSC_MAX_CLOCKS];
const char *legacy_mode;
const struct sysc_capabilities *cap;
struct sysc_config cfg;
const char *name;
u32 revision;
};
static u32 sysc_read(struct sysc *ddata, int offset)
{
if (ddata->cfg.quirks & SYSC_QUIRK_16BIT) {
u32 val;
val = readw_relaxed(ddata->module_va + offset);
val |= (readw_relaxed(ddata->module_va + offset + 4) << 16);
return val;
}
return readl_relaxed(ddata->module_va + offset);
}
static u32 sysc_read_revision(struct sysc *ddata)
{
return readl_relaxed(ddata->module_va +
ddata->offsets[SYSC_REVISION]);
int offset = ddata->offsets[SYSC_REVISION];
if (offset < 0)
return 0;
return sysc_read(ddata, offset);
}
static int sysc_get_one_clock(struct sysc *ddata,
@ -206,6 +238,21 @@ static int sysc_check_children(struct sysc *ddata)
return 0;
}
/*
* So far only I2C uses 16-bit read access with clockactivity with revision
* in two registers with stride of 4. We can detect this based on the rev
* register size to configure things far enough to be able to properly read
* the revision register.
*/
static void sysc_check_quirk_16bit(struct sysc *ddata, struct resource *res)
{
if (resource_size(res) == 8) {
dev_dbg(ddata->dev,
"enabling 16-bit and clockactivity quirks\n");
ddata->cfg.quirks |= SYSC_QUIRK_16BIT | SYSC_QUIRK_USE_CLOCKACT;
}
}
/**
* sysc_parse_one - parses the interconnect target module registers
* @ddata: device driver data
@ -236,6 +283,8 @@ static int sysc_parse_one(struct sysc *ddata, enum sysc_registers reg)
}
ddata->offsets[reg] = res->start - ddata->module_pa;
if (reg == SYSC_REVISION)
sysc_check_quirk_16bit(ddata, res);
return 0;
}
@ -369,22 +418,12 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
*/
static int sysc_show_rev(char *bufp, struct sysc *ddata)
{
int error, len;
int len;
if (ddata->offsets[SYSC_REVISION] < 0)
return sprintf(bufp, ":NA");
error = pm_runtime_get_sync(ddata->dev);
if (error < 0) {
pm_runtime_put_noidle(ddata->dev);
return 0;
}
len = sprintf(bufp, ":%08x", sysc_read_revision(ddata));
pm_runtime_mark_last_busy(ddata->dev);
pm_runtime_put_autosuspend(ddata->dev);
len = sprintf(bufp, ":%08x", ddata->revision);
return len;
}
@ -464,6 +503,151 @@ static const struct dev_pm_ops sysc_pm_ops = {
NULL)
};
/* At this point the module is configured enough to read the revision */
static int sysc_init_module(struct sysc *ddata)
{
int error;
error = pm_runtime_get_sync(ddata->dev);
if (error < 0) {
pm_runtime_put_noidle(ddata->dev);
return 0;
}
ddata->revision = sysc_read_revision(ddata);
pm_runtime_put_sync(ddata->dev);
return 0;
}
static int sysc_init_sysc_mask(struct sysc *ddata)
{
struct device_node *np = ddata->dev->of_node;
int error;
u32 val;
error = of_property_read_u32(np, "ti,sysc-mask", &val);
if (error)
return 0;
if (val)
ddata->cfg.sysc_val = val & ddata->cap->sysc_mask;
else
ddata->cfg.sysc_val = ddata->cap->sysc_mask;
return 0;
}
static int sysc_init_idlemode(struct sysc *ddata, u8 *idlemodes,
const char *name)
{
struct device_node *np = ddata->dev->of_node;
struct property *prop;
const __be32 *p;
u32 val;
of_property_for_each_u32(np, name, prop, p, val) {
if (val >= SYSC_NR_IDLEMODES) {
dev_err(ddata->dev, "invalid idlemode: %i\n", val);
return -EINVAL;
}
*idlemodes |= (1 << val);
}
return 0;
}
static int sysc_init_idlemodes(struct sysc *ddata)
{
int error;
error = sysc_init_idlemode(ddata, &ddata->cfg.midlemodes,
"ti,sysc-midle");
if (error)
return error;
error = sysc_init_idlemode(ddata, &ddata->cfg.sidlemodes,
"ti,sysc-sidle");
if (error)
return error;
return 0;
}
/*
* Only some devices on omap4 and later have SYSCONFIG reset done
* bit. We can detect this if there is no SYSSTATUS at all, or the
* SYSSTATUS bit 0 is not used. Note that some SYSSTATUS registers
* have multiple bits for the child devices like OHCI and EHCI.
* Depends on SYSC being parsed first.
*/
static int sysc_init_syss_mask(struct sysc *ddata)
{
struct device_node *np = ddata->dev->of_node;
int error;
u32 val;
error = of_property_read_u32(np, "ti,syss-mask", &val);
if (error) {
if ((ddata->cap->type == TI_SYSC_OMAP4 ||
ddata->cap->type == TI_SYSC_OMAP4_TIMER) &&
(ddata->cfg.sysc_val & SYSC_OMAP4_SOFTRESET))
ddata->cfg.quirks |= SYSC_QUIRK_RESET_STATUS;
return 0;
}
if (!(val & 1) && (ddata->cfg.sysc_val & SYSC_OMAP4_SOFTRESET))
ddata->cfg.quirks |= SYSC_QUIRK_RESET_STATUS;
ddata->cfg.syss_mask = val;
return 0;
}
/* Device tree configured quirks */
struct sysc_dts_quirk {
const char *name;
u32 mask;
};
static const struct sysc_dts_quirk sysc_dts_quirks[] = {
{ .name = "ti,no-idle-on-init",
.mask = SYSC_QUIRK_NO_IDLE_ON_INIT, },
{ .name = "ti,no-reset-on-init",
.mask = SYSC_QUIRK_NO_RESET_ON_INIT, },
};
static int sysc_init_dts_quirks(struct sysc *ddata)
{
struct device_node *np = ddata->dev->of_node;
const struct property *prop;
int i, len, error;
u32 val;
ddata->legacy_mode = of_get_property(np, "ti,hwmods", NULL);
for (i = 0; i < ARRAY_SIZE(sysc_dts_quirks); i++) {
prop = of_get_property(np, sysc_dts_quirks[i].name, &len);
if (!prop)
break;
ddata->cfg.quirks |= sysc_dts_quirks[i].mask;
}
error = of_property_read_u32(np, "ti,sysc-delay-us", &val);
if (!error) {
if (val > 255) {
dev_warn(ddata->dev, "bad ti,sysc-delay-us: %i\n",
val);
}
ddata->cfg.srst_udelay = (u8)val;
}
return 0;
}
static void sysc_unprepare(struct sysc *ddata)
{
int i;
@ -474,9 +658,230 @@ static void sysc_unprepare(struct sysc *ddata)
}
}
/*
* Common sysc register bits found on omap2, also known as type1
*/
static const struct sysc_regbits sysc_regbits_omap2 = {
.dmadisable_shift = -ENODEV,
.midle_shift = 12,
.sidle_shift = 3,
.clkact_shift = 8,
.emufree_shift = 5,
.enwkup_shift = 2,
.srst_shift = 1,
.autoidle_shift = 0,
};
static const struct sysc_capabilities sysc_omap2 = {
.type = TI_SYSC_OMAP2,
.sysc_mask = SYSC_OMAP2_CLOCKACTIVITY | SYSC_OMAP2_EMUFREE |
SYSC_OMAP2_ENAWAKEUP | SYSC_OMAP2_SOFTRESET |
SYSC_OMAP2_AUTOIDLE,
.regbits = &sysc_regbits_omap2,
};
/* All omap2 and 3 timers, and timers 1, 2 & 10 on omap 4 and 5 */
static const struct sysc_capabilities sysc_omap2_timer = {
.type = TI_SYSC_OMAP2_TIMER,
.sysc_mask = SYSC_OMAP2_CLOCKACTIVITY | SYSC_OMAP2_EMUFREE |
SYSC_OMAP2_ENAWAKEUP | SYSC_OMAP2_SOFTRESET |
SYSC_OMAP2_AUTOIDLE,
.regbits = &sysc_regbits_omap2,
.mod_quirks = SYSC_QUIRK_USE_CLOCKACT,
};
/*
* SHAM2 (SHA1/MD5) sysc found on omap3, a variant of sysc_regbits_omap2
* with different sidle position
*/
static const struct sysc_regbits sysc_regbits_omap3_sham = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = 4,
.clkact_shift = -ENODEV,
.enwkup_shift = -ENODEV,
.srst_shift = 1,
.autoidle_shift = 0,
.emufree_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap3_sham = {
.type = TI_SYSC_OMAP3_SHAM,
.sysc_mask = SYSC_OMAP2_SOFTRESET | SYSC_OMAP2_AUTOIDLE,
.regbits = &sysc_regbits_omap3_sham,
};
/*
* AES register bits found on omap3 and later, a variant of
* sysc_regbits_omap2 with different sidle position
*/
static const struct sysc_regbits sysc_regbits_omap3_aes = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = 6,
.clkact_shift = -ENODEV,
.enwkup_shift = -ENODEV,
.srst_shift = 1,
.autoidle_shift = 0,
.emufree_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap3_aes = {
.type = TI_SYSC_OMAP3_AES,
.sysc_mask = SYSC_OMAP2_SOFTRESET | SYSC_OMAP2_AUTOIDLE,
.regbits = &sysc_regbits_omap3_aes,
};
/*
* Common sysc register bits found on omap4, also known as type2
*/
static const struct sysc_regbits sysc_regbits_omap4 = {
.dmadisable_shift = 16,
.midle_shift = 4,
.sidle_shift = 2,
.clkact_shift = -ENODEV,
.enwkup_shift = -ENODEV,
.emufree_shift = 1,
.srst_shift = 0,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap4 = {
.type = TI_SYSC_OMAP4,
.sysc_mask = SYSC_OMAP4_DMADISABLE | SYSC_OMAP4_FREEEMU |
SYSC_OMAP4_SOFTRESET,
.regbits = &sysc_regbits_omap4,
};
static const struct sysc_capabilities sysc_omap4_timer = {
.type = TI_SYSC_OMAP4_TIMER,
.sysc_mask = SYSC_OMAP4_DMADISABLE | SYSC_OMAP4_FREEEMU |
SYSC_OMAP4_SOFTRESET,
.regbits = &sysc_regbits_omap4,
};
/*
* Common sysc register bits found on omap4, also known as type3
*/
static const struct sysc_regbits sysc_regbits_omap4_simple = {
.dmadisable_shift = -ENODEV,
.midle_shift = 2,
.sidle_shift = 0,
.clkact_shift = -ENODEV,
.enwkup_shift = -ENODEV,
.srst_shift = -ENODEV,
.emufree_shift = -ENODEV,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap4_simple = {
.type = TI_SYSC_OMAP4_SIMPLE,
.regbits = &sysc_regbits_omap4_simple,
};
/*
* SmartReflex sysc found on omap34xx
*/
static const struct sysc_regbits sysc_regbits_omap34xx_sr = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = -ENODEV,
.clkact_shift = 20,
.enwkup_shift = -ENODEV,
.srst_shift = -ENODEV,
.emufree_shift = -ENODEV,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_34xx_sr = {
.type = TI_SYSC_OMAP34XX_SR,
.sysc_mask = SYSC_OMAP2_CLOCKACTIVITY,
.regbits = &sysc_regbits_omap34xx_sr,
.mod_quirks = SYSC_QUIRK_USE_CLOCKACT | SYSC_QUIRK_UNCACHED,
};
/*
* SmartReflex sysc found on omap36xx and later
*/
static const struct sysc_regbits sysc_regbits_omap36xx_sr = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = 24,
.clkact_shift = -ENODEV,
.enwkup_shift = 26,
.srst_shift = -ENODEV,
.emufree_shift = -ENODEV,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_36xx_sr = {
.type = TI_SYSC_OMAP36XX_SR,
.sysc_mask = SYSC_OMAP3_SR_ENAWAKEUP,
.regbits = &sysc_regbits_omap36xx_sr,
.mod_quirks = SYSC_QUIRK_UNCACHED,
};
static const struct sysc_capabilities sysc_omap4_sr = {
.type = TI_SYSC_OMAP4_SR,
.regbits = &sysc_regbits_omap36xx_sr,
};
/*
* McASP register bits found on omap4 and later
*/
static const struct sysc_regbits sysc_regbits_omap4_mcasp = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = 0,
.clkact_shift = -ENODEV,
.enwkup_shift = -ENODEV,
.srst_shift = -ENODEV,
.emufree_shift = -ENODEV,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap4_mcasp = {
.type = TI_SYSC_OMAP4_MCASP,
.regbits = &sysc_regbits_omap4_mcasp,
};
/*
* FS USB host found on omap4 and later
*/
static const struct sysc_regbits sysc_regbits_omap4_usb_host_fs = {
.dmadisable_shift = -ENODEV,
.midle_shift = -ENODEV,
.sidle_shift = 24,
.clkact_shift = -ENODEV,
.enwkup_shift = 26,
.srst_shift = -ENODEV,
.emufree_shift = -ENODEV,
.autoidle_shift = -ENODEV,
};
static const struct sysc_capabilities sysc_omap4_usb_host_fs = {
.type = TI_SYSC_OMAP4_USB_HOST_FS,
.sysc_mask = SYSC_OMAP2_ENAWAKEUP,
.regbits = &sysc_regbits_omap4_usb_host_fs,
};
static int sysc_init_match(struct sysc *ddata)
{
const struct sysc_capabilities *cap;
cap = of_device_get_match_data(ddata->dev);
if (!cap)
return -EINVAL;
ddata->cap = cap;
if (ddata->cap)
ddata->cfg.quirks |= ddata->cap->mod_quirks;
return 0;
}
static int sysc_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct sysc *ddata;
int error;
@ -485,7 +890,15 @@ static int sysc_probe(struct platform_device *pdev)
return -ENOMEM;
ddata->dev = &pdev->dev;
ddata->legacy_mode = of_get_property(np, "ti,hwmods", NULL);
platform_set_drvdata(pdev, ddata);
error = sysc_init_match(ddata);
if (error)
return error;
error = sysc_init_dts_quirks(ddata);
if (error)
goto unprepare;
error = sysc_get_clocks(ddata);
if (error)
@ -495,9 +908,24 @@ static int sysc_probe(struct platform_device *pdev)
if (error)
goto unprepare;
platform_set_drvdata(pdev, ddata);
error = sysc_init_sysc_mask(ddata);
if (error)
goto unprepare;
error = sysc_init_idlemodes(ddata);
if (error)
goto unprepare;
error = sysc_init_syss_mask(ddata);
if (error)
goto unprepare;
pm_runtime_enable(ddata->dev);
error = sysc_init_module(ddata);
if (error)
goto unprepare;
error = pm_runtime_get_sync(ddata->dev);
if (error < 0) {
pm_runtime_put_noidle(ddata->dev);
@ -554,16 +982,19 @@ unprepare:
}
static const struct of_device_id sysc_match[] = {
{ .compatible = "ti,sysc-omap2" },
{ .compatible = "ti,sysc-omap4" },
{ .compatible = "ti,sysc-omap4-simple" },
{ .compatible = "ti,sysc-omap3430-sr" },
{ .compatible = "ti,sysc-omap3630-sr" },
{ .compatible = "ti,sysc-omap4-sr" },
{ .compatible = "ti,sysc-omap3-sham" },
{ .compatible = "ti,sysc-omap-aes" },
{ .compatible = "ti,sysc-mcasp" },
{ .compatible = "ti,sysc-usb-host-fs" },
{ .compatible = "ti,sysc-omap2", .data = &sysc_omap2, },
{ .compatible = "ti,sysc-omap2-timer", .data = &sysc_omap2_timer, },
{ .compatible = "ti,sysc-omap4", .data = &sysc_omap4, },
{ .compatible = "ti,sysc-omap4-timer", .data = &sysc_omap4_timer, },
{ .compatible = "ti,sysc-omap4-simple", .data = &sysc_omap4_simple, },
{ .compatible = "ti,sysc-omap3430-sr", .data = &sysc_34xx_sr, },
{ .compatible = "ti,sysc-omap3630-sr", .data = &sysc_36xx_sr, },
{ .compatible = "ti,sysc-omap4-sr", .data = &sysc_omap4_sr, },
{ .compatible = "ti,sysc-omap3-sham", .data = &sysc_omap3_sham, },
{ .compatible = "ti,sysc-omap-aes", .data = &sysc_omap3_aes, },
{ .compatible = "ti,sysc-mcasp", .data = &sysc_omap4_mcasp, },
{ .compatible = "ti,sysc-usb-host-fs",
.data = &sysc_omap4_usb_host_fs, },
{ },
};
MODULE_DEVICE_TABLE(of, sysc_match);


@ -10,7 +10,7 @@ config ARM_PSCI_FW
config ARM_PSCI_CHECKER
bool "ARM PSCI checker"
depends on ARM_PSCI_FW && HOTPLUG_CPU && !TORTURE_TEST
depends on ARM_PSCI_FW && HOTPLUG_CPU && CPU_IDLE && !TORTURE_TEST
help
Run the PSCI checker during startup. This checks that hotplug and
suspend operations work correctly when using PSCI.


@ -622,30 +622,6 @@ static struct platform_driver qcom_scm_driver = {
static int __init qcom_scm_init(void)
{
struct device_node *np, *fw_np;
int ret;
fw_np = of_find_node_by_name(NULL, "firmware");
if (!fw_np)
return -ENODEV;
np = of_find_matching_node(fw_np, qcom_scm_dt_match);
if (!np) {
of_node_put(fw_np);
return -ENODEV;
}
of_node_put(np);
ret = of_platform_populate(fw_np, qcom_scm_dt_match, NULL, NULL);
of_node_put(fw_np);
if (ret)
return ret;
return platform_driver_register(&qcom_scm_driver);
}
subsys_initcall(qcom_scm_init);


@ -174,7 +174,7 @@ rpi_firmware_print_firmware_revision(struct rpi_firmware *fw)
if (ret == 0) {
struct tm tm;
time_to_tm(packet, 0, &tm);
time64_to_tm(packet, 0, &tm);
dev_info(fw->cl.dev,
"Attached to firmware from %04ld-%02d-%02d %02d:%02d\n",


@ -287,13 +287,13 @@ static void ti_sci_rx_callback(struct mbox_client *cl, void *m)
/* Is the message of valid length? */
if (mbox_msg->len > info->desc->max_msg_size) {
dev_err(dev, "Unable to handle %d xfer(max %d)\n",
dev_err(dev, "Unable to handle %zu xfer(max %d)\n",
mbox_msg->len, info->desc->max_msg_size);
ti_sci_dump_header_dbg(dev, hdr);
return;
}
if (mbox_msg->len < xfer->rx_len) {
dev_err(dev, "Recv xfer %d < expected %d length\n",
dev_err(dev, "Recv xfer %zu < expected %d length\n",
mbox_msg->len, xfer->rx_len);
ti_sci_dump_header_dbg(dev, hdr);
return;


@ -20,6 +20,12 @@
#include <soc/tegra/ahb.h>
#include <soc/tegra/mc.h>
struct tegra_smmu_group {
struct list_head list;
const struct tegra_smmu_group_soc *soc;
struct iommu_group *group;
};
struct tegra_smmu {
void __iomem *regs;
struct device *dev;
@ -27,6 +33,8 @@ struct tegra_smmu {
struct tegra_mc *mc;
const struct tegra_smmu_soc *soc;
struct list_head groups;
unsigned long pfn_mask;
unsigned long tlb_mask;
@ -703,19 +711,47 @@ static struct tegra_smmu *tegra_smmu_find(struct device_node *np)
return mc->smmu;
}
static int tegra_smmu_configure(struct tegra_smmu *smmu, struct device *dev,
struct of_phandle_args *args)
{
const struct iommu_ops *ops = smmu->iommu.ops;
int err;
err = iommu_fwspec_init(dev, &dev->of_node->fwnode, ops);
if (err < 0) {
dev_err(dev, "failed to initialize fwspec: %d\n", err);
return err;
}
err = ops->of_xlate(dev, args);
if (err < 0) {
dev_err(dev, "failed to parse SW group ID: %d\n", err);
iommu_fwspec_free(dev);
return err;
}
return 0;
}
static int tegra_smmu_add_device(struct device *dev)
{
struct device_node *np = dev->of_node;
struct tegra_smmu *smmu = NULL;
struct iommu_group *group;
struct of_phandle_args args;
unsigned int index = 0;
int err;
while (of_parse_phandle_with_args(np, "iommus", "#iommu-cells", index,
&args) == 0) {
struct tegra_smmu *smmu;
smmu = tegra_smmu_find(args.np);
if (smmu) {
err = tegra_smmu_configure(smmu, dev, &args);
of_node_put(args.np);
if (err < 0)
return err;
/*
* Only a single IOMMU master interface is currently
* supported by the Linux kernel, so abort after the
@ -728,9 +764,13 @@ static int tegra_smmu_add_device(struct device *dev)
break;
}
of_node_put(args.np);
index++;
}
if (!smmu)
return -ENODEV;
group = iommu_group_get_for_dev(dev);
if (IS_ERR(group))
return PTR_ERR(group);
@ -751,6 +791,80 @@ static void tegra_smmu_remove_device(struct device *dev)
iommu_group_remove_device(dev);
}
static const struct tegra_smmu_group_soc *
tegra_smmu_find_group(struct tegra_smmu *smmu, unsigned int swgroup)
{
unsigned int i, j;
for (i = 0; i < smmu->soc->num_groups; i++)
for (j = 0; j < smmu->soc->groups[i].num_swgroups; j++)
if (smmu->soc->groups[i].swgroups[j] == swgroup)
return &smmu->soc->groups[i];
return NULL;
}
static struct iommu_group *tegra_smmu_group_get(struct tegra_smmu *smmu,
unsigned int swgroup)
{
const struct tegra_smmu_group_soc *soc;
struct tegra_smmu_group *group;
soc = tegra_smmu_find_group(smmu, swgroup);
if (!soc)
return NULL;
mutex_lock(&smmu->lock);
list_for_each_entry(group, &smmu->groups, list)
if (group->soc == soc) {
mutex_unlock(&smmu->lock);
return group->group;
}
group = devm_kzalloc(smmu->dev, sizeof(*group), GFP_KERNEL);
if (!group) {
mutex_unlock(&smmu->lock);
return NULL;
}
INIT_LIST_HEAD(&group->list);
group->soc = soc;
group->group = iommu_group_alloc();
if (IS_ERR(group->group)) {
devm_kfree(smmu->dev, group);
mutex_unlock(&smmu->lock);
return NULL;
}
list_add_tail(&group->list, &smmu->groups);
mutex_unlock(&smmu->lock);
return group->group;
}
static struct iommu_group *tegra_smmu_device_group(struct device *dev)
{
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
struct tegra_smmu *smmu = dev->archdata.iommu;
struct iommu_group *group;
group = tegra_smmu_group_get(smmu, fwspec->ids[0]);
if (!group)
group = generic_device_group(dev);
return group;
}
static int tegra_smmu_of_xlate(struct device *dev,
struct of_phandle_args *args)
{
u32 id = args->args[0];
return iommu_fwspec_add_ids(dev, &id, 1);
}
static const struct iommu_ops tegra_smmu_ops = {
.capable = tegra_smmu_capable,
.domain_alloc = tegra_smmu_domain_alloc,
@ -759,12 +873,12 @@ static const struct iommu_ops tegra_smmu_ops = {
.detach_dev = tegra_smmu_detach_dev,
.add_device = tegra_smmu_add_device,
.remove_device = tegra_smmu_remove_device,
.device_group = generic_device_group,
.device_group = tegra_smmu_device_group,
.map = tegra_smmu_map,
.unmap = tegra_smmu_unmap,
.map_sg = default_iommu_map_sg,
.iova_to_phys = tegra_smmu_iova_to_phys,
.of_xlate = tegra_smmu_of_xlate,
.pgsize_bitmap = SZ_4K,
};
@ -913,6 +1027,7 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
if (!smmu->asids)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&smmu->groups);
mutex_init(&smmu->lock);
smmu->regs = mc->regs;
@ -954,6 +1069,7 @@ struct tegra_smmu *tegra_smmu_probe(struct device *dev,
return ERR_PTR(err);
iommu_device_set_ops(&smmu->iommu, &tegra_smmu_ops);
iommu_device_set_fwnode(&smmu->iommu, dev->fwnode);
err = iommu_device_register(&smmu->iommu);
if (err) {


@ -84,6 +84,16 @@ config OMAP_GPMC_DEBUG
bootloader or else the GPMC timings won't be identical with the
bootloader timings.
config TI_EMIF_SRAM
tristate "Texas Instruments EMIF SRAM driver"
depends on (SOC_AM33XX || SOC_AM43XX) && SRAM
help
This driver is for the EMIF module available on Texas Instruments
AM33XX and AM43XX SoCs and is required for PM. Certain parts of
the EMIF PM code must run from on-chip SRAM late in the suspend
sequence so this driver provides several relocatable PM functions
for the SoC PM code to use.
config MVEBU_DEVBUS
bool "Marvell EBU Device Bus Controller"
default y
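
To make the TI_EMIF_SRAM help text above concrete: at suspend time the SoC PM code carves space out of the 'protect-exec' SRAM pool described in DT and copies the relocatable EMIF handlers there. The fragment below is a hypothetical sketch, not code from this merge; it assumes the genalloc API and the sram-exec helper sram_exec_copy(), and the symbols ti_emif_sram_start/ti_emif_sram_end and pm_sram_relocate() are invented for the example:

#include <linux/errno.h>
#include <linux/genalloc.h>
#include <linux/sram.h>

/* Hypothetical markers around the position-independent EMIF PM code. */
extern char ti_emif_sram_start[], ti_emif_sram_end[];

static void *emif_sram_fns;

static int pm_sram_relocate(struct gen_pool *pm_sram)
{
	size_t size = ti_emif_sram_end - ti_emif_sram_start;
	unsigned long base;

	/* Carve out room in the 'protect-exec' SRAM pool described in DT. */
	base = gen_pool_alloc(pm_sram, size);
	if (!base)
		return -ENOMEM;

	/*
	 * sram_exec_copy() copies the code and remaps the region executable,
	 * so the handlers can run from SRAM while DDR is in self-refresh.
	 */
	emif_sram_fns = sram_exec_copy(pm_sram, (void *)base,
				       ti_emif_sram_start, size);
	if (!emif_sram_fns)
		return -ENOMEM;

	return 0;
}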


@ -23,3 +23,11 @@ obj-$(CONFIG_DA8XX_DDRCTL) += da8xx-ddrctl.o
obj-$(CONFIG_SAMSUNG_MC) += samsung/
obj-$(CONFIG_TEGRA_MC) += tegra/
obj-$(CONFIG_TI_EMIF_SRAM) += ti-emif-sram.o
ti-emif-sram-objs := ti-emif-pm.o ti-emif-sram-pm.o
AFLAGS_ti-emif-sram-pm.o :=-Wa,-march=armv7-a
include drivers/memory/Makefile.asm-offsets
drivers/memory/ti-emif-sram-pm.o: include/generated/ti-emif-asm-offsets.h


@ -0,0 +1,5 @@
drivers/memory/emif-asm-offsets.s: drivers/memory/emif-asm-offsets.c
$(call if_changed_dep,cc_s_c)
include/generated/ti-emif-asm-offsets.h: drivers/memory/emif-asm-offsets.s FORCE
$(call filechk,offsets,__TI_EMIF_ASM_OFFSETS_H__)


@ -0,0 +1,92 @@
/*
* TI AM33XX EMIF PM Assembly Offsets
*
* Copyright (C) 2016-2017 Texas Instruments Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ti-emif-sram.h>
int main(void)
{
DEFINE(EMIF_SDCFG_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_sdcfg_val));
DEFINE(EMIF_TIMING1_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_timing1_val));
DEFINE(EMIF_TIMING2_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_timing2_val));
DEFINE(EMIF_TIMING3_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_timing3_val));
DEFINE(EMIF_REF_CTRL_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_ref_ctrl_val));
DEFINE(EMIF_ZQCFG_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_zqcfg_val));
DEFINE(EMIF_PMCR_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_pmcr_val));
DEFINE(EMIF_PMCR_SHDW_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_pmcr_shdw_val));
DEFINE(EMIF_RD_WR_LEVEL_RAMP_CTRL_OFFSET,
offsetof(struct emif_regs_amx3, emif_rd_wr_level_ramp_ctrl));
DEFINE(EMIF_RD_WR_EXEC_THRESH_OFFSET,
offsetof(struct emif_regs_amx3, emif_rd_wr_exec_thresh));
DEFINE(EMIF_COS_CONFIG_OFFSET,
offsetof(struct emif_regs_amx3, emif_cos_config));
DEFINE(EMIF_PRIORITY_TO_COS_MAPPING_OFFSET,
offsetof(struct emif_regs_amx3, emif_priority_to_cos_mapping));
DEFINE(EMIF_CONNECT_ID_SERV_1_MAP_OFFSET,
offsetof(struct emif_regs_amx3, emif_connect_id_serv_1_map));
DEFINE(EMIF_CONNECT_ID_SERV_2_MAP_OFFSET,
offsetof(struct emif_regs_amx3, emif_connect_id_serv_2_map));
DEFINE(EMIF_OCP_CONFIG_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_ocp_config_val));
DEFINE(EMIF_LPDDR2_NVM_TIM_OFFSET,
offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim));
DEFINE(EMIF_LPDDR2_NVM_TIM_SHDW_OFFSET,
offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim_shdw));
DEFINE(EMIF_DLL_CALIB_CTRL_VAL_OFFSET,
offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val));
DEFINE(EMIF_DLL_CALIB_CTRL_VAL_SHDW_OFFSET,
offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val_shdw));
DEFINE(EMIF_DDR_PHY_CTLR_1_OFFSET,
offsetof(struct emif_regs_amx3, emif_ddr_phy_ctlr_1));
DEFINE(EMIF_EXT_PHY_CTRL_VALS_OFFSET,
offsetof(struct emif_regs_amx3, emif_ext_phy_ctrl_vals));
DEFINE(EMIF_REGS_AMX3_SIZE, sizeof(struct emif_regs_amx3));
BLANK();
DEFINE(EMIF_PM_BASE_ADDR_VIRT_OFFSET,
offsetof(struct ti_emif_pm_data, ti_emif_base_addr_virt));
DEFINE(EMIF_PM_BASE_ADDR_PHYS_OFFSET,
offsetof(struct ti_emif_pm_data, ti_emif_base_addr_phys));
DEFINE(EMIF_PM_CONFIG_OFFSET,
offsetof(struct ti_emif_pm_data, ti_emif_sram_config));
DEFINE(EMIF_PM_REGS_VIRT_OFFSET,
offsetof(struct ti_emif_pm_data, regs_virt));
DEFINE(EMIF_PM_REGS_PHYS_OFFSET,
offsetof(struct ti_emif_pm_data, regs_phys));
DEFINE(EMIF_PM_DATA_SIZE, sizeof(struct ti_emif_pm_data));
BLANK();
DEFINE(EMIF_PM_SAVE_CONTEXT_OFFSET,
offsetof(struct ti_emif_pm_functions, save_context));
DEFINE(EMIF_PM_RESTORE_CONTEXT_OFFSET,
offsetof(struct ti_emif_pm_functions, restore_context));
DEFINE(EMIF_PM_ENTER_SR_OFFSET,
offsetof(struct ti_emif_pm_functions, enter_sr));
DEFINE(EMIF_PM_EXIT_SR_OFFSET,
offsetof(struct ti_emif_pm_functions, exit_sr));
DEFINE(EMIF_PM_ABORT_SR_OFFSET,
offsetof(struct ti_emif_pm_functions, abort_sr));
DEFINE(EMIF_PM_FUNCTIONS_SIZE, sizeof(struct ti_emif_pm_functions));
return 0;
}


@ -555,6 +555,9 @@
#define READ_LATENCY_SHDW_SHIFT 0
#define READ_LATENCY_SHDW_MASK (0x1f << 0)
#define EMIF_SRAM_AM33_REG_LAYOUT 0x00000000
#define EMIF_SRAM_AM43_REG_LAYOUT 0x00000001
#ifndef __ASSEMBLY__
/*
* Structure containing shadow of important registers in EMIF
@ -585,5 +588,19 @@ struct emif_regs {
u32 ext_phy_ctrl_3_shdw;
u32 ext_phy_ctrl_4_shdw;
};
struct ti_emif_pm_functions;
extern unsigned int ti_emif_sram;
extern unsigned int ti_emif_sram_sz;
extern struct ti_emif_pm_data ti_emif_pm_sram_data;
extern struct emif_regs_amx3 ti_emif_regs_amx3;
void ti_emif_save_context(void);
void ti_emif_restore_context(void);
void ti_emif_enter_sr(void);
void ti_emif_exit_sr(void);
void ti_emif_abort_sr(void);
#endif /* __ASSEMBLY__ */
#endif /* __EMIF_H */


@ -10,3 +10,4 @@ tegra-mc-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o
obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o


@ -912,11 +912,26 @@ static const struct tegra_smmu_swgroup tegra114_swgroups[] = {
{ .name = "tsec", .swgroup = TEGRA_SWGROUP_TSEC, .reg = 0x294 },
};
static const unsigned int tegra114_group_display[] = {
TEGRA_SWGROUP_DC,
TEGRA_SWGROUP_DCB,
};
static const struct tegra_smmu_group_soc tegra114_groups[] = {
{
.name = "display",
.swgroups = tegra114_group_display,
.num_swgroups = ARRAY_SIZE(tegra114_group_display),
},
};
static const struct tegra_smmu_soc tegra114_smmu_soc = {
.clients = tegra114_mc_clients,
.num_clients = ARRAY_SIZE(tegra114_mc_clients),
.swgroups = tegra114_swgroups,
.num_swgroups = ARRAY_SIZE(tegra114_swgroups),
.groups = tegra114_groups,
.num_groups = ARRAY_SIZE(tegra114_groups),
.supports_round_robin_arbitration = false,
.supports_request_limit = false,
.num_tlb_lines = 32,


@ -999,12 +999,27 @@ static const struct tegra_smmu_swgroup tegra124_swgroups[] = {
{ .name = "vi", .swgroup = TEGRA_SWGROUP_VI, .reg = 0x280 },
};
static const unsigned int tegra124_group_display[] = {
TEGRA_SWGROUP_DC,
TEGRA_SWGROUP_DCB,
};
static const struct tegra_smmu_group_soc tegra124_groups[] = {
{
.name = "display",
.swgroups = tegra124_group_display,
.num_swgroups = ARRAY_SIZE(tegra124_group_display),
},
};
#ifdef CONFIG_ARCH_TEGRA_124_SOC
static const struct tegra_smmu_soc tegra124_smmu_soc = {
.clients = tegra124_mc_clients,
.num_clients = ARRAY_SIZE(tegra124_mc_clients),
.swgroups = tegra124_swgroups,
.num_swgroups = ARRAY_SIZE(tegra124_swgroups),
.groups = tegra124_groups,
.num_groups = ARRAY_SIZE(tegra124_groups),
.supports_round_robin_arbitration = true,
.supports_request_limit = true,
.num_tlb_lines = 32,
@ -1029,6 +1044,8 @@ static const struct tegra_smmu_soc tegra132_smmu_soc = {
.num_clients = ARRAY_SIZE(tegra124_mc_clients),
.swgroups = tegra124_swgroups,
.num_swgroups = ARRAY_SIZE(tegra124_swgroups),
.groups = tegra124_groups,
.num_groups = ARRAY_SIZE(tegra124_groups),
.supports_round_robin_arbitration = true,
.supports_request_limit = true,
.num_tlb_lines = 32,


@ -0,0 +1,600 @@
/*
* Copyright (C) 2017 NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <dt-bindings/memory/tegra186-mc.h>
struct tegra_mc {
struct device *dev;
void __iomem *regs;
};
struct tegra_mc_client {
const char *name;
unsigned int sid;
struct {
unsigned int override;
unsigned int security;
} regs;
};
static const struct tegra_mc_client tegra186_mc_clients[] = {
{
.name = "ptcr",
.sid = TEGRA186_SID_PASSTHROUGH,
.regs = {
.override = 0x000,
.security = 0x004,
},
}, {
.name = "afir",
.sid = TEGRA186_SID_AFI,
.regs = {
.override = 0x070,
.security = 0x074,
},
}, {
.name = "hdar",
.sid = TEGRA186_SID_HDA,
.regs = {
.override = 0x0a8,
.security = 0x0ac,
},
}, {
.name = "host1xdmar",
.sid = TEGRA186_SID_HOST1X,
.regs = {
.override = 0x0b0,
.security = 0x0b4,
},
}, {
.name = "nvencsrd",
.sid = TEGRA186_SID_NVENC,
.regs = {
.override = 0x0e0,
.security = 0x0e4,
},
}, {
.name = "satar",
.sid = TEGRA186_SID_SATA,
.regs = {
.override = 0x0f8,
.security = 0x0fc,
},
}, {
.name = "mpcorer",
.sid = TEGRA186_SID_PASSTHROUGH,
.regs = {
.override = 0x138,
.security = 0x13c,
},
}, {
.name = "nvencswr",
.sid = TEGRA186_SID_NVENC,
.regs = {
.override = 0x158,
.security = 0x15c,
},
}, {
.name = "afiw",
.sid = TEGRA186_SID_AFI,
.regs = {
.override = 0x188,
.security = 0x18c,
},
}, {
.name = "hdaw",
.sid = TEGRA186_SID_HDA,
.regs = {
.override = 0x1a8,
.security = 0x1ac,
},
}, {
.name = "mpcorew",
.sid = TEGRA186_SID_PASSTHROUGH,
.regs = {
.override = 0x1c8,
.security = 0x1cc,
},
}, {
.name = "sataw",
.sid = TEGRA186_SID_SATA,
.regs = {
.override = 0x1e8,
.security = 0x1ec,
},
}, {
.name = "ispra",
.sid = TEGRA186_SID_ISP,
.regs = {
.override = 0x220,
.security = 0x224,
},
}, {
.name = "ispwa",
.sid = TEGRA186_SID_ISP,
.regs = {
.override = 0x230,
.security = 0x234,
},
}, {
.name = "ispwb",
.sid = TEGRA186_SID_ISP,
.regs = {
.override = 0x238,
.security = 0x23c,
},
}, {
.name = "xusb_hostr",
.sid = TEGRA186_SID_XUSB_HOST,
.regs = {
.override = 0x250,
.security = 0x254,
},
}, {
.name = "xusb_hostw",
.sid = TEGRA186_SID_XUSB_HOST,
.regs = {
.override = 0x258,
.security = 0x25c,
},
}, {
.name = "xusb_devr",
.sid = TEGRA186_SID_XUSB_DEV,
.regs = {
.override = 0x260,
.security = 0x264,
},
}, {
.name = "xusb_devw",
.sid = TEGRA186_SID_XUSB_DEV,
.regs = {
.override = 0x268,
.security = 0x26c,
},
}, {
.name = "tsecsrd",
.sid = TEGRA186_SID_TSEC,
.regs = {
.override = 0x2a0,
.security = 0x2a4,
},
}, {
.name = "tsecswr",
.sid = TEGRA186_SID_TSEC,
.regs = {
.override = 0x2a8,
.security = 0x2ac,
},
}, {
.name = "gpusrd",
.sid = TEGRA186_SID_GPU,
.regs = {
.override = 0x2c0,
.security = 0x2c4,
},
}, {
.name = "gpuswr",
.sid = TEGRA186_SID_GPU,
.regs = {
.override = 0x2c8,
.security = 0x2cc,
},
}, {
.name = "sdmmcra",
.sid = TEGRA186_SID_SDMMC1,
.regs = {
.override = 0x300,
.security = 0x304,
},
}, {
.name = "sdmmcraa",
.sid = TEGRA186_SID_SDMMC2,
.regs = {
.override = 0x308,
.security = 0x30c,
},
}, {
.name = "sdmmcr",
.sid = TEGRA186_SID_SDMMC3,
.regs = {
.override = 0x310,
.security = 0x314,
},
}, {
.name = "sdmmcrab",
.sid = TEGRA186_SID_SDMMC4,
.regs = {
.override = 0x318,
.security = 0x31c,
},
}, {
.name = "sdmmcwa",
.sid = TEGRA186_SID_SDMMC1,
.regs = {
.override = 0x320,
.security = 0x324,
},
}, {
.name = "sdmmcwaa",
.sid = TEGRA186_SID_SDMMC2,
.regs = {
.override = 0x328,
.security = 0x32c,
},
}, {
.name = "sdmmcw",
.sid = TEGRA186_SID_SDMMC3,
.regs = {
.override = 0x330,
.security = 0x334,
},
}, {
.name = "sdmmcwab",
.sid = TEGRA186_SID_SDMMC4,
.regs = {
.override = 0x338,
.security = 0x33c,
},
}, {
.name = "vicsrd",
.sid = TEGRA186_SID_VIC,
.regs = {
.override = 0x360,
.security = 0x364,
},
}, {
.name = "vicswr",
.sid = TEGRA186_SID_VIC,
.regs = {
.override = 0x368,
.security = 0x36c,
},
}, {
.name = "viw",
.sid = TEGRA186_SID_VI,
.regs = {
.override = 0x390,
.security = 0x394,
},
}, {
.name = "nvdecsrd",
.sid = TEGRA186_SID_NVDEC,
.regs = {
.override = 0x3c0,
.security = 0x3c4,
},
}, {
.name = "nvdecswr",
.sid = TEGRA186_SID_NVDEC,
.regs = {
.override = 0x3c8,
.security = 0x3cc,
},
}, {
.name = "aper",
.sid = TEGRA186_SID_APE,
.regs = {
.override = 0x3d0,
.security = 0x3d4,
},
}, {
.name = "apew",
.sid = TEGRA186_SID_APE,
.regs = {
.override = 0x3d8,
.security = 0x3dc,
},
}, {
.name = "nvjpgsrd",
.sid = TEGRA186_SID_NVJPG,
.regs = {
.override = 0x3f0,
.security = 0x3f4,
},
}, {
.name = "nvjpgswr",
.sid = TEGRA186_SID_NVJPG,
.regs = {
.override = 0x3f8,
.security = 0x3fc,
},
}, {
.name = "sesrd",
.sid = TEGRA186_SID_SE,
.regs = {
.override = 0x400,
.security = 0x404,
},
}, {
.name = "seswr",
.sid = TEGRA186_SID_SE,
.regs = {
.override = 0x408,
.security = 0x40c,
},
}, {
.name = "etrr",
.sid = TEGRA186_SID_ETR,
.regs = {
.override = 0x420,
.security = 0x424,
},
}, {
.name = "etrw",
.sid = TEGRA186_SID_ETR,
.regs = {
.override = 0x428,
.security = 0x42c,
},
}, {
.name = "tsecsrdb",
.sid = TEGRA186_SID_TSECB,
.regs = {
.override = 0x430,
.security = 0x434,
},
}, {
.name = "tsecswrb",
.sid = TEGRA186_SID_TSECB,
.regs = {
.override = 0x438,
.security = 0x43c,
},
}, {
.name = "gpusrd2",
.sid = TEGRA186_SID_GPU,
.regs = {
.override = 0x440,
.security = 0x444,
},
}, {
.name = "gpuswr2",
.sid = TEGRA186_SID_GPU,
.regs = {
.override = 0x448,
.security = 0x44c,
},
}, {
.name = "axisr",
.sid = TEGRA186_SID_GPCDMA_0,
.regs = {
.override = 0x460,
.security = 0x464,
},
}, {
.name = "axisw",
.sid = TEGRA186_SID_GPCDMA_0,
.regs = {
.override = 0x468,
.security = 0x46c,
},
}, {
.name = "eqosr",
.sid = TEGRA186_SID_EQOS,
.regs = {
.override = 0x470,
.security = 0x474,
},
}, {
.name = "eqosw",
.sid = TEGRA186_SID_EQOS,
.regs = {
.override = 0x478,
.security = 0x47c,
},
}, {
.name = "ufshcr",
.sid = TEGRA186_SID_UFSHC,
.regs = {
.override = 0x480,
.security = 0x484,
},
}, {
.name = "ufshcw",
.sid = TEGRA186_SID_UFSHC,
.regs = {
.override = 0x488,
.security = 0x48c,
},
}, {
.name = "nvdisplayr",
.sid = TEGRA186_SID_NVDISPLAY,
.regs = {
.override = 0x490,
.security = 0x494,
},
}, {
.name = "bpmpr",
.sid = TEGRA186_SID_BPMP,
.regs = {
.override = 0x498,
.security = 0x49c,
},
}, {
.name = "bpmpw",
.sid = TEGRA186_SID_BPMP,
.regs = {
.override = 0x4a0,
.security = 0x4a4,
},
}, {
.name = "bpmpdmar",
.sid = TEGRA186_SID_BPMP,
.regs = {
.override = 0x4a8,
.security = 0x4ac,
},
}, {
.name = "bpmpdmaw",
.sid = TEGRA186_SID_BPMP,
.regs = {
.override = 0x4b0,
.security = 0x4b4,
},
}, {
.name = "aonr",
.sid = TEGRA186_SID_AON,
.regs = {
.override = 0x4b8,
.security = 0x4bc,
},
}, {
.name = "aonw",
.sid = TEGRA186_SID_AON,
.regs = {
.override = 0x4c0,
.security = 0x4c4,
},
}, {
.name = "aondmar",
.sid = TEGRA186_SID_AON,
.regs = {
.override = 0x4c8,
.security = 0x4cc,
},
}, {
.name = "aondmaw",
.sid = TEGRA186_SID_AON,
.regs = {
.override = 0x4d0,
.security = 0x4d4,
},
}, {
.name = "scer",
.sid = TEGRA186_SID_SCE,
.regs = {
.override = 0x4d8,
.security = 0x4dc,
},
}, {
.name = "scew",
.sid = TEGRA186_SID_SCE,
.regs = {
.override = 0x4e0,
.security = 0x4e4,
},
}, {
.name = "scedmar",
.sid = TEGRA186_SID_SCE,
.regs = {
.override = 0x4e8,
.security = 0x4ec,
},
}, {
.name = "scedmaw",
.sid = TEGRA186_SID_SCE,
.regs = {
.override = 0x4f0,
.security = 0x4f4,
},
}, {
.name = "apedmar",
.sid = TEGRA186_SID_APE,
.regs = {
.override = 0x4f8,
.security = 0x4fc,
},
}, {
.name = "apedmaw",
.sid = TEGRA186_SID_APE,
.regs = {
.override = 0x500,
.security = 0x504,
},
}, {
.name = "nvdisplayr1",
.sid = TEGRA186_SID_NVDISPLAY,
.regs = {
.override = 0x508,
.security = 0x50c,
},
}, {
.name = "vicsrd1",
.sid = TEGRA186_SID_VIC,
.regs = {
.override = 0x510,
.security = 0x514,
},
}, {
.name = "nvdecsrd1",
.sid = TEGRA186_SID_NVDEC,
.regs = {
.override = 0x518,
.security = 0x51c,
},
},
};
static int tegra186_mc_probe(struct platform_device *pdev)
{
struct resource *res;
struct tegra_mc *mc;
unsigned int i;
int err = 0;
mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
if (!mc)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mc->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mc->regs))
return PTR_ERR(mc->regs);
mc->dev = &pdev->dev;
for (i = 0; i < ARRAY_SIZE(tegra186_mc_clients); i++) {
const struct tegra_mc_client *client = &tegra186_mc_clients[i];
u32 override, security;
override = readl(mc->regs + client->regs.override);
security = readl(mc->regs + client->regs.security);
dev_dbg(&pdev->dev, "client %s: override: %x security: %x\n",
client->name, override, security);
dev_dbg(&pdev->dev, "setting SID %u for %s\n", client->sid,
client->name);
writel(client->sid, mc->regs + client->regs.override);
override = readl(mc->regs + client->regs.override);
security = readl(mc->regs + client->regs.security);
dev_dbg(&pdev->dev, "client %s: override: %x security: %x\n",
client->name, override, security);
}
platform_set_drvdata(pdev, mc);
return err;
}
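As a hedged illustration (not part of the driver), the client table above can be searched by name, e.g. to re-read a single client's registers after the probe loop has programmed its SID:

static const struct tegra_mc_client *
tegra186_mc_find_client(const char *name)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(tegra186_mc_clients); i++)
		if (!strcmp(tegra186_mc_clients[i].name, name))
			return &tegra186_mc_clients[i];

	return NULL;	/* no such client */
}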
static const struct of_device_id tegra186_mc_of_match[] = {
{ .compatible = "nvidia,tegra186-mc", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, tegra186_mc_of_match);
static struct platform_driver tegra186_mc_driver = {
.driver = {
.name = "tegra186-mc",
.of_match_table = tegra186_mc_of_match,
.suppress_bind_attrs = true,
},
.prevent_deferred_probe = true,
.probe = tegra186_mc_probe,
};
module_platform_driver(tegra186_mc_driver);
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra186 Memory Controller driver");
MODULE_LICENSE("GPL v2");


@@ -1059,11 +1059,26 @@ static const struct tegra_smmu_swgroup tegra210_swgroups[] = {
{ .name = "tsecb", .swgroup = TEGRA_SWGROUP_TSECB, .reg = 0xad4 },
};
static const unsigned int tegra210_group_display[] = {
TEGRA_SWGROUP_DC,
TEGRA_SWGROUP_DCB,
};
static const struct tegra_smmu_group_soc tegra210_groups[] = {
{
.name = "display",
.swgroups = tegra210_group_display,
.num_swgroups = ARRAY_SIZE(tegra210_group_display),
},
};
static const struct tegra_smmu_soc tegra210_smmu_soc = {
.clients = tegra210_mc_clients,
.num_clients = ARRAY_SIZE(tegra210_mc_clients),
.swgroups = tegra210_swgroups,
.num_swgroups = ARRAY_SIZE(tegra210_swgroups),
.groups = tegra210_groups,
.num_groups = ARRAY_SIZE(tegra210_groups),
.supports_round_robin_arbitration = true,
.supports_request_limit = true,
.num_tlb_lines = 32,


@@ -934,11 +934,26 @@ static const struct tegra_smmu_swgroup tegra30_swgroups[] = {
{ .name = "isp", .swgroup = TEGRA_SWGROUP_ISP, .reg = 0x258 },
};
static const unsigned int tegra30_group_display[] = {
TEGRA_SWGROUP_DC,
TEGRA_SWGROUP_DCB,
};
static const struct tegra_smmu_group_soc tegra30_groups[] = {
{
.name = "display",
.swgroups = tegra30_group_display,
.num_swgroups = ARRAY_SIZE(tegra30_group_display),
},
};
static const struct tegra_smmu_soc tegra30_smmu_soc = {
.clients = tegra30_mc_clients,
.num_clients = ARRAY_SIZE(tegra30_mc_clients),
.swgroups = tegra30_swgroups,
.num_swgroups = ARRAY_SIZE(tegra30_swgroups),
.groups = tegra30_groups,
.num_groups = ARRAY_SIZE(tegra30_groups),
.supports_round_robin_arbitration = false,
.supports_request_limit = false,
.num_tlb_lines = 16,

drivers/memory/ti-emif-pm.c

@@ -0,0 +1,324 @@
/*
* TI AM33XX SRAM EMIF Driver
*
* Copyright (C) 2016-2017 Texas Instruments Inc.
* Dave Gerlach
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/err.h>
#include <linux/genalloc.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/sram.h>
#include <linux/ti-emif-sram.h>
#include "emif.h"
#define TI_EMIF_SRAM_SYMBOL_OFFSET(sym) ((unsigned long)(sym) - \
(unsigned long)&ti_emif_sram)
#define EMIF_POWER_MGMT_WAIT_SELF_REFRESH_8192_CYCLES 0x00a0
struct ti_emif_data {
phys_addr_t ti_emif_sram_phys;
phys_addr_t ti_emif_sram_data_phys;
unsigned long ti_emif_sram_virt;
unsigned long ti_emif_sram_data_virt;
struct gen_pool *sram_pool_code;
struct gen_pool *sram_pool_data;
struct ti_emif_pm_data pm_data;
struct ti_emif_pm_functions pm_functions;
};
static struct ti_emif_data *emif_instance;
static u32 sram_suspend_address(struct ti_emif_data *emif_data,
unsigned long addr)
{
return (emif_data->ti_emif_sram_virt +
TI_EMIF_SRAM_SYMBOL_OFFSET(addr));
}
static phys_addr_t sram_resume_address(struct ti_emif_data *emif_data,
unsigned long addr)
{
return ((unsigned long)emif_data->ti_emif_sram_phys +
TI_EMIF_SRAM_SYMBOL_OFFSET(addr));
}
static void ti_emif_free_sram(struct ti_emif_data *emif_data)
{
gen_pool_free(emif_data->sram_pool_code, emif_data->ti_emif_sram_virt,
ti_emif_sram_sz);
gen_pool_free(emif_data->sram_pool_data,
emif_data->ti_emif_sram_data_virt,
sizeof(struct emif_regs_amx3));
}
static int ti_emif_alloc_sram(struct device *dev,
struct ti_emif_data *emif_data)
{
struct device_node *np = dev->of_node;
int ret;
emif_data->sram_pool_code = of_gen_pool_get(np, "sram", 0);
if (!emif_data->sram_pool_code) {
dev_err(dev, "Unable to get sram pool for ocmcram code\n");
return -ENODEV;
}
emif_data->ti_emif_sram_virt =
gen_pool_alloc(emif_data->sram_pool_code,
ti_emif_sram_sz);
if (!emif_data->ti_emif_sram_virt) {
dev_err(dev, "Unable to allocate code memory from ocmcram\n");
return -ENOMEM;
}
/* Save physical address to calculate resume offset during pm init */
emif_data->ti_emif_sram_phys =
gen_pool_virt_to_phys(emif_data->sram_pool_code,
emif_data->ti_emif_sram_virt);
/* Get sram pool for data section and allocate space */
emif_data->sram_pool_data = of_gen_pool_get(np, "sram", 1);
if (!emif_data->sram_pool_data) {
dev_err(dev, "Unable to get sram pool for ocmcram data\n");
ret = -ENODEV;
goto err_free_sram_code;
}
emif_data->ti_emif_sram_data_virt =
gen_pool_alloc(emif_data->sram_pool_data,
sizeof(struct emif_regs_amx3));
if (!emif_data->ti_emif_sram_data_virt) {
dev_err(dev, "Unable to allocate data memory from ocmcram\n");
ret = -ENOMEM;
goto err_free_sram_code;
}
/* Save physical address to calculate resume offset during pm init */
emif_data->ti_emif_sram_data_phys =
gen_pool_virt_to_phys(emif_data->sram_pool_data,
emif_data->ti_emif_sram_data_virt);
/*
* These functions are called during suspend path while MMU is
* still on so add virtual base to offset for absolute address
*/
emif_data->pm_functions.save_context =
sram_suspend_address(emif_data,
(unsigned long)ti_emif_save_context);
emif_data->pm_functions.enter_sr =
sram_suspend_address(emif_data,
(unsigned long)ti_emif_enter_sr);
emif_data->pm_functions.abort_sr =
sram_suspend_address(emif_data,
(unsigned long)ti_emif_abort_sr);
/*
* These are called during resume path when MMU is not enabled
* so physical address is used instead
*/
emif_data->pm_functions.restore_context =
sram_resume_address(emif_data,
(unsigned long)ti_emif_restore_context);
emif_data->pm_functions.exit_sr =
sram_resume_address(emif_data,
(unsigned long)ti_emif_exit_sr);
emif_data->pm_data.regs_virt =
(struct emif_regs_amx3 *)emif_data->ti_emif_sram_data_virt;
emif_data->pm_data.regs_phys = emif_data->ti_emif_sram_data_phys;
return 0;
err_free_sram_code:
gen_pool_free(emif_data->sram_pool_code, emif_data->ti_emif_sram_virt,
ti_emif_sram_sz);
return ret;
}
static int ti_emif_push_sram(struct device *dev, struct ti_emif_data *emif_data)
{
void *copy_addr;
u32 data_addr;
copy_addr = sram_exec_copy(emif_data->sram_pool_code,
(void *)emif_data->ti_emif_sram_virt,
&ti_emif_sram, ti_emif_sram_sz);
if (!copy_addr) {
dev_err(dev, "Cannot copy emif code to sram\n");
return -ENODEV;
}
data_addr = sram_suspend_address(emif_data,
(unsigned long)&ti_emif_pm_sram_data);
copy_addr = sram_exec_copy(emif_data->sram_pool_code,
(void *)data_addr,
&emif_data->pm_data,
sizeof(emif_data->pm_data));
if (!copy_addr) {
dev_err(dev, "Cannot copy emif data to code sram\n");
return -ENODEV;
}
return 0;
}
/*
* Due to Usage Note 3.1.2 "DDR3: JEDEC Compliance for Maximum
* Self-Refresh Command Limit" found in AM335x Silicon Errata
* (Document SPRZ360F Revised November 2013) we must configure
* the self refresh delay timer to 0xA (8192 cycles) to avoid
* generating too many refresh commands from the EMIF.
*/
static void ti_emif_configure_sr_delay(struct ti_emif_data *emif_data)
{
writel(EMIF_POWER_MGMT_WAIT_SELF_REFRESH_8192_CYCLES,
(emif_data->pm_data.ti_emif_base_addr_virt +
EMIF_POWER_MANAGEMENT_CONTROL));
writel(EMIF_POWER_MGMT_WAIT_SELF_REFRESH_8192_CYCLES,
(emif_data->pm_data.ti_emif_base_addr_virt +
EMIF_POWER_MANAGEMENT_CTRL_SHDW));
}
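For reference, a minimal sketch of how the 0x00a0 constant decomposes, assuming the field layout implied by EMIF_POWER_MGMT_SR_TIMER_MASK (0x00f0) in the SRAM assembly below:

/* Illustrative only: the errata value is 0xA in the SR timer field. */
#define EMIF_SR_TIMER_SHIFT		4
#define EMIF_SR_TIMER_8192_CYCLES	0xa	/* 8192 cycles per SPRZ360F */

/* (0xa << 4) == 0x00a0 == EMIF_POWER_MGMT_WAIT_SELF_REFRESH_8192_CYCLES */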
/**
* ti_emif_copy_pm_function_table - copy mapping of pm funcs in sram
* @sram_pool: pointer to struct gen_pool where dst resides
* @dst: address to which the function table should be copied
*
* Returns 0 on success, or an error code if the table is not available
*/
int ti_emif_copy_pm_function_table(struct gen_pool *sram_pool, void *dst)
{
void *copy_addr;
if (!emif_instance)
return -ENODEV;
copy_addr = sram_exec_copy(sram_pool, dst,
&emif_instance->pm_functions,
sizeof(emif_instance->pm_functions));
if (!copy_addr)
return -ENODEV;
return 0;
}
EXPORT_SYMBOL_GPL(ti_emif_copy_pm_function_table);
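A hedged caller sketch (names hypothetical, not from this patch): a SoC suspend driver reserves room for the table in its own sram region and defers if this driver has not probed yet:

static int example_copy_emif_table(struct gen_pool *sram_pool, void *dst)
{
	int ret = ti_emif_copy_pm_function_table(sram_pool, dst);

	if (ret == -ENODEV)
		return -EPROBE_DEFER;	/* EMIF driver not probed yet */

	return ret;
}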
/**
* ti_emif_get_mem_type - return the memory type in use
*
* Returns the memory type value read from the EMIF, or an error code on failure
*/
int ti_emif_get_mem_type(void)
{
unsigned long temp;
if (!emif_instance)
return -ENODEV;
temp = readl(emif_instance->pm_data.ti_emif_base_addr_virt +
EMIF_SDRAM_CONFIG);
temp = (temp & SDRAM_TYPE_MASK) >> SDRAM_TYPE_SHIFT;
return temp;
}
EXPORT_SYMBOL_GPL(ti_emif_get_mem_type);
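Caller sketch (illustrative): gating memory-type-specific handling on the returned value; 0x2 is DDR2 per EMIF_SDCFG_TYPE_DDR2 in the SRAM assembly below, other type codes come from the EMIF TRM:

static bool example_is_ddr2(void)
{
	int type = ti_emif_get_mem_type();

	if (type < 0)
		return false;	/* EMIF driver not probed yet */

	return type == 0x2;	/* DDR2 */
}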
static const struct of_device_id ti_emif_of_match[] = {
{ .compatible = "ti,emif-am3352", .data =
(void *)EMIF_SRAM_AM33_REG_LAYOUT, },
{ .compatible = "ti,emif-am4372", .data =
(void *)EMIF_SRAM_AM43_REG_LAYOUT, },
{},
};
MODULE_DEVICE_TABLE(of, ti_emif_of_match);
static int ti_emif_probe(struct platform_device *pdev)
{
int ret;
struct resource *res;
struct device *dev = &pdev->dev;
const struct of_device_id *match;
struct ti_emif_data *emif_data;
emif_data = devm_kzalloc(dev, sizeof(*emif_data), GFP_KERNEL);
if (!emif_data)
return -ENOMEM;
match = of_match_device(ti_emif_of_match, &pdev->dev);
if (!match)
return -ENODEV;
emif_data->pm_data.ti_emif_sram_config = (unsigned long)match->data;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
emif_data->pm_data.ti_emif_base_addr_virt = devm_ioremap_resource(dev,
res);
if (IS_ERR(emif_data->pm_data.ti_emif_base_addr_virt)) {
dev_err(dev, "could not ioremap emif mem\n");
ret = PTR_ERR(emif_data->pm_data.ti_emif_base_addr_virt);
return ret;
}
emif_data->pm_data.ti_emif_base_addr_phys = res->start;
ti_emif_configure_sr_delay(emif_data);
ret = ti_emif_alloc_sram(dev, emif_data);
if (ret)
return ret;
ret = ti_emif_push_sram(dev, emif_data);
if (ret)
goto fail_free_sram;
emif_instance = emif_data;
return 0;
fail_free_sram:
ti_emif_free_sram(emif_data);
return ret;
}
static int ti_emif_remove(struct platform_device *pdev)
{
struct ti_emif_data *emif_data = emif_instance;
emif_instance = NULL;
ti_emif_free_sram(emif_data);
return 0;
}
static struct platform_driver ti_emif_driver = {
.probe = ti_emif_probe,
.remove = ti_emif_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = of_match_ptr(ti_emif_of_match),
},
};
module_platform_driver(ti_emif_driver);
MODULE_AUTHOR("Dave Gerlach <d-gerlach@ti.com>");
MODULE_DESCRIPTION("Texas Instruments SRAM EMIF driver");
MODULE_LICENSE("GPL v2");


@@ -0,0 +1,334 @@
/*
* Low level PM code for TI EMIF
*
* Copyright (C) 2016-2017 Texas Instruments Incorporated - http://www.ti.com/
* Dave Gerlach
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <generated/ti-emif-asm-offsets.h>
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/memory.h>
#include "emif.h"
#define EMIF_POWER_MGMT_WAIT_SELF_REFRESH_8192_CYCLES 0x00a0
#define EMIF_POWER_MGMT_SR_TIMER_MASK 0x00f0
#define EMIF_POWER_MGMT_SELF_REFRESH_MODE 0x0200
#define EMIF_POWER_MGMT_SELF_REFRESH_MODE_MASK 0x0700
#define EMIF_SDCFG_TYPE_DDR2 0x2 << SDRAM_TYPE_SHIFT
#define EMIF_STATUS_READY 0x4
#define AM43XX_EMIF_PHY_CTRL_REG_COUNT 0x120
#define EMIF_AM437X_REGISTERS 0x1
.arm
.align 3
ENTRY(ti_emif_sram)
/*
* void ti_emif_save_context(void)
*
* Used during suspend to save the context of all required EMIF registers
* to local memory if the EMIF is going to lose context during the sleep
* transition. Operates on the VIRTUAL address of the EMIF.
*/
ENTRY(ti_emif_save_context)
stmfd sp!, {r4 - r11, lr} @ save registers on stack
adr r4, ti_emif_pm_sram_data
ldr r0, [r4, #EMIF_PM_BASE_ADDR_VIRT_OFFSET]
ldr r2, [r4, #EMIF_PM_REGS_VIRT_OFFSET]
/* Save EMIF configuration */
ldr r1, [r0, #EMIF_SDRAM_CONFIG]
str r1, [r2, #EMIF_SDCFG_VAL_OFFSET]
ldr r1, [r0, #EMIF_SDRAM_REFRESH_CONTROL]
str r1, [r2, #EMIF_REF_CTRL_VAL_OFFSET]
ldr r1, [r0, #EMIF_SDRAM_TIMING_1]
str r1, [r2, #EMIF_TIMING1_VAL_OFFSET]
ldr r1, [r0, #EMIF_SDRAM_TIMING_2]
str r1, [r2, #EMIF_TIMING2_VAL_OFFSET]
ldr r1, [r0, #EMIF_SDRAM_TIMING_3]
str r1, [r2, #EMIF_TIMING3_VAL_OFFSET]
ldr r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
str r1, [r2, #EMIF_PMCR_VAL_OFFSET]
ldr r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
str r1, [r2, #EMIF_PMCR_SHDW_VAL_OFFSET]
ldr r1, [r0, #EMIF_SDRAM_OUTPUT_IMPEDANCE_CALIBRATION_CONFIG]
str r1, [r2, #EMIF_ZQCFG_VAL_OFFSET]
ldr r1, [r0, #EMIF_DDR_PHY_CTRL_1]
str r1, [r2, #EMIF_DDR_PHY_CTLR_1_OFFSET]
ldr r1, [r0, #EMIF_COS_CONFIG]
str r1, [r2, #EMIF_COS_CONFIG_OFFSET]
ldr r1, [r0, #EMIF_PRIORITY_TO_CLASS_OF_SERVICE_MAPPING]
str r1, [r2, #EMIF_PRIORITY_TO_COS_MAPPING_OFFSET]
ldr r1, [r0, #EMIF_CONNECTION_ID_TO_CLASS_OF_SERVICE_1_MAPPING]
str r1, [r2, #EMIF_CONNECT_ID_SERV_1_MAP_OFFSET]
ldr r1, [r0, #EMIF_CONNECTION_ID_TO_CLASS_OF_SERVICE_2_MAPPING]
str r1, [r2, #EMIF_CONNECT_ID_SERV_2_MAP_OFFSET]
ldr r1, [r0, #EMIF_OCP_CONFIG]
str r1, [r2, #EMIF_OCP_CONFIG_VAL_OFFSET]
ldr r5, [r4, #EMIF_PM_CONFIG_OFFSET]
cmp r5, #EMIF_SRAM_AM43_REG_LAYOUT
bne emif_skip_save_extra_regs
ldr r1, [r0, #EMIF_READ_WRITE_LEVELING_RAMP_CONTROL]
str r1, [r2, #EMIF_RD_WR_LEVEL_RAMP_CTRL_OFFSET]
ldr r1, [r0, #EMIF_READ_WRITE_EXECUTION_THRESHOLD]
str r1, [r2, #EMIF_RD_WR_EXEC_THRESH_OFFSET]
ldr r1, [r0, #EMIF_LPDDR2_NVM_TIMING]
str r1, [r2, #EMIF_LPDDR2_NVM_TIM_OFFSET]
ldr r1, [r0, #EMIF_LPDDR2_NVM_TIMING_SHDW]
str r1, [r2, #EMIF_LPDDR2_NVM_TIM_SHDW_OFFSET]
ldr r1, [r0, #EMIF_DLL_CALIB_CTRL]
str r1, [r2, #EMIF_DLL_CALIB_CTRL_VAL_OFFSET]
ldr r1, [r0, #EMIF_DLL_CALIB_CTRL_SHDW]
str r1, [r2, #EMIF_DLL_CALIB_CTRL_VAL_SHDW_OFFSET]
/* Loop and save entire block of emif phy regs */
mov r5, #0x0
add r4, r2, #EMIF_EXT_PHY_CTRL_VALS_OFFSET
add r3, r0, #EMIF_EXT_PHY_CTRL_1
ddr_phy_ctrl_save:
ldr r1, [r3, r5]
str r1, [r4, r5]
add r5, r5, #0x4
cmp r5, #AM43XX_EMIF_PHY_CTRL_REG_COUNT
bne ddr_phy_ctrl_save
emif_skip_save_extra_regs:
ldmfd sp!, {r4 - r11, pc} @ restore regs and return
ENDPROC(ti_emif_save_context)
/*
* void ti_emif_restore_context(void)
*
* Used during resume to restore the context of all required EMIF registers
* from local memory after the EMIF has lost context during a sleep transition.
* Operates on the PHYSICAL address of the EMIF.
*/
ENTRY(ti_emif_restore_context)
adr r4, ti_emif_pm_sram_data
ldr r0, [r4, #EMIF_PM_BASE_ADDR_PHYS_OFFSET]
ldr r2, [r4, #EMIF_PM_REGS_PHYS_OFFSET]
/* Config EMIF Timings */
ldr r1, [r2, #EMIF_DDR_PHY_CTLR_1_OFFSET]
str r1, [r0, #EMIF_DDR_PHY_CTRL_1]
str r1, [r0, #EMIF_DDR_PHY_CTRL_1_SHDW]
ldr r1, [r2, #EMIF_TIMING1_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_TIMING_1]
str r1, [r0, #EMIF_SDRAM_TIMING_1_SHDW]
ldr r1, [r2, #EMIF_TIMING2_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_TIMING_2]
str r1, [r0, #EMIF_SDRAM_TIMING_2_SHDW]
ldr r1, [r2, #EMIF_TIMING3_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_TIMING_3]
str r1, [r0, #EMIF_SDRAM_TIMING_3_SHDW]
ldr r1, [r2, #EMIF_REF_CTRL_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_REFRESH_CONTROL]
str r1, [r0, #EMIF_SDRAM_REFRESH_CTRL_SHDW]
ldr r1, [r2, #EMIF_PMCR_VAL_OFFSET]
str r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
ldr r1, [r2, #EMIF_PMCR_SHDW_VAL_OFFSET]
str r1, [r0, #EMIF_POWER_MANAGEMENT_CTRL_SHDW]
ldr r1, [r2, #EMIF_COS_CONFIG_OFFSET]
str r1, [r0, #EMIF_COS_CONFIG]
ldr r1, [r2, #EMIF_PRIORITY_TO_COS_MAPPING_OFFSET]
str r1, [r0, #EMIF_PRIORITY_TO_CLASS_OF_SERVICE_MAPPING]
ldr r1, [r2, #EMIF_CONNECT_ID_SERV_1_MAP_OFFSET]
str r1, [r0, #EMIF_CONNECTION_ID_TO_CLASS_OF_SERVICE_1_MAPPING]
ldr r1, [r2, #EMIF_CONNECT_ID_SERV_2_MAP_OFFSET]
str r1, [r0, #EMIF_CONNECTION_ID_TO_CLASS_OF_SERVICE_2_MAPPING]
ldr r1, [r2, #EMIF_OCP_CONFIG_VAL_OFFSET]
str r1, [r0, #EMIF_OCP_CONFIG]
ldr r5, [r4, #EMIF_PM_CONFIG_OFFSET]
cmp r5, #EMIF_SRAM_AM43_REG_LAYOUT
bne emif_skip_restore_extra_regs
ldr r1, [r2, #EMIF_RD_WR_LEVEL_RAMP_CTRL_OFFSET]
str r1, [r0, #EMIF_READ_WRITE_LEVELING_RAMP_CONTROL]
ldr r1, [r2, #EMIF_RD_WR_EXEC_THRESH_OFFSET]
str r1, [r0, #EMIF_READ_WRITE_EXECUTION_THRESHOLD]
ldr r1, [r2, #EMIF_LPDDR2_NVM_TIM_OFFSET]
str r1, [r0, #EMIF_LPDDR2_NVM_TIMING]
ldr r1, [r2, #EMIF_LPDDR2_NVM_TIM_SHDW_OFFSET]
str r1, [r0, #EMIF_LPDDR2_NVM_TIMING_SHDW]
ldr r1, [r2, #EMIF_DLL_CALIB_CTRL_VAL_OFFSET]
str r1, [r0, #EMIF_DLL_CALIB_CTRL]
ldr r1, [r2, #EMIF_DLL_CALIB_CTRL_VAL_SHDW_OFFSET]
str r1, [r0, #EMIF_DLL_CALIB_CTRL_SHDW]
ldr r1, [r2, #EMIF_ZQCFG_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_OUTPUT_IMPEDANCE_CALIBRATION_CONFIG]
/* Loop and restore entire block of emif phy regs */
mov r5, #0x0
/* Load ti_emif_regs_amx3 + EMIF_EXT_PHY_CTRL_VALS_OFFSET for address
* to phy register save space
*/
add r3, r2, #EMIF_EXT_PHY_CTRL_VALS_OFFSET
add r4, r0, #EMIF_EXT_PHY_CTRL_1
ddr_phy_ctrl_restore:
ldr r1, [r3, r5]
str r1, [r4, r5]
add r5, r5, #0x4
cmp r5, #AM43XX_EMIF_PHY_CTRL_REG_COUNT
bne ddr_phy_ctrl_restore
emif_skip_restore_extra_regs:
/*
* Output impedance calibration is needed only for DDR3,
* but since it powers up disabled for DDR2 there is no
* harm in restoring the old configuration.
*/
ldr r1, [r2, #EMIF_ZQCFG_VAL_OFFSET]
str r1, [r0, #EMIF_SDRAM_OUTPUT_IMPEDANCE_CALIBRATION_CONFIG]
/* Write to sdcfg last for DDR2 only */
ldr r1, [r2, #EMIF_SDCFG_VAL_OFFSET]
and r2, r1, #SDRAM_TYPE_MASK
cmp r2, #EMIF_SDCFG_TYPE_DDR2
streq r1, [r0, #EMIF_SDRAM_CONFIG]
mov pc, lr
ENDPROC(ti_emif_restore_context)
/*
* void ti_emif_enter_sr(void)
*
* Programs the EMIF to tell the SDRAM to enter into self-refresh
* mode during a sleep transition. Operates on the VIRTUAL address
* of the EMIF.
*/
ENTRY(ti_emif_enter_sr)
stmfd sp!, {r4 - r11, lr} @ save registers on stack
adr r4, ti_emif_pm_sram_data
ldr r0, [r4, #EMIF_PM_BASE_ADDR_VIRT_OFFSET]
ldr r2, [r4, #EMIF_PM_REGS_VIRT_OFFSET]
ldr r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
bic r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE_MASK
orr r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE
str r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
ldmfd sp!, {r4 - r11, pc} @ restore regs and return
ENDPROC(ti_emif_enter_sr)
/*
* void ti_emif_exit_sr(void)
*
* Programs the EMIF to tell the SDRAM to exit self-refresh mode
* after a sleep transition. Operates on the PHYSICAL address of
* the EMIF.
*/
ENTRY(ti_emif_exit_sr)
adr r4, ti_emif_pm_sram_data
ldr r0, [r4, #EMIF_PM_BASE_ADDR_PHYS_OFFSET]
ldr r2, [r4, #EMIF_PM_REGS_PHYS_OFFSET]
/*
* Toggle EMIF to exit refresh mode:
* if the EMIF lost context, PWR_MGT_CTRL is currently 0, so writing
* disable (0x0) won't do anything; do a toggle from SR (0x2) to
* disable (0x0) here.
* *If* the EMIF did not lose context, nothing breaks, as we write the
* same value (0x2) to the reg before we write a disable (0x0).
*/
ldr r1, [r2, #EMIF_PMCR_VAL_OFFSET]
bic r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE_MASK
orr r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE
str r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
bic r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE_MASK
str r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
/* Wait for EMIF to become ready */
1: ldr r1, [r0, #EMIF_STATUS]
tst r1, #EMIF_STATUS_READY
beq 1b
mov pc, lr
ENDPROC(ti_emif_exit_sr)
/*
* void ti_emif_abort_sr(void)
*
* Disables self-refresh after a failed transition to a low-power
* state so the kernel can jump back to DDR and follow abort path.
* Operates on the VIRTUAL address of the EMIF.
*/
ENTRY(ti_emif_abort_sr)
stmfd sp!, {r4 - r11, lr} @ save registers on stack
adr r4, ti_emif_pm_sram_data
ldr r0, [r4, #EMIF_PM_BASE_ADDR_VIRT_OFFSET]
ldr r2, [r4, #EMIF_PM_REGS_VIRT_OFFSET]
ldr r1, [r2, #EMIF_PMCR_VAL_OFFSET]
bic r1, r1, #EMIF_POWER_MGMT_SELF_REFRESH_MODE_MASK
str r1, [r0, #EMIF_POWER_MANAGEMENT_CONTROL]
/* Wait for EMIF to become ready */
1: ldr r1, [r0, #EMIF_STATUS]
tst r1, #EMIF_STATUS_READY
beq 1b
ldmfd sp!, {r4 - r11, pc} @ restore regs and return
ENDPROC(ti_emif_abort_sr)
.align 3
ENTRY(ti_emif_pm_sram_data)
.space EMIF_PM_DATA_SIZE
ENTRY(ti_emif_sram_sz)
.word . - ti_emif_save_context


@@ -513,6 +513,12 @@ static int __init of_platform_default_populate_init(void)
for_each_matching_node(node, reserved_mem_matches)
of_platform_device_create(node, NULL, NULL);
node = of_find_node_by_path("/firmware");
if (node) {
of_platform_populate(node, NULL, NULL, NULL);
of_node_put(node);
}
/* Populate everything else. */
of_platform_default_populate(NULL, NULL, NULL);


@@ -236,10 +236,8 @@ static int at91_cf_dt_init(struct platform_device *pdev)
pdev->dev.platform_data = board;
mc = syscon_regmap_lookup_by_compatible("atmel,at91rm9200-sdramc");
if (IS_ERR(mc))
return PTR_ERR(mc);
return 0;
return PTR_ERR_OR_ZERO(mc);
}
#else
static int at91_cf_dt_init(struct platform_device *pdev)


@@ -566,17 +566,18 @@ EXPORT_SYMBOL_GPL(__devm_reset_control_get);
* device_reset - find reset controller associated with the device
* and perform reset
* @dev: device to be reset by the controller
* @optional: whether it is optional to reset the device
*
* Convenience wrapper for reset_control_get() and reset_control_reset().
* Convenience wrapper for __reset_control_get() and reset_control_reset().
* This is useful for the common case of devices with single, dedicated reset
* lines.
*/
int device_reset(struct device *dev)
int __device_reset(struct device *dev, bool optional)
{
struct reset_control *rstc;
int ret;
rstc = reset_control_get(dev, NULL);
rstc = __reset_control_get(dev, NULL, 0, 0, optional);
if (IS_ERR(rstc))
return PTR_ERR(rstc);
@@ -586,7 +587,7 @@ int device_reset(struct device *dev)
return ret;
}
EXPORT_SYMBOL_GPL(device_reset);
EXPORT_SYMBOL_GPL(__device_reset);
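For context, this rework lets include/linux/reset.h provide both variants as thin wrappers; a sketch consistent with the new signature (the authoritative definitions live in the header):

static inline int device_reset(struct device *dev)
{
	return __device_reset(dev, false);
}

static inline int device_reset_optional(struct device *dev)
{
	return __device_reset(dev, true);
}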
/**
* APIs to manage an array of reset controls.


@@ -139,6 +139,8 @@ static const struct of_device_id meson_reset_dt_ids[] = {
.data = &meson_reset_meson8_ops, },
{ .compatible = "amlogic,meson-gxbb-reset",
.data = &meson_reset_gx_ops, },
{ .compatible = "amlogic,meson-axg-reset",
.data = &meson_reset_gx_ops, },
{ /* sentinel */ },
};


@@ -16,6 +16,7 @@ source "drivers/soc/tegra/Kconfig"
source "drivers/soc/ti/Kconfig"
source "drivers/soc/ux500/Kconfig"
source "drivers/soc/versatile/Kconfig"
source "drivers/soc/xilinx/Kconfig"
source "drivers/soc/zte/Kconfig"
endmenu


@@ -23,4 +23,5 @@ obj-$(CONFIG_ARCH_TEGRA) += tegra/
obj-$(CONFIG_SOC_TI) += ti/
obj-$(CONFIG_ARCH_U8500) += ux500/
obj-$(CONFIG_PLAT_VERSATILE) += versatile/
obj-y += xilinx/
obj-$(CONFIG_ARCH_ZX) += zte/


@@ -17,6 +17,7 @@
#include <linux/pm_domain.h>
#include <linux/soc/actions/owl-sps.h>
#include <dt-bindings/power/owl-s500-powergate.h>
#include <dt-bindings/power/owl-s700-powergate.h>
struct owl_sps_domain_info {
const char *name;
@@ -203,8 +204,49 @@ static const struct owl_sps_info s500_sps_info = {
.domains = s500_sps_domains,
};
static const struct owl_sps_domain_info s700_sps_domains[] = {
[S700_PD_VDE] = {
.name = "VDE",
.pwr_bit = 0,
},
[S700_PD_VCE_SI] = {
.name = "VCE_SI",
.pwr_bit = 1,
},
[S700_PD_USB2_1] = {
.name = "USB2_1",
.pwr_bit = 2,
},
[S700_PD_HDE] = {
.name = "HDE",
.pwr_bit = 7,
},
[S700_PD_DMA] = {
.name = "DMA",
.pwr_bit = 8,
},
[S700_PD_DS] = {
.name = "DS",
.pwr_bit = 9,
},
[S700_PD_USB3] = {
.name = "USB3",
.pwr_bit = 10,
},
[S700_PD_USB2_0] = {
.name = "USB2_0",
.pwr_bit = 11,
},
};
static const struct owl_sps_info s700_sps_info = {
.num_domains = ARRAY_SIZE(s700_sps_domains),
.domains = s700_sps_domains,
};
static const struct of_device_id owl_sps_of_matches[] = {
{ .compatible = "actions,s500-sps", .data = &s500_sps_info },
{ .compatible = "actions,s700-sps", .data = &s700_sps_info },
{ }
};


@@ -21,11 +21,70 @@
#include <linux/syscore_ops.h>
#include <linux/soc/brcmstb/brcmstb.h>
#define CPU_CREDIT_REG_OFFSET 0x184
#define CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK 0x70000000
#define CPU_CREDIT_REG_MCPx_READ_CRED_MASK 0xf
#define CPU_CREDIT_REG_MCPx_WRITE_CRED_MASK 0xf
#define CPU_CREDIT_REG_MCPx_READ_CRED_SHIFT(x) ((x) * 8)
#define CPU_CREDIT_REG_MCPx_WRITE_CRED_SHIFT(x) (((x) * 8) + 4)
#define CPU_MCP_FLOW_REG_MCPx_RDBUFF_CRED_SHIFT(x) ((x) * 8)
#define CPU_MCP_FLOW_REG_MCPx_RDBUFF_CRED_MASK 0xff
#define CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_THRESHOLD_MASK 0xf
#define CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_TIMEOUT_MASK 0xf
#define CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_TIMEOUT_SHIFT 4
#define CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_ENABLE BIT(8)
static void __iomem *cpubiuctrl_base;
static bool mcp_wr_pairing_en;
static const int *cpubiuctrl_regs;
static inline u32 cbc_readl(int reg)
{
int offset = cpubiuctrl_regs[reg];
if (offset == -1)
return (u32)-1;
return readl_relaxed(cpubiuctrl_base + offset);
}
static inline void cbc_writel(u32 val, int reg)
{
int offset = cpubiuctrl_regs[reg];
if (offset == -1)
return;
writel_relaxed(val, cpubiuctrl_base + offset);
}
enum cpubiuctrl_regs {
CPU_CREDIT_REG = 0,
CPU_MCP_FLOW_REG,
CPU_WRITEBACK_CTRL_REG
};
static const int b15_cpubiuctrl_regs[] = {
[CPU_CREDIT_REG] = 0x184,
[CPU_MCP_FLOW_REG] = -1,
[CPU_WRITEBACK_CTRL_REG] = -1,
};
/* Odd cases, e.g.: 7260 */
static const int b53_cpubiuctrl_no_wb_regs[] = {
[CPU_CREDIT_REG] = 0x0b0,
[CPU_MCP_FLOW_REG] = 0x0b4,
[CPU_WRITEBACK_CTRL_REG] = -1,
};
static const int b53_cpubiuctrl_regs[] = {
[CPU_CREDIT_REG] = 0x0b0,
[CPU_MCP_FLOW_REG] = 0x0b4,
[CPU_WRITEBACK_CTRL_REG] = 0x22c,
};
#define NUM_CPU_BIUCTRL_REGS 3
static int __init mcp_write_pairing_set(void)
{
@@ -34,15 +93,15 @@ static int __init mcp_write_pairing_set(void)
if (!cpubiuctrl_base)
return -1;
creds = readl_relaxed(cpubiuctrl_base + CPU_CREDIT_REG_OFFSET);
creds = cbc_readl(CPU_CREDIT_REG);
if (mcp_wr_pairing_en) {
pr_info("MCP: Enabling write pairing\n");
writel_relaxed(creds | CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK,
cpubiuctrl_base + CPU_CREDIT_REG_OFFSET);
cbc_writel(creds | CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK,
CPU_CREDIT_REG);
} else if (creds & CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK) {
pr_info("MCP: Disabling write pairing\n");
writel_relaxed(creds & ~CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK,
cpubiuctrl_base + CPU_CREDIT_REG_OFFSET);
cbc_writel(creds & ~CPU_CREDIT_REG_MCPx_WR_PAIRING_EN_MASK,
CPU_CREDIT_REG);
} else {
pr_info("MCP: Write pairing already disabled\n");
}
@@ -50,17 +109,64 @@
return 0;
}
static int __init setup_hifcpubiuctrl_regs(void)
{
struct device_node *np;
int ret = 0;
static const u32 b53_mach_compat[] = {
0x7268,
0x7271,
0x7278,
};
np = of_find_compatible_node(NULL, NULL, "brcm,brcmstb-cpu-biu-ctrl");
if (!np) {
pr_err("missing BIU control node\n");
return -ENODEV;
static void __init mcp_b53_set(void)
{
unsigned int i;
u32 reg;
reg = brcmstb_get_family_id();
for (i = 0; i < ARRAY_SIZE(b53_mach_compat); i++) {
if (BRCM_ID(reg) == b53_mach_compat[i])
break;
}
if (i == ARRAY_SIZE(b53_mach_compat))
return;
/* Set all 3 MCP interfaces to 8 credits */
reg = cbc_readl(CPU_CREDIT_REG);
for (i = 0; i < 3; i++) {
reg &= ~(CPU_CREDIT_REG_MCPx_WRITE_CRED_MASK <<
CPU_CREDIT_REG_MCPx_WRITE_CRED_SHIFT(i));
reg &= ~(CPU_CREDIT_REG_MCPx_READ_CRED_MASK <<
CPU_CREDIT_REG_MCPx_READ_CRED_SHIFT(i));
reg |= 8 << CPU_CREDIT_REG_MCPx_WRITE_CRED_SHIFT(i);
reg |= 8 << CPU_CREDIT_REG_MCPx_READ_CRED_SHIFT(i);
}
cbc_writel(reg, CPU_CREDIT_REG);
/* Max out the number of in-flight Jwords reads on the MCP interface */
reg = cbc_readl(CPU_MCP_FLOW_REG);
for (i = 0; i < 3; i++)
reg |= CPU_MCP_FLOW_REG_MCPx_RDBUFF_CRED_MASK <<
CPU_MCP_FLOW_REG_MCPx_RDBUFF_CRED_SHIFT(i);
cbc_writel(reg, CPU_MCP_FLOW_REG);
/* Enable writeback throttling, set timeout to 128 cycles, 256 cycles
* threshold
*/
reg = cbc_readl(CPU_WRITEBACK_CTRL_REG);
reg |= CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_ENABLE;
reg &= ~CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_THRESHOLD_MASK;
reg &= ~(CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_TIMEOUT_MASK <<
CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_TIMEOUT_SHIFT);
reg |= 8;
reg |= 7 << CPU_WRITEBACK_CTRL_REG_WB_THROTTLE_TIMEOUT_SHIFT;
cbc_writel(reg, CPU_WRITEBACK_CTRL_REG);
}
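Illustrative only: per the shift/mask macros above, MCP interface i keeps its read credits in bits [8i+3:8i] and its write credits in bits [8i+7:8i+4], so the "8 credits everywhere" value composes as follows:

static inline u32 example_mcp_all_credits(void)
{
	u32 reg = 0;
	unsigned int i;

	for (i = 0; i < 3; i++) {
		reg |= 8 << CPU_CREDIT_REG_MCPx_READ_CRED_SHIFT(i);
		reg |= 8 << CPU_CREDIT_REG_MCPx_WRITE_CRED_SHIFT(i);
	}

	return reg;	/* == 0x00888888 */
}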
static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
{
struct device_node *cpu_dn;
int ret = 0;
cpubiuctrl_base = of_iomap(np, 0);
if (!cpubiuctrl_base) {
pr_err("failed to remap BIU control base\n");
@@ -69,27 +175,56 @@
}
mcp_wr_pairing_en = of_property_read_bool(np, "brcm,write-pairing");
cpu_dn = of_get_cpu_node(0, NULL);
if (!cpu_dn) {
pr_err("failed to obtain CPU device node\n");
ret = -ENODEV;
goto out;
}
if (of_device_is_compatible(cpu_dn, "brcm,brahma-b15"))
cpubiuctrl_regs = b15_cpubiuctrl_regs;
else if (of_device_is_compatible(cpu_dn, "brcm,brahma-b53"))
cpubiuctrl_regs = b53_cpubiuctrl_regs;
else {
pr_err("unsupported CPU\n");
ret = -EINVAL;
}
of_node_put(cpu_dn);
if (BRCM_ID(brcmstb_get_family_id()) == 0x7260)
cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs;
out:
of_node_put(np);
return ret;
}
#ifdef CONFIG_PM_SLEEP
static u32 cpu_credit_reg_dump; /* for save/restore */
static u32 cpubiuctrl_reg_save[NUM_CPU_BIUCTRL_REGS];
static int brcmstb_cpu_credit_reg_suspend(void)
{
if (cpubiuctrl_base)
cpu_credit_reg_dump =
readl_relaxed(cpubiuctrl_base + CPU_CREDIT_REG_OFFSET);
unsigned int i;
if (!cpubiuctrl_base)
return 0;
for (i = 0; i < NUM_CPU_BIUCTRL_REGS; i++)
cpubiuctrl_reg_save[i] = cbc_readl(i);
return 0;
}
static void brcmstb_cpu_credit_reg_resume(void)
{
if (cpubiuctrl_base)
writel_relaxed(cpu_credit_reg_dump,
cpubiuctrl_base + CPU_CREDIT_REG_OFFSET);
unsigned int i;
if (!cpubiuctrl_base)
return;
for (i = 0; i < NUM_CPU_BIUCTRL_REGS; i++)
cbc_writel(cpubiuctrl_reg_save[i], i);
}
static struct syscore_ops brcmstb_cpu_credit_syscore_ops = {
@@ -99,19 +234,30 @@ static struct syscore_ops brcmstb_cpu_credit_syscore_ops = {
#endif
void __init brcmstb_biuctrl_init(void)
static int __init brcmstb_biuctrl_init(void)
{
struct device_node *np;
int ret;
setup_hifcpubiuctrl_regs();
/* We might be running on a multi-platform kernel, don't make this a
* fatal error, just bail out early
*/
np = of_find_compatible_node(NULL, NULL, "brcm,brcmstb-cpu-biu-ctrl");
if (!np)
return 0;
setup_hifcpubiuctrl_regs(np);
ret = mcp_write_pairing_set();
if (ret) {
pr_err("MCP: Unable to disable write pairing!\n");
return;
return ret;
}
mcp_b53_set();
#ifdef CONFIG_PM_SLEEP
register_syscore_ops(&brcmstb_cpu_credit_syscore_ops);
#endif
return 0;
}
early_initcall(brcmstb_biuctrl_init);


@@ -66,24 +66,47 @@ static const struct of_device_id sun_top_ctrl_match[] = {
{ }
};
static int __init brcmstb_soc_device_init(void)
static int __init brcmstb_soc_device_early_init(void)
{
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
struct device_node *sun_top_ctrl;
void __iomem *sun_top_ctrl_base;
int ret = 0;
/* We could be on a multi-platform kernel, don't make this fatal but
* bail out early
*/
sun_top_ctrl = of_find_matching_node(NULL, sun_top_ctrl_match);
if (!sun_top_ctrl)
return -ENODEV;
return ret;
sun_top_ctrl_base = of_iomap(sun_top_ctrl, 0);
if (!sun_top_ctrl_base)
return -ENODEV;
if (!sun_top_ctrl_base) {
ret = -ENODEV;
goto out;
}
family_id = readl(sun_top_ctrl_base);
product_id = readl(sun_top_ctrl_base + 0x4);
iounmap(sun_top_ctrl_base);
out:
of_node_put(sun_top_ctrl);
return ret;
}
early_initcall(brcmstb_soc_device_early_init);
static int __init brcmstb_soc_device_init(void)
{
struct soc_device_attribute *soc_dev_attr;
struct device_node *sun_top_ctrl;
struct soc_device *soc_dev;
int ret = 0;
/* We could be on a multi-platform kernel, don't make this fatal but
* bail out early
*/
sun_top_ctrl = of_find_matching_node(NULL, sun_top_ctrl_match);
if (!sun_top_ctrl)
return ret;
soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
if (!soc_dev_attr) {
@@ -107,14 +130,10 @@ static int __init brcmstb_soc_device_init(void)
kfree(soc_dev_attr->soc_id);
kfree(soc_dev_attr->revision);
kfree(soc_dev_attr);
ret = -ENODEV;
goto out;
ret = -ENOMEM;
}
return 0;
out:
iounmap(sun_top_ctrl_base);
of_node_put(sun_top_ctrl);
return ret;
}
arch_initcall(brcmstb_soc_device_init);


@@ -167,10 +167,16 @@ static int fsl_guts_probe(struct platform_device *pdev)
} else {
soc_dev_attr.family = devm_kasprintf(dev, GFP_KERNEL, "QorIQ");
}
if (!soc_dev_attr.family)
return -ENOMEM;
soc_dev_attr.soc_id = devm_kasprintf(dev, GFP_KERNEL,
"svr:0x%08x", svr);
if (!soc_dev_attr.soc_id)
return -ENOMEM;
soc_dev_attr.revision = devm_kasprintf(dev, GFP_KERNEL, "%d.%d",
(svr >> 4) & 0xf, svr & 0xf);
if (!soc_dev_attr.revision)
return -ENOMEM;
soc_dev = soc_device_register(&soc_dev_attr);
if (IS_ERR(soc_dev))
@@ -214,6 +220,8 @@ static const struct of_device_id fsl_guts_of_match[] = {
{ .compatible = "fsl,ls1043a-dcfg", },
{ .compatible = "fsl,ls2080a-dcfg", },
{ .compatible = "fsl,ls1088a-dcfg", },
{ .compatible = "fsl,ls1012a-dcfg", },
{ .compatible = "fsl,ls1046a-dcfg", },
{}
};
MODULE_DEVICE_TABLE(of, fsl_guts_of_match);


@@ -273,7 +273,15 @@ static struct imx_pm_domain imx_gpc_domains[] = {
},
.reg_offs = 0x240,
.cntr_pdn_bit = 4,
}
}, {
.base = {
.name = "PCI",
.power_off = imx6_pm_domain_power_off,
.power_on = imx6_pm_domain_power_on,
},
.reg_offs = 0x200,
.cntr_pdn_bit = 6,
},
};
struct imx_gpc_dt_data {
@@ -296,10 +304,16 @@ static const struct imx_gpc_dt_data imx6sl_dt_data = {
.err009619_present = false,
};
static const struct imx_gpc_dt_data imx6sx_dt_data = {
.num_domains = 4,
.err009619_present = false,
};
static const struct of_device_id imx_gpc_dt_ids[] = {
{ .compatible = "fsl,imx6q-gpc", .data = &imx6q_dt_data },
{ .compatible = "fsl,imx6qp-gpc", .data = &imx6qp_dt_data },
{ .compatible = "fsl,imx6sl-gpc", .data = &imx6sl_dt_data },
{ .compatible = "fsl,imx6sx-gpc", .data = &imx6sx_dt_data },
{ }
};


@@ -35,6 +35,15 @@ config QCOM_PM
modes. It interfaces with various system drivers to put the cores in
low power modes.
config QCOM_QMI_HELPERS
tristate
depends on ARCH_QCOM
help
Helper library for handling QMI encoded messages. QMI encoded
messages are used in communication between the majority of QRTR
clients, and these helpers provide the common functionality needed for
doing this from a kernel driver.
config QCOM_RMTFS_MEM
tristate "Qualcomm Remote Filesystem memory driver"
depends on ARCH_QCOM
@@ -75,6 +84,7 @@ config QCOM_SMEM_STATE
config QCOM_SMP2P
tristate "Qualcomm Shared Memory Point to Point support"
depends on MAILBOX
depends on QCOM_SMEM
select QCOM_SMEM_STATE
help


@@ -3,6 +3,8 @@ obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o
obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o
obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
qmi_helpers-y += qmi_encdec.o qmi_interface.o
obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o


@@ -0,0 +1,816 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
* Copyright (C) 2017 Linaro Ltd.
*/
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/soc/qcom/qmi.h>
#define QMI_ENCDEC_ENCODE_TLV(type, length, p_dst) do { \
*p_dst++ = type; \
*p_dst++ = ((u8)((length) & 0xFF)); \
*p_dst++ = ((u8)(((length) >> 8) & 0xFF)); \
} while (0)
#define QMI_ENCDEC_DECODE_TLV(p_type, p_length, p_src) do { \
*p_type = (u8)*p_src++; \
*p_length = (u8)*p_src++; \
*p_length |= ((u8)*p_src) << 8; \
} while (0)
#define QMI_ENCDEC_ENCODE_N_BYTES(p_dst, p_src, size) \
do { \
memcpy(p_dst, p_src, size); \
p_dst = (u8 *)p_dst + size; \
p_src = (u8 *)p_src + size; \
} while (0)
#define QMI_ENCDEC_DECODE_N_BYTES(p_dst, p_src, size) \
do { \
memcpy(p_dst, p_src, size); \
p_dst = (u8 *)p_dst + size; \
p_src = (u8 *)p_src + size; \
} while (0)
#define UPDATE_ENCODE_VARIABLES(temp_si, buf_dst, \
encoded_bytes, tlv_len, encode_tlv, rc) \
do { \
buf_dst = (u8 *)buf_dst + rc; \
encoded_bytes += rc; \
tlv_len += rc; \
temp_si = temp_si + 1; \
encode_tlv = 1; \
} while (0)
#define UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc) \
do { \
buf_src = (u8 *)buf_src + rc; \
decoded_bytes += rc; \
} while (0)
#define TLV_LEN_SIZE sizeof(u16)
#define TLV_TYPE_SIZE sizeof(u8)
#define OPTIONAL_TLV_TYPE_START 0x10
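A worked example of the wire format these macros produce: one type byte, a little-endian 16-bit length, then the payload. Encoding the u16 value 0x1234 as TLV type 0x01 (on a little-endian host) yields the bytes 0x01 0x02 0x00 0x34 0x12:

static void example_tlv_encode(void)
{
	u8 buf[8], *p = buf;
	u16 val = 0x1234;

	QMI_ENCDEC_ENCODE_TLV(0x01, sizeof(val), p);	/* type + LE16 length */
	memcpy(p, &val, sizeof(val));			/* payload, host (LE) order */
}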
static int qmi_encode(struct qmi_elem_info *ei_array, void *out_buf,
const void *in_c_struct, u32 out_buf_len,
int enc_level);
static int qmi_decode(struct qmi_elem_info *ei_array, void *out_c_struct,
const void *in_buf, u32 in_buf_len, int dec_level);
/**
* skip_to_next_elem() - Skip to next element in the structure to be encoded
* @ei_array: Struct info describing the element to be skipped.
* @level: Depth level of encoding/decoding to identify nested structures.
*
* This function is used while encoding optional elements. If the flag
* corresponding to an optional element is not set, then encoding the
* optional element can be skipped. This function can be used to perform
* that operation.
*
* Return: struct info of the next element that can be encoded.
*/
static struct qmi_elem_info *skip_to_next_elem(struct qmi_elem_info *ei_array,
int level)
{
struct qmi_elem_info *temp_ei = ei_array;
u8 tlv_type;
if (level > 1) {
temp_ei = temp_ei + 1;
} else {
do {
tlv_type = temp_ei->tlv_type;
temp_ei = temp_ei + 1;
} while (tlv_type == temp_ei->tlv_type);
}
return temp_ei;
}
/**
* qmi_calc_min_msg_len() - Calculate the minimum length of a QMI message
* @ei_array: Struct info array describing the structure.
* @level: Level to identify the depth of the nested structures.
*
* Return: Expected minimum length of the QMI message or 0 on error.
*/
static int qmi_calc_min_msg_len(struct qmi_elem_info *ei_array,
int level)
{
int min_msg_len = 0;
struct qmi_elem_info *temp_ei = ei_array;
if (!ei_array)
return min_msg_len;
while (temp_ei->data_type != QMI_EOTI) {
/* Optional elements do not count in minimum length */
if (temp_ei->data_type == QMI_OPT_FLAG) {
temp_ei = skip_to_next_elem(temp_ei, level);
continue;
}
if (temp_ei->data_type == QMI_DATA_LEN) {
min_msg_len += (temp_ei->elem_size == sizeof(u8) ?
sizeof(u8) : sizeof(u16));
temp_ei++;
continue;
} else if (temp_ei->data_type == QMI_STRUCT) {
min_msg_len += qmi_calc_min_msg_len(temp_ei->ei_array,
(level + 1));
temp_ei++;
} else if (temp_ei->data_type == QMI_STRING) {
if (level > 1)
min_msg_len += temp_ei->elem_len <= U8_MAX ?
sizeof(u8) : sizeof(u16);
min_msg_len += temp_ei->elem_len * temp_ei->elem_size;
temp_ei++;
} else {
min_msg_len += (temp_ei->elem_len * temp_ei->elem_size);
temp_ei++;
}
/*
* Type & length information is not prepended for elements in
* nested structures.
*/
if (level == 1)
min_msg_len += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
}
return min_msg_len;
}
/**
* qmi_encode_basic_elem() - Encodes elements of basic/primary data type
* @buf_dst: Buffer to store the encoded information.
* @buf_src: Buffer containing the elements to be encoded.
* @elem_len: Number of elements, in the buf_src, to be encoded.
* @elem_size: Size of a single instance of the element to be encoded.
*
* This function encodes the "elem_len" number of data elements, each of
* size "elem_size" bytes from the source buffer "buf_src" and stores the
* encoded information in the destination buffer "buf_dst". The elements are
* of a primary data type, such as u8 through u64. This
* function returns the number of bytes of encoded information.
*
* Return: The number of bytes of encoded information.
*/
static int qmi_encode_basic_elem(void *buf_dst, const void *buf_src,
u32 elem_len, u32 elem_size)
{
u32 i, rc = 0;
for (i = 0; i < elem_len; i++) {
QMI_ENCDEC_ENCODE_N_BYTES(buf_dst, buf_src, elem_size);
rc += elem_size;
}
return rc;
}
/**
* qmi_encode_struct_elem() - Encodes elements of struct data type
* @ei_array: Struct info array describing the struct element.
* @buf_dst: Buffer to store the encoded information.
* @buf_src: Buffer containing the elements to be encoded.
* @elem_len: Number of elements, in the buf_src, to be encoded.
* @out_buf_len: Available space in the encode buffer.
* @enc_level: Depth of the nested structure from the main structure.
*
* This function encodes the "elem_len" number of struct elements, each of
* size "ei_array->elem_size" bytes from the source buffer "buf_src" and
* stores the encoded information in the destination buffer "buf_dst". The
* elements are of struct data type which includes any C structure. This
* function returns the number of bytes of encoded information.
*
* Return: The number of bytes of encoded information on success or negative
* errno on error.
*/
static int qmi_encode_struct_elem(struct qmi_elem_info *ei_array,
void *buf_dst, const void *buf_src,
u32 elem_len, u32 out_buf_len,
int enc_level)
{
int i, rc, encoded_bytes = 0;
struct qmi_elem_info *temp_ei = ei_array;
for (i = 0; i < elem_len; i++) {
rc = qmi_encode(temp_ei->ei_array, buf_dst, buf_src,
out_buf_len - encoded_bytes, enc_level);
if (rc < 0) {
pr_err("%s: STRUCT Encode failure\n", __func__);
return rc;
}
buf_dst = buf_dst + rc;
buf_src = buf_src + temp_ei->elem_size;
encoded_bytes += rc;
}
return encoded_bytes;
}
/**
* qmi_encode_string_elem() - Encodes elements of string data type
* @ei_array: Struct info array describing the string element.
* @buf_dst: Buffer to store the encoded information.
* @buf_src: Buffer containing the elements to be encoded.
* @out_buf_len: Available space in the encode buffer.
* @enc_level: Depth of the string element from the main structure.
*
* This function encodes a string element of maximum length "ei_array->elem_len"
* bytes from the source buffer "buf_src" and stores the encoded information in
* the destination buffer "buf_dst". This function returns the number of bytes
* of encoded information.
*
* Return: The number of bytes of encoded information on success or negative
* errno on error.
*/
static int qmi_encode_string_elem(struct qmi_elem_info *ei_array,
void *buf_dst, const void *buf_src,
u32 out_buf_len, int enc_level)
{
int rc;
int encoded_bytes = 0;
struct qmi_elem_info *temp_ei = ei_array;
u32 string_len = 0;
u32 string_len_sz = 0;
string_len = strlen(buf_src);
string_len_sz = temp_ei->elem_len <= U8_MAX ?
sizeof(u8) : sizeof(u16);
if (string_len > temp_ei->elem_len) {
pr_err("%s: String to be encoded is longer - %d > %d\n",
__func__, string_len, temp_ei->elem_len);
return -EINVAL;
}
if (enc_level == 1) {
if (string_len + TLV_LEN_SIZE + TLV_TYPE_SIZE >
out_buf_len) {
pr_err("%s: Output len %d > Out Buf len %d\n",
__func__, string_len, out_buf_len);
return -ETOOSMALL;
}
} else {
if (string_len + string_len_sz > out_buf_len) {
pr_err("%s: Output len %d > Out Buf len %d\n",
__func__, string_len, out_buf_len);
return -ETOOSMALL;
}
rc = qmi_encode_basic_elem(buf_dst, &string_len,
1, string_len_sz);
encoded_bytes += rc;
}
rc = qmi_encode_basic_elem(buf_dst + encoded_bytes, buf_src,
string_len, temp_ei->elem_size);
encoded_bytes += rc;
return encoded_bytes;
}
/**
* qmi_encode() - Core Encode Function
* @ei_array: Struct info array describing the structure to be encoded.
* @out_buf: Buffer to hold the encoded QMI message.
* @in_c_struct: Pointer to the C structure to be encoded.
* @out_buf_len: Available space in the encode buffer.
* @enc_level: Encode level to indicate the depth of the nested structure,
* within the main structure, being encoded.
*
* Return: The number of bytes of encoded information on success or negative
* errno on error.
*/
static int qmi_encode(struct qmi_elem_info *ei_array, void *out_buf,
const void *in_c_struct, u32 out_buf_len,
int enc_level)
{
struct qmi_elem_info *temp_ei = ei_array;
u8 opt_flag_value = 0;
u32 data_len_value = 0, data_len_sz;
u8 *buf_dst = (u8 *)out_buf;
u8 *tlv_pointer;
u32 tlv_len;
u8 tlv_type;
u32 encoded_bytes = 0;
const void *buf_src;
int encode_tlv = 0;
int rc;
if (!ei_array)
return 0;
tlv_pointer = buf_dst;
tlv_len = 0;
if (enc_level == 1)
buf_dst = buf_dst + (TLV_LEN_SIZE + TLV_TYPE_SIZE);
while (temp_ei->data_type != QMI_EOTI) {
buf_src = in_c_struct + temp_ei->offset;
tlv_type = temp_ei->tlv_type;
if (temp_ei->array_type == NO_ARRAY) {
data_len_value = 1;
} else if (temp_ei->array_type == STATIC_ARRAY) {
data_len_value = temp_ei->elem_len;
} else if (data_len_value <= 0 ||
temp_ei->elem_len < data_len_value) {
pr_err("%s: Invalid data length\n", __func__);
return -EINVAL;
}
switch (temp_ei->data_type) {
case QMI_OPT_FLAG:
rc = qmi_encode_basic_elem(&opt_flag_value, buf_src,
1, sizeof(u8));
if (opt_flag_value)
temp_ei = temp_ei + 1;
else
temp_ei = skip_to_next_elem(temp_ei, enc_level);
break;
case QMI_DATA_LEN:
memcpy(&data_len_value, buf_src, temp_ei->elem_size);
data_len_sz = temp_ei->elem_size == sizeof(u8) ?
sizeof(u8) : sizeof(u16);
/* Check to avoid out of range buffer access */
if ((data_len_sz + encoded_bytes + TLV_LEN_SIZE +
TLV_TYPE_SIZE) > out_buf_len) {
pr_err("%s: Too Small Buffer @DATA_LEN\n",
__func__);
return -ETOOSMALL;
}
rc = qmi_encode_basic_elem(buf_dst, &data_len_value,
1, data_len_sz);
UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
encoded_bytes, tlv_len,
encode_tlv, rc);
if (!data_len_value)
temp_ei = skip_to_next_elem(temp_ei, enc_level);
else
encode_tlv = 0;
break;
case QMI_UNSIGNED_1_BYTE:
case QMI_UNSIGNED_2_BYTE:
case QMI_UNSIGNED_4_BYTE:
case QMI_UNSIGNED_8_BYTE:
case QMI_SIGNED_2_BYTE_ENUM:
case QMI_SIGNED_4_BYTE_ENUM:
/* Check to avoid out of range buffer access */
if (((data_len_value * temp_ei->elem_size) +
encoded_bytes + TLV_LEN_SIZE + TLV_TYPE_SIZE) >
out_buf_len) {
pr_err("%s: Too Small Buffer @data_type:%d\n",
__func__, temp_ei->data_type);
return -ETOOSMALL;
}
rc = qmi_encode_basic_elem(buf_dst, buf_src,
data_len_value,
temp_ei->elem_size);
UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
encoded_bytes, tlv_len,
encode_tlv, rc);
break;
case QMI_STRUCT:
rc = qmi_encode_struct_elem(temp_ei, buf_dst, buf_src,
data_len_value,
out_buf_len - encoded_bytes,
enc_level + 1);
if (rc < 0)
return rc;
UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
encoded_bytes, tlv_len,
encode_tlv, rc);
break;
case QMI_STRING:
rc = qmi_encode_string_elem(temp_ei, buf_dst, buf_src,
out_buf_len - encoded_bytes,
enc_level);
if (rc < 0)
return rc;
UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
encoded_bytes, tlv_len,
encode_tlv, rc);
break;
default:
pr_err("%s: Unrecognized data type\n", __func__);
return -EINVAL;
}
if (encode_tlv && enc_level == 1) {
QMI_ENCDEC_ENCODE_TLV(tlv_type, tlv_len, tlv_pointer);
encoded_bytes += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
tlv_pointer = buf_dst;
tlv_len = 0;
buf_dst = buf_dst + TLV_LEN_SIZE + TLV_TYPE_SIZE;
encode_tlv = 0;
}
}
return encoded_bytes;
}
/**
* qmi_decode_basic_elem() - Decodes elements of basic/primary data type
* @buf_dst: Buffer to store the decoded element.
* @buf_src: Buffer containing the elements in QMI wire format.
* @elem_len: Number of elements to be decoded.
* @elem_size: Size of a single instance of the element to be decoded.
*
* This function decodes the "elem_len" number of elements in QMI wire format,
* each of size "elem_size" bytes from the source buffer "buf_src" and stores
* the decoded elements in the destination buffer "buf_dst". The elements are
* of a primary data type, such as u8 through u64. This
* function returns the number of bytes of decoded information.
*
* Return: The total size of the decoded data elements, in bytes.
*/
static int qmi_decode_basic_elem(void *buf_dst, const void *buf_src,
u32 elem_len, u32 elem_size)
{
u32 i, rc = 0;
for (i = 0; i < elem_len; i++) {
QMI_ENCDEC_DECODE_N_BYTES(buf_dst, buf_src, elem_size);
rc += elem_size;
}
return rc;
}
/**
* qmi_decode_struct_elem() - Decodes elements of struct data type
* @ei_array: Struct info array describing the struct element.
* @buf_dst: Buffer to store the decoded element.
* @buf_src: Buffer containing the elements in QMI wire format.
* @elem_len: Number of elements to be decoded.
* @tlv_len: Total size of the encoded information corresponding to
* this struct element.
* @dec_level: Depth of the nested structure from the main structure.
*
* This function decodes the "elem_len" number of elements in QMI wire format,
* each of size "(tlv_len/elem_len)" bytes from the source buffer "buf_src"
* and stores the decoded elements in the destination buffer "buf_dst". The
* elements are of struct data type which includes any C structure. This
* function returns the number of bytes of decoded information.
*
* Return: The total size of the decoded data elements on success, negative
* errno on error.
*/
static int qmi_decode_struct_elem(struct qmi_elem_info *ei_array,
void *buf_dst, const void *buf_src,
u32 elem_len, u32 tlv_len,
int dec_level)
{
int i, rc, decoded_bytes = 0;
struct qmi_elem_info *temp_ei = ei_array;
for (i = 0; i < elem_len && decoded_bytes < tlv_len; i++) {
rc = qmi_decode(temp_ei->ei_array, buf_dst, buf_src,
tlv_len - decoded_bytes, dec_level);
if (rc < 0)
return rc;
buf_src = buf_src + rc;
buf_dst = buf_dst + temp_ei->elem_size;
decoded_bytes += rc;
}
if ((dec_level <= 2 && decoded_bytes != tlv_len) ||
(dec_level > 2 && (i < elem_len || decoded_bytes > tlv_len))) {
pr_err("%s: Fault in decoding: dl(%d), db(%d), tl(%d), i(%d), el(%d)\n",
__func__, dec_level, decoded_bytes, tlv_len,
i, elem_len);
return -EFAULT;
}
return decoded_bytes;
}
/**
* qmi_decode_string_elem() - Decodes elements of string data type
* @ei_array: Struct info array describing the string element.
* @buf_dst: Buffer to store the decoded element.
* @buf_src: Buffer containing the elements in QMI wire format.
* @tlv_len: Total size of the encoded information corresponding to
* this string element.
* @dec_level: Depth of the string element from the main structure.
*
* This function decodes the string element of maximum length
* "ei_array->elem_len" from the source buffer "buf_src" and puts it into
* the destination buffer "buf_dst". This function returns the number of bytes
* decoded from the input buffer.
*
* Return: The total size of the decoded data elements on success, negative
* errno on error.
*/
static int qmi_decode_string_elem(struct qmi_elem_info *ei_array,
void *buf_dst, const void *buf_src,
u32 tlv_len, int dec_level)
{
int rc;
int decoded_bytes = 0;
u32 string_len = 0;
u32 string_len_sz = 0;
struct qmi_elem_info *temp_ei = ei_array;
if (dec_level == 1) {
string_len = tlv_len;
} else {
string_len_sz = temp_ei->elem_len <= U8_MAX ?
sizeof(u8) : sizeof(u16);
rc = qmi_decode_basic_elem(&string_len, buf_src,
1, string_len_sz);
decoded_bytes += rc;
}
if (string_len > temp_ei->elem_len) {
pr_err("%s: String len %d > Max Len %d\n",
__func__, string_len, temp_ei->elem_len);
return -ETOOSMALL;
} else if (string_len > tlv_len) {
pr_err("%s: String len %d > Input Buffer Len %d\n",
__func__, string_len, tlv_len);
return -EFAULT;
}
rc = qmi_decode_basic_elem(buf_dst, buf_src + decoded_bytes,
string_len, temp_ei->elem_size);
*((char *)buf_dst + string_len) = '\0';
decoded_bytes += rc;
return decoded_bytes;
}
/**
* find_ei() - Find element info corresponding to TLV Type
* @ei_array: Struct info array of the message being decoded.
* @type: TLV Type of the element being searched.
*
* Every element that got encoded in the QMI message will have a type
* information associated with it. While decoding the QMI message,
* this function is used to find the struct info regarding the element
* that corresponds to the type being decoded.
*
* Return: Pointer to struct info if found, NULL otherwise
*/
static struct qmi_elem_info *find_ei(struct qmi_elem_info *ei_array,
u32 type)
{
struct qmi_elem_info *temp_ei = ei_array;
while (temp_ei->data_type != QMI_EOTI) {
if (temp_ei->tlv_type == (u8)type)
return temp_ei;
temp_ei = temp_ei + 1;
}
return NULL;
}
/**
* qmi_decode() - Core Decode Function
* @ei_array: Struct info array describing the structure to be decoded.
* @out_c_struct: Buffer to hold the decoded C struct
* @in_buf: Buffer containing the QMI message to be decoded
* @in_buf_len: Length of the QMI message to be decoded
* @dec_level: Decode level to indicate the depth of the nested structure,
* within the main structure, being decoded
*
* Return: The number of bytes of decoded information on success, negative
* errno on error.
*/
static int qmi_decode(struct qmi_elem_info *ei_array, void *out_c_struct,
const void *in_buf, u32 in_buf_len,
int dec_level)
{
struct qmi_elem_info *temp_ei = ei_array;
u8 opt_flag_value = 1;
u32 data_len_value = 0, data_len_sz = 0;
u8 *buf_dst = out_c_struct;
const u8 *tlv_pointer;
u32 tlv_len = 0;
u32 tlv_type;
u32 decoded_bytes = 0;
const void *buf_src = in_buf;
int rc;
while (decoded_bytes < in_buf_len) {
if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
return decoded_bytes;
if (dec_level == 1) {
tlv_pointer = buf_src;
QMI_ENCDEC_DECODE_TLV(&tlv_type,
&tlv_len, tlv_pointer);
buf_src += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
decoded_bytes += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
temp_ei = find_ei(ei_array, tlv_type);
if (!temp_ei && tlv_type < OPTIONAL_TLV_TYPE_START) {
pr_err("%s: Inval element info\n", __func__);
return -EINVAL;
} else if (!temp_ei) {
UPDATE_DECODE_VARIABLES(buf_src,
decoded_bytes, tlv_len);
continue;
}
} else {
/*
* No length information for elements in nested
* structures. So use remaining decodable buffer space.
*/
tlv_len = in_buf_len - decoded_bytes;
}
buf_dst = out_c_struct + temp_ei->offset;
if (temp_ei->data_type == QMI_OPT_FLAG) {
memcpy(buf_dst, &opt_flag_value, sizeof(u8));
temp_ei = temp_ei + 1;
buf_dst = out_c_struct + temp_ei->offset;
}
if (temp_ei->data_type == QMI_DATA_LEN) {
data_len_sz = temp_ei->elem_size == sizeof(u8) ?
sizeof(u8) : sizeof(u16);
rc = qmi_decode_basic_elem(&data_len_value, buf_src,
1, data_len_sz);
memcpy(buf_dst, &data_len_value, sizeof(u32));
temp_ei = temp_ei + 1;
buf_dst = out_c_struct + temp_ei->offset;
tlv_len -= data_len_sz;
UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
}
if (temp_ei->array_type == NO_ARRAY) {
data_len_value = 1;
} else if (temp_ei->array_type == STATIC_ARRAY) {
data_len_value = temp_ei->elem_len;
} else if (data_len_value > temp_ei->elem_len) {
pr_err("%s: Data len %d > max spec %d\n",
__func__, data_len_value, temp_ei->elem_len);
return -ETOOSMALL;
}
switch (temp_ei->data_type) {
case QMI_UNSIGNED_1_BYTE:
case QMI_UNSIGNED_2_BYTE:
case QMI_UNSIGNED_4_BYTE:
case QMI_UNSIGNED_8_BYTE:
case QMI_SIGNED_2_BYTE_ENUM:
case QMI_SIGNED_4_BYTE_ENUM:
rc = qmi_decode_basic_elem(buf_dst, buf_src,
data_len_value,
temp_ei->elem_size);
UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
break;
case QMI_STRUCT:
rc = qmi_decode_struct_elem(temp_ei, buf_dst, buf_src,
data_len_value, tlv_len,
dec_level + 1);
if (rc < 0)
return rc;
UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
break;
case QMI_STRING:
rc = qmi_decode_string_elem(temp_ei, buf_dst, buf_src,
tlv_len, dec_level);
if (rc < 0)
return rc;
UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
break;
default:
pr_err("%s: Unrecognized data type\n", __func__);
return -EINVAL;
}
temp_ei = temp_ei + 1;
}
return decoded_bytes;
}
/**
* qmi_encode_message() - Encode C structure as QMI encoded message
* @type: Type of QMI message
* @msg_id: Message ID of the message
* @len: Passed as max length of the message, updated to actual size
* @txn_id: Transaction ID
* @ei: QMI message descriptor
* @c_struct: Reference to structure to encode
*
* Return: Buffer with encoded message, or negative ERR_PTR() on error
*/
void *qmi_encode_message(int type, unsigned int msg_id, size_t *len,
unsigned int txn_id, struct qmi_elem_info *ei,
const void *c_struct)
{
struct qmi_header *hdr;
ssize_t msglen = 0;
void *msg;
int ret;
/* Check the possibility of a zero length QMI message */
if (!c_struct) {
ret = qmi_calc_min_msg_len(ei, 1);
if (ret) {
pr_err("%s: Calc. len %d != 0, but NULL c_struct\n",
__func__, ret);
return ERR_PTR(-EINVAL);
}
}
msg = kzalloc(sizeof(*hdr) + *len, GFP_KERNEL);
if (!msg)
return ERR_PTR(-ENOMEM);
/* Encode message, if we have a message */
if (c_struct) {
msglen = qmi_encode(ei, msg + sizeof(*hdr), c_struct, *len, 1);
if (msglen < 0) {
kfree(msg);
return ERR_PTR(msglen);
}
}
hdr = msg;
hdr->type = type;
hdr->txn_id = txn_id;
hdr->msg_id = msg_id;
hdr->msg_len = msglen;
*len = sizeof(*hdr) + msglen;
return msg;
}
EXPORT_SYMBOL(qmi_encode_message);
/**
* qmi_decode_message() - Decode QMI encoded message to C structure
* @buf: Buffer with encoded message
* @len: Amount of data in @buf
* @ei: QMI message descriptor
* @c_struct: Reference to structure to decode into
*
* Return: The number of bytes of decoded information on success, negative
* errno on error.
*/
int qmi_decode_message(const void *buf, size_t len,
struct qmi_elem_info *ei, void *c_struct)
{
if (!ei)
return -EINVAL;
if (!c_struct || !buf || !len)
return -EINVAL;
return qmi_decode(ei, c_struct, buf + sizeof(struct qmi_header),
len - sizeof(struct qmi_header), 1);
}
EXPORT_SYMBOL(qmi_decode_message);
/* Common header in all QMI responses */
struct qmi_elem_info qmi_response_type_v01_ei[] = {
{
.data_type = QMI_SIGNED_2_BYTE_ENUM,
.elem_len = 1,
.elem_size = sizeof(u16),
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
.offset = offsetof(struct qmi_response_type_v01, result),
.ei_array = NULL,
},
{
.data_type = QMI_SIGNED_2_BYTE_ENUM,
.elem_len = 1,
.elem_size = sizeof(u16),
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
.offset = offsetof(struct qmi_response_type_v01, error),
.ei_array = NULL,
},
{
.data_type = QMI_EOTI,
.elem_len = 0,
.elem_size = 0,
.array_type = NO_ARRAY,
.tlv_type = QMI_COMMON_TLV_TYPE,
.offset = 0,
.ei_array = NULL,
},
};
EXPORT_SYMBOL(qmi_response_type_v01_ei);
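
A minimal sketch of how the encode/decode pair above is typically driven; the message id 0x20, the transaction id 1 and the 64-byte size cap are illustrative values, not part of the API:

static int example_round_trip(void)
{
        struct qmi_response_type_v01 resp = { .result = 0, .error = 0 };
        size_t len = 64;        /* in: max payload size; out: total encoded size */
        void *msg;
        int ret;

        msg = qmi_encode_message(QMI_RESPONSE, 0x20, &len, 1,
                                 qmi_response_type_v01_ei, &resp);
        if (IS_ERR(msg))
                return PTR_ERR(msg);

        /* @len now also covers the qmi_header the encoder prepended */
        ret = qmi_decode_message(msg, len, qmi_response_type_v01_ei, &resp);
        kfree(msg);
        return ret < 0 ? ret : 0;
}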
MODULE_DESCRIPTION("QMI encoder/decoder helper");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,848 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2017 Linaro Ltd.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/qrtr.h>
#include <linux/net.h>
#include <linux/completion.h>
#include <linux/idr.h>
#include <linux/string.h>
#include <net/sock.h>
#include <linux/workqueue.h>
#include <linux/soc/qcom/qmi.h>
static struct socket *qmi_sock_create(struct qmi_handle *qmi,
struct sockaddr_qrtr *sq);
/**
* qmi_recv_new_server() - handler of NEW_SERVER control message
* @qmi: qmi handle
* @service: service id of the new server
* @instance: instance id of the new server
* @node: node of the new server
* @port: port of the new server
*
* Calls the new_server callback to inform the client about a newly registered
* server matching the currently registered service lookup.
*/
static void qmi_recv_new_server(struct qmi_handle *qmi,
unsigned int service, unsigned int instance,
unsigned int node, unsigned int port)
{
struct qmi_ops *ops = &qmi->ops;
struct qmi_service *svc;
int ret;
if (!ops->new_server)
return;
/* Ignore EOF marker */
if (!node && !port)
return;
svc = kzalloc(sizeof(*svc), GFP_KERNEL);
if (!svc)
return;
svc->service = service;
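/* instance packs the version in its low byte, the instance id above it */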
svc->version = instance & 0xff;
svc->instance = instance >> 8;
svc->node = node;
svc->port = port;
ret = ops->new_server(qmi, svc);
if (ret < 0)
kfree(svc);
else
list_add(&svc->list_node, &qmi->lookup_results);
}
/**
* qmi_recv_del_server() - handler of DEL_SERVER control message
* @qmi: qmi handle
* @node: node of the dying server, a value of -1 matches all nodes
* @port: port of the dying server, a value of -1 matches all ports
*
* Calls the del_server callback for each previously seen server, allowing the
* client to react to the disappearing server.
*/
static void qmi_recv_del_server(struct qmi_handle *qmi,
unsigned int node, unsigned int port)
{
struct qmi_ops *ops = &qmi->ops;
struct qmi_service *svc;
struct qmi_service *tmp;
list_for_each_entry_safe(svc, tmp, &qmi->lookup_results, list_node) {
if (node != -1 && svc->node != node)
continue;
if (port != -1 && svc->port != port)
continue;
if (ops->del_server)
ops->del_server(qmi, svc);
list_del(&svc->list_node);
kfree(svc);
}
}
/**
* qmi_recv_bye() - handler of BYE control message
* @qmi: qmi handle
* @node: id of the dying node
*
* Signals the client that all previously registered services on this node are
* now gone and then calls the bye callback to allow the client to further
* clean up resources associated with this remote.
*/
static void qmi_recv_bye(struct qmi_handle *qmi,
unsigned int node)
{
struct qmi_ops *ops = &qmi->ops;
qmi_recv_del_server(qmi, node, -1);
if (ops->bye)
ops->bye(qmi, node);
}
/**
* qmi_recv_del_client() - handler of DEL_CLIENT control message
* @qmi: qmi handle
* @node: node of the dying client
* @port: port of the dying client
*
* Signals the client about a dying client, by calling the del_client callback.
*/
static void qmi_recv_del_client(struct qmi_handle *qmi,
unsigned int node, unsigned int port)
{
struct qmi_ops *ops = &qmi->ops;
if (ops->del_client)
ops->del_client(qmi, node, port);
}
static void qmi_recv_ctrl_pkt(struct qmi_handle *qmi,
const void *buf, size_t len)
{
const struct qrtr_ctrl_pkt *pkt = buf;
if (len < sizeof(struct qrtr_ctrl_pkt)) {
pr_debug("ignoring short control packet\n");
return;
}
switch (le32_to_cpu(pkt->cmd)) {
case QRTR_TYPE_BYE:
qmi_recv_bye(qmi, le32_to_cpu(pkt->client.node));
break;
case QRTR_TYPE_NEW_SERVER:
qmi_recv_new_server(qmi,
le32_to_cpu(pkt->server.service),
le32_to_cpu(pkt->server.instance),
le32_to_cpu(pkt->server.node),
le32_to_cpu(pkt->server.port));
break;
case QRTR_TYPE_DEL_SERVER:
qmi_recv_del_server(qmi,
le32_to_cpu(pkt->server.node),
le32_to_cpu(pkt->server.port));
break;
case QRTR_TYPE_DEL_CLIENT:
qmi_recv_del_client(qmi,
le32_to_cpu(pkt->client.node),
le32_to_cpu(pkt->client.port));
break;
}
}
static void qmi_send_new_lookup(struct qmi_handle *qmi, struct qmi_service *svc)
{
struct qrtr_ctrl_pkt pkt;
struct sockaddr_qrtr sq;
struct msghdr msg = { };
struct kvec iv = { &pkt, sizeof(pkt) };
int ret;
memset(&pkt, 0, sizeof(pkt));
pkt.cmd = cpu_to_le32(QRTR_TYPE_NEW_LOOKUP);
pkt.server.service = cpu_to_le32(svc->service);
pkt.server.instance = cpu_to_le32(svc->version | svc->instance << 8);
sq.sq_family = qmi->sq.sq_family;
sq.sq_node = qmi->sq.sq_node;
sq.sq_port = QRTR_PORT_CTRL;
msg.msg_name = &sq;
msg.msg_namelen = sizeof(sq);
mutex_lock(&qmi->sock_lock);
if (qmi->sock) {
ret = kernel_sendmsg(qmi->sock, &msg, &iv, 1, sizeof(pkt));
if (ret < 0)
pr_err("failed to send lookup registration: %d\n", ret);
}
mutex_unlock(&qmi->sock_lock);
}
/**
* qmi_add_lookup() - register a new lookup with the name service
* @qmi: qmi handle
* @service: service id of the request
* @instance: instance id of the request
* @version: version number of the request
*
* Registering a lookup query with the name server will cause the name server
* to send NEW_SERVER and DEL_SERVER control messages to this socket as
* matching services are registered.
*
* Return: 0 on success, negative errno on failure.
*/
int qmi_add_lookup(struct qmi_handle *qmi, unsigned int service,
unsigned int version, unsigned int instance)
{
struct qmi_service *svc;
svc = kzalloc(sizeof(*svc), GFP_KERNEL);
if (!svc)
return -ENOMEM;
svc->service = service;
svc->version = version;
svc->instance = instance;
list_add(&svc->list_node, &qmi->lookups);
qmi_send_new_lookup(qmi, svc);
return 0;
}
EXPORT_SYMBOL(qmi_add_lookup);
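
A client-side sketch of how a lookup pairs with the new_server callback; the service id 15 and the handle setup shown in the trailing comment are assumptions for illustration:

static int example_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
{
        pr_info("service %u v%u found at %u:%u\n",
                svc->service, svc->version, svc->node, svc->port);
        return 0;       /* 0 keeps @svc on the lookup_results list */
}

static const struct qmi_ops example_lookup_ops = {
        .new_server = example_new_server,
};

/* after qmi_handle_init(&handle, 1024, &example_lookup_ops, NULL),
 * request notifications with qmi_add_lookup(&handle, 15, 1, 0);
 */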
static void qmi_send_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
{
struct qrtr_ctrl_pkt pkt;
struct sockaddr_qrtr sq;
struct msghdr msg = { };
struct kvec iv = { &pkt, sizeof(pkt) };
int ret;
memset(&pkt, 0, sizeof(pkt));
pkt.cmd = cpu_to_le32(QRTR_TYPE_NEW_SERVER);
pkt.server.service = cpu_to_le32(svc->service);
pkt.server.instance = cpu_to_le32(svc->version | svc->instance << 8);
pkt.server.node = cpu_to_le32(qmi->sq.sq_node);
pkt.server.port = cpu_to_le32(qmi->sq.sq_port);
sq.sq_family = qmi->sq.sq_family;
sq.sq_node = qmi->sq.sq_node;
sq.sq_port = QRTR_PORT_CTRL;
msg.msg_name = &sq;
msg.msg_namelen = sizeof(sq);
mutex_lock(&qmi->sock_lock);
if (qmi->sock) {
ret = kernel_sendmsg(qmi->sock, &msg, &iv, 1, sizeof(pkt));
if (ret < 0)
pr_err("send service registration failed: %d\n", ret);
}
mutex_unlock(&qmi->sock_lock);
}
/**
* qmi_add_server() - register a service with the name service
* @qmi: qmi handle
* @service: type of the service
* @instance: instance of the service
* @version: version of the service
*
* Register a new service with the name service. This allows clients to find
* and start sending messages to the server associated with @qmi.
*
* Return: 0 on success, negative errno on failure.
*/
int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
unsigned int version, unsigned int instance)
{
struct qmi_service *svc;
svc = kzalloc(sizeof(*svc), GFP_KERNEL);
if (!svc)
return -ENOMEM;
svc->service = service;
svc->version = version;
svc->instance = instance;
list_add(&svc->list_node, &qmi->services);
qmi_send_new_server(qmi, svc);
return 0;
}
EXPORT_SYMBOL(qmi_add_server);
/**
* qmi_txn_init() - allocate transaction id within the given QMI handle
* @qmi: QMI handle
* @txn: transaction context
* @ei: description of how to decode a matching response (optional)
* @c_struct: pointer to the object to decode the response into (optional)
*
* This allocates a transaction id within the QMI handle. If @ei and @c_struct
* are specified any responses to this transaction will be decoded as described
* by @ei into @c_struct.
*
* A client calling qmi_txn_init() must call either qmi_txn_wait() or
* qmi_txn_cancel() to free up the allocated resources.
*
* Return: Transaction id on success, negative errno on failure.
*/
int qmi_txn_init(struct qmi_handle *qmi, struct qmi_txn *txn,
struct qmi_elem_info *ei, void *c_struct)
{
int ret;
memset(txn, 0, sizeof(*txn));
mutex_init(&txn->lock);
init_completion(&txn->completion);
txn->qmi = qmi;
txn->ei = ei;
txn->dest = c_struct;
mutex_lock(&qmi->txn_lock);
ret = idr_alloc_cyclic(&qmi->txns, txn, 0, INT_MAX, GFP_KERNEL);
if (ret < 0)
pr_err("failed to allocate transaction id\n");
txn->id = ret;
mutex_unlock(&qmi->txn_lock);
return ret;
}
EXPORT_SYMBOL(qmi_txn_init);
/**
* qmi_txn_wait() - wait for a response on a transaction
* @txn: transaction handle
* @timeout: timeout, in jiffies
*
* If the transaction is decoded by means of @ei and @c_struct, the return
* value will be the return value of qmi_decode_message(); otherwise it's up
* to the specified message handler to fill out the result.
*
* Return: the transaction response on success, negative errno on failure.
*/
int qmi_txn_wait(struct qmi_txn *txn, unsigned long timeout)
{
struct qmi_handle *qmi = txn->qmi;
int ret;
ret = wait_for_completion_interruptible_timeout(&txn->completion,
timeout);
mutex_lock(&qmi->txn_lock);
mutex_lock(&txn->lock);
idr_remove(&qmi->txns, txn->id);
mutex_unlock(&txn->lock);
mutex_unlock(&qmi->txn_lock);
if (ret < 0)
return ret;
else if (ret == 0)
return -ETIMEDOUT;
else
return txn->result;
}
EXPORT_SYMBOL(qmi_txn_wait);
/**
* qmi_txn_cancel() - cancel an ongoing transaction
* @txn: transaction id
*/
void qmi_txn_cancel(struct qmi_txn *txn)
{
struct qmi_handle *qmi = txn->qmi;
mutex_lock(&qmi->txn_lock);
mutex_lock(&txn->lock);
idr_remove(&qmi->txns, txn->id);
mutex_unlock(&txn->lock);
mutex_unlock(&qmi->txn_lock);
}
EXPORT_SYMBOL(qmi_txn_cancel);
/**
* qmi_invoke_handler() - find and invoke a handler for a message
* @qmi: qmi handle
* @sq: sockaddr of the sender
* @txn: transaction object for the message
* @buf: buffer containing the message
* @len: length of @buf
*
* Find handler and invoke handler for the incoming message.
*/
static void qmi_invoke_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, const void *buf, size_t len)
{
const struct qmi_msg_handler *handler;
const struct qmi_header *hdr = buf;
void *dest;
int ret;
if (!qmi->handlers)
return;
for (handler = qmi->handlers; handler->fn; handler++) {
if (handler->type == hdr->type &&
handler->msg_id == hdr->msg_id)
break;
}
if (!handler->fn)
return;
dest = kzalloc(handler->decoded_size, GFP_KERNEL);
if (!dest)
return;
ret = qmi_decode_message(buf, len, handler->ei, dest);
if (ret < 0)
pr_err("failed to decode incoming message\n");
else
handler->fn(qmi, sq, txn, dest);
kfree(dest);
}
/**
* qmi_handle_net_reset() - invoked to handle ENETRESET on a QMI handle
* @qmi: the QMI context
*
* As a result of registering a name service with the QRTR all open sockets are
* flagged with ENETRESET and this function will be called. The typical case is
* the initial boot, where this signals that the local node id has been
* configured, and as such any bound sockets need to be rebound. So close the
* socket, inform the client and re-initialize the socket.
*
* For clients it's generally sufficient to react to the del_server callbacks,
* but server code is expected to treat the net_reset callback as a "bye" from
* all nodes.
*
* Finally the QMI handle will send out registration requests for any lookups
* and services.
*/
static void qmi_handle_net_reset(struct qmi_handle *qmi)
{
struct sockaddr_qrtr sq;
struct qmi_service *svc;
struct socket *sock;
sock = qmi_sock_create(qmi, &sq);
if (IS_ERR(sock))
return;
mutex_lock(&qmi->sock_lock);
sock_release(qmi->sock);
qmi->sock = NULL;
mutex_unlock(&qmi->sock_lock);
qmi_recv_del_server(qmi, -1, -1);
if (qmi->ops.net_reset)
qmi->ops.net_reset(qmi);
mutex_lock(&qmi->sock_lock);
qmi->sock = sock;
qmi->sq = sq;
mutex_unlock(&qmi->sock_lock);
list_for_each_entry(svc, &qmi->lookups, list_node)
qmi_send_new_lookup(qmi, svc);
list_for_each_entry(svc, &qmi->services, list_node)
qmi_send_new_server(qmi, svc);
}
static void qmi_handle_message(struct qmi_handle *qmi,
struct sockaddr_qrtr *sq,
const void *buf, size_t len)
{
const struct qmi_header *hdr;
struct qmi_txn tmp_txn;
struct qmi_txn *txn = NULL;
int ret;
if (len < sizeof(*hdr)) {
pr_err("ignoring short QMI packet\n");
return;
}
hdr = buf;
/* If this is a response, find the matching transaction handle */
if (hdr->type == QMI_RESPONSE) {
mutex_lock(&qmi->txn_lock);
txn = idr_find(&qmi->txns, hdr->txn_id);
/* Ignore unexpected responses */
if (!txn) {
mutex_unlock(&qmi->txn_lock);
return;
}
mutex_lock(&txn->lock);
mutex_unlock(&qmi->txn_lock);
if (txn->dest && txn->ei) {
ret = qmi_decode_message(buf, len, txn->ei, txn->dest);
if (ret < 0)
pr_err("failed to decode incoming message\n");
txn->result = ret;
complete(&txn->completion);
} else {
qmi_invoke_handler(qmi, sq, txn, buf, len);
}
mutex_unlock(&txn->lock);
} else {
/* Create a txn based on the txn_id of the incoming message */
memset(&tmp_txn, 0, sizeof(tmp_txn));
tmp_txn.id = hdr->txn_id;
qmi_invoke_handler(qmi, sq, &tmp_txn, buf, len);
}
}
static void qmi_data_ready_work(struct work_struct *work)
{
struct qmi_handle *qmi = container_of(work, struct qmi_handle, work);
struct qmi_ops *ops = &qmi->ops;
struct sockaddr_qrtr sq;
struct msghdr msg = { .msg_name = &sq, .msg_namelen = sizeof(sq) };
struct kvec iv;
ssize_t msglen;
for (;;) {
iv.iov_base = qmi->recv_buf;
iv.iov_len = qmi->recv_buf_size;
mutex_lock(&qmi->sock_lock);
if (qmi->sock)
msglen = kernel_recvmsg(qmi->sock, &msg, &iv, 1,
iv.iov_len, MSG_DONTWAIT);
else
msglen = -EPIPE;
mutex_unlock(&qmi->sock_lock);
if (msglen == -EAGAIN)
break;
if (msglen == -ENETRESET) {
qmi_handle_net_reset(qmi);
/* The old qmi->sock is gone, our work is done */
break;
}
if (msglen < 0) {
pr_err("qmi recvmsg failed: %zd\n", msglen);
break;
}
if (sq.sq_node == qmi->sq.sq_node &&
sq.sq_port == QRTR_PORT_CTRL) {
qmi_recv_ctrl_pkt(qmi, qmi->recv_buf, msglen);
} else if (ops->msg_handler) {
ops->msg_handler(qmi, &sq, qmi->recv_buf, msglen);
} else {
qmi_handle_message(qmi, &sq, qmi->recv_buf, msglen);
}
}
}
static void qmi_data_ready(struct sock *sk)
{
struct qmi_handle *qmi = sk->sk_user_data;
/*
* This will be NULL if we receive data while being in
* qmi_handle_release()
*/
if (!qmi)
return;
queue_work(qmi->wq, &qmi->work);
}
static struct socket *qmi_sock_create(struct qmi_handle *qmi,
struct sockaddr_qrtr *sq)
{
struct socket *sock;
int sl = sizeof(*sq);
int ret;
ret = sock_create_kern(&init_net, AF_QIPCRTR, SOCK_DGRAM,
PF_QIPCRTR, &sock);
if (ret < 0)
return ERR_PTR(ret);
ret = kernel_getsockname(sock, (struct sockaddr *)sq, &sl);
if (ret < 0) {
sock_release(sock);
return ERR_PTR(ret);
}
sock->sk->sk_user_data = qmi;
sock->sk->sk_data_ready = qmi_data_ready;
sock->sk->sk_error_report = qmi_data_ready;
return sock;
}
/**
* qmi_handle_init() - initialize a QMI client handle
* @qmi: QMI handle to initialize
* @recv_buf_size: maximum size of incoming message
* @ops: reference to callbacks for QRTR notifications
* @handlers: NULL-terminated list of QMI message handlers
*
* This initializes the QMI client handle to allow sending and receiving QMI
* messages. As messages are received the appropriate handler will be invoked.
*
* Return: 0 on success, negative errno on failure.
*/
int qmi_handle_init(struct qmi_handle *qmi, size_t recv_buf_size,
const struct qmi_ops *ops,
const struct qmi_msg_handler *handlers)
{
int ret;
mutex_init(&qmi->txn_lock);
mutex_init(&qmi->sock_lock);
idr_init(&qmi->txns);
INIT_LIST_HEAD(&qmi->lookups);
INIT_LIST_HEAD(&qmi->lookup_results);
INIT_LIST_HEAD(&qmi->services);
INIT_WORK(&qmi->work, qmi_data_ready_work);
qmi->handlers = handlers;
if (ops)
qmi->ops = *ops;
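/* reserve room for the QMI header, but never shrink below a full control packet */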
if (recv_buf_size < sizeof(struct qrtr_ctrl_pkt))
recv_buf_size = sizeof(struct qrtr_ctrl_pkt);
else
recv_buf_size += sizeof(struct qmi_header);
qmi->recv_buf_size = recv_buf_size;
qmi->recv_buf = kzalloc(recv_buf_size, GFP_KERNEL);
if (!qmi->recv_buf)
return -ENOMEM;
qmi->wq = alloc_workqueue("qmi_msg_handler", WQ_UNBOUND, 1);
if (!qmi->wq) {
ret = -ENOMEM;
goto err_free_recv_buf;
}
qmi->sock = qmi_sock_create(qmi, &qmi->sq);
if (IS_ERR(qmi->sock)) {
pr_err("failed to create QMI socket\n");
ret = PTR_ERR(qmi->sock);
goto err_destroy_wq;
}
return 0;
err_destroy_wq:
destroy_workqueue(qmi->wq);
err_free_recv_buf:
kfree(qmi->recv_buf);
return ret;
}
EXPORT_SYMBOL(qmi_handle_init);
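
For the @handlers argument, a sketch of such a NULL-terminated table; struct example_ind, example_ind_ei and EXAMPLE_IND_MSG_ID are hypothetical names standing in for a real message definition:

static void example_ind_cb(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
                           struct qmi_txn *txn, const void *decoded)
{
        const struct example_ind *ind = decoded;        /* pre-decoded via .ei */

        pr_debug("indication received, value %u\n", ind->value);
}

static const struct qmi_msg_handler example_handlers[] = {
        {
                .type = QMI_INDICATION,
                .msg_id = EXAMPLE_IND_MSG_ID,
                .ei = example_ind_ei,
                .decoded_size = sizeof(struct example_ind),
                .fn = example_ind_cb,
        },
        { /* sentinel: .fn == NULL terminates the table */ }
};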
/**
* qmi_handle_release() - release the QMI client handle
* @qmi: QMI client handle
*
* This closes the underlying socket and stops any handling of QMI messages.
*/
void qmi_handle_release(struct qmi_handle *qmi)
{
struct socket *sock = qmi->sock;
struct qmi_service *svc, *tmp;
sock->sk->sk_user_data = NULL;
cancel_work_sync(&qmi->work);
qmi_recv_del_server(qmi, -1, -1);
mutex_lock(&qmi->sock_lock);
sock_release(sock);
qmi->sock = NULL;
mutex_unlock(&qmi->sock_lock);
destroy_workqueue(qmi->wq);
idr_destroy(&qmi->txns);
kfree(qmi->recv_buf);
/* Free registered lookup requests */
list_for_each_entry_safe(svc, tmp, &qmi->lookups, list_node) {
list_del(&svc->list_node);
kfree(svc);
}
/* Free registered service information */
list_for_each_entry_safe(svc, tmp, &qmi->services, list_node) {
list_del(&svc->list_node);
kfree(svc);
}
}
EXPORT_SYMBOL(qmi_handle_release);
/**
* qmi_send_message() - send a QMI message
* @qmi: QMI client handle
* @sq: destination sockaddr
* @txn: transaction object to use for the message
* @type: type of message to send
* @msg_id: message id
* @len: max length of the QMI message
* @ei: QMI message description
* @c_struct: object to be encoded
*
* This function encodes @c_struct using @ei into a message of type @type,
* with @msg_id and @txn into a buffer of maximum size @len, and sends this to
* @sq.
*
* Return: 0 on success, negative errno on failure.
*/
static ssize_t qmi_send_message(struct qmi_handle *qmi,
struct sockaddr_qrtr *sq, struct qmi_txn *txn,
int type, int msg_id, size_t len,
struct qmi_elem_info *ei, const void *c_struct)
{
struct msghdr msghdr = {};
struct kvec iv;
void *msg;
int ret;
msg = qmi_encode_message(type,
msg_id, &len,
txn->id, ei,
c_struct);
if (IS_ERR(msg))
return PTR_ERR(msg);
iv.iov_base = msg;
iv.iov_len = len;
if (sq) {
msghdr.msg_name = sq;
msghdr.msg_namelen = sizeof(*sq);
}
mutex_lock(&qmi->sock_lock);
if (qmi->sock) {
ret = kernel_sendmsg(qmi->sock, &msghdr, &iv, 1, len);
if (ret < 0)
pr_err("failed to send QMI message\n");
} else {
ret = -EPIPE;
}
mutex_unlock(&qmi->sock_lock);
kfree(msg);
return ret < 0 ? ret : 0;
}
/**
* qmi_send_request() - send a request QMI message
* @qmi: QMI client handle
* @sq: destination sockaddr
* @txn: transaction object to use for the message
* @msg_id: message id
* @len: max length of the QMI message
* @ei: QMI message description
* @c_struct: object to be encoded
*
* Return: 0 on success, negative errno on failure.
*/
ssize_t qmi_send_request(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, int msg_id, size_t len,
struct qmi_elem_info *ei, const void *c_struct)
{
return qmi_send_message(qmi, sq, txn, QMI_REQUEST, msg_id, len, ei,
c_struct);
}
EXPORT_SYMBOL(qmi_send_request);
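
Putting the transaction helpers together, a rough request/response sequence; the example_* structs, their ei arrays and the EXAMPLE_* constants are hypothetical:

static int example_request(struct qmi_handle *qmi, struct example_req *req,
                           struct example_resp *resp)
{
        struct qmi_txn txn;
        int ret;

        ret = qmi_txn_init(qmi, &txn, example_resp_ei, resp);
        if (ret < 0)
                return ret;

        ret = qmi_send_request(qmi, NULL, &txn, EXAMPLE_REQ_MSG_ID,
                               EXAMPLE_REQ_MAX_LEN, example_req_ei, req);
        if (ret < 0) {
                qmi_txn_cancel(&txn);
                return ret;
        }

        /* on success the response has been decoded into @resp */
        return qmi_txn_wait(&txn, 5 * HZ);
}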
/**
* qmi_send_response() - send a response QMI message
* @qmi: QMI client handle
* @sq: destination sockaddr
* @txn: transaction object to use for the message
* @msg_id: message id
* @len: max length of the QMI message
* @ei: QMI message description
* @c_struct: object to be encoded
*
* Return: 0 on success, negative errno on failure.
*/
ssize_t qmi_send_response(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, int msg_id, size_t len,
struct qmi_elem_info *ei, const void *c_struct)
{
return qmi_send_message(qmi, sq, txn, QMI_RESPONSE, msg_id, len, ei,
c_struct);
}
EXPORT_SYMBOL(qmi_send_response);
/**
* qmi_send_indication() - send an indication QMI message
* @qmi: QMI client handle
* @sq: destination sockaddr
* @msg_id: message id
* @len: max length of the QMI message
* @ei: QMI message description
* @c_struct: object to be encoded
*
* Return: 0 on success, negative errno on failure.
*/
ssize_t qmi_send_indication(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
int msg_id, size_t len, struct qmi_elem_info *ei,
const void *c_struct)
{
struct qmi_txn txn;
ssize_t rval;
int ret;
ret = qmi_txn_init(qmi, &txn, NULL, NULL);
if (ret < 0)
return ret;
rval = qmi_send_message(qmi, sq, &txn, QMI_INDICATION, msg_id, len, ei,
c_struct);
/* We don't care about future messages on this txn */
qmi_txn_cancel(&txn);
return rval;
}
EXPORT_SYMBOL(qmi_send_indication);


@ -267,3 +267,7 @@ static void qcom_rmtfs_mem_exit(void)
unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
}
module_exit(qcom_rmtfs_mem_exit);
MODULE_AUTHOR("Linaro Ltd");
MODULE_DESCRIPTION("Qualcomm Remote Filesystem memory driver");
MODULE_LICENSE("GPL v2");


@ -18,6 +18,7 @@
#include <linux/of.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/mailbox_client.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/platform_device.h>
@ -126,6 +127,8 @@ struct smp2p_entry {
* @ipc_regmap: regmap for the outbound ipc
* @ipc_offset: offset within the regmap
* @ipc_bit: bit in regmap@offset to kick to signal remote processor
* @mbox_client: mailbox client handle
* @mbox_chan: apcs ipc mailbox channel handle
* @inbound: list of inbound entries
* @outbound: list of outbound entries
*/
@ -146,6 +149,9 @@ struct qcom_smp2p {
int ipc_offset;
int ipc_bit;
struct mbox_client mbox_client;
struct mbox_chan *mbox_chan;
struct list_head inbound;
struct list_head outbound;
};
@ -154,7 +160,13 @@ static void qcom_smp2p_kick(struct qcom_smp2p *smp2p)
{
/* Make sure any updated data is written before the kick */
wmb();
regmap_write(smp2p->ipc_regmap, smp2p->ipc_offset, BIT(smp2p->ipc_bit));
if (smp2p->mbox_chan) {
mbox_send_message(smp2p->mbox_chan, NULL);
mbox_client_txdone(smp2p->mbox_chan, 0);
} else {
regmap_write(smp2p->ipc_regmap, smp2p->ipc_offset, BIT(smp2p->ipc_bit));
}
}
/**
@ -453,10 +465,6 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, smp2p);
ret = smp2p_parse_ipc(smp2p);
if (ret)
return ret;
key = "qcom,smem";
ret = of_property_read_u32_array(pdev->dev.of_node, key,
smp2p->smem_items, 2);
@ -465,17 +473,13 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
key = "qcom,local-pid";
ret = of_property_read_u32(pdev->dev.of_node, key, &smp2p->local_pid);
if (ret < 0) {
dev_err(&pdev->dev, "failed to read %s\n", key);
return -EINVAL;
}
if (ret)
goto report_read_failure;
key = "qcom,remote-pid";
ret = of_property_read_u32(pdev->dev.of_node, key, &smp2p->remote_pid);
if (ret < 0) {
dev_err(&pdev->dev, "failed to read %s\n", key);
return -EINVAL;
}
if (ret)
goto report_read_failure;
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
@ -483,9 +487,23 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
return irq;
}
smp2p->mbox_client.dev = &pdev->dev;
smp2p->mbox_client.knows_txdone = true;
smp2p->mbox_chan = mbox_request_channel(&smp2p->mbox_client, 0);
if (IS_ERR(smp2p->mbox_chan)) {
if (PTR_ERR(smp2p->mbox_chan) != -ENODEV)
return PTR_ERR(smp2p->mbox_chan);
smp2p->mbox_chan = NULL;
ret = smp2p_parse_ipc(smp2p);
if (ret)
return ret;
}
ret = qcom_smp2p_alloc_outbound_item(smp2p);
if (ret < 0)
return ret;
goto release_mbox;
for_each_available_child_of_node(pdev->dev.of_node, node) {
entry = devm_kzalloc(&pdev->dev, sizeof(*entry), GFP_KERNEL);
@ -540,7 +558,14 @@ unwind_interfaces:
smp2p->out->valid_entries = 0;
release_mbox:
mbox_free_channel(smp2p->mbox_chan);
return ret;
report_read_failure:
dev_err(&pdev->dev, "failed to read %s\n", key);
return -EINVAL;
}
static int qcom_smp2p_remove(struct platform_device *pdev)
@ -554,6 +579,8 @@ static int qcom_smp2p_remove(struct platform_device *pdev)
list_for_each_entry(entry, &smp2p->outbound, node)
qcom_smem_state_unregister(entry->state);
mbox_free_channel(smp2p->mbox_chan);
smp2p->out->valid_entries = 0;
return 0;


@ -496,8 +496,10 @@ static int qcom_smsm_probe(struct platform_device *pdev)
if (!smsm->hosts)
return -ENOMEM;
local_node = of_find_node_with_property(of_node_get(pdev->dev.of_node),
"#qcom,smem-state-cells");
for_each_child_of_node(pdev->dev.of_node, local_node) {
if (of_find_property(local_node, "#qcom,smem-state-cells", NULL))
break;
}
if (!local_node) {
dev_err(&pdev->dev, "no state entry\n");
return -EINVAL;


@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
#
# SAMSUNG SoC drivers
#


@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_EXYNOS_PMU) += exynos-pmu.o
obj-$(CONFIG_EXYNOS_PMU_ARM_DRIVERS) += exynos3250-pmu.o exynos4-pmu.o \


@ -1,13 +1,9 @@
/*
* Copyright (c) 2011-2014 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS - CPU PMU(Power Management Unit) support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Copyright (c) 2011-2014 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS - CPU PMU(Power Management Unit) support
#include <linux/of.h>
#include <linux/of_address.h>


@ -1,12 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2015 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Header for EXYNOS PMU Driver support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __EXYNOS_PMU_H


@ -1,13 +1,9 @@
/*
* Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS3250 - CPU PMU (Power Management Unit) support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS3250 - CPU PMU (Power Management Unit) support
#include <linux/soc/samsung/exynos-regs-pmu.h>
#include <linux/soc/samsung/exynos-pmu.h>


@ -1,13 +1,9 @@
/*
* Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS4 - CPU PMU(Power Management Unit) support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS4 - CPU PMU(Power Management Unit) support
#include <linux/soc/samsung/exynos-regs-pmu.h>
#include <linux/soc/samsung/exynos-pmu.h>


@ -1,13 +1,9 @@
/*
* Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS5250 - CPU PMU (Power Management Unit) support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS5250 - CPU PMU (Power Management Unit) support
#include <linux/soc/samsung/exynos-regs-pmu.h>
#include <linux/soc/samsung/exynos-pmu.h>


@ -1,13 +1,9 @@
/*
* Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS5420 - CPU PMU (Power Management Unit) support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Copyright (c) 2011-2015 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS5420 - CPU PMU (Power Management Unit) support
#include <linux/pm.h>
#include <linux/soc/samsung/exynos-regs-pmu.h>


@ -1,17 +1,13 @@
/*
* Exynos Generic power domain support.
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Implementation of Exynos specific power domain control which is used in
* conjunction with runtime-pm. Support for both device-tree and non-device-tree
* based power domain support is included.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
// SPDX-License-Identifier: GPL-2.0
//
// Exynos Generic power domain support.
//
// Copyright (c) 2012 Samsung Electronics Co., Ltd.
// http://www.samsung.com
//
// Implementation of Exynos specific power domain control which is used in
// conjunction with runtime-pm. Support for both device-tree and non-device-tree
// based power domain support is included.
#include <linux/io.h>
#include <linux/err.h>


@ -225,7 +225,7 @@ static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
if (!knav_queue_is_busy(inst)) {
struct knav_range_info *range = inst->range;
inst->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
inst->name = kstrndup(name, KNAV_NAME_SIZE - 1, GFP_KERNEL);
if (range->ops && range->ops->open_queue)
ret = range->ops->open_queue(range, inst, flags);
@ -779,7 +779,7 @@ void *knav_pool_create(const char *name,
goto err;
}
pool->name = kstrndup(name, KNAV_NAME_SIZE, GFP_KERNEL);
pool->name = kstrndup(name, KNAV_NAME_SIZE - 1, GFP_KERNEL);
pool->kdev = kdev;
pool->dev = kdev->dev;


@ -0,0 +1,20 @@
# SPDX-License-Identifier: GPL-2.0
menu "Xilinx SoC drivers"
config XILINX_VCU
tristate "Xilinx VCU logicoreIP Init"
depends on HAS_IOMEM
help
Provides the driver to enable and disable the isolation between the
processing system and programmable logic part by using the logicoreIP
register set. This driver also configures the frequency based on the
clock information from the logicoreIP register set.
If you say yes here you get support for the logicoreIP.
If unsure, say N.
To compile this driver as a module, choose M here: the
module will be called xlnx_vcu.
endmenu


@ -0,0 +1,2 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_XILINX_VCU) += xlnx_vcu.o


@ -0,0 +1,630 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Xilinx VCU Init
*
* Copyright (C) 2016 - 2017 Xilinx, Inc.
*
* Contacts Dhaval Shah <dshah@xilinx.com>
*/
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
/* Address map for different registers implemented in the VCU LogiCORE IP. */
#define VCU_ECODER_ENABLE 0x00
#define VCU_DECODER_ENABLE 0x04
#define VCU_MEMORY_DEPTH 0x08
#define VCU_ENC_COLOR_DEPTH 0x0c
#define VCU_ENC_VERTICAL_RANGE 0x10
#define VCU_ENC_FRAME_SIZE_X 0x14
#define VCU_ENC_FRAME_SIZE_Y 0x18
#define VCU_ENC_COLOR_FORMAT 0x1c
#define VCU_ENC_FPS 0x20
#define VCU_MCU_CLK 0x24
#define VCU_CORE_CLK 0x28
#define VCU_PLL_BYPASS 0x2c
#define VCU_ENC_CLK 0x30
#define VCU_PLL_CLK 0x34
#define VCU_ENC_VIDEO_STANDARD 0x38
#define VCU_STATUS 0x3c
#define VCU_AXI_ENC_CLK 0x40
#define VCU_AXI_DEC_CLK 0x44
#define VCU_AXI_MCU_CLK 0x48
#define VCU_DEC_VIDEO_STANDARD 0x4c
#define VCU_DEC_FRAME_SIZE_X 0x50
#define VCU_DEC_FRAME_SIZE_Y 0x54
#define VCU_DEC_FPS 0x58
#define VCU_BUFFER_B_FRAME 0x5c
#define VCU_WPP_EN 0x60
#define VCU_PLL_CLK_DEC 0x64
#define VCU_GASKET_INIT 0x74
#define VCU_GASKET_VALUE 0x03
/* vcu slcr registers, bitmask and shift */
#define VCU_PLL_CTRL 0x24
#define VCU_PLL_CTRL_RESET_MASK 0x01
#define VCU_PLL_CTRL_RESET_SHIFT 0
#define VCU_PLL_CTRL_BYPASS_MASK 0x01
#define VCU_PLL_CTRL_BYPASS_SHIFT 3
#define VCU_PLL_CTRL_FBDIV_MASK 0x7f
#define VCU_PLL_CTRL_FBDIV_SHIFT 8
#define VCU_PLL_CTRL_POR_IN_MASK 0x01
#define VCU_PLL_CTRL_POR_IN_SHIFT 1
#define VCU_PLL_CTRL_PWR_POR_MASK 0x01
#define VCU_PLL_CTRL_PWR_POR_SHIFT 2
#define VCU_PLL_CTRL_CLKOUTDIV_MASK 0x03
#define VCU_PLL_CTRL_CLKOUTDIV_SHIFT 16
#define VCU_PLL_CTRL_DEFAULT 0
#define VCU_PLL_DIV2 2
#define VCU_PLL_CFG 0x28
#define VCU_PLL_CFG_RES_MASK 0x0f
#define VCU_PLL_CFG_RES_SHIFT 0
#define VCU_PLL_CFG_CP_MASK 0x0f
#define VCU_PLL_CFG_CP_SHIFT 5
#define VCU_PLL_CFG_LFHF_MASK 0x03
#define VCU_PLL_CFG_LFHF_SHIFT 10
#define VCU_PLL_CFG_LOCK_CNT_MASK 0x03ff
#define VCU_PLL_CFG_LOCK_CNT_SHIFT 13
#define VCU_PLL_CFG_LOCK_DLY_MASK 0x7f
#define VCU_PLL_CFG_LOCK_DLY_SHIFT 25
#define VCU_ENC_CORE_CTRL 0x30
#define VCU_ENC_MCU_CTRL 0x34
#define VCU_DEC_CORE_CTRL 0x38
#define VCU_DEC_MCU_CTRL 0x3c
#define VCU_PLL_DIVISOR_MASK 0x3f
#define VCU_PLL_DIVISOR_SHIFT 4
#define VCU_SRCSEL_MASK 0x01
#define VCU_SRCSEL_SHIFT 0
#define VCU_SRCSEL_PLL 1
#define VCU_PLL_STATUS 0x60
#define VCU_PLL_STATUS_LOCK_STATUS_MASK 0x01
#define MHZ 1000000
#define FVCO_MIN (1500U * MHZ)
#define FVCO_MAX (3000U * MHZ)
#define DIVISOR_MIN 0
#define DIVISOR_MAX 63
#define FRAC 100
#define LIMIT (10 * MHZ)
/**
* struct xvcu_device - Xilinx VCU init device structure
* @dev: Platform device
* @pll_ref: pll ref clock source
* @aclk: axi clock source
* @logicore_reg_ba: logicore reg base address
* @vcu_slcr_ba: vcu_slcr Register base address
* @coreclk: core clock frequency
*/
struct xvcu_device {
struct device *dev;
struct clk *pll_ref;
struct clk *aclk;
void __iomem *logicore_reg_ba;
void __iomem *vcu_slcr_ba;
u32 coreclk;
};
/**
* struct xvcu_pll_cfg - Helper data
* @fbdiv: The integer portion of the feedback divider to the PLL
* @cp: PLL charge pump control
* @res: PLL loop filter resistor control
* @lfhf: PLL loop filter high frequency capacitor control
* @lock_dly: Lock circuit configuration settings for lock window size
* @lock_cnt: Lock circuit counter setting
*/
struct xvcu_pll_cfg {
u32 fbdiv;
u32 cp;
u32 res;
u32 lfhf;
u32 lock_dly;
u32 lock_cnt;
};
static const struct xvcu_pll_cfg xvcu_pll_cfg[] = {
{ 25, 3, 10, 3, 63, 1000 },
{ 26, 3, 10, 3, 63, 1000 },
{ 27, 4, 6, 3, 63, 1000 },
{ 28, 4, 6, 3, 63, 1000 },
{ 29, 4, 6, 3, 63, 1000 },
{ 30, 4, 6, 3, 63, 1000 },
{ 31, 6, 1, 3, 63, 1000 },
{ 32, 6, 1, 3, 63, 1000 },
{ 33, 4, 10, 3, 63, 1000 },
{ 34, 5, 6, 3, 63, 1000 },
{ 35, 5, 6, 3, 63, 1000 },
{ 36, 5, 6, 3, 63, 1000 },
{ 37, 5, 6, 3, 63, 1000 },
{ 38, 5, 6, 3, 63, 975 },
{ 39, 3, 12, 3, 63, 950 },
{ 40, 3, 12, 3, 63, 925 },
{ 41, 3, 12, 3, 63, 900 },
{ 42, 3, 12, 3, 63, 875 },
{ 43, 3, 12, 3, 63, 850 },
{ 44, 3, 12, 3, 63, 850 },
{ 45, 3, 12, 3, 63, 825 },
{ 46, 3, 12, 3, 63, 800 },
{ 47, 3, 12, 3, 63, 775 },
{ 48, 3, 12, 3, 63, 775 },
{ 49, 3, 12, 3, 63, 750 },
{ 50, 3, 12, 3, 63, 750 },
{ 51, 3, 2, 3, 63, 725 },
{ 52, 3, 2, 3, 63, 700 },
{ 53, 3, 2, 3, 63, 700 },
{ 54, 3, 2, 3, 63, 675 },
{ 55, 3, 2, 3, 63, 675 },
{ 56, 3, 2, 3, 63, 650 },
{ 57, 3, 2, 3, 63, 650 },
{ 58, 3, 2, 3, 63, 625 },
{ 59, 3, 2, 3, 63, 625 },
{ 60, 3, 2, 3, 63, 625 },
{ 61, 3, 2, 3, 63, 600 },
{ 62, 3, 2, 3, 63, 600 },
{ 63, 3, 2, 3, 63, 600 },
{ 64, 3, 2, 3, 63, 600 },
{ 65, 3, 2, 3, 63, 600 },
{ 66, 3, 2, 3, 63, 600 },
{ 67, 3, 2, 3, 63, 600 },
{ 68, 3, 2, 3, 63, 600 },
{ 69, 3, 2, 3, 63, 600 },
{ 70, 3, 2, 3, 63, 600 },
{ 71, 3, 2, 3, 63, 600 },
{ 72, 3, 2, 3, 63, 600 },
{ 73, 3, 2, 3, 63, 600 },
{ 74, 3, 2, 3, 63, 600 },
{ 75, 3, 2, 3, 63, 600 },
{ 76, 3, 2, 3, 63, 600 },
{ 77, 3, 2, 3, 63, 600 },
{ 78, 3, 2, 3, 63, 600 },
{ 79, 3, 2, 3, 63, 600 },
{ 80, 3, 2, 3, 63, 600 },
{ 81, 3, 2, 3, 63, 600 },
{ 82, 3, 2, 3, 63, 600 },
{ 83, 4, 2, 3, 63, 600 },
{ 84, 4, 2, 3, 63, 600 },
{ 85, 4, 2, 3, 63, 600 },
{ 86, 4, 2, 3, 63, 600 },
{ 87, 4, 2, 3, 63, 600 },
{ 88, 4, 2, 3, 63, 600 },
{ 89, 4, 2, 3, 63, 600 },
{ 90, 4, 2, 3, 63, 600 },
{ 91, 4, 2, 3, 63, 600 },
{ 92, 4, 2, 3, 63, 600 },
{ 93, 4, 2, 3, 63, 600 },
{ 94, 4, 2, 3, 63, 600 },
{ 95, 4, 2, 3, 63, 600 },
{ 96, 4, 2, 3, 63, 600 },
{ 97, 4, 2, 3, 63, 600 },
{ 98, 4, 2, 3, 63, 600 },
{ 99, 4, 2, 3, 63, 600 },
{ 100, 4, 2, 3, 63, 600 },
{ 101, 4, 2, 3, 63, 600 },
{ 102, 4, 2, 3, 63, 600 },
{ 103, 5, 2, 3, 63, 600 },
{ 104, 5, 2, 3, 63, 600 },
{ 105, 5, 2, 3, 63, 600 },
{ 106, 5, 2, 3, 63, 600 },
{ 107, 3, 4, 3, 63, 600 },
{ 108, 3, 4, 3, 63, 600 },
{ 109, 3, 4, 3, 63, 600 },
{ 110, 3, 4, 3, 63, 600 },
{ 111, 3, 4, 3, 63, 600 },
{ 112, 3, 4, 3, 63, 600 },
{ 113, 3, 4, 3, 63, 600 },
{ 114, 3, 4, 3, 63, 600 },
{ 115, 3, 4, 3, 63, 600 },
{ 116, 3, 4, 3, 63, 600 },
{ 117, 3, 4, 3, 63, 600 },
{ 118, 3, 4, 3, 63, 600 },
{ 119, 3, 4, 3, 63, 600 },
{ 120, 3, 4, 3, 63, 600 },
{ 121, 3, 4, 3, 63, 600 },
{ 122, 3, 4, 3, 63, 600 },
{ 123, 3, 4, 3, 63, 600 },
{ 124, 3, 4, 3, 63, 600 },
{ 125, 3, 4, 3, 63, 600 },
};
/**
* xvcu_read - Read from the VCU register space
* @iomem: vcu reg space base address
* @offset: vcu reg offset from base
*
* Return: The 32-bit value read from the specified VCU register
*/
static inline u32 xvcu_read(void __iomem *iomem, u32 offset)
{
return ioread32(iomem + offset);
}
/**
* xvcu_write - Write to the VCU register space
* @iomem: vcu reg space base address
* @offset: vcu reg offset from base
* @value: Value to write
*/
static inline void xvcu_write(void __iomem *iomem, u32 offset, u32 value)
{
iowrite32(value, iomem + offset);
}
/**
* xvcu_write_field_reg - Write to the vcu reg field
* @iomem: vcu reg space base address
* @offset: vcu reg offset from base
* @field: vcu reg field to write to
* @mask: vcu reg mask
* @shift: vcu reg number of bits to shift the bitfield
*/
static void xvcu_write_field_reg(void __iomem *iomem, int offset,
u32 field, u32 mask, int shift)
{
u32 val = xvcu_read(iomem, offset);
val &= ~(mask << shift);
val |= (field & mask) << shift;
xvcu_write(iomem, offset, val);
}
/**
* xvcu_set_vcu_pll_info - Set the VCU PLL info
* @xvcu: Pointer to the xvcu_device structure
*
* Programming the VCU PLL based on the user configuration
* (ref clock freq, core clock freq, mcu clock freq).
* Core clock frequency has higher priority than mcu clock frequency.
* Errors in the following cases:
* - When the mcu or core clock read from the logicoreIP is 0
* - When the VCU PLL DIV related bits hold a value other than 1
* - When no suitable PLL configuration is found for the given clocks
* - When an si570_1 clocksource related operation fails
*
* Return: Returns status, either success or error+reason
*/
static int xvcu_set_vcu_pll_info(struct xvcu_device *xvcu)
{
u32 refclk, coreclk, mcuclk, inte, deci;
u32 divisor_mcu, divisor_core, fvco;
u32 clkoutdiv, vcu_pll_ctrl, pll_clk;
u32 cfg_val, mod, ctrl;
int ret, i;
const struct xvcu_pll_cfg *found = NULL;
inte = xvcu_read(xvcu->logicore_reg_ba, VCU_PLL_CLK);
deci = xvcu_read(xvcu->logicore_reg_ba, VCU_PLL_CLK_DEC);
coreclk = xvcu_read(xvcu->logicore_reg_ba, VCU_CORE_CLK) * MHZ;
mcuclk = xvcu_read(xvcu->logicore_reg_ba, VCU_MCU_CLK) * MHZ;
if (!mcuclk || !coreclk) {
dev_err(xvcu->dev, "Invalid mcu and core clock data\n");
return -EINVAL;
}
refclk = (inte * MHZ) + (deci * (MHZ / FRAC));
dev_dbg(xvcu->dev, "Ref clock from logicoreIP is %uHz\n", refclk);
dev_dbg(xvcu->dev, "Core clock from logicoreIP is %uHz\n", coreclk);
dev_dbg(xvcu->dev, "Mcu clock from logicoreIP is %uHz\n", mcuclk);
clk_disable_unprepare(xvcu->pll_ref);
ret = clk_set_rate(xvcu->pll_ref, refclk);
if (ret)
dev_warn(xvcu->dev, "failed to set logicoreIP refclk rate\n");
ret = clk_prepare_enable(xvcu->pll_ref);
if (ret) {
dev_err(xvcu->dev, "failed to enable pll_ref clock source\n");
return ret;
}
refclk = clk_get_rate(xvcu->pll_ref);
/*
* The divide-by-2 should always be enabled (==1)
* to meet the timing in the design.
* Otherwise, it's an error
*/
vcu_pll_ctrl = xvcu_read(xvcu->vcu_slcr_ba, VCU_PLL_CTRL);
clkoutdiv = vcu_pll_ctrl >> VCU_PLL_CTRL_CLKOUTDIV_SHIFT;
clkoutdiv = clkoutdiv & VCU_PLL_CTRL_CLKOUTDIV_MASK;
if (clkoutdiv != 1) {
dev_err(xvcu->dev, "clkoutdiv value is invalid\n");
return -EINVAL;
}
for (i = ARRAY_SIZE(xvcu_pll_cfg) - 1; i >= 0; i--) {
const struct xvcu_pll_cfg *cfg = &xvcu_pll_cfg[i];
fvco = cfg->fbdiv * refclk;
if (fvco >= FVCO_MIN && fvco <= FVCO_MAX) {
pll_clk = fvco / VCU_PLL_DIV2;
if (fvco % VCU_PLL_DIV2 != 0)
pll_clk++;
mod = pll_clk % coreclk;
if (mod < LIMIT) {
divisor_core = pll_clk / coreclk;
} else if (coreclk - mod < LIMIT) {
divisor_core = pll_clk / coreclk;
divisor_core++;
} else {
continue;
}
if (divisor_core >= DIVISOR_MIN &&
divisor_core <= DIVISOR_MAX) {
found = cfg;
divisor_mcu = pll_clk / mcuclk;
mod = pll_clk % mcuclk;
if (mcuclk - mod < LIMIT)
divisor_mcu++;
break;
}
}
}
if (!found) {
dev_err(xvcu->dev, "Invalid clock combination.\n");
return -EINVAL;
}
xvcu->coreclk = pll_clk / divisor_core;
mcuclk = pll_clk / divisor_mcu;
dev_dbg(xvcu->dev, "Actual Ref clock freq is %uHz\n", refclk);
dev_dbg(xvcu->dev, "Actual Core clock freq is %uHz\n", xvcu->coreclk);
dev_dbg(xvcu->dev, "Actual Mcu clock freq is %uHz\n", mcuclk);
vcu_pll_ctrl &= ~(VCU_PLL_CTRL_FBDIV_MASK << VCU_PLL_CTRL_FBDIV_SHIFT);
vcu_pll_ctrl |= (found->fbdiv & VCU_PLL_CTRL_FBDIV_MASK) <<
VCU_PLL_CTRL_FBDIV_SHIFT;
vcu_pll_ctrl &= ~(VCU_PLL_CTRL_POR_IN_MASK <<
VCU_PLL_CTRL_POR_IN_SHIFT);
vcu_pll_ctrl |= (VCU_PLL_CTRL_DEFAULT & VCU_PLL_CTRL_POR_IN_MASK) <<
VCU_PLL_CTRL_POR_IN_SHIFT;
vcu_pll_ctrl &= ~(VCU_PLL_CTRL_PWR_POR_MASK <<
VCU_PLL_CTRL_PWR_POR_SHIFT);
vcu_pll_ctrl |= (VCU_PLL_CTRL_DEFAULT & VCU_PLL_CTRL_PWR_POR_MASK) <<
VCU_PLL_CTRL_PWR_POR_SHIFT;
xvcu_write(xvcu->vcu_slcr_ba, VCU_PLL_CTRL, vcu_pll_ctrl);
/* Set divisor for the core and mcu clock */
ctrl = xvcu_read(xvcu->vcu_slcr_ba, VCU_ENC_CORE_CTRL);
ctrl &= ~(VCU_PLL_DIVISOR_MASK << VCU_PLL_DIVISOR_SHIFT);
ctrl |= (divisor_core & VCU_PLL_DIVISOR_MASK) <<
VCU_PLL_DIVISOR_SHIFT;
ctrl &= ~(VCU_SRCSEL_MASK << VCU_SRCSEL_SHIFT);
ctrl |= (VCU_SRCSEL_PLL & VCU_SRCSEL_MASK) << VCU_SRCSEL_SHIFT;
xvcu_write(xvcu->vcu_slcr_ba, VCU_ENC_CORE_CTRL, ctrl);
ctrl = xvcu_read(xvcu->vcu_slcr_ba, VCU_DEC_CORE_CTRL);
ctrl &= ~(VCU_PLL_DIVISOR_MASK << VCU_PLL_DIVISOR_SHIFT);
ctrl |= (divisor_core & VCU_PLL_DIVISOR_MASK) <<
VCU_PLL_DIVISOR_SHIFT;
ctrl &= ~(VCU_SRCSEL_MASK << VCU_SRCSEL_SHIFT);
ctrl |= (VCU_SRCSEL_PLL & VCU_SRCSEL_MASK) << VCU_SRCSEL_SHIFT;
xvcu_write(xvcu->vcu_slcr_ba, VCU_DEC_CORE_CTRL, ctrl);
ctrl = xvcu_read(xvcu->vcu_slcr_ba, VCU_ENC_MCU_CTRL);
ctrl &= ~(VCU_PLL_DIVISOR_MASK << VCU_PLL_DIVISOR_SHIFT);
ctrl |= (divisor_mcu & VCU_PLL_DIVISOR_MASK) << VCU_PLL_DIVISOR_SHIFT;
ctrl &= ~(VCU_SRCSEL_MASK << VCU_SRCSEL_SHIFT);
ctrl |= (VCU_SRCSEL_PLL & VCU_SRCSEL_MASK) << VCU_SRCSEL_SHIFT;
xvcu_write(xvcu->vcu_slcr_ba, VCU_ENC_MCU_CTRL, ctrl);
ctrl = xvcu_read(xvcu->vcu_slcr_ba, VCU_DEC_MCU_CTRL);
ctrl &= ~(VCU_PLL_DIVISOR_MASK << VCU_PLL_DIVISOR_SHIFT);
ctrl |= (divisor_mcu & VCU_PLL_DIVISOR_MASK) << VCU_PLL_DIVISOR_SHIFT;
ctrl &= ~(VCU_SRCSEL_MASK << VCU_SRCSEL_SHIFT);
ctrl |= (VCU_SRCSEL_PLL & VCU_SRCSEL_MASK) << VCU_SRCSEL_SHIFT;
xvcu_write(xvcu->vcu_slcr_ba, VCU_DEC_MCU_CTRL, ctrl);
/* Set RES, CP, LFHF, LOCK_CNT and LOCK_DLY cfg values */
cfg_val = (found->res << VCU_PLL_CFG_RES_SHIFT) |
(found->cp << VCU_PLL_CFG_CP_SHIFT) |
(found->lfhf << VCU_PLL_CFG_LFHF_SHIFT) |
(found->lock_cnt << VCU_PLL_CFG_LOCK_CNT_SHIFT) |
(found->lock_dly << VCU_PLL_CFG_LOCK_DLY_SHIFT);
xvcu_write(xvcu->vcu_slcr_ba, VCU_PLL_CFG, cfg_val);
return 0;
}
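
To make the configuration search above concrete, a worked example with assumed clock values:

/*
 * Worked example (illustrative numbers): for refclk = 33.33 MHz the
 * downward scan accepts fbdiv = 90 first, since
 *   fvco = 90 * 33.33 MHz = 2999.7 MHz    (within 1500..3000 MHz)
 *   pll_clk = fvco / 2 = 1499.85 MHz
 * For a requested coreclk of 500 MHz, pll_clk % coreclk = 499.85 MHz,
 * so coreclk - mod = 0.15 MHz < LIMIT and divisor_core = 2 + 1 = 3,
 * giving an actual core clock of 1499.85 / 3 = 499.95 MHz.
 */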
/**
* xvcu_set_pll - PLL init sequence
* @xvcu: Pointer to the xvcu_device structure
*
* Call the API to set the PLL info and, once that is done, run the
* PLL init sequence to make the PLL stable.
*
* Return: Returns status, either success or error+reason
*/
static int xvcu_set_pll(struct xvcu_device *xvcu)
{
u32 lock_status;
unsigned long timeout;
int ret;
ret = xvcu_set_vcu_pll_info(xvcu);
if (ret) {
dev_err(xvcu->dev, "failed to set pll info\n");
return ret;
}
xvcu_write_field_reg(xvcu->vcu_slcr_ba, VCU_PLL_CTRL,
1, VCU_PLL_CTRL_BYPASS_MASK,
VCU_PLL_CTRL_BYPASS_SHIFT);
xvcu_write_field_reg(xvcu->vcu_slcr_ba, VCU_PLL_CTRL,
1, VCU_PLL_CTRL_RESET_MASK,
VCU_PLL_CTRL_RESET_SHIFT);
xvcu_write_field_reg(xvcu->vcu_slcr_ba, VCU_PLL_CTRL,
0, VCU_PLL_CTRL_RESET_MASK,
VCU_PLL_CTRL_RESET_SHIFT);
/*
* Define the timeout as the maximum time to wait for
* PLL_STATUS to report lock.
*/
timeout = jiffies + msecs_to_jiffies(2000);
do {
lock_status = xvcu_read(xvcu->vcu_slcr_ba, VCU_PLL_STATUS);
if (lock_status & VCU_PLL_STATUS_LOCK_STATUS_MASK) {
xvcu_write_field_reg(xvcu->vcu_slcr_ba, VCU_PLL_CTRL,
0, VCU_PLL_CTRL_BYPASS_MASK,
VCU_PLL_CTRL_BYPASS_SHIFT);
return 0;
}
} while (!time_after(jiffies, timeout));
/* PLL is not locked even after the 2 sec timeout */
dev_err(xvcu->dev, "PLL is not locked\n");
return -ETIMEDOUT;
}
/**
* xvcu_probe - Probe existence of the logicoreIP
* and initialize PLL
*
* @pdev: Pointer to the platform_device structure
*
* Return: Returns 0 on success
* Negative error code otherwise
*/
static int xvcu_probe(struct platform_device *pdev)
{
struct resource *res;
struct xvcu_device *xvcu;
int ret;
xvcu = devm_kzalloc(&pdev->dev, sizeof(*xvcu), GFP_KERNEL);
if (!xvcu)
return -ENOMEM;
xvcu->dev = &pdev->dev;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "vcu_slcr");
if (!res) {
dev_err(&pdev->dev, "get vcu_slcr memory resource failed.\n");
return -ENODEV;
}
xvcu->vcu_slcr_ba = devm_ioremap_nocache(&pdev->dev, res->start,
resource_size(res));
if (!xvcu->vcu_slcr_ba) {
dev_err(&pdev->dev, "vcu_slcr register mapping failed.\n");
return -ENOMEM;
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "logicore");
if (!res) {
dev_err(&pdev->dev, "get logicore memory resource failed.\n");
return -ENODEV;
}
xvcu->logicore_reg_ba = devm_ioremap_nocache(&pdev->dev, res->start,
resource_size(res));
if (!xvcu->logicore_reg_ba) {
dev_err(&pdev->dev, "logicore register mapping failed.\n");
return -ENOMEM;
}
xvcu->aclk = devm_clk_get(&pdev->dev, "aclk");
if (IS_ERR(xvcu->aclk)) {
dev_err(&pdev->dev, "Could not get aclk clock\n");
return PTR_ERR(xvcu->aclk);
}
xvcu->pll_ref = devm_clk_get(&pdev->dev, "pll_ref");
if (IS_ERR(xvcu->pll_ref)) {
dev_err(&pdev->dev, "Could not get pll_ref clock\n");
return PTR_ERR(xvcu->pll_ref);
}
ret = clk_prepare_enable(xvcu->aclk);
if (ret) {
dev_err(&pdev->dev, "aclk clock enable failed\n");
return ret;
}
ret = clk_prepare_enable(xvcu->pll_ref);
if (ret) {
dev_err(&pdev->dev, "pll_ref clock enable failed\n");
goto error_aclk;
}
/*
* Do the Gasket isolation and put the VCU out of reset
* Bit 0 : Gasket isolation
* Bit 1 : put VCU out of reset
*/
xvcu_write(xvcu->logicore_reg_ba, VCU_GASKET_INIT, VCU_GASKET_VALUE);
/* Do the PLL Settings based on the ref clk,core and mcu clk freq */
ret = xvcu_set_pll(xvcu);
if (ret) {
dev_err(&pdev->dev, "Failed to set the pll\n");
goto error_pll_ref;
}
dev_set_drvdata(&pdev->dev, xvcu);
dev_info(&pdev->dev, "%s: Probed successfully\n", __func__);
return 0;
error_pll_ref:
clk_disable_unprepare(xvcu->pll_ref);
error_aclk:
clk_disable_unprepare(xvcu->aclk);
return ret;
}
/**
* xvcu_remove - Insert gasket isolation
* and disable the clock
* @pdev: Pointer to the platform_device structure
*
* Return: Returns 0 on success
* Negative error code otherwise
*/
static int xvcu_remove(struct platform_device *pdev)
{
struct xvcu_device *xvcu;
xvcu = platform_get_drvdata(pdev);
if (!xvcu)
return -ENODEV;
/* Add the gasket isolation and put the VCU in reset. */
xvcu_write(xvcu->logicore_reg_ba, VCU_GASKET_INIT, 0);
clk_disable_unprepare(xvcu->pll_ref);
clk_disable_unprepare(xvcu->aclk);
return 0;
}
static const struct of_device_id xvcu_of_id_table[] = {
{ .compatible = "xlnx,vcu" },
{ .compatible = "xlnx,vcu-logicoreip-1.0" },
{ }
};
MODULE_DEVICE_TABLE(of, xvcu_of_id_table);
static struct platform_driver xvcu_driver = {
.driver = {
.name = "xilinx-vcu",
.of_match_table = xvcu_of_id_table,
},
.probe = xvcu_probe,
.remove = xvcu_remove,
};
module_platform_driver(xvcu_driver);
MODULE_AUTHOR("Dhaval Shah <dshah@xilinx.com>");
MODULE_DESCRIPTION("Xilinx VCU init Driver");
MODULE_LICENSE("GPL v2");


@ -4,3 +4,4 @@ optee-objs += core.o
optee-objs += call.o
optee-objs += rpc.o
optee-objs += supp.o
optee-objs += shm_pool.o


@ -15,6 +15,7 @@
#include <linux/device.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/tee_drv.h>
#include <linux/types.h>
@ -135,6 +136,7 @@ u32 optee_do_call_with_arg(struct tee_context *ctx, phys_addr_t parg)
struct optee *optee = tee_get_drvdata(ctx->teedev);
struct optee_call_waiter w;
struct optee_rpc_param param = { };
struct optee_call_ctx call_ctx = { };
u32 ret;
param.a0 = OPTEE_SMC_CALL_WITH_ARG;
@ -159,13 +161,14 @@ u32 optee_do_call_with_arg(struct tee_context *ctx, phys_addr_t parg)
param.a1 = res.a1;
param.a2 = res.a2;
param.a3 = res.a3;
optee_handle_rpc(ctx, &param);
optee_handle_rpc(ctx, &param, &call_ctx);
} else {
ret = res.a0;
break;
}
}
optee_rpc_finalize_call(&call_ctx);
/*
* We're done with our thread in secure world, if there's any
* thread waiters wake up one.
@ -442,3 +445,218 @@ void optee_disable_shm_cache(struct optee *optee)
}
optee_cq_wait_final(&optee->call_queue, &w);
}
#define PAGELIST_ENTRIES_PER_PAGE \
((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
/**
* optee_fill_pages_list() - write list of user pages to given shared
* buffer.
*
* @dst: page-aligned buffer where list of pages will be stored
* @pages: array of pages that represents shared buffer
* @num_pages: number of entries in @pages
* @page_offset: offset of user buffer from page start
*
* @dst should be big enough to hold list of user page addresses and
* links to the next pages of buffer
*/
void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages,
size_t page_offset)
{
int n = 0;
phys_addr_t optee_page;
/*
* Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h
* for details.
*/
struct {
u64 pages_list[PAGELIST_ENTRIES_PER_PAGE];
u64 next_page_data;
} *pages_data;
/*
* Currently OP-TEE uses a 4k page size and it does not look
* like this will change in the future. On the other hand, there are
* no known ARM architectures with a page size < 4k.
* The next build assert thus looks redundant, but the following
* code heavily relies on this assumption, so it is better to be
* safe than sorry.
*/
BUILD_BUG_ON(PAGE_SIZE < OPTEE_MSG_NONCONTIG_PAGE_SIZE);
pages_data = (void *)dst;
/*
* If the Linux page is bigger than 4k and the user buffer offset is
* larger than 4k/8k/12k/etc, this skips the first 4k pages,
* because they bear no data of value to OP-TEE.
*/
optee_page = page_to_phys(*pages) +
round_down(page_offset, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
while (true) {
pages_data->pages_list[n++] = optee_page;
if (n == PAGELIST_ENTRIES_PER_PAGE) {
pages_data->next_page_data =
virt_to_phys(pages_data + 1);
pages_data++;
n = 0;
}
optee_page += OPTEE_MSG_NONCONTIG_PAGE_SIZE;
if (!(optee_page & ~PAGE_MASK)) {
if (!--num_pages)
break;
pages++;
optee_page = page_to_phys(*pages);
}
}
}
/*
* The final entry in each pagelist page is a pointer to the next
* pagelist page.
*/
static size_t get_pages_list_size(size_t num_entries)
{
int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
}
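
As a size sanity check, assuming the usual 4 KiB OP-TEE page size:

/*
 * Example: with OPTEE_MSG_NONCONTIG_PAGE_SIZE = 4096 and u64 entries,
 * PAGELIST_ENTRIES_PER_PAGE = 4096 / 8 - 1 = 511. A 1 MiB user buffer
 * spans 256 pages, so DIV_ROUND_UP(256, 511) = 1 pagelist page, i.e.
 * 4096 bytes, is allocated.
 */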
u64 *optee_allocate_pages_list(size_t num_entries)
{
return alloc_pages_exact(get_pages_list_size(num_entries), GFP_KERNEL);
}
void optee_free_pages_list(void *list, size_t num_entries)
{
free_pages_exact(list, get_pages_list_size(num_entries));
}
static bool is_normal_memory(pgprot_t p)
{
#if defined(CONFIG_ARM)
return (pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEALLOC;
#elif defined(CONFIG_ARM64)
return (pgprot_val(p) & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL);
#else
#error "Unuspported architecture"
#endif
}
static int __check_mem_type(struct vm_area_struct *vma, unsigned long end)
{
while (vma && is_normal_memory(vma->vm_page_prot)) {
if (vma->vm_end >= end)
return 0;
vma = vma->vm_next;
}
return -EINVAL;
}
static int check_mem_type(unsigned long start, size_t num_pages)
{
struct mm_struct *mm = current->mm;
int rc;
down_read(&mm->mmap_sem);
rc = __check_mem_type(find_vma(mm, start),
start + num_pages * PAGE_SIZE);
up_read(&mm->mmap_sem);
return rc;
}
int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm,
struct page **pages, size_t num_pages,
unsigned long start)
{
struct tee_shm *shm_arg = NULL;
struct optee_msg_arg *msg_arg;
u64 *pages_list;
phys_addr_t msg_parg;
int rc;
if (!num_pages)
return -EINVAL;
rc = check_mem_type(start, num_pages);
if (rc)
return rc;
pages_list = optee_allocate_pages_list(num_pages);
if (!pages_list)
return -ENOMEM;
shm_arg = get_msg_arg(ctx, 1, &msg_arg, &msg_parg);
if (IS_ERR(shm_arg)) {
rc = PTR_ERR(shm_arg);
goto out;
}
optee_fill_pages_list(pages_list, pages, num_pages,
tee_shm_get_page_offset(shm));
msg_arg->cmd = OPTEE_MSG_CMD_REGISTER_SHM;
msg_arg->params->attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
OPTEE_MSG_ATTR_NONCONTIG;
msg_arg->params->u.tmem.shm_ref = (unsigned long)shm;
msg_arg->params->u.tmem.size = tee_shm_get_size(shm);
/*
* In the least significant bits of msg_arg->params->u.tmem.buf_ptr we
* store the buffer offset from the 4k page, as described in the OP-TEE ABI.
*/
msg_arg->params->u.tmem.buf_ptr = virt_to_phys(pages_list) |
(tee_shm_get_page_offset(shm) & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1));
if (optee_do_call_with_arg(ctx, msg_parg) ||
msg_arg->ret != TEEC_SUCCESS)
rc = -EINVAL;
tee_shm_free(shm_arg);
out:
optee_free_pages_list(pages_list, num_pages);
return rc;
}
int optee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm)
{
struct tee_shm *shm_arg;
struct optee_msg_arg *msg_arg;
phys_addr_t msg_parg;
int rc = 0;
shm_arg = get_msg_arg(ctx, 1, &msg_arg, &msg_parg);
if (IS_ERR(shm_arg))
return PTR_ERR(shm_arg);
msg_arg->cmd = OPTEE_MSG_CMD_UNREGISTER_SHM;
msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
msg_arg->params[0].u.rmem.shm_ref = (unsigned long)shm;
if (optee_do_call_with_arg(ctx, msg_parg) ||
msg_arg->ret != TEEC_SUCCESS)
rc = -EINVAL;
tee_shm_free(shm_arg);
return rc;
}
int optee_shm_register_supp(struct tee_context *ctx, struct tee_shm *shm,
struct page **pages, size_t num_pages,
unsigned long start)
{
/*
* We don't want to register supplicant memory in OP-TEE.
* Instead, information about it will be passed in the RPC code.
*/
return check_mem_type(start, num_pages);
}
int optee_shm_unregister_supp(struct tee_context *ctx, struct tee_shm *shm)
{
return 0;
}


@ -28,6 +28,7 @@
#include <linux/uaccess.h>
#include "optee_private.h"
#include "optee_smc.h"
#include "shm_pool.h"
#define DRIVER_NAME "optee"
@ -97,6 +98,25 @@ int optee_from_msg_param(struct tee_param *params, size_t num_params,
return rc;
}
break;
case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT +
attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT;
p->u.memref.size = mp->u.rmem.size;
shm = (struct tee_shm *)(unsigned long)
mp->u.rmem.shm_ref;
if (!shm) {
p->u.memref.shm_offs = 0;
p->u.memref.shm = NULL;
break;
}
p->u.memref.shm_offs = mp->u.rmem.offs;
p->u.memref.shm = shm;
break;
default:
return -EINVAL;
}
@ -104,6 +124,46 @@ int optee_from_msg_param(struct tee_param *params, size_t num_params,
return 0;
}
static int to_msg_param_tmp_mem(struct optee_msg_param *mp,
const struct tee_param *p)
{
int rc;
phys_addr_t pa;
mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm;
mp->u.tmem.size = p->u.memref.size;
if (!p->u.memref.shm) {
mp->u.tmem.buf_ptr = 0;
return 0;
}
rc = tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa);
if (rc)
return rc;
mp->u.tmem.buf_ptr = pa;
mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED <<
OPTEE_MSG_ATTR_CACHE_SHIFT;
return 0;
}
static int to_msg_param_reg_mem(struct optee_msg_param *mp,
const struct tee_param *p)
{
mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm;
mp->u.rmem.size = p->u.memref.size;
mp->u.rmem.offs = p->u.memref.shm_offs;
return 0;
}
/**
* optee_to_msg_param() - convert from struct tee_params to OPTEE_MSG parameters
* @msg_params: OPTEE_MSG parameters
@@ -116,7 +176,6 @@ int optee_to_msg_param(struct optee_msg_param *msg_params, size_t num_params,
{
int rc;
size_t n;
phys_addr_t pa;
for (n = 0; n < num_params; n++) {
const struct tee_param *p = params + n;
@@ -139,22 +198,12 @@ int optee_to_msg_param(struct optee_msg_param *msg_params, size_t num_params,
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT +
p->attr -
TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT;
mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm;
mp->u.tmem.size = p->u.memref.size;
if (!p->u.memref.shm) {
mp->u.tmem.buf_ptr = 0;
break;
}
rc = tee_shm_get_pa(p->u.memref.shm,
p->u.memref.shm_offs, &pa);
if (tee_shm_is_registered(p->u.memref.shm))
rc = to_msg_param_reg_mem(mp, p);
else
rc = to_msg_param_tmp_mem(mp, p);
if (rc)
return rc;
mp->u.tmem.buf_ptr = pa;
mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED <<
OPTEE_MSG_ATTR_CACHE_SHIFT;
break;
default:
return -EINVAL;
@@ -171,6 +220,10 @@ static void optee_get_version(struct tee_device *teedev,
.impl_caps = TEE_OPTEE_CAP_TZ,
.gen_caps = TEE_GEN_CAP_GP,
};
struct optee *optee = tee_get_drvdata(teedev);
if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
v.gen_caps |= TEE_GEN_CAP_REG_MEM;
*vers = v;
}
@@ -187,12 +240,12 @@ static int optee_open(struct tee_context *ctx)
if (teedev == optee->supp_teedev) {
bool busy = true;
mutex_lock(&optee->supp.ctx_mutex);
mutex_lock(&optee->supp.mutex);
if (!optee->supp.ctx) {
busy = false;
optee->supp.ctx = ctx;
}
mutex_unlock(&optee->supp.ctx_mutex);
mutex_unlock(&optee->supp.mutex);
if (busy) {
kfree(ctxdata);
return -EBUSY;
@@ -252,11 +305,8 @@ static void optee_release(struct tee_context *ctx)
ctx->data = NULL;
if (teedev == optee->supp_teedev) {
mutex_lock(&optee->supp.ctx_mutex);
optee->supp.ctx = NULL;
mutex_unlock(&optee->supp.ctx_mutex);
}
if (teedev == optee->supp_teedev)
optee_supp_release(&optee->supp);
}
static const struct tee_driver_ops optee_ops = {
@@ -267,6 +317,8 @@ static const struct tee_driver_ops optee_ops = {
.close_session = optee_close_session,
.invoke_func = optee_invoke_func,
.cancel_req = optee_cancel_req,
.shm_register = optee_shm_register,
.shm_unregister = optee_shm_unregister,
};
static const struct tee_desc optee_desc = {
@@ -281,6 +333,8 @@ static const struct tee_driver_ops optee_supp_ops = {
.release = optee_release,
.supp_recv = optee_supp_recv,
.supp_send = optee_supp_send,
.shm_register = optee_shm_register_supp,
.shm_unregister = optee_shm_unregister_supp,
};
static const struct tee_desc optee_supp_desc = {
@@ -345,21 +399,22 @@ static bool optee_msg_exchange_capabilities(optee_invoke_fn *invoke_fn,
}
static struct tee_shm_pool *
optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm)
optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm,
u32 sec_caps)
{
union {
struct arm_smccc_res smccc;
struct optee_smc_get_shm_config_result result;
} res;
struct tee_shm_pool *pool;
unsigned long vaddr;
phys_addr_t paddr;
size_t size;
phys_addr_t begin;
phys_addr_t end;
void *va;
struct tee_shm_pool_mem_info priv_info;
struct tee_shm_pool_mem_info dmabuf_info;
struct tee_shm_pool_mgr *priv_mgr;
struct tee_shm_pool_mgr *dmabuf_mgr;
void *rc;
invoke_fn(OPTEE_SMC_GET_SHM_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc);
if (res.result.status != OPTEE_SMC_RETURN_OK) {
@@ -389,22 +444,49 @@ optee_config_shm_memremap(optee_invoke_fn *invoke_fn, void **memremaped_shm)
}
vaddr = (unsigned long)va;
priv_info.vaddr = vaddr;
priv_info.paddr = paddr;
priv_info.size = OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
dmabuf_info.vaddr = vaddr + OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
dmabuf_info.paddr = paddr + OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
dmabuf_info.size = size - OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
/*
* If OP-TEE can work with unregistered SHM, we will use our own
* pool for private shm
*/
if (sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) {
rc = optee_shm_pool_alloc_pages();
if (IS_ERR(rc))
goto err_memunmap;
priv_mgr = rc;
} else {
const size_t sz = OPTEE_SHM_NUM_PRIV_PAGES * PAGE_SIZE;
pool = tee_shm_pool_alloc_res_mem(&priv_info, &dmabuf_info);
if (IS_ERR(pool)) {
memunmap(va);
goto out;
rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, sz,
3 /* 8 bytes aligned */);
if (IS_ERR(rc))
goto err_memunmap;
priv_mgr = rc;
vaddr += sz;
paddr += sz;
size -= sz;
}
rc = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, size, PAGE_SHIFT);
if (IS_ERR(rc))
goto err_free_priv_mgr;
dmabuf_mgr = rc;
rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
if (IS_ERR(rc))
goto err_free_dmabuf_mgr;
*memremaped_shm = va;
out:
return pool;
return rc;
err_free_dmabuf_mgr:
tee_shm_pool_mgr_destroy(dmabuf_mgr);
err_free_priv_mgr:
tee_shm_pool_mgr_destroy(priv_mgr);
err_memunmap:
memunmap(va);
return rc;
}
/* Simple wrapper functions to be able to use a function pointer */
@@ -482,7 +564,7 @@ static struct optee *optee_probe(struct device_node *np)
if (!(sec_caps & OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM))
return ERR_PTR(-EINVAL);
pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm);
pool = optee_config_shm_memremap(invoke_fn, &memremaped_shm, sec_caps);
if (IS_ERR(pool))
return (void *)pool;
@@ -493,6 +575,7 @@ static struct optee *optee_probe(struct device_node *np)
}
optee->invoke_fn = invoke_fn;
optee->sec_caps = sec_caps;
teedev = tee_device_alloc(&optee_desc, NULL, pool, optee);
if (IS_ERR(teedev)) {


@@ -67,11 +67,32 @@
#define OPTEE_MSG_ATTR_META BIT(8)
/*
* The temporary shared memory object is not physically contiguous and this
* temp memref is followed by another fragment until the last temp memref
* that doesn't have this bit set.
* Pointer to a list of pages used to register user-defined SHM buffer.
* Used with OPTEE_MSG_ATTR_TYPE_TMEM_*.
* buf_ptr should point to the beginning of the buffer. The buffer will
* contain a list of page addresses. OP-TEE core can reconstruct a contiguous
* buffer from that list of page addresses. Page addresses are stored as
* 64-bit values. The last entry on a page should point to the next page of
* the buffer. Every entry in the buffer should point to the beginning of a
* 4k page (the 12 least significant bits must be zero).
*
* The 12 least significant bits of optee_msg_param.u.tmem.buf_ptr should
* hold the page offset of the user buffer.
*
* So, entries should be placed like members of this structure:
*
* struct page_data {
* uint64_t pages_array[OPTEE_MSG_NONCONTIG_PAGE_SIZE/sizeof(uint64_t) - 1];
* uint64_t next_page_data;
* };
*
* Structure is designed to exactly fit into the page size
* OPTEE_MSG_NONCONTIG_PAGE_SIZE which is a standard 4KB page.
*
* The size of 4KB is chosen because this is the smallest page size for ARM
* architectures. If REE uses larger pages, it should divide them into 4KB ones.
*/
#define OPTEE_MSG_ATTR_FRAGMENT BIT(9)
#define OPTEE_MSG_ATTR_NONCONTIG BIT(9)
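A sketch of a list builder matching the layout described above, assuming the
list pages are virtually contiguous (as with alloc_pages_exact()); the helper
name is made up here, and the driver's real implementation,
optee_fill_pages_list() in a later hunk, additionally folds in the initial
page offset:

#define PAGELIST_ENTRIES_PER_PAGE \
	((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)

static void fill_pages_list_sketch(u64 *dst, struct page **pages,
				   int num_pages)
{
	int n = 0;

	while (num_pages--) {
		*dst++ = page_to_phys(*pages++);
		if (++n == PAGELIST_ENTRIES_PER_PAGE) {
			/* The last entry links to the next page of the list */
			*dst = virt_to_phys(dst + 1);
			dst++;
			n = 0;
		}
	}
}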
/*
* Memory attributes for caching passed with temp memrefs. The actual value
@@ -94,6 +115,11 @@
#define OPTEE_MSG_LOGIN_APPLICATION_USER 0x00000005
#define OPTEE_MSG_LOGIN_APPLICATION_GROUP 0x00000006
/*
* Page size used in non-contiguous buffer entries
*/
#define OPTEE_MSG_NONCONTIG_PAGE_SIZE 4096
/**
* struct optee_msg_param_tmem - temporary memory reference parameter
* @buf_ptr: Address of the buffer
@@ -145,8 +171,8 @@ struct optee_msg_param_value {
*
* @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in
* the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value,
* OPTEE_MSG_ATTR_TYPE_TMEM_* indicates tmem and
* OPTEE_MSG_ATTR_TYPE_RMEM_* indicates rmem.
* OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and
* OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem,
* OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used.
*/
struct optee_msg_param {


@@ -53,36 +53,24 @@ struct optee_wait_queue {
* @ctx: the context of the currently connected supplicant;
* if !NULL the supplicant device is available for use,
* else busy
* @ctx_mutex: held while accessing @ctx
* @func: supplicant function id to call
* @ret: call return value
* @num_params: number of elements in @param
* @param: parameters for @func
* @req_posted: if true, a request has been posted to the supplicant
* @supp_next_send: if true, next step is for supplicant to send response
* @thrd_mutex: held by the thread doing a request to supplicant
* @supp_mutex: held by supplicant while operating on this struct
* @data_to_supp: supplicant is waiting on this for next request
* @data_from_supp: requesting thread is waiting on this to get the result
* @mutex: held while accessing content of this struct
* @req_id: current request id if supplicant is doing synchronous
* communication, else -1
* @reqs: queued request not yet retrieved by supplicant
* @idr: IDR holding all requests currently being processed
* by supplicant
* @reqs_c: completion used by supplicant when waiting for a
* request to be queued.
*/
struct optee_supp {
/* Serializes access to this struct */
struct mutex mutex;
struct tee_context *ctx;
/* Serializes access of ctx */
struct mutex ctx_mutex;
u32 func;
u32 ret;
size_t num_params;
struct tee_param *param;
bool req_posted;
bool supp_next_send;
/* Serializes access to this struct for requesting thread */
struct mutex thrd_mutex;
/* Serializes access to this struct for supplicant threads */
struct mutex supp_mutex;
struct completion data_to_supp;
struct completion data_from_supp;
int req_id;
struct list_head reqs;
struct idr idr;
struct completion reqs_c;
};
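The request lifecycle implied by these fields, pieced together from the
supp.c changes later in this pull:

/*
 *   requesting thread                     supplicant
 *   -----------------                     ----------
 *   req = kzalloc(...); init_completion(&req->c)
 *   list_add_tail(&req->link, &supp->reqs)
 *   complete(&supp->reqs_c)        ---->  supp_pop_entry():
 *   wait_for_completion(&req->c)            list_del() + idr_alloc()
 *                                          ... handled in user space ...
 *                                  <----  supp_pop_req(): idr_remove()
 *   read req->ret, kfree(req)               complete(&req->c)
 */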
/**
@@ -96,6 +84,8 @@ struct optee_supp {
* @supp: supplicant synchronization struct for RPC to supplicant
* @pool: shared memory pool
* @memremaped_shm: virtual address of memory in shared memory pool
* @sec_caps: secure world capabilities defined by
* OPTEE_SMC_SEC_CAP_* in optee_smc.h
*/
struct optee {
struct tee_device *supp_teedev;
@@ -106,6 +96,7 @@ struct optee {
struct optee_supp supp;
struct tee_shm_pool *pool;
void *memremaped_shm;
u32 sec_caps;
};
struct optee_session {
@@ -130,7 +121,16 @@ struct optee_rpc_param {
u32 a7;
};
void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param);
/* Holds context that is preserved during one STD call */
struct optee_call_ctx {
/* information about pages list used in last allocation */
void *pages_list;
size_t num_entries;
};
void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param,
struct optee_call_ctx *call_ctx);
void optee_rpc_finalize_call(struct optee_call_ctx *call_ctx);
void optee_wait_queue_init(struct optee_wait_queue *wq);
void optee_wait_queue_exit(struct optee_wait_queue *wq);
@@ -142,6 +142,7 @@ int optee_supp_read(struct tee_context *ctx, void __user *buf, size_t len);
int optee_supp_write(struct tee_context *ctx, void __user *buf, size_t len);
void optee_supp_init(struct optee_supp *supp);
void optee_supp_uninit(struct optee_supp *supp);
void optee_supp_release(struct optee_supp *supp);
int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params,
struct tee_param *param);
@@ -160,11 +161,26 @@ int optee_cancel_req(struct tee_context *ctx, u32 cancel_id, u32 session);
void optee_enable_shm_cache(struct optee *optee);
void optee_disable_shm_cache(struct optee *optee);
int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm,
struct page **pages, size_t num_pages,
unsigned long start);
int optee_shm_unregister(struct tee_context *ctx, struct tee_shm *shm);
int optee_shm_register_supp(struct tee_context *ctx, struct tee_shm *shm,
struct page **pages, size_t num_pages,
unsigned long start);
int optee_shm_unregister_supp(struct tee_context *ctx, struct tee_shm *shm);
int optee_from_msg_param(struct tee_param *params, size_t num_params,
const struct optee_msg_param *msg_params);
int optee_to_msg_param(struct optee_msg_param *msg_params, size_t num_params,
const struct tee_param *params);
u64 *optee_allocate_pages_list(size_t num_entries);
void optee_free_pages_list(void *array, size_t num_entries);
void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages,
size_t page_offset);
/*
* Small helpers
*/


@@ -222,6 +222,13 @@ struct optee_smc_get_shm_config_result {
#define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0)
/* Secure world can communicate via previously unregistered shared memory */
#define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1)
/*
* Secure world supports commands "register/unregister shared memory";
* it accepts command buffers located anywhere in non-secure RAM
*/
#define OPTEE_SMC_SEC_CAP_DYNAMIC_SHM BIT(2)
#define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9
#define OPTEE_SMC_EXCHANGE_CAPABILITIES \
OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES)


@@ -192,15 +192,16 @@ static struct tee_shm *cmd_alloc_suppl(struct tee_context *ctx, size_t sz)
if (ret)
return ERR_PTR(-ENOMEM);
mutex_lock(&optee->supp.ctx_mutex);
mutex_lock(&optee->supp.mutex);
/* Increases count as secure world doesn't have a reference */
shm = tee_shm_get_from_id(optee->supp.ctx, param.u.value.c);
mutex_unlock(&optee->supp.ctx_mutex);
mutex_unlock(&optee->supp.mutex);
return shm;
}
static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
struct optee_msg_arg *arg)
struct optee_msg_arg *arg,
struct optee_call_ctx *call_ctx)
{
phys_addr_t pa;
struct tee_shm *shm;
@@ -245,10 +246,49 @@ static void handle_rpc_func_cmd_shm_alloc(struct tee_context *ctx,
goto bad;
}
arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT;
arg->params[0].u.tmem.buf_ptr = pa;
arg->params[0].u.tmem.size = sz;
arg->params[0].u.tmem.shm_ref = (unsigned long)shm;
sz = tee_shm_get_size(shm);
if (tee_shm_is_registered(shm)) {
struct page **pages;
u64 *pages_list;
size_t page_num;
pages = tee_shm_get_pages(shm, &page_num);
if (!pages || !page_num) {
arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
goto bad;
}
pages_list = optee_allocate_pages_list(page_num);
if (!pages_list) {
arg->ret = TEEC_ERROR_OUT_OF_MEMORY;
goto bad;
}
call_ctx->pages_list = pages_list;
call_ctx->num_entries = page_num;
arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
OPTEE_MSG_ATTR_NONCONTIG;
/*
* In the least significant bits of u.tmem.buf_ptr we store the buffer
* offset from a 4k page, as described in the OP-TEE ABI.
*/
arg->params[0].u.tmem.buf_ptr = virt_to_phys(pages_list) |
(tee_shm_get_page_offset(shm) &
(OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1));
arg->params[0].u.tmem.size = tee_shm_get_size(shm);
arg->params[0].u.tmem.shm_ref = (unsigned long)shm;
optee_fill_pages_list(pages_list, pages, page_num,
tee_shm_get_page_offset(shm));
} else {
arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT;
arg->params[0].u.tmem.buf_ptr = pa;
arg->params[0].u.tmem.size = sz;
arg->params[0].u.tmem.shm_ref = (unsigned long)shm;
}
arg->ret = TEEC_SUCCESS;
return;
bad:
@@ -307,8 +347,24 @@ static void handle_rpc_func_cmd_shm_free(struct tee_context *ctx,
arg->ret = TEEC_SUCCESS;
}
static void free_pages_list(struct optee_call_ctx *call_ctx)
{
if (call_ctx->pages_list) {
optee_free_pages_list(call_ctx->pages_list,
call_ctx->num_entries);
call_ctx->pages_list = NULL;
call_ctx->num_entries = 0;
}
}
void optee_rpc_finalize_call(struct optee_call_ctx *call_ctx)
{
free_pages_list(call_ctx);
}
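The call context is owned by the caller; sketched from the interface here,
since optee_do_call_with_arg() itself is modified in a hunk not shown:

/*
 *	struct optee_call_ctx call_ctx = { };
 *
 *	while (OP-TEE keeps returning RPC requests)
 *		optee_handle_rpc(ctx, &param, &call_ctx);
 *	optee_rpc_finalize_call(&call_ctx);
 *
 * Any page list allocated for an RPC SHM_ALLOC thus stays alive until
 * the next allocation in the same call, or until the call returns.
 */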
static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
struct tee_shm *shm)
struct tee_shm *shm,
struct optee_call_ctx *call_ctx)
{
struct optee_msg_arg *arg;
@@ -329,7 +385,8 @@ static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
handle_rpc_func_cmd_wait(arg);
break;
case OPTEE_MSG_RPC_CMD_SHM_ALLOC:
handle_rpc_func_cmd_shm_alloc(ctx, arg);
free_pages_list(call_ctx);
handle_rpc_func_cmd_shm_alloc(ctx, arg, call_ctx);
break;
case OPTEE_MSG_RPC_CMD_SHM_FREE:
handle_rpc_func_cmd_shm_free(ctx, arg);
@@ -343,10 +400,12 @@ static void handle_rpc_func_cmd(struct tee_context *ctx, struct optee *optee,
* optee_handle_rpc() - handle RPC from secure world
* @ctx: context doing the RPC
* @param: value of registers for the RPC
* @call_ctx: call context. Preserved during one OP-TEE invocation
*
* Result of RPC is written back into @param.
*/
void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param)
void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param,
struct optee_call_ctx *call_ctx)
{
struct tee_device *teedev = ctx->teedev;
struct optee *optee = tee_get_drvdata(teedev);
@@ -381,7 +440,7 @@ void optee_handle_rpc(struct tee_context *ctx, struct optee_rpc_param *param)
break;
case OPTEE_SMC_RPC_FUNC_CMD:
shm = reg_pair_to_ptr(param->a1, param->a2);
handle_rpc_func_cmd(ctx, optee, shm);
handle_rpc_func_cmd(ctx, optee, shm, call_ctx);
break;
default:
pr_warn("Unknown RPC func 0x%x\n",


@@ -0,0 +1,75 @@
/*
* Copyright (c) 2015, Linaro Limited
* Copyright (c) 2017, EPAM Systems
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/genalloc.h>
#include <linux/slab.h>
#include <linux/tee_drv.h>
#include "optee_private.h"
#include "optee_smc.h"
#include "shm_pool.h"
static int pool_op_alloc(struct tee_shm_pool_mgr *poolm,
struct tee_shm *shm, size_t size)
{
unsigned int order = get_order(size);
struct page *page;
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!page)
return -ENOMEM;
shm->kaddr = page_address(page);
shm->paddr = page_to_phys(page);
shm->size = PAGE_SIZE << order;
return 0;
}
static void pool_op_free(struct tee_shm_pool_mgr *poolm,
struct tee_shm *shm)
{
free_pages((unsigned long)shm->kaddr, get_order(shm->size));
shm->kaddr = NULL;
}
static void pool_op_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
{
kfree(poolm);
}
static const struct tee_shm_pool_mgr_ops pool_ops = {
.alloc = pool_op_alloc,
.free = pool_op_free,
.destroy_poolmgr = pool_op_destroy_poolmgr,
};
/**
* optee_shm_pool_alloc_pages() - create page-based allocator pool
*
* This pool is used when OP-TEE supports dynamic SHM. In this case
* command buffers and such are allocated from the kernel's own memory.
*/
struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void)
{
struct tee_shm_pool_mgr *mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
if (!mgr)
return ERR_PTR(-ENOMEM);
mgr->ops = &pool_ops;
return mgr;
}


@@ -0,0 +1,23 @@
/*
* Copyright (c) 2015, Linaro Limited
* Copyright (c) 2016, EPAM Systems
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef SHM_POOL_H
#define SHM_POOL_H
#include <linux/tee_drv.h>
struct tee_shm_pool_mgr *optee_shm_pool_alloc_pages(void);
#endif


@@ -16,21 +16,61 @@
#include <linux/uaccess.h>
#include "optee_private.h"
struct optee_supp_req {
struct list_head link;
bool busy;
u32 func;
u32 ret;
size_t num_params;
struct tee_param *param;
struct completion c;
};
void optee_supp_init(struct optee_supp *supp)
{
memset(supp, 0, sizeof(*supp));
mutex_init(&supp->ctx_mutex);
mutex_init(&supp->thrd_mutex);
mutex_init(&supp->supp_mutex);
init_completion(&supp->data_to_supp);
init_completion(&supp->data_from_supp);
mutex_init(&supp->mutex);
init_completion(&supp->reqs_c);
idr_init(&supp->idr);
INIT_LIST_HEAD(&supp->reqs);
supp->req_id = -1;
}
void optee_supp_uninit(struct optee_supp *supp)
{
mutex_destroy(&supp->ctx_mutex);
mutex_destroy(&supp->thrd_mutex);
mutex_destroy(&supp->supp_mutex);
mutex_destroy(&supp->mutex);
idr_destroy(&supp->idr);
}
void optee_supp_release(struct optee_supp *supp)
{
int id;
struct optee_supp_req *req;
struct optee_supp_req *req_tmp;
mutex_lock(&supp->mutex);
/* Abort all requests retrieved by the supplicant */
idr_for_each_entry(&supp->idr, req, id) {
req->busy = false;
idr_remove(&supp->idr, id);
req->ret = TEEC_ERROR_COMMUNICATION;
complete(&req->c);
}
/* Abort all queued requests */
list_for_each_entry_safe(req, req_tmp, &supp->reqs, link) {
list_del(&req->link);
req->ret = TEEC_ERROR_COMMUNICATION;
complete(&req->c);
}
supp->ctx = NULL;
supp->req_id = -1;
mutex_unlock(&supp->mutex);
}
/**
@@ -44,53 +84,42 @@ void optee_supp_uninit(struct optee_supp *supp)
*/
u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
struct tee_param *param)
{
bool interruptable;
struct optee *optee = tee_get_drvdata(ctx->teedev);
struct optee_supp *supp = &optee->supp;
struct optee_supp_req *req = kzalloc(sizeof(*req), GFP_KERNEL);
bool interruptable;
u32 ret;
/*
* Other threads block here until we've copied our answer from
* supplicant.
*/
while (mutex_lock_interruptible(&supp->thrd_mutex)) {
/* See comment below on when the RPC can be interrupted. */
mutex_lock(&supp->ctx_mutex);
interruptable = !supp->ctx;
mutex_unlock(&supp->ctx_mutex);
if (interruptable)
return TEEC_ERROR_COMMUNICATION;
}
if (!req)
return TEEC_ERROR_OUT_OF_MEMORY;
/*
* We have exclusive access now since the supplicant at this
* point is either doing a
* wait_for_completion_interruptible(&supp->data_to_supp) or is in
* userspace still about to do the ioctl() to enter
* optee_supp_recv() below.
*/
init_completion(&req->c);
req->func = func;
req->num_params = num_params;
req->param = param;
supp->func = func;
supp->num_params = num_params;
supp->param = param;
supp->req_posted = true;
/* Insert the request in the request list */
mutex_lock(&supp->mutex);
list_add_tail(&req->link, &supp->reqs);
mutex_unlock(&supp->mutex);
/* Let supplicant get the data */
complete(&supp->data_to_supp);
/* Tell an eventual waiter there's a new request */
complete(&supp->reqs_c);
/*
* Wait for supplicant to process and return result, once we've
* returned from wait_for_completion(data_from_supp) we have
* returned from wait_for_completion(&req->c) successfully we have
* exclusive access again.
*/
while (wait_for_completion_interruptible(&supp->data_from_supp)) {
mutex_lock(&supp->ctx_mutex);
while (wait_for_completion_interruptible(&req->c)) {
mutex_lock(&supp->mutex);
interruptable = !supp->ctx;
if (interruptable) {
/*
* There's no supplicant available and since the
* supp->ctx_mutex currently is held none can
* supp->mutex is currently held, none can
* become available until the mutex is
* released again.
*
@@ -101,24 +130,91 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
* will serve all requests in a timely manner and
* interrupting then wouldn't make sense.
*/
supp->ret = TEEC_ERROR_COMMUNICATION;
init_completion(&supp->data_to_supp);
interruptable = !req->busy;
if (!req->busy)
list_del(&req->link);
}
mutex_unlock(&supp->ctx_mutex);
if (interruptable)
mutex_unlock(&supp->mutex);
if (interruptable) {
req->ret = TEEC_ERROR_COMMUNICATION;
break;
}
}
ret = supp->ret;
supp->param = NULL;
supp->req_posted = false;
/* We're done, let someone else talk to the supplicant now. */
mutex_unlock(&supp->thrd_mutex);
ret = req->ret;
kfree(req);
return ret;
}
static struct optee_supp_req *supp_pop_entry(struct optee_supp *supp,
int num_params, int *id)
{
struct optee_supp_req *req;
if (supp->req_id != -1) {
/*
* Supplicant should not mix synchronous and asynchronous
* requests.
*/
return ERR_PTR(-EINVAL);
}
if (list_empty(&supp->reqs))
return NULL;
req = list_first_entry(&supp->reqs, struct optee_supp_req, link);
if (num_params < req->num_params) {
/* Not enough room for parameters */
return ERR_PTR(-EINVAL);
}
*id = idr_alloc(&supp->idr, req, 1, 0, GFP_KERNEL);
if (*id < 0)
return ERR_PTR(-ENOMEM);
list_del(&req->link);
req->busy = true;
return req;
}
static int supp_check_recv_params(size_t num_params, struct tee_param *params,
size_t *num_meta)
{
size_t n;
if (!num_params)
return -EINVAL;
/*
* If there are memrefs we need to decrease those as they were
* increased earlier and we'll refuse to accept any below.
*/
for (n = 0; n < num_params; n++)
if (tee_param_is_memref(params + n) && params[n].u.memref.shm)
tee_shm_put(params[n].u.memref.shm);
/*
* We only expect parameters as TEE_IOCTL_PARAM_ATTR_TYPE_NONE with
* or without the TEE_IOCTL_PARAM_ATTR_META bit set.
*/
for (n = 0; n < num_params; n++)
if (params[n].attr &&
params[n].attr != TEE_IOCTL_PARAM_ATTR_META)
return -EINVAL;
/* At most we'll need one meta parameter so no need to check for more */
if (params->attr == TEE_IOCTL_PARAM_ATTR_META)
*num_meta = 1;
else
*num_meta = 0;
return 0;
}
/**
* optee_supp_recv() - receive request for supplicant
* @ctx: context receiving the request
@@ -135,65 +231,99 @@ int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params,
struct tee_device *teedev = ctx->teedev;
struct optee *optee = tee_get_drvdata(teedev);
struct optee_supp *supp = &optee->supp;
struct optee_supp_req *req = NULL;
int id;
size_t num_meta;
int rc;
/*
* In case two threads in one supplicant are calling this function
* simultaneously we need to protect the data with a mutex which
* we'll release before returning.
*/
mutex_lock(&supp->supp_mutex);
rc = supp_check_recv_params(*num_params, param, &num_meta);
if (rc)
return rc;
if (supp->supp_next_send) {
/*
* optee_supp_recv() has been called again without
* a optee_supp_send() in between. Supplicant has
* probably been restarted before it was able to
* write back last result. Abort last request and
* wait for a new.
*/
if (supp->req_posted) {
supp->ret = TEEC_ERROR_COMMUNICATION;
supp->supp_next_send = false;
complete(&supp->data_from_supp);
while (true) {
mutex_lock(&supp->mutex);
req = supp_pop_entry(supp, *num_params - num_meta, &id);
mutex_unlock(&supp->mutex);
if (req) {
if (IS_ERR(req))
return PTR_ERR(req);
break;
}
}
/*
* This is where the supplicant will be hanging most of the
* time; let's make this interruptible so we can easily
* restart the supplicant if needed.
*/
if (wait_for_completion_interruptible(&supp->data_to_supp)) {
rc = -ERESTARTSYS;
goto out;
}
/* We have exclusive access to the data */
if (*num_params < supp->num_params) {
/*
* Not enough room for parameters, tell supplicant
* it failed and abort last request.
* If we didn't get a request we'll block in
* wait_for_completion() to avoid needless spinning.
*
* This is where the supplicant will be hanging most of
* the time; let's make this interruptible so we
* can easily restart the supplicant if needed.
*/
supp->ret = TEEC_ERROR_COMMUNICATION;
rc = -EINVAL;
complete(&supp->data_from_supp);
goto out;
if (wait_for_completion_interruptible(&supp->reqs_c))
return -ERESTARTSYS;
}
*func = supp->func;
*num_params = supp->num_params;
memcpy(param, supp->param,
sizeof(struct tee_param) * supp->num_params);
if (num_meta) {
/*
* tee-supplicant supports meta parameters -> requests can be
* processed asynchronously.
*/
param->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT |
TEE_IOCTL_PARAM_ATTR_META;
param->u.value.a = id;
param->u.value.b = 0;
param->u.value.c = 0;
} else {
mutex_lock(&supp->mutex);
supp->req_id = id;
mutex_unlock(&supp->mutex);
}
/* Allow optee_supp_send() below to do its work */
supp->supp_next_send = true;
*func = req->func;
*num_params = req->num_params + num_meta;
memcpy(param + num_meta, req->param,
sizeof(struct tee_param) * req->num_params);
rc = 0;
out:
mutex_unlock(&supp->supp_mutex);
return rc;
return 0;
}
static struct optee_supp_req *supp_pop_req(struct optee_supp *supp,
size_t num_params,
struct tee_param *param,
size_t *num_meta)
{
struct optee_supp_req *req;
int id;
size_t nm;
const u32 attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT |
TEE_IOCTL_PARAM_ATTR_META;
if (!num_params)
return ERR_PTR(-EINVAL);
if (supp->req_id == -1) {
if (param->attr != attr)
return ERR_PTR(-EINVAL);
id = param->u.value.a;
nm = 1;
} else {
id = supp->req_id;
nm = 0;
}
req = idr_find(&supp->idr, id);
if (!req)
return ERR_PTR(-ENOENT);
if ((num_params - nm) != req->num_params)
return ERR_PTR(-EINVAL);
req->busy = false;
idr_remove(&supp->idr, id);
supp->req_id = -1;
*num_meta = nm;
return req;
}
/**
@@ -211,63 +341,42 @@ int optee_supp_send(struct tee_context *ctx, u32 ret, u32 num_params,
struct tee_device *teedev = ctx->teedev;
struct optee *optee = tee_get_drvdata(teedev);
struct optee_supp *supp = &optee->supp;
struct optee_supp_req *req;
size_t n;
int rc = 0;
size_t num_meta;
/*
* We still have exclusive access to the data since that's how we
* left it when returning from optee_supp_read().
*/
mutex_lock(&supp->mutex);
req = supp_pop_req(supp, num_params, param, &num_meta);
mutex_unlock(&supp->mutex);
/* See comment on mutex in optee_supp_read() above */
mutex_lock(&supp->supp_mutex);
if (!supp->supp_next_send) {
/*
* Something strange is going on, supplicant shouldn't
* enter optee_supp_send() in this state
*/
rc = -ENOENT;
goto out;
}
if (num_params != supp->num_params) {
/*
* Something is wrong, let supplicant restart. Next call to
* optee_supp_recv() will give an error to the requesting
* thread and release it.
*/
rc = -EINVAL;
goto out;
if (IS_ERR(req)) {
/* Something is wrong, let supplicant restart. */
return PTR_ERR(req);
}
/* Update out and in/out parameters */
for (n = 0; n < num_params; n++) {
struct tee_param *p = supp->param + n;
for (n = 0; n < req->num_params; n++) {
struct tee_param *p = req->param + n;
switch (p->attr) {
switch (p->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
p->u.value.a = param[n].u.value.a;
p->u.value.b = param[n].u.value.b;
p->u.value.c = param[n].u.value.c;
p->u.value.a = param[n + num_meta].u.value.a;
p->u.value.b = param[n + num_meta].u.value.b;
p->u.value.c = param[n + num_meta].u.value.c;
break;
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
p->u.memref.size = param[n].u.memref.size;
p->u.memref.size = param[n + num_meta].u.memref.size;
break;
default:
break;
}
}
supp->ret = ret;
/* Allow optee_supp_recv() above to do its work */
supp->supp_next_send = false;
req->ret = ret;
/* Let the requesting thread continue */
complete(&supp->data_from_supp);
out:
mutex_unlock(&supp->supp_mutex);
return rc;
complete(&req->c);
return 0;
}


@@ -54,6 +54,7 @@ static int tee_open(struct inode *inode, struct file *filp)
goto err;
}
kref_init(&ctx->refcount);
ctx->teedev = teedev;
INIT_LIST_HEAD(&ctx->list_shm);
filp->private_data = ctx;
@@ -68,19 +69,40 @@ err:
return rc;
}
void teedev_ctx_get(struct tee_context *ctx)
{
if (ctx->releasing)
return;
kref_get(&ctx->refcount);
}
static void teedev_ctx_release(struct kref *ref)
{
struct tee_context *ctx = container_of(ref, struct tee_context,
refcount);
ctx->releasing = true;
ctx->teedev->desc->ops->release(ctx);
kfree(ctx);
}
void teedev_ctx_put(struct tee_context *ctx)
{
if (ctx->releasing)
return;
kref_put(&ctx->refcount, teedev_ctx_release);
}
static void teedev_close_context(struct tee_context *ctx)
{
tee_device_put(ctx->teedev);
teedev_ctx_put(ctx);
}
static int tee_release(struct inode *inode, struct file *filp)
{
struct tee_context *ctx = filp->private_data;
struct tee_device *teedev = ctx->teedev;
struct tee_shm *shm;
ctx->teedev->desc->ops->release(ctx);
mutex_lock(&ctx->teedev->mutex);
list_for_each_entry(shm, &ctx->list_shm, link)
shm->ctx = NULL;
mutex_unlock(&ctx->teedev->mutex);
kfree(ctx);
tee_device_put(teedev);
teedev_close_context(filp->private_data);
return 0;
}
@@ -114,8 +136,6 @@ static int tee_ioctl_shm_alloc(struct tee_context *ctx,
if (data.flags)
return -EINVAL;
data.id = -1;
shm = tee_shm_alloc(ctx, data.size, TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
if (IS_ERR(shm))
return PTR_ERR(shm);
@@ -138,6 +158,43 @@ static int tee_ioctl_shm_alloc(struct tee_context *ctx,
return ret;
}
static int
tee_ioctl_shm_register(struct tee_context *ctx,
struct tee_ioctl_shm_register_data __user *udata)
{
long ret;
struct tee_ioctl_shm_register_data data;
struct tee_shm *shm;
if (copy_from_user(&data, udata, sizeof(data)))
return -EFAULT;
/* Currently no input flags are supported */
if (data.flags)
return -EINVAL;
shm = tee_shm_register(ctx, data.addr, data.length,
TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED);
if (IS_ERR(shm))
return PTR_ERR(shm);
data.id = shm->id;
data.flags = shm->flags;
data.length = shm->size;
if (copy_to_user(udata, &data, sizeof(data)))
ret = -EFAULT;
else
ret = tee_shm_get_fd(shm);
/*
* When user space closes the file descriptor the shared memory is
* freed; if tee_shm_get_fd() failed it is freed immediately.
*/
tee_shm_put(shm);
return ret;
}
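From user space the new registration path looks roughly like this; a minimal
sketch assuming only the UAPI additions from this series in <linux/tee.h>,
with error handling mostly elided:

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

/* Register an existing user buffer with the TEE. Returns the dma-buf
 * fd backing the new shared memory object, or -1 with errno set. */
static int register_user_shm(int tee_fd, void *buf, size_t len, int *shm_id)
{
	struct tee_ioctl_shm_register_data data = {
		.addr = (uintptr_t)buf,
		.length = len,
		.flags = 0,	/* no input flags are defined yet */
	};
	int fd = ioctl(tee_fd, TEE_IOC_SHM_REGISTER, &data);

	if (fd >= 0 && shm_id)
		*shm_id = data.id;	/* id to use in memref parameters */
	return fd;
}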
static int params_from_user(struct tee_context *ctx, struct tee_param *params,
size_t num_params,
struct tee_ioctl_param __user *uparams)
@@ -152,11 +209,11 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params,
return -EFAULT;
/* All unused attribute bits have to be zero */
if (ip.attr & ~TEE_IOCTL_PARAM_ATTR_TYPE_MASK)
if (ip.attr & ~TEE_IOCTL_PARAM_ATTR_MASK)
return -EINVAL;
params[n].attr = ip.attr;
switch (ip.attr) {
switch (ip.attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_NONE:
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
break;
@@ -221,18 +278,6 @@ static int params_to_user(struct tee_ioctl_param __user *uparams,
return 0;
}
static bool param_is_memref(struct tee_param *param)
{
switch (param->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
return true;
default:
return false;
}
}
static int tee_ioctl_open_session(struct tee_context *ctx,
struct tee_ioctl_buf_data __user *ubuf)
{
@@ -296,7 +341,7 @@ out:
if (params) {
/* Decrease ref count for all valid shared memory pointers */
for (n = 0; n < arg.num_params; n++)
if (param_is_memref(params + n) &&
if (tee_param_is_memref(params + n) &&
params[n].u.memref.shm)
tee_shm_put(params[n].u.memref.shm);
kfree(params);
@@ -358,7 +403,7 @@ out:
if (params) {
/* Decrease ref count for all valid shared memory pointers */
for (n = 0; n < arg.num_params; n++)
if (param_is_memref(params + n) &&
if (tee_param_is_memref(params + n) &&
params[n].u.memref.shm)
tee_shm_put(params[n].u.memref.shm);
kfree(params);
@@ -406,8 +451,8 @@ static int params_to_supp(struct tee_context *ctx,
struct tee_ioctl_param ip;
struct tee_param *p = params + n;
ip.attr = p->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK;
switch (p->attr) {
ip.attr = p->attr;
switch (p->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
ip.a = p->u.value.a;
@@ -471,6 +516,10 @@ static int tee_ioctl_supp_recv(struct tee_context *ctx,
if (!params)
return -ENOMEM;
rc = params_from_user(ctx, params, num_params, uarg->params);
if (rc)
goto out;
rc = ctx->teedev->desc->ops->supp_recv(ctx, &func, &num_params, params);
if (rc)
goto out;
@@ -500,11 +549,11 @@ static int params_from_supp(struct tee_param *params, size_t num_params,
return -EFAULT;
/* All unused attribute bits have to be zero */
if (ip.attr & ~TEE_IOCTL_PARAM_ATTR_TYPE_MASK)
if (ip.attr & ~TEE_IOCTL_PARAM_ATTR_MASK)
return -EINVAL;
p->attr = ip.attr;
switch (ip.attr) {
switch (ip.attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT:
/* Only out and in/out values can be updated */
@@ -586,6 +635,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return tee_ioctl_version(ctx, uarg);
case TEE_IOC_SHM_ALLOC:
return tee_ioctl_shm_alloc(ctx, uarg);
case TEE_IOC_SHM_REGISTER:
return tee_ioctl_shm_register(ctx, uarg);
case TEE_IOC_OPEN_SESSION:
return tee_ioctl_open_session(ctx, uarg);
case TEE_IOC_INVOKE:


@@ -21,68 +21,15 @@
#include <linux/mutex.h>
#include <linux/types.h>
struct tee_device;
/**
* struct tee_shm - shared memory object
* @teedev: device used to allocate the object
* @ctx: context using the object, if NULL the context is gone
* @link: link element
* @paddr: physical address of the shared memory
* @kaddr: virtual address of the shared memory
* @size: size of shared memory
* @dmabuf: dmabuf used for exporting to user space
* @flags: defined by TEE_SHM_* in tee_drv.h
* @id: unique id of a shared memory object on this device
*/
struct tee_shm {
struct tee_device *teedev;
struct tee_context *ctx;
struct list_head link;
phys_addr_t paddr;
void *kaddr;
size_t size;
struct dma_buf *dmabuf;
u32 flags;
int id;
};
struct tee_shm_pool_mgr;
/**
* struct tee_shm_pool_mgr_ops - shared memory pool manager operations
* @alloc: called when allocating shared memory
* @free: called when freeing shared memory
*/
struct tee_shm_pool_mgr_ops {
int (*alloc)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm,
size_t size);
void (*free)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm);
};
/**
* struct tee_shm_pool_mgr - shared memory manager
* @ops: operations
* @private_data: private data for the shared memory manager
*/
struct tee_shm_pool_mgr {
const struct tee_shm_pool_mgr_ops *ops;
void *private_data;
};
/**
* struct tee_shm_pool - shared memory pool
* @private_mgr: pool manager for shared memory only between kernel
* and secure world
* @dma_buf_mgr: pool manager for shared memory exported to user space
* @destroy: called when destroying the pool
* @private_data: private data for the pool
*/
struct tee_shm_pool {
struct tee_shm_pool_mgr private_mgr;
struct tee_shm_pool_mgr dma_buf_mgr;
void (*destroy)(struct tee_shm_pool *pool);
void *private_data;
struct tee_shm_pool_mgr *private_mgr;
struct tee_shm_pool_mgr *dma_buf_mgr;
};
#define TEE_DEVICE_FLAG_REGISTERED 0x1
@@ -126,4 +73,7 @@ int tee_shm_get_fd(struct tee_shm *shm);
bool tee_device_get(struct tee_device *teedev);
void tee_device_put(struct tee_device *teedev);
void teedev_ctx_get(struct tee_context *ctx);
void teedev_ctx_put(struct tee_context *ctx);
#endif /*TEE_PRIVATE_H*/


@@ -23,7 +23,6 @@
static void tee_shm_release(struct tee_shm *shm)
{
struct tee_device *teedev = shm->teedev;
struct tee_shm_pool_mgr *poolm;
mutex_lock(&teedev->mutex);
idr_remove(&teedev->idr, shm->id);
@@ -31,12 +30,32 @@ static void tee_shm_release(struct tee_shm *shm)
list_del(&shm->link);
mutex_unlock(&teedev->mutex);
if (shm->flags & TEE_SHM_DMA_BUF)
poolm = &teedev->pool->dma_buf_mgr;
else
poolm = &teedev->pool->private_mgr;
if (shm->flags & TEE_SHM_POOL) {
struct tee_shm_pool_mgr *poolm;
if (shm->flags & TEE_SHM_DMA_BUF)
poolm = teedev->pool->dma_buf_mgr;
else
poolm = teedev->pool->private_mgr;
poolm->ops->free(poolm, shm);
} else if (shm->flags & TEE_SHM_REGISTER) {
size_t n;
int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
if (rc)
dev_err(teedev->dev.parent,
"unregister shm %p failed: %d", shm, rc);
for (n = 0; n < shm->num_pages; n++)
put_page(shm->pages[n]);
kfree(shm->pages);
}
if (shm->ctx)
teedev_ctx_put(shm->ctx);
poolm->ops->free(poolm, shm);
kfree(shm);
tee_device_put(teedev);
@@ -76,6 +95,10 @@ static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
struct tee_shm *shm = dmabuf->priv;
size_t size = vma->vm_end - vma->vm_start;
/* Refuse sharing shared memory provided by application */
if (shm->flags & TEE_SHM_REGISTER)
return -EINVAL;
return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,
size, vma->vm_page_prot);
}
@@ -89,26 +112,20 @@ static const struct dma_buf_ops tee_shm_dma_buf_ops = {
.mmap = tee_shm_op_mmap,
};
/**
* tee_shm_alloc() - Allocate shared memory
* @ctx: Context that allocates the shared memory
* @size: Requested size of shared memory
* @flags: Flags setting properties for the requested shared memory.
*
* Memory allocated as global shared memory is automatically freed when the
* TEE file pointer is closed. The @flags field uses the bits defined by
* TEE_SHM_* in <linux/tee_drv.h>. TEE_SHM_MAPPED must currently always be
* set. If TEE_SHM_DMA_BUF global shared memory will be allocated and
* associated with a dma-buf handle, else driver private memory.
*/
struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
static struct tee_shm *__tee_shm_alloc(struct tee_context *ctx,
struct tee_device *teedev,
size_t size, u32 flags)
{
struct tee_device *teedev = ctx->teedev;
struct tee_shm_pool_mgr *poolm = NULL;
struct tee_shm *shm;
void *ret;
int rc;
if (ctx && ctx->teedev != teedev) {
dev_err(teedev->dev.parent, "ctx and teedev mismatch\n");
return ERR_PTR(-EINVAL);
}
if (!(flags & TEE_SHM_MAPPED)) {
dev_err(teedev->dev.parent,
"only mapped allocations supported\n");
@@ -135,13 +152,13 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
goto err_dev_put;
}
shm->flags = flags;
shm->flags = flags | TEE_SHM_POOL;
shm->teedev = teedev;
shm->ctx = ctx;
if (flags & TEE_SHM_DMA_BUF)
poolm = &teedev->pool->dma_buf_mgr;
poolm = teedev->pool->dma_buf_mgr;
else
poolm = &teedev->pool->private_mgr;
poolm = teedev->pool->private_mgr;
rc = poolm->ops->alloc(poolm, shm, size);
if (rc) {
@@ -171,9 +188,13 @@ struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
goto err_rem;
}
}
mutex_lock(&teedev->mutex);
list_add_tail(&shm->link, &ctx->list_shm);
mutex_unlock(&teedev->mutex);
if (ctx) {
teedev_ctx_get(ctx);
mutex_lock(&teedev->mutex);
list_add_tail(&shm->link, &ctx->list_shm);
mutex_unlock(&teedev->mutex);
}
return shm;
err_rem:
@@ -188,8 +209,145 @@ err_dev_put:
tee_device_put(teedev);
return ret;
}
/**
* tee_shm_alloc() - Allocate shared memory
* @ctx: Context that allocates the shared memory
* @size: Requested size of shared memory
* @flags: Flags setting properties for the requested shared memory.
*
* Memory allocated as global shared memory is automatically freed when the
* TEE file pointer is closed. The @flags field uses the bits defined by
* TEE_SHM_* in <linux/tee_drv.h>. TEE_SHM_MAPPED must currently always be
* set. If TEE_SHM_DMA_BUF global shared memory will be allocated and
* associated with a dma-buf handle, else driver private memory.
*/
struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
{
return __tee_shm_alloc(ctx, ctx->teedev, size, flags);
}
EXPORT_SYMBOL_GPL(tee_shm_alloc);
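A minimal in-kernel usage sketch (hypothetical caller with a valid
tee_context; not part of this series):

static int shm_alloc_example(struct tee_context *ctx)
{
	struct tee_shm *shm;
	void *va;

	shm = tee_shm_alloc(ctx, 4096, TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
	if (IS_ERR(shm))
		return PTR_ERR(shm);

	va = tee_shm_get_va(shm, 0);	/* kernel mapping of the buffer */
	if (IS_ERR(va)) {
		tee_shm_free(shm);
		return PTR_ERR(va);
	}

	/* ... fill the buffer, pass shm as a memref parameter ... */

	tee_shm_free(shm);
	return 0;
}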
struct tee_shm *tee_shm_priv_alloc(struct tee_device *teedev, size_t size)
{
return __tee_shm_alloc(NULL, teedev, size, TEE_SHM_MAPPED);
}
EXPORT_SYMBOL_GPL(tee_shm_priv_alloc);
struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
size_t length, u32 flags)
{
struct tee_device *teedev = ctx->teedev;
const u32 req_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED;
struct tee_shm *shm;
void *ret;
int rc;
int num_pages;
unsigned long start;
if (flags != req_flags)
return ERR_PTR(-ENOTSUPP);
if (!tee_device_get(teedev))
return ERR_PTR(-EINVAL);
if (!teedev->desc->ops->shm_register ||
!teedev->desc->ops->shm_unregister) {
tee_device_put(teedev);
return ERR_PTR(-ENOTSUPP);
}
teedev_ctx_get(ctx);
shm = kzalloc(sizeof(*shm), GFP_KERNEL);
if (!shm) {
ret = ERR_PTR(-ENOMEM);
goto err;
}
shm->flags = flags | TEE_SHM_REGISTER;
shm->teedev = teedev;
shm->ctx = ctx;
shm->id = -1;
start = rounddown(addr, PAGE_SIZE);
shm->offset = addr - start;
shm->size = length;
num_pages = (roundup(addr + length, PAGE_SIZE) - start) / PAGE_SIZE;
shm->pages = kcalloc(num_pages, sizeof(*shm->pages), GFP_KERNEL);
if (!shm->pages) {
ret = ERR_PTR(-ENOMEM);
goto err;
}
rc = get_user_pages_fast(start, num_pages, 1, shm->pages);
if (rc > 0)
shm->num_pages = rc;
if (rc != num_pages) {
if (rc >= 0)
rc = -ENOMEM;
ret = ERR_PTR(rc);
goto err;
}
mutex_lock(&teedev->mutex);
shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
mutex_unlock(&teedev->mutex);
if (shm->id < 0) {
ret = ERR_PTR(shm->id);
goto err;
}
rc = teedev->desc->ops->shm_register(ctx, shm, shm->pages,
shm->num_pages, start);
if (rc) {
ret = ERR_PTR(rc);
goto err;
}
if (flags & TEE_SHM_DMA_BUF) {
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
exp_info.ops = &tee_shm_dma_buf_ops;
exp_info.size = shm->size;
exp_info.flags = O_RDWR;
exp_info.priv = shm;
shm->dmabuf = dma_buf_export(&exp_info);
if (IS_ERR(shm->dmabuf)) {
ret = ERR_CAST(shm->dmabuf);
teedev->desc->ops->shm_unregister(ctx, shm);
goto err;
}
}
mutex_lock(&teedev->mutex);
list_add_tail(&shm->link, &ctx->list_shm);
mutex_unlock(&teedev->mutex);
return shm;
err:
if (shm) {
size_t n;
if (shm->id >= 0) {
mutex_lock(&teedev->mutex);
idr_remove(&teedev->idr, shm->id);
mutex_unlock(&teedev->mutex);
}
if (shm->pages) {
for (n = 0; n < shm->num_pages; n++)
put_page(shm->pages[n]);
kfree(shm->pages);
}
}
kfree(shm);
teedev_ctx_put(ctx);
tee_device_put(teedev);
return ret;
}
EXPORT_SYMBOL_GPL(tee_shm_register);
/**
* tee_shm_get_fd() - Increase reference count and return file descriptor
* @shm: Shared memory handle
@@ -197,10 +355,9 @@ EXPORT_SYMBOL_GPL(tee_shm_alloc);
*/
int tee_shm_get_fd(struct tee_shm *shm)
{
u32 req_flags = TEE_SHM_MAPPED | TEE_SHM_DMA_BUF;
int fd;
if ((shm->flags & req_flags) != req_flags)
if (!(shm->flags & TEE_SHM_DMA_BUF))
return -EINVAL;
fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC);
@@ -238,6 +395,8 @@ EXPORT_SYMBOL_GPL(tee_shm_free);
*/
int tee_shm_va2pa(struct tee_shm *shm, void *va, phys_addr_t *pa)
{
if (!(shm->flags & TEE_SHM_MAPPED))
return -EINVAL;
/* Check that we're in the range of the shm */
if ((char *)va < (char *)shm->kaddr)
return -EINVAL;
@@ -258,6 +417,8 @@ EXPORT_SYMBOL_GPL(tee_shm_va2pa);
*/
int tee_shm_pa2va(struct tee_shm *shm, phys_addr_t pa, void **va)
{
if (!(shm->flags & TEE_SHM_MAPPED))
return -EINVAL;
/* Check that we're in the range of the shm */
if (pa < shm->paddr)
return -EINVAL;
@@ -284,6 +445,8 @@ EXPORT_SYMBOL_GPL(tee_shm_pa2va);
*/
void *tee_shm_get_va(struct tee_shm *shm, size_t offs)
{
if (!(shm->flags & TEE_SHM_MAPPED))
return ERR_PTR(-EINVAL);
if (offs >= shm->size)
return ERR_PTR(-EINVAL);
return (char *)shm->kaddr + offs;
@@ -335,17 +498,6 @@ struct tee_shm *tee_shm_get_from_id(struct tee_context *ctx, int id)
}
EXPORT_SYMBOL_GPL(tee_shm_get_from_id);
/**
* tee_shm_get_id() - Get id of a shared memory object
* @shm: Shared memory handle
* @returns id
*/
int tee_shm_get_id(struct tee_shm *shm)
{
return shm->id;
}
EXPORT_SYMBOL_GPL(tee_shm_get_id);
/**
* tee_shm_put() - Decrease reference count on a shared memory handle
* @shm: Shared memory handle


@@ -44,49 +44,18 @@ static void pool_op_gen_free(struct tee_shm_pool_mgr *poolm,
shm->kaddr = NULL;
}
static void pool_op_gen_destroy_poolmgr(struct tee_shm_pool_mgr *poolm)
{
gen_pool_destroy(poolm->private_data);
kfree(poolm);
}
static const struct tee_shm_pool_mgr_ops pool_ops_generic = {
.alloc = pool_op_gen_alloc,
.free = pool_op_gen_free,
.destroy_poolmgr = pool_op_gen_destroy_poolmgr,
};
static void pool_res_mem_destroy(struct tee_shm_pool *pool)
{
gen_pool_destroy(pool->private_mgr.private_data);
gen_pool_destroy(pool->dma_buf_mgr.private_data);
}
static int pool_res_mem_mgr_init(struct tee_shm_pool_mgr *mgr,
struct tee_shm_pool_mem_info *info,
int min_alloc_order)
{
size_t page_mask = PAGE_SIZE - 1;
struct gen_pool *genpool = NULL;
int rc;
/*
* Start and end must be page aligned
*/
if ((info->vaddr & page_mask) || (info->paddr & page_mask) ||
(info->size & page_mask))
return -EINVAL;
genpool = gen_pool_create(min_alloc_order, -1);
if (!genpool)
return -ENOMEM;
gen_pool_set_algo(genpool, gen_pool_best_fit, NULL);
rc = gen_pool_add_virt(genpool, info->vaddr, info->paddr, info->size,
-1);
if (rc) {
gen_pool_destroy(genpool);
return rc;
}
mgr->private_data = genpool;
mgr->ops = &pool_ops_generic;
return 0;
}
/**
* tee_shm_pool_alloc_res_mem() - Create a shared memory pool from reserved
* memory range
@@ -104,43 +73,110 @@ struct tee_shm_pool *
tee_shm_pool_alloc_res_mem(struct tee_shm_pool_mem_info *priv_info,
struct tee_shm_pool_mem_info *dmabuf_info)
{
struct tee_shm_pool *pool = NULL;
int ret;
pool = kzalloc(sizeof(*pool), GFP_KERNEL);
if (!pool) {
ret = -ENOMEM;
goto err;
}
struct tee_shm_pool_mgr *priv_mgr;
struct tee_shm_pool_mgr *dmabuf_mgr;
void *rc;
/*
* Create the pool for driver private shared memory
*/
ret = pool_res_mem_mgr_init(&pool->private_mgr, priv_info,
3 /* 8 byte aligned */);
if (ret)
goto err;
rc = tee_shm_pool_mgr_alloc_res_mem(priv_info->vaddr, priv_info->paddr,
priv_info->size,
3 /* 8 byte aligned */);
if (IS_ERR(rc))
return rc;
priv_mgr = rc;
/*
* Create the pool for dma_buf shared memory
*/
ret = pool_res_mem_mgr_init(&pool->dma_buf_mgr, dmabuf_info,
PAGE_SHIFT);
if (ret)
goto err;
rc = tee_shm_pool_mgr_alloc_res_mem(dmabuf_info->vaddr,
dmabuf_info->paddr,
dmabuf_info->size, PAGE_SHIFT);
if (IS_ERR(rc))
goto err_free_priv_mgr;
dmabuf_mgr = rc;
pool->destroy = pool_res_mem_destroy;
return pool;
err:
if (ret == -ENOMEM)
pr_err("%s: can't allocate memory for res_mem shared memory pool\n", __func__);
if (pool && pool->private_mgr.private_data)
gen_pool_destroy(pool->private_mgr.private_data);
kfree(pool);
return ERR_PTR(ret);
rc = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
if (IS_ERR(rc))
goto err_free_dmabuf_mgr;
return rc;
err_free_dmabuf_mgr:
tee_shm_pool_mgr_destroy(dmabuf_mgr);
err_free_priv_mgr:
tee_shm_pool_mgr_destroy(priv_mgr);
return rc;
}
EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr,
phys_addr_t paddr,
size_t size,
int min_alloc_order)
{
const size_t page_mask = PAGE_SIZE - 1;
struct tee_shm_pool_mgr *mgr;
int rc;
/* Start and end must be page aligned */
if (vaddr & page_mask || paddr & page_mask || size & page_mask)
return ERR_PTR(-EINVAL);
mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
if (!mgr)
return ERR_PTR(-ENOMEM);
mgr->private_data = gen_pool_create(min_alloc_order, -1);
if (!mgr->private_data) {
rc = -ENOMEM;
goto err;
}
gen_pool_set_algo(mgr->private_data, gen_pool_best_fit, NULL);
rc = gen_pool_add_virt(mgr->private_data, vaddr, paddr, size, -1);
if (rc) {
gen_pool_destroy(mgr->private_data);
goto err;
}
mgr->ops = &pool_ops_generic;
return mgr;
err:
kfree(mgr);
return ERR_PTR(rc);
}
EXPORT_SYMBOL_GPL(tee_shm_pool_mgr_alloc_res_mem);
static bool check_mgr_ops(struct tee_shm_pool_mgr *mgr)
{
return mgr && mgr->ops && mgr->ops->alloc && mgr->ops->free &&
mgr->ops->destroy_poolmgr;
}
struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr,
struct tee_shm_pool_mgr *dmabuf_mgr)
{
struct tee_shm_pool *pool;
if (!check_mgr_ops(priv_mgr) || !check_mgr_ops(dmabuf_mgr))
return ERR_PTR(-EINVAL);
pool = kzalloc(sizeof(*pool), GFP_KERNEL);
if (!pool)
return ERR_PTR(-ENOMEM);
pool->private_mgr = priv_mgr;
pool->dma_buf_mgr = dmabuf_mgr;
return pool;
}
EXPORT_SYMBOL_GPL(tee_shm_pool_alloc);
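Putting the two exports together, a driver with a reserved carveout can now
build its pool like this (a sketch: vaddr/paddr/size describe a hypothetical
carveout whose first priv_sz bytes hold driver-private allocations; IS_ERR()
checks elided):

	priv_mgr = tee_shm_pool_mgr_alloc_res_mem(vaddr, paddr, priv_sz,
						  3 /* 8 byte aligned */);
	dmabuf_mgr = tee_shm_pool_mgr_alloc_res_mem(vaddr + priv_sz,
						    paddr + priv_sz,
						    size - priv_sz,
						    PAGE_SHIFT);
	pool = tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);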
/**
* tee_shm_pool_free() - Free a shared memory pool
* @pool: The shared memory pool to free
@@ -150,7 +186,10 @@ EXPORT_SYMBOL_GPL(tee_shm_pool_alloc_res_mem);
*/
void tee_shm_pool_free(struct tee_shm_pool *pool)
{
pool->destroy(pool);
if (pool->private_mgr)
tee_shm_pool_mgr_destroy(pool->private_mgr);
if (pool->dma_buf_mgr)
tee_shm_pool_mgr_destroy(pool->dma_buf_mgr);
kfree(pool);
}
EXPORT_SYMBOL_GPL(tee_shm_pool_free);


@@ -0,0 +1,124 @@
/*
*
* Copyright (c) 2016 BayLibre, SAS.
* Author: Neil Armstrong <narmstrong@baylibre.com>
*
* Copyright (c) 2017 Amlogic, inc.
* Author: Yixun Lan <yixun.lan@amlogic.com>
*
* SPDX-License-Identifier: (GPL-2.0+ OR BSD)
*/
#ifndef _DT_BINDINGS_AMLOGIC_MESON_AXG_RESET_H
#define _DT_BINDINGS_AMLOGIC_MESON_AXG_RESET_H
/* RESET0 */
#define RESET_HIU 0
#define RESET_PCIE_A 1
#define RESET_PCIE_B 2
#define RESET_DDR_TOP 3
/* 4 */
#define RESET_VIU 5
#define RESET_PCIE_PHY 6
#define RESET_PCIE_APB 7
/* 8 */
/* 9 */
#define RESET_VENC 10
#define RESET_ASSIST 11
/* 12 */
#define RESET_VCBUS 13
/* 14 */
/* 15 */
#define RESET_GIC 16
#define RESET_CAPB3_DECODE 17
/* 18-21 */
#define RESET_SYS_CPU_CAPB3 22
#define RESET_CBUS_CAPB3 23
#define RESET_AHB_CNTL 24
#define RESET_AHB_DATA 25
#define RESET_VCBUS_CLK81 26
#define RESET_MMC 27
/* 28-31 */
/* RESET1 */
/* 32 */
/* 33 */
#define RESET_USB_OTG 34
#define RESET_DDR 35
#define RESET_AO_RESET 36
/* 37 */
#define RESET_AHB_SRAM 38
/* 39 */
/* 40 */
#define RESET_DMA 41
#define RESET_ISA 42
#define RESET_ETHERNET 43
/* 44 */
#define RESET_SD_EMMC_B 45
#define RESET_SD_EMMC_C 46
#define RESET_ROM_BOOT 47
#define RESET_SYS_CPU_0 48
#define RESET_SYS_CPU_1 49
#define RESET_SYS_CPU_2 50
#define RESET_SYS_CPU_3 51
#define RESET_SYS_CPU_CORE_0 52
#define RESET_SYS_CPU_CORE_1 53
#define RESET_SYS_CPU_CORE_2 54
#define RESET_SYS_CPU_CORE_3 55
#define RESET_SYS_PLL_DIV 56
#define RESET_SYS_CPU_AXI 57
#define RESET_SYS_CPU_L2 58
#define RESET_SYS_CPU_P 59
#define RESET_SYS_CPU_MBIST 60
/* 61-63 */
/* RESET2 */
/* 64 */
/* 65 */
#define RESET_AUDIO 66
/* 67 */
#define RESET_MIPI_HOST 68
#define RESET_AUDIO_LOCKER 69
#define RESET_GE2D 70
/* 71-76 */
#define RESET_AO_CPU_RESET 77
/* 78-95 */
/* RESET3 */
#define RESET_RING_OSCILLATOR 96
/* 97-127 */
/* RESET4 */
/* 128 */
/* 129 */
#define RESET_MIPI_PHY 130
/* 131-140 */
#define RESET_VENCL 141
#define RESET_I2C_MASTER_2 142
#define RESET_I2C_MASTER_1 143
/* 144-159 */
/* RESET5 */
/* 160-191 */
/* RESET6 */
#define RESET_PERIPHS_GENERAL 192
#define RESET_PERIPHS_SPICC 193
/* 194 */
/* 195 */
#define RESET_PERIPHS_I2C_MASTER_0 196
/* 197-200 */
#define RESET_PERIPHS_UART_0 201
#define RESET_PERIPHS_UART_1 202
/* 203-204 */
#define RESET_PERIPHS_SPI_0 205
#define RESET_PERIPHS_I2C_MASTER_3 206
/* 207-223 */
/* RESET7 */
#define RESET_USB_DDR_0 224
#define RESET_USB_DDR_1 225
#define RESET_USB_DDR_2 226
#define RESET_USB_DDR_3 227
/* 228 */
#define RESET_DEVICE_MMC_ARB 229
/* 230 */
#define RESET_VID_LOCK 231
#define RESET_A9_DMC_PIPEL 232
#define RESET_DMC_VPU_PIPEL 233
/* 234-255 */
#endif


@@ -0,0 +1,86 @@
#ifndef __TI_SYSC_DATA_H__
#define __TI_SYSC_DATA_H__
enum ti_sysc_module_type {
TI_SYSC_OMAP2,
TI_SYSC_OMAP2_TIMER,
TI_SYSC_OMAP3_SHAM,
TI_SYSC_OMAP3_AES,
TI_SYSC_OMAP4,
TI_SYSC_OMAP4_TIMER,
TI_SYSC_OMAP4_SIMPLE,
TI_SYSC_OMAP34XX_SR,
TI_SYSC_OMAP36XX_SR,
TI_SYSC_OMAP4_SR,
TI_SYSC_OMAP4_MCASP,
TI_SYSC_OMAP4_USB_HOST_FS,
};
/**
* struct sysc_regbits - TI OCP_SYSCONFIG register field offsets
* @midle_shift: Offset of the midle bit
* @clkact_shift: Offset of the clockactivity bit
* @sidle_shift: Offset of the sidle bit
* @enwkup_shift: Offset of the enawakeup bit
* @srst_shift: Offset of the softreset bit
* @autoidle_shift: Offset of the autoidle bit
* @dmadisable_shift: Offset of the dmadisable bit
* @emufree_shift: Offset of the emufree bit
*
* Note that 0 is a valid shift, and for ti-sysc.c -ENODEV can be used if a
* feature is not available.
*/
struct sysc_regbits {
s8 midle_shift;
s8 clkact_shift;
s8 sidle_shift;
s8 enwkup_shift;
s8 srst_shift;
s8 autoidle_shift;
s8 dmadisable_shift;
s8 emufree_shift;
};
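For illustration, a hypothetical module whose SYSCONFIG carries only a
softreset bit at offset 1 and an autoidle bit at offset 0 would be described
as:

static const struct sysc_regbits example_regbits = {
	.srst_shift = 1,
	.autoidle_shift = 0,
	.midle_shift = -ENODEV,
	.clkact_shift = -ENODEV,
	.sidle_shift = -ENODEV,
	.enwkup_shift = -ENODEV,
	.dmadisable_shift = -ENODEV,
	.emufree_shift = -ENODEV,
};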
#define SYSC_QUIRK_RESET_STATUS BIT(7)
#define SYSC_QUIRK_NO_IDLE_ON_INIT BIT(6)
#define SYSC_QUIRK_NO_RESET_ON_INIT BIT(5)
#define SYSC_QUIRK_OPT_CLKS_NEEDED BIT(4)
#define SYSC_QUIRK_OPT_CLKS_IN_RESET BIT(3)
#define SYSC_QUIRK_16BIT BIT(2)
#define SYSC_QUIRK_UNCACHED BIT(1)
#define SYSC_QUIRK_USE_CLOCKACT BIT(0)
#define SYSC_NR_IDLEMODES 4
/**
* struct sysc_capabilities - capabilities for an interconnect target module
*
* @type: interconnect target module type
* @sysc_mask: bitmask of supported SYSCONFIG register bits
* @regbits: SYSCONFIG register field offsets
* @mod_quirks: bitmask of module specific quirks
*/
struct sysc_capabilities {
const enum ti_sysc_module_type type;
const u32 sysc_mask;
const struct sysc_regbits *regbits;
const u32 mod_quirks;
};
/**
* struct sysc_config - configuration for an interconnect target module
* @sysc_val: configured value for sysc register
* @syss_mask: bitmask of SYSSTATUS register bits
* @midlemodes: bitmask of supported master idle modes
* @sidlemodes: bitmask of supported slave idle modes
* @srst_udelay: optional delay needed after OCP soft reset
* @quirks: bitmask of enabled quirks
*/
struct sysc_config {
u32 sysc_val;
u32 syss_mask;
u8 midlemodes;
u8 sidlemodes;
u8 srst_udelay;
u32 quirks;
};
#endif /* __TI_SYSC_DATA_H__ */
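As a rough sketch of how these tables compose, a platform pairs a sysc_regbits layout with a sysc_capabilities entry per module type; the shift values, mask bits, and quirks below are hypothetical, not taken from a real SoC:

/* Hypothetical interconnect target module description */
static const struct sysc_regbits example_regbits = {
	.midle_shift	= 4,
	.clkact_shift	= 8,
	.sidle_shift	= 3,
	.enwkup_shift	= 2,
	.srst_shift	= 1,
	.autoidle_shift	= 0,			/* 0 is a valid shift */
	.dmadisable_shift = -ENODEV,		/* feature not available */
	.emufree_shift	= -ENODEV,
};

static const struct sysc_capabilities example_caps = {
	.type		= TI_SYSC_OMAP4,
	.sysc_mask	= BIT(0) | BIT(1),	/* supported SYSCONFIG bits */
	.regbits	= &example_regbits,
	.mod_quirks	= SYSC_QUIRK_RESET_STATUS,
};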


@@ -13,6 +13,9 @@
#ifndef __QCOM_SCM_H
#define __QCOM_SCM_H
#include <linux/types.h>
#include <linux/cpumask.h>
#define QCOM_SCM_VERSION(major, minor) (((major) << 16) | ((minor) & 0xFF))
#define QCOM_SCM_CPU_PWR_DOWN_L2_ON 0x0
#define QCOM_SCM_CPU_PWR_DOWN_L2_OFF 0x1


@@ -2,8 +2,10 @@
#ifndef _LINUX_RESET_H_
#define _LINUX_RESET_H_
#include <linux/device.h>
#include <linux/types.h>
struct device;
struct device_node;
struct reset_control;
#ifdef CONFIG_RESET_CONTROLLER
@@ -20,22 +22,16 @@ struct reset_control *__reset_control_get(struct device *dev, const char *id,
int index, bool shared,
bool optional);
void reset_control_put(struct reset_control *rstc);
int __device_reset(struct device *dev, bool optional);
struct reset_control *__devm_reset_control_get(struct device *dev,
const char *id, int index, bool shared,
bool optional);
int __must_check device_reset(struct device *dev);
struct reset_control *devm_reset_control_array_get(struct device *dev,
bool shared, bool optional);
struct reset_control *of_reset_control_array_get(struct device_node *np,
bool shared, bool optional);
static inline int device_reset_optional(struct device *dev)
{
return device_reset(dev);
}
#else
static inline int reset_control_reset(struct reset_control *rstc)
@@ -62,15 +58,9 @@ static inline void reset_control_put(struct reset_control *rstc)
{
}
static inline int __must_check device_reset(struct device *dev)
static inline int __device_reset(struct device *dev, bool optional)
{
WARN_ON(1);
return -ENOTSUPP;
}
static inline int device_reset_optional(struct device *dev)
{
return -ENOTSUPP;
return optional ? 0 : -ENOTSUPP;
}
static inline struct reset_control *__of_reset_control_get(
@@ -109,6 +99,16 @@ of_reset_control_array_get(struct device_node *np, bool shared, bool optional)
#endif /* CONFIG_RESET_CONTROLLER */
static inline int __must_check device_reset(struct device *dev)
{
return __device_reset(dev, false);
}
static inline int device_reset_optional(struct device *dev)
{
return __device_reset(dev, true);
}
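/*
 * Consumer-side sketch of the reworked helpers above: device_reset()
 * still fails loudly, while device_reset_optional() now succeeds
 * quietly when reset controller support is absent. The driver below
 * is hypothetical.
 */
#include <linux/platform_device.h>
#include <linux/reset.h>

static int example_probe(struct platform_device *pdev)
{
	int ret;

	/* Returns 0 when no reset line or controller is available */
	ret = device_reset_optional(&pdev->dev);
	if (ret)
		return ret;	/* a real reset failure still aborts probe */

	return 0;
}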
/**
* reset_control_get_exclusive - Lookup and obtain an exclusive reference
* to a reset controller.
@@ -127,9 +127,6 @@ of_reset_control_array_get(struct device_node *np, bool shared, bool optional)
static inline struct reset_control *
__must_check reset_control_get_exclusive(struct device *dev, const char *id)
{
#ifndef CONFIG_RESET_CONTROLLER
WARN_ON(1);
#endif
return __reset_control_get(dev, id, 0, false, false);
}
@@ -275,9 +272,6 @@ static inline struct reset_control *
__must_check devm_reset_control_get_exclusive(struct device *dev,
const char *id)
{
#ifndef CONFIG_RESET_CONTROLLER
WARN_ON(1);
#endif
return __devm_reset_control_get(dev, id, 0, false, false);
}
@@ -350,18 +344,6 @@ devm_reset_control_get_shared_by_index(struct device *dev, int index)
* These inline function calls will be removed once all consumers
* have been moved over to the new explicit API.
*/
static inline struct reset_control *reset_control_get(
struct device *dev, const char *id)
{
return reset_control_get_exclusive(dev, id);
}
static inline struct reset_control *reset_control_get_optional(
struct device *dev, const char *id)
{
return reset_control_get_optional_exclusive(dev, id);
}
static inline struct reset_control *of_reset_control_get(
struct device_node *node, const char *id)
{


@@ -12,12 +12,6 @@ static inline u32 BRCM_REV(u32 reg)
return reg & 0xff;
}
/*
* Bus Interface Unit control register setup, must happen early during boot,
* before SMP is brought up, called by machine entry point.
*/
void brcmstb_biuctrl_init(void);
/*
* Helper functions for getting family or product id from the
* SoC driver.


@@ -0,0 +1,271 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
* Copyright (c) 2017, Linaro Ltd.
*/
#ifndef __QMI_HELPERS_H__
#define __QMI_HELPERS_H__
#include <linux/completion.h>
#include <linux/idr.h>
#include <linux/list.h>
#include <linux/qrtr.h>
#include <linux/types.h>
#include <linux/workqueue.h>
struct socket;
/**
* struct qmi_header - wireformat header of QMI messages
* @type: type of message
* @txn_id: transaction id
* @msg_id: message id
* @msg_len: length of message payload following header
*/
struct qmi_header {
u8 type;
u16 txn_id;
u16 msg_id;
u16 msg_len;
} __packed;
#define QMI_REQUEST 0
#define QMI_RESPONSE 2
#define QMI_INDICATION 4
#define QMI_COMMON_TLV_TYPE 0
enum qmi_elem_type {
QMI_EOTI,
QMI_OPT_FLAG,
QMI_DATA_LEN,
QMI_UNSIGNED_1_BYTE,
QMI_UNSIGNED_2_BYTE,
QMI_UNSIGNED_4_BYTE,
QMI_UNSIGNED_8_BYTE,
QMI_SIGNED_2_BYTE_ENUM,
QMI_SIGNED_4_BYTE_ENUM,
QMI_STRUCT,
QMI_STRING,
};
enum qmi_array_type {
NO_ARRAY,
STATIC_ARRAY,
VAR_LEN_ARRAY,
};
/**
* struct qmi_elem_info - describes how to encode a single QMI element
* @data_type: Data type of this element.
* @elem_len: Array length of this element, if an array.
* @elem_size: Size of a single instance of this data type.
* @array_type: Array type of this element.
* @tlv_type: QMI message specific type to identify which element
* is present in an incoming message.
* @offset: Specifies the offset of the first instance of this
* element in the data structure.
* @ei_array: Null-terminated array of @qmi_elem_info to describe nested
* structures.
*/
struct qmi_elem_info {
enum qmi_elem_type data_type;
u32 elem_len;
u32 elem_size;
enum qmi_array_type array_type;
u8 tlv_type;
u32 offset;
struct qmi_elem_info *ei_array;
};
#define QMI_RESULT_SUCCESS_V01 0
#define QMI_RESULT_FAILURE_V01 1
#define QMI_ERR_NONE_V01 0
#define QMI_ERR_MALFORMED_MSG_V01 1
#define QMI_ERR_NO_MEMORY_V01 2
#define QMI_ERR_INTERNAL_V01 3
#define QMI_ERR_CLIENT_IDS_EXHAUSTED_V01 5
#define QMI_ERR_INVALID_ID_V01 41
#define QMI_ERR_ENCODING_V01 58
#define QMI_ERR_INCOMPATIBLE_STATE_V01 90
#define QMI_ERR_NOT_SUPPORTED_V01 94
/**
* struct qmi_response_type_v01 - common response header (decoded)
* @result: result of the transaction
* @error: error value, when @result is QMI_RESULT_FAILURE_V01
*/
struct qmi_response_type_v01 {
u16 result;
u16 error;
};
extern struct qmi_elem_info qmi_response_type_v01_ei[];
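/*
 * Sketch of the encoding description: a decoded message is described
 * by a qmi_elem_info array terminated by QMI_EOTI, with nested
 * structures chained through @ei_array. The struct name and TLV id
 * below are illustrative only.
 */
struct example_resp {
	struct qmi_response_type_v01 resp;
};

static struct qmi_elem_info example_resp_ei[] = {
	{
		.data_type	= QMI_STRUCT,
		.elem_len	= 1,
		.elem_size	= sizeof(struct qmi_response_type_v01),
		.array_type	= NO_ARRAY,
		.tlv_type	= 0x02,		/* result TLV, by convention */
		.offset		= offsetof(struct example_resp, resp),
		.ei_array	= qmi_response_type_v01_ei,
	},
	{
		.data_type	= QMI_EOTI,	/* end of type info */
	},
};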
/**
* struct qmi_service - context to track lookup-results
* @service: service type
* @version: version of the @service
* @instance: instance id of the @service
* @node: node of the service
* @port: port of the service
* @priv: handle for client's use
* @list_node: list_head for housekeeping
*/
struct qmi_service {
unsigned int service;
unsigned int version;
unsigned int instance;
unsigned int node;
unsigned int port;
void *priv;
struct list_head list_node;
};
struct qmi_handle;
/**
* struct qmi_ops - callbacks for qmi_handle
* @new_server: inform client of a new_server lookup-result, returning
* successfully from this call causes the library to call
* @del_server as the service is removed from the
* lookup-result. @priv of the qmi_service can be used by
* the client
* @del_server: inform client of a del_server lookup-result
* @net_reset: inform client that the name service was restarted and
* that any state needs to be released
* @msg_handler: invoked for incoming messages, allows a client to
* override the usual QMI message handler
* @bye: inform a client that all clients from a node are gone
* @del_client: inform a client that a particular client is gone
*/
struct qmi_ops {
int (*new_server)(struct qmi_handle *qmi, struct qmi_service *svc);
void (*del_server)(struct qmi_handle *qmi, struct qmi_service *svc);
void (*net_reset)(struct qmi_handle *qmi);
void (*msg_handler)(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
const void *data, size_t count);
void (*bye)(struct qmi_handle *qmi, unsigned int node);
void (*del_client)(struct qmi_handle *qmi,
unsigned int node, unsigned int port);
};
/**
* struct qmi_txn - transaction context
* @qmi: QMI handle this transaction is associated with
* @id: transaction id
* @lock: for synchronization between handler and waiter of messages
* @completion: completion object as the transaction receives a response
* @result: result code for the completed transaction
* @ei: description of the QMI encoded response (optional)
* @dest: destination buffer to decode message into (optional)
*/
struct qmi_txn {
struct qmi_handle *qmi;
int id;
struct mutex lock;
struct completion completion;
int result;
struct qmi_elem_info *ei;
void *dest;
};
/**
* struct qmi_msg_handler - description of QMI message handler
* @type: type of message
* @msg_id: message id
* @ei: description of the QMI encoded message
* @decoded_size: size of the decoded object
* @fn: function to invoke as the message is decoded
*/
struct qmi_msg_handler {
unsigned int type;
unsigned int msg_id;
struct qmi_elem_info *ei;
size_t decoded_size;
void (*fn)(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, const void *decoded);
};
/**
* struct qmi_handle - QMI context
* @sock: socket handle
* @sock_lock: synchronization of @sock modifications
* @sq: sockaddr of @sock
* @work: work for handling incoming messages
* @wq: workqueue to post @work on
* @recv_buf: scratch buffer for handling incoming messages
* @recv_buf_size: size of @recv_buf
* @lookups: list of registered lookup requests
* @lookup_results: list of lookup-results advertised to the client
* @services: list of registered services (by this client)
* @ops: reference to callbacks
* @txns: outstanding transactions
* @txn_lock: lock for modifications of @txns
* @handlers: list of handlers for incoming messages
*/
struct qmi_handle {
struct socket *sock;
struct mutex sock_lock;
struct sockaddr_qrtr sq;
struct work_struct work;
struct workqueue_struct *wq;
void *recv_buf;
size_t recv_buf_size;
struct list_head lookups;
struct list_head lookup_results;
struct list_head services;
struct qmi_ops ops;
struct idr txns;
struct mutex txn_lock;
const struct qmi_msg_handler *handlers;
};
int qmi_add_lookup(struct qmi_handle *qmi, unsigned int service,
unsigned int version, unsigned int instance);
int qmi_add_server(struct qmi_handle *qmi, unsigned int service,
unsigned int version, unsigned int instance);
int qmi_handle_init(struct qmi_handle *qmi, size_t max_msg_len,
const struct qmi_ops *ops,
const struct qmi_msg_handler *handlers);
void qmi_handle_release(struct qmi_handle *qmi);
ssize_t qmi_send_request(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, int msg_id, size_t len,
struct qmi_elem_info *ei, const void *c_struct);
ssize_t qmi_send_response(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
struct qmi_txn *txn, int msg_id, size_t len,
struct qmi_elem_info *ei, const void *c_struct);
ssize_t qmi_send_indication(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
int msg_id, size_t len, struct qmi_elem_info *ei,
const void *c_struct);
void *qmi_encode_message(int type, unsigned int msg_id, size_t *len,
unsigned int txn_id, struct qmi_elem_info *ei,
const void *c_struct);
int qmi_decode_message(const void *buf, size_t len,
struct qmi_elem_info *ei, void *c_struct);
int qmi_txn_init(struct qmi_handle *qmi, struct qmi_txn *txn,
struct qmi_elem_info *ei, void *c_struct);
int qmi_txn_wait(struct qmi_txn *txn, unsigned long timeout);
void qmi_txn_cancel(struct qmi_txn *txn);
#endif
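To show how the pieces hang together, here is a hedged client sketch: a handle is initialized with callbacks, a lookup is registered, @new_server only records the server address (it runs on the handle's own workqueue, so it must not block waiting for a transaction), and the request itself is sent later from process context. The service id, message id, and the example_resp/example_resp_ei definitions from the earlier sketch are all illustrative:

#define EXAMPLE_SVC_ID		15	/* hypothetical service id */
#define EXAMPLE_REQ_MSG_ID	0x23	/* hypothetical request id */

static struct sockaddr_qrtr example_addr;

static int example_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
{
	/* Runs on the handle's workqueue: only record where the server is */
	example_addr.sq_family = AF_QIPCRTR;
	example_addr.sq_node = svc->node;
	example_addr.sq_port = svc->port;
	return 0;
}

static const struct qmi_ops example_ops = {
	.new_server = example_new_server,
};

static struct qmi_handle example_qmi;

/* Called later, from process context, once the server has appeared */
static int example_send_ping(void)
{
	struct example_resp resp = {};
	struct qmi_txn txn;
	int ret;

	ret = qmi_txn_init(&example_qmi, &txn, example_resp_ei, &resp);
	if (ret < 0)
		return ret;

	/* Empty request payload: zero length, nothing to encode */
	ret = qmi_send_request(&example_qmi, &example_addr, &txn,
			       EXAMPLE_REQ_MSG_ID, 0, NULL, NULL);
	if (ret < 0) {
		qmi_txn_cancel(&txn);
		return ret;
	}

	ret = qmi_txn_wait(&txn, 5 * HZ);
	if (ret < 0)
		return ret;

	return resp.resp.result == QMI_RESULT_SUCCESS_V01 ? 0 : -EREMOTEIO;
}

static int example_client_init(void)
{
	int ret;

	ret = qmi_handle_init(&example_qmi, 1024, &example_ops, NULL);
	if (ret < 0)
		return ret;

	/* Ask the name service to report matching servers */
	return qmi_add_lookup(&example_qmi, EXAMPLE_SVC_ID, 1, 0);
}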


@@ -17,6 +17,7 @@
#include <linux/types.h>
#include <linux/idr.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/tee.h>
@@ -25,8 +26,12 @@
* specific TEE driver.
*/
#define TEE_SHM_MAPPED 0x1 /* Memory mapped by the kernel */
#define TEE_SHM_DMA_BUF 0x2 /* Memory with dma-buf handle */
#define TEE_SHM_MAPPED BIT(0) /* Memory mapped by the kernel */
#define TEE_SHM_DMA_BUF BIT(1) /* Memory with dma-buf handle */
#define TEE_SHM_EXT_DMA_BUF BIT(2) /* Memory with dma-buf handle */
#define TEE_SHM_REGISTER BIT(3) /* Memory registered in secure world */
#define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */
#define TEE_SHM_POOL BIT(5) /* Memory allocated from pool */
struct device;
struct tee_device;
@@ -38,11 +43,17 @@ struct tee_shm_pool;
* @teedev: pointer to this driver's struct tee_device
* @list_shm: List of shared memory objects owned by this context
* @data: driver specific context data, managed by the driver
* @refcount: reference counter for this structure
* @releasing: flag that indicates if context is being released right now.
* It is needed to break circular dependency on context during
* shared memory release.
*/
struct tee_context {
struct tee_device *teedev;
struct list_head list_shm;
void *data;
struct kref refcount;
bool releasing;
};
struct tee_param_memref {
@@ -76,6 +87,8 @@ struct tee_param {
* @cancel_req: request cancel of an ongoing invoke or open
* @supp_recv: called for supplicant to get a command
* @supp_send: called for supplicant to send a response
* @shm_register: register shared memory buffer in TEE
* @shm_unregister: unregister shared memory buffer in TEE
*/
struct tee_driver_ops {
void (*get_version)(struct tee_device *teedev,
@@ -94,6 +107,10 @@ struct tee_driver_ops {
struct tee_param *param);
int (*supp_send)(struct tee_context *ctx, u32 ret, u32 num_params,
struct tee_param *param);
int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm,
struct page **pages, size_t num_pages,
unsigned long start);
int (*shm_unregister)(struct tee_context *ctx, struct tee_shm *shm);
};
/**
@@ -149,6 +166,97 @@ int tee_device_register(struct tee_device *teedev);
*/
void tee_device_unregister(struct tee_device *teedev);
/**
* struct tee_shm - shared memory object
* @teedev: device used to allocate the object
* @ctx: context using the object, if NULL the context is gone
* @link: link element
* @paddr: physical address of the shared memory
* @kaddr: virtual address of the shared memory
* @size: size of shared memory
* @offset: offset of buffer in user space
* @pages: locked pages from userspace
* @num_pages: number of locked pages
* @dmabuf: dmabuf used for exporting to user space
* @flags: defined by TEE_SHM_* in tee_drv.h
* @id: unique id of a shared memory object on this device
*
* This structure is only supposed to be accessed directly from the TEE
* subsystem and from drivers that implement their own shm pool manager.
*/
struct tee_shm {
struct tee_device *teedev;
struct tee_context *ctx;
struct list_head link;
phys_addr_t paddr;
void *kaddr;
size_t size;
unsigned int offset;
struct page **pages;
size_t num_pages;
struct dma_buf *dmabuf;
u32 flags;
int id;
};
/**
* struct tee_shm_pool_mgr - shared memory manager
* @ops: operations
* @private_data: private data for the shared memory manager
*/
struct tee_shm_pool_mgr {
const struct tee_shm_pool_mgr_ops *ops;
void *private_data;
};
/**
* struct tee_shm_pool_mgr_ops - shared memory pool manager operations
* @alloc: called when allocating shared memory
* @free: called when freeing shared memory
* @destroy_poolmgr: called when destroying the pool manager
*/
struct tee_shm_pool_mgr_ops {
int (*alloc)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm,
size_t size);
void (*free)(struct tee_shm_pool_mgr *poolmgr, struct tee_shm *shm);
void (*destroy_poolmgr)(struct tee_shm_pool_mgr *poolmgr);
};
/**
* tee_shm_pool_alloc() - Create a shared memory pool from shm managers
* @priv_mgr: manager for driver private shared memory allocations
* @dmabuf_mgr: manager for dma-buf shared memory allocations
*
* Allocation with the flag TEE_SHM_DMA_BUF set will use the range supplied
* in @dmabuf_mgr, others will use the range provided by @priv_mgr.
*
* @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure.
*/
struct tee_shm_pool *tee_shm_pool_alloc(struct tee_shm_pool_mgr *priv_mgr,
struct tee_shm_pool_mgr *dmabuf_mgr);
/**
* tee_shm_pool_mgr_alloc_res_mem() - Create a shm manager for reserved
* memory
* @vaddr: Virtual address of start of pool
* @paddr: Physical address of start of pool
* @size: Size in bytes of the pool
* @min_alloc_order: Log2 of the minimum allocation size
*
* @returns pointer to a 'struct tee_shm_pool_mgr' or an ERR_PTR on failure.
*/
struct tee_shm_pool_mgr *tee_shm_pool_mgr_alloc_res_mem(unsigned long vaddr,
phys_addr_t paddr,
size_t size,
int min_alloc_order);
/**
* tee_shm_pool_mgr_destroy() - Free a shared memory manager
* @poolm: Pool manager to free
*/
static inline void tee_shm_pool_mgr_destroy(struct tee_shm_pool_mgr *poolm)
{
poolm->ops->destroy_poolmgr(poolm);
}
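/*
 * Sketch of a driver combining the helpers above: back both managers
 * that tee_shm_pool_alloc() expects from one reserved region. The
 * fifty-fifty split and PAGE_SHIFT order are arbitrary choices, not
 * requirements.
 */
static struct tee_shm_pool *example_pool_setup(void *vaddr, phys_addr_t paddr,
					       size_t size)
{
	struct tee_shm_pool_mgr *priv_mgr;
	struct tee_shm_pool_mgr *dmabuf_mgr;
	size_t half = size / 2;

	/* First half for driver-private allocations */
	priv_mgr = tee_shm_pool_mgr_alloc_res_mem((unsigned long)vaddr,
						  paddr, half, PAGE_SHIFT);
	if (IS_ERR(priv_mgr))
		return ERR_CAST(priv_mgr);

	/* Second half for dma-buf backed allocations */
	dmabuf_mgr = tee_shm_pool_mgr_alloc_res_mem((unsigned long)vaddr + half,
						    paddr + half, half,
						    PAGE_SHIFT);
	if (IS_ERR(dmabuf_mgr)) {
		tee_shm_pool_mgr_destroy(priv_mgr);
		return ERR_CAST(dmabuf_mgr);
	}

	return tee_shm_pool_alloc(priv_mgr, dmabuf_mgr);
}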
/**
* struct tee_shm_pool_mem_info - holds information needed to create a shared
* memory pool
@@ -210,6 +318,40 @@ void *tee_get_drvdata(struct tee_device *teedev);
*/
struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags);
/**
* tee_shm_priv_alloc() - Allocate shared memory privately
* @teedev: Device that allocates the shared memory
* @size: Requested size of shared memory
*
* Allocates a shared memory buffer that is not associated with any client
* context. Such buffers are owned by the TEE driver and used for internal calls.
*
* @returns a pointer to 'struct tee_shm'
*/
struct tee_shm *tee_shm_priv_alloc(struct tee_device *teedev, size_t size);
/**
* tee_shm_register() - Register shared memory buffer
* @ctx: Context that registers the shared memory
* @addr: Address in userspace of the shared buffer
* @length: Length of the shared buffer
* @flags: Flags setting properties for the requested shared memory.
*
* @returns a pointer to 'struct tee_shm'
*/
struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr,
size_t length, u32 flags);
/**
* tee_shm_is_registered() - Check if shared memory object is registered in TEE
* @shm: Shared memory handle
* @returns true if object is registered in TEE
*/
static inline bool tee_shm_is_registered(struct tee_shm *shm)
{
return shm && (shm->flags & TEE_SHM_REGISTER);
}
/**
* tee_shm_free() - Free shared memory
* @shm: Handle to shared memory to free
@@ -259,12 +401,48 @@ void *tee_shm_get_va(struct tee_shm *shm, size_t offs);
*/
int tee_shm_get_pa(struct tee_shm *shm, size_t offs, phys_addr_t *pa);
/**
* tee_shm_get_size() - Get size of shared memory buffer
* @shm: Shared memory handle
* @returns size of shared memory
*/
static inline size_t tee_shm_get_size(struct tee_shm *shm)
{
return shm->size;
}
/**
* tee_shm_get_pages() - Get list of pages that hold shared buffer
* @shm: Shared memory handle
* @num_pages: Location where the number of pages is stored
* @returns pointer to pages array
*/
static inline struct page **tee_shm_get_pages(struct tee_shm *shm,
size_t *num_pages)
{
*num_pages = shm->num_pages;
return shm->pages;
}
/**
* tee_shm_get_page_offset() - Get shared buffer offset from page start
* @shm: Shared memory handle
* @returns page offset of shared buffer
*/
static inline size_t tee_shm_get_page_offset(struct tee_shm *shm)
{
return shm->offset;
}
/**
* tee_shm_get_id() - Get id of a shared memory object
* @shm: Shared memory handle
* @returns id
*/
int tee_shm_get_id(struct tee_shm *shm);
static inline int tee_shm_get_id(struct tee_shm *shm)
{
return shm->id;
}
/**
* tee_shm_get_from_id() - Find shared memory object and increase reference
@@ -275,4 +453,16 @@ int tee_shm_get_id(struct tee_shm *shm);
*/
struct tee_shm *tee_shm_get_from_id(struct tee_context *ctx, int id);
static inline bool tee_param_is_memref(struct tee_param *param)
{
switch (param->attr & TEE_IOCTL_PARAM_ATTR_TYPE_MASK) {
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT:
case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT:
return true;
default:
return false;
}
}
#endif /*__TEE_DRV_H*/
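tee_param_is_memref() is aimed at code like the following hypothetical parameter walk in a TEE driver:

/* Touch only the memref parameters of an invoke request */
static void example_dump_memrefs(struct tee_param *params, size_t num_params)
{
	size_t n;

	for (n = 0; n < num_params; n++) {
		if (!tee_param_is_memref(params + n))
			continue;

		pr_debug("memref %zu: size %zu\n", n,
			 params[n].u.memref.size);
	}
}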


@@ -0,0 +1,69 @@
/*
* TI AM33XX EMIF Routines
*
* Copyright (C) 2016-2017 Texas Instruments Inc.
* Dave Gerlach
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __LINUX_TI_EMIF_H
#define __LINUX_TI_EMIF_H
#include <linux/kbuild.h>
#include <linux/types.h>
#ifndef __ASSEMBLY__
struct emif_regs_amx3 {
u32 emif_sdcfg_val;
u32 emif_timing1_val;
u32 emif_timing2_val;
u32 emif_timing3_val;
u32 emif_ref_ctrl_val;
u32 emif_zqcfg_val;
u32 emif_pmcr_val;
u32 emif_pmcr_shdw_val;
u32 emif_rd_wr_level_ramp_ctrl;
u32 emif_rd_wr_exec_thresh;
u32 emif_cos_config;
u32 emif_priority_to_cos_mapping;
u32 emif_connect_id_serv_1_map;
u32 emif_connect_id_serv_2_map;
u32 emif_ocp_config_val;
u32 emif_lpddr2_nvm_tim;
u32 emif_lpddr2_nvm_tim_shdw;
u32 emif_dll_calib_ctrl_val;
u32 emif_dll_calib_ctrl_val_shdw;
u32 emif_ddr_phy_ctlr_1;
u32 emif_ext_phy_ctrl_vals[120];
};
struct ti_emif_pm_data {
void __iomem *ti_emif_base_addr_virt;
phys_addr_t ti_emif_base_addr_phys;
unsigned long ti_emif_sram_config;
struct emif_regs_amx3 *regs_virt;
phys_addr_t regs_phys;
} __packed __aligned(8);
struct ti_emif_pm_functions {
u32 save_context;
u32 restore_context;
u32 enter_sr;
u32 exit_sr;
u32 abort_sr;
} __packed __aligned(8);
struct gen_pool;
int ti_emif_copy_pm_function_table(struct gen_pool *sram_pool, void *dst);
int ti_emif_get_mem_type(void);
#endif
#endif /* __LINUX_TI_EMIF_H */
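A hedged sketch of the intended use from a platform PM driver: confirm the memory type is known, then relocate the handler table into SRAM. The sram_pool and sram_dst arguments are assumed to have been set up by the caller:

/* Hypothetical caller; sram_pool and sram_dst prepared elsewhere */
static int example_emif_sram_setup(struct gen_pool *sram_pool, void *sram_dst)
{
	if (ti_emif_get_mem_type() < 0)
		return -ENODEV;	/* SDRAM type unknown, nothing to do */

	/* Copies the ti_emif_pm_functions table for use from SRAM */
	return ti_emif_copy_pm_function_table(sram_pool, sram_dst);
}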


@@ -51,6 +51,12 @@ struct tegra_smmu_swgroup {
unsigned int reg;
};
struct tegra_smmu_group_soc {
const char *name;
const unsigned int *swgroups;
unsigned int num_swgroups;
};
struct tegra_smmu_soc {
const struct tegra_mc_client *clients;
unsigned int num_clients;
@@ -58,6 +64,9 @@ struct tegra_smmu_soc {
const struct tegra_smmu_swgroup *swgroups;
unsigned int num_swgroups;
const struct tegra_smmu_group_soc *groups;
unsigned int num_groups;
bool supports_round_robin_arbitration;
bool supports_request_limit;

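The new fields let an SoC table tie several swgroups into one named IOMMU group; the ids and the name in this sketch are placeholders:

/* Hypothetical grouping: two swgroups share one IOMMU group */
static const unsigned int example_display_swgroups[] = {
	1, 2,	/* swgroup ids are SoC specific */
};

static const struct tegra_smmu_group_soc example_groups[] = {
	{
		.name		= "display",
		.swgroups	= example_display_swgroups,
		.num_swgroups	= ARRAY_SIZE(example_display_swgroups),
	},
};

A tegra_smmu_soc table would then reference these via its .groups and .num_groups fields.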

@@ -50,6 +50,7 @@
#define TEE_GEN_CAP_GP (1 << 0) /* GlobalPlatform compliant TEE */
#define TEE_GEN_CAP_PRIVILEGED (1 << 1) /* Privileged device (for supplicant) */
#define TEE_GEN_CAP_REG_MEM (1 << 2) /* Supports registering shared memory */
/*
* TEE Implementation ID
@@ -154,6 +155,13 @@ struct tee_ioctl_buf_data {
*/
#define TEE_IOCTL_PARAM_ATTR_TYPE_MASK 0xff
/* Meta parameter carrying extra information about the message. */
#define TEE_IOCTL_PARAM_ATTR_META 0x100
/* Mask of all known attr bits */
#define TEE_IOCTL_PARAM_ATTR_MASK \
(TEE_IOCTL_PARAM_ATTR_TYPE_MASK | TEE_IOCTL_PARAM_ATTR_META)
/*
* Matches TEEC_LOGIN_* in GP TEE Client API
* Are only defined for GP compliant TEEs
@@ -332,6 +340,35 @@ struct tee_iocl_supp_send_arg {
#define TEE_IOC_SUPPL_SEND _IOR(TEE_IOC_MAGIC, TEE_IOC_BASE + 7, \
struct tee_ioctl_buf_data)
/**
* struct tee_ioctl_shm_register_data - Shared memory register argument
* @addr: [in] Start address of shared memory to register
* @length: [in/out] Length of shared memory to register
* @flags: [in/out] Flags to/from registration.
* @id: [out] Identifier of the shared memory
*
* The flags field should currently be zero as input. Updated by the call
* with actual flags as defined by TEE_IOCTL_SHM_* above.
* This structure is used as argument for TEE_IOC_SHM_REGISTER below.
*/
struct tee_ioctl_shm_register_data {
__u64 addr;
__u64 length;
__u32 flags;
__s32 id;
};
/**
* TEE_IOC_SHM_REGISTER - Register shared memory argument
*
* Registers shared memory between the user space process and secure OS.
*
* Returns a file descriptor on success or < 0 on failure
*
* The shared memory is unregistered when the descriptor is closed.
*/
#define TEE_IOC_SHM_REGISTER _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 9, \
struct tee_ioctl_shm_register_data)
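/*
 * Hedged user-space sketch: register an existing buffer with the TEE.
 * Error handling is trimmed and tee_fd is assumed to be an open
 * /dev/tee* descriptor.
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/tee.h>

int example_register(int tee_fd, void *buf, size_t len)
{
	struct tee_ioctl_shm_register_data data = {
		.addr	= (uintptr_t)buf,
		.length	= len,
		.flags	= 0,	/* must be zero on input */
	};

	/* On success the returned fd keeps the registration alive;
	 * closing it unregisters the memory. */
	return ioctl(tee_fd, TEE_IOC_SHM_REGISTER, &data);
}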
/*
* Five syscalls are used when communicating with the TEE driver.
* open(): opens the device associated with the driver