Qualcomm ARM64 Updates for v4.19 - Part 2

* Add thermal nodes for MSM8996 and SDM845
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJbY3lJAAoJEFKiBbHx2RXVttEP/jHngIc9DSOi6r1hXvjNgaTM
 CdvxETAaKlDcFb4H4vwTVbHONor4GHK8UnusPodI3SPG62pbEr2Jad9Z/NpuW3XP
 qAvgyPAMPKDPfHAYdp3lGPKb7yc6BBtL5sSTgl/rbsIOFrSeD6BxKRWdcwRIvwW5
 d5VU/JyyBX0Kb0cIaasmUWP5crYXzFsgV0SggQiekgDzfO5cK8pEVwKc5GbIevDN
 mjXf8u5i58xJSabSrXCltKj13/RMzAzVlQaPl/3L3T5SrGq2AYHGWLOwdVMbeeZg
 8nJ95w0voq+f4euL+l5pPmJFw1AZIEWX4MykVJp3bI8qEQwBKxqscw2zg4ZjAaoT
 BAx363JiOHsCknWuhVd845T2U8yMMgzYExDjeMV21goJCNaPO1fXPZr6IZlcqq3x
 X5NT9J3qFKZdf04EHTO7ov/aUO/QiCcxZl8KO1/Sgjwd/tvDQDkFAlFWFC9nrehR
 IMYnWw/btI6HG6iJAyuGx4gmTvza8dOR2oZLnanHAiECJ+6d8nr863yk/b92uP4d
 qGvIwXiQ4FPDP4ta5biacqQCeyT3MIGRL+cceB4S2/m2IxO9+VMPrGJifkD5aDOp
 jBW+f0G9Fb3mgjqGTAAA/bW7tNkbLX4WtWAlTo1to1BOgKB5zwCCZgf43I8Y8wie
 +Rk13xVxvqrR8idXIFap
 =w/hf
 -----END PGP SIGNATURE-----

Merge tag 'qcom-arm64-for-4.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux into next/dt

Qualcomm ARM64 Updates for v4.19 - Part 2

* Add thermal nodes for MSM8996 and SDM845

* tag 'qcom-arm64-for-4.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux: (21 commits)
  arm64: dts: sdm845: Add tsens nodes
  arm64: dts: msm8996: thermal: Initialise via DT and add second controller
  soc: qcom: rmtfs-mem: fix memleak in probe error paths
  soc: qcom: llc-slice: Add missing MODULE_LICENSE()
  drivers: qcom: rpmh: fix unwanted error check for get_tcs_of_type()
  drivers: qcom: rpmh-rsc: fix the loop index check in get_req_from_tcs
  firmware: qcom: scm: add a dummy qcom_scm_assign_mem()
  drivers: qcom: rpmh-rsc: Check cmd_db_ready() to help children
  drivers: qcom: rpmh-rsc: allow active requests from wake TCS
  drivers: qcom: rpmh: add support for batch RPMH request
  drivers: qcom: rpmh: allow requests to be sent asynchronously
  drivers: qcom: rpmh: cache sleep/wake state requests
  drivers: qcom: rpmh-rsc: allow invalidation of sleep/wake TCS
  drivers: qcom: rpmh-rsc: write sleep/wake requests to TCS
  drivers: qcom: rpmh: add RPMH helper functions
  drivers: qcom: rpmh-rsc: log RPMH requests in FTRACE
  dt-bindings: introduce RPMH RSC bindings for Qualcomm SoCs
  drivers: qcom: rpmh-rsc: add RPMH controller for QCOM SoCs
  drivers: soc: Add LLCC driver
  dt-bindings: Documentation for qcom, llcc
  ...
Committed by Olof Johansson on 2018-08-04 11:02:54 -07:00 in commit afd3e3dad6.
19 changed files with 2373 additions and 13 deletions.


@@ -0,0 +1,26 @@
== Introduction ==
LLCC (Last Level Cache Controller) provides the last level of cache memory in
the SoC, which can be shared by multiple clients. Clients here are the
different cores in the SoC; the idea is to minimize the local caches at the
clients and migrate to a common pool of memory. Cache memory is divided into
partitions called slices, which are assigned to clients. Clients can query the
slice details, and activate and deactivate them.
Properties:
- compatible:
Usage: required
Value type: <string>
Definition: must be "qcom,sdm845-llcc"
- reg:
Usage: required
Value Type: <prop-encoded-array>
Definition: Start address and the size of the register region.
Example:
cache-controller@1100000 {
compatible = "qcom,sdm845-llcc";
reg = <0x1100000 0x250000>;
};
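The client flow described in the introduction maps onto the kernel API that
this series adds (llcc_slice_getd(), llcc_slice_activate() and friends). A
minimal sketch follows; the function name and the choice of the LLCC_GPU
use-case id are illustrative only:

	#include <linux/err.h>
	#include <linux/printk.h>
	#include <linux/soc/qcom/llcc-qcom.h>

	/* Hypothetical client: query, activate, use and release a slice. */
	static int example_use_gpu_slice(void)
	{
		struct llcc_slice_desc *desc;
		int ret;

		/* Query the slice details for a use-case id. */
		desc = llcc_slice_getd(LLCC_GPU);
		if (IS_ERR(desc))
			return PTR_ERR(desc);

		/* Activate the slice before using it. */
		ret = llcc_slice_activate(desc);
		if (ret)
			goto put;

		pr_info("slice id %d, size %zu KB\n",
			llcc_get_slice_id(desc), llcc_get_slice_size(desc));

		ret = llcc_slice_deactivate(desc);
	put:
		llcc_slice_putd(desc);
		return ret;
	}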


@@ -0,0 +1,137 @@
RPMH RSC:
------------
Resource Power Manager Hardened (RPMH) is the mechanism for communicating with
the hardened resource accelerators on Qualcomm SoCs. Requests to the resources
are written to the Trigger Command Set (TCS) registers as (addr, val) pairs
and then triggered. Messages in the TCS are then sent in sequence over an
internal bus.
The hardware block (Direct Resource Voter or DRV) is a part of the h/w entity
(Resource State Coordinator a.k.a RSC) that can handle multiple sleep and
active/wake resource requests. Multiple such DRVs can exist in a SoC and can
be written to from Linux. The structure of each DRV follows the same template
with a few variations that are captured by the properties here.
A TCS may be triggered from Linux, or triggered by the F/W after all the CPUs
have powered off to facilitate idle power saving. TCSes can be classified as:
ACTIVE /* Triggered by Linux */
SLEEP /* Triggered by F/W */
WAKE /* Triggered by F/W */
CONTROL /* Triggered by F/W */
The order in which they are described in the DT should match the hardware
configuration.
Requests can be made for the state of a resource when the subsystem is active
or idle. When all subsystems like Modem, GPU, CPU are idle, the resource state
will be an aggregate of the sleep votes from each of those subsystems. Clients
may request a sleep value for their shared resources in addition to the active
mode requests.
Properties:
- compatible:
Usage: required
Value type: <string>
Definition: Should be "qcom,rpmh-rsc".
- reg:
Usage: required
Value type: <prop-encoded-array>
Definition: The first register specifies the base address of the
DRV(s). The number of DRVs is dependent on the RSC.
The tcs-offset specifies the start address of the
TCS in the DRVs.
- reg-names:
Usage: required
Value type: <string>
Definition: Maps the registers specified in the reg property. Must be
"drv-0", "drv-1", "drv-2", etc. The start of the TCS block
within each DRV is given by the qcom,tcs-offset property.
- interrupts:
Usage: required
Value type: <prop-encoded-interrupt>
Definition: The interrupt that trips when a message complete/response
is received for this DRV from the accelerators.
- qcom,drv-id:
Usage: required
Value type: <u32>
Definition: The id of the DRV in the RSC block that will be used by
this controller.
- qcom,tcs-config:
Usage: required
Value type: <prop-encoded-array>
Definition: The tuple defining the configuration of TCS.
Must have 2 cells which describe each TCS type:
<type number_of_tcs>.
The order of the TCS must match the hardware
configuration.
- Cell #1 (TCS Type): TCS types to be specified -
ACTIVE_TCS
SLEEP_TCS
WAKE_TCS
CONTROL_TCS
- Cell #2 (Number of TCS): <u32>
- label:
Usage: optional
Value type: <string>
Definition: Name for the RSC. The name would be used in trace logs.
Drivers that want to use the RSC to communicate with RPMH must specify their
bindings as child nodes of the RSC controllers they wish to communicate with.
Example 1:
For a TCS whose RSC base address is 0x179C0000 and which is at a DRV id of 2,
with the TCS register offsets starting at 0xD00, the register calculations are
like this -
DRV0: 0x179C0000
DRV1: 0x179C0000 + 0x10000 = 0x179D0000
DRV2: 0x179C0000 + 0x10000 * 2 = 0x179E0000
TCS-OFFSET: 0xD00
apps_rsc: rsc@179c0000 {
label = "apps_rsc";
compatible = "qcom,rpmh-rsc";
reg = <0x179c0000 0x10000>,
<0x179d0000 0x10000>,
<0x179e0000 0x10000>;
reg-names = "drv-0", "drv-1", "drv-2";
interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
qcom,tcs-offset = <0xd00>;
qcom,drv-id = <2>;
qcom,tcs-config = <ACTIVE_TCS 2>,
<SLEEP_TCS 3>,
<WAKE_TCS 3>,
<CONTROL_TCS 1>;
};
Example 2:
For a TCS whose RSC base address is 0xAF20000 and which is at a DRV id of 0,
with the TCS register offsets starting at 0x1C00, the register calculations
are like this -
DRV0: 0xAF20000
TCS-OFFSET: 0x1C00
disp_rsc: rsc@af20000 {
label = "disp_rsc";
compatible = "qcom,rpmh-rsc";
reg = <0xaf20000 0x10000>;
reg-names = "drv-0";
interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
qcom,tcs-offset = <0x1c00>;
qcom,drv-id = <0>;
qcom,tcs-config = <ACTIVE_TCS 0>,
<SLEEP_TCS 1>,
<WAKE_TCS 1>,
<CONTROL_TCS 0>;
};


@@ -377,6 +377,22 @@
reg = <0x740000 0x20000>;
};
tsens0: thermal-sensor@4a9000 {
compatible = "qcom,msm8996-tsens";
reg = <0x4a9000 0x1000>, /* TM */
<0x4a8000 0x1000>; /* SROT */
#qcom,sensors = <13>;
#thermal-sensor-cells = <1>;
};
tsens1: thermal-sensor@4ad000 {
compatible = "qcom,msm8996-tsens";
reg = <0x4ad000 0x1000>, /* TM */
<0x4ac000 0x1000>; /* SROT */
#qcom,sensors = <8>;
#thermal-sensor-cells = <1>;
};
tcsr: syscon@7a0000 {
compatible = "qcom,tcsr-msm8996", "syscon";
reg = <0x7a0000 0x18000>;
@@ -459,12 +475,6 @@
status = "disabled";
};
tsens0: thermal-sensor@4a8000 {
compatible = "qcom,msm8996-tsens";
reg = <0x4a8000 0x2000>;
#thermal-sensor-cells = <1>;
};
blsp2_uart1: serial@75b0000 {
compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
reg = <0x75b0000 0x1000>;


@@ -962,6 +962,22 @@
};
};
tsens0: thermal-sensor@c263000 {
compatible = "qcom,sdm845-tsens", "qcom,tsens-v2";
reg = <0xc263000 0x1ff>, /* TM */
<0xc222000 0x1ff>; /* SROT */
#qcom,sensors = <13>;
#thermal-sensor-cells = <1>;
};
tsens1: thermal-sensor@c265000 {
compatible = "qcom,sdm845-tsens", "qcom,tsens-v2";
reg = <0xc265000 0x1ff>, /* TM */
<0xc223000 0x1ff>; /* SROT */
#qcom,sensors = <8>;
#thermal-sensor-cells = <1>;
};
spmi_bus: spmi@c440000 {
compatible = "qcom,spmi-pmic-arb";
reg = <0xc440000 0x1100>,


@@ -40,6 +40,23 @@ config QCOM_GSBI
functions for connecting the underlying serial UART, SPI, and I2C
devices to the output pins.
config QCOM_LLCC
tristate "Qualcomm Technologies, Inc. LLCC driver"
depends on ARCH_QCOM
help
Qualcomm Technologies, Inc. platform specific
Last Level Cache Controller (LLCC) driver. This provides interfaces
to clients that use the LLCC. Say yes here to enable the LLCC slice
driver.
config QCOM_SDM845_LLCC
tristate "Qualcomm Technologies, Inc. SDM845 LLCC driver"
depends on QCOM_LLCC
help
Say yes here to enable the LLCC driver for SDM845. This provides
data required to configure LLCC so that clients can start using the
LLCC slices.
config QCOM_MDT_LOADER
tristate
select QCOM_SCM
@@ -75,6 +92,16 @@ config QCOM_RMTFS_MEM
Say y here if you intend to boot the modem remoteproc.
config QCOM_RPMH
bool "Qualcomm RPM-Hardened (RPMH) Communication"
depends on ARCH_QCOM && ARM64 && OF || COMPILE_TEST
help
Support for communication with the hardened-RPM blocks in
Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
internal bus to transmit state requests for shared resources. A set
of hardware components aggregate requests for these resources and
help apply the aggregated state on the resource.
config QCOM_SMEM
tristate "Qualcomm Shared Memory Manager (SMEM)"
depends on ARCH_QCOM


@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS_rpmh-rsc.o := -I$(src)
obj-$(CONFIG_QCOM_GENI_SE) += qcom-geni-se.o
obj-$(CONFIG_QCOM_COMMAND_DB) += cmd-db.o
obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o
@@ -8,6 +9,9 @@ obj-$(CONFIG_QCOM_PM) += spm.o
obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o
qmi_helpers-y += qmi_encdec.o qmi_interface.o
obj-$(CONFIG_QCOM_RMTFS_MEM) += rmtfs_mem.o
obj-$(CONFIG_QCOM_RPMH) += qcom_rpmh.o
qcom_rpmh-y += rpmh-rsc.o
qcom_rpmh-y += rpmh.o
obj-$(CONFIG_QCOM_SMD_RPM) += smd-rpm.o
obj-$(CONFIG_QCOM_SMEM) += smem.o
obj-$(CONFIG_QCOM_SMEM_STATE) += smem_state.o
@@ -15,3 +19,5 @@ obj-$(CONFIG_QCOM_SMP2P) += smp2p.o
obj-$(CONFIG_QCOM_SMSM) += smsm.o
obj-$(CONFIG_QCOM_WCNSS_CTRL) += wcnss_ctrl.o
obj-$(CONFIG_QCOM_APR) += apr.o
obj-$(CONFIG_QCOM_LLCC) += llcc-slice.o
obj-$(CONFIG_QCOM_SDM845_LLCC) += llcc-sdm845.o


@@ -0,0 +1,94 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/soc/qcom/llcc-qcom.h>
/*
* SCT(System Cache Table) entry contains of the following members:
* usecase_id: Unique id for the client's use case
* slice_id: llcc slice id for each client
* max_cap: The maximum capacity of the cache slice provided in KB
* priority: Priority of the client used to select victim line for replacement
* fixed_size: Boolean indicating if the slice has a fixed capacity
* bonus_ways: Bonus ways are additional ways to be used for any slice,
* if client ends up using more than reserved cache ways. Bonus
* ways are allocated only if they are not reserved for some
* other client.
* res_ways: Reserved ways for the cache slice, the reserved ways cannot
* be used by any other client than the one its assigned to.
* cache_mode: Each slice operates as a cache, this controls the mode of the
* slice: normal or TCM(Tightly Coupled Memory)
* probe_target_ways: Determines what ways to probe for access hit. When
* configured to 1 only bonus and reserved ways are probed.
* When configured to 0 all ways in llcc are probed.
* dis_cap_alloc: Disable capacity based allocation for a client
* retain_on_pc: If this bit is set and client has maintained active vote
* then the ways assigned to this client are not flushed on power
* collapse.
* activate_on_init: Activate the slice immediately after the SCT is programmed
*/
#define SCT_ENTRY(uid, sid, mc, p, fs, bway, rway, cmod, ptw, dca, rp, a) \
{ \
.usecase_id = uid, \
.slice_id = sid, \
.max_cap = mc, \
.priority = p, \
.fixed_size = fs, \
.bonus_ways = bway, \
.res_ways = rway, \
.cache_mode = cmod, \
.probe_target_ways = ptw, \
.dis_cap_alloc = dca, \
.retain_on_pc = rp, \
.activate_on_init = a, \
}
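/*
 * An illustrative reading of the first table entry below:
 * SCT_ENTRY(LLCC_CPUSS, 1, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 1) describes
 * slice 1 for the CPU subsystem: 2816 KB maximum capacity, priority 1, not
 * of fixed size, bonus ways 0xffc, reserved way 0x2, normal cache mode, all
 * ways probed, capacity based allocation disabled, ways retained on power
 * collapse, and the slice activated as soon as the SCT is programmed.
 */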
static struct llcc_slice_config sdm845_data[] = {
SCT_ENTRY(LLCC_CPUSS, 1, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 1),
SCT_ENTRY(LLCC_VIDSC0, 2, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_VIDSC1, 3, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_ROTATOR, 4, 563, 2, 1, 0x0, 0x00e, 2, 0, 1, 1, 0),
SCT_ENTRY(LLCC_VOICE, 5, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_AUDIO, 6, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc, 0xf00, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_MDM, 8, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_CMPT, 10, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_GPUHTW, 11, 512, 1, 1, 0xc, 0x0, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_GPU, 12, 2304, 1, 0, 0xff0, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_MMUHWT, 13, 256, 2, 0, 0x0, 0x1, 0, 0, 1, 0, 1),
SCT_ENTRY(LLCC_CMPTDMA, 15, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_DISP, 16, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_VIDFW, 17, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0, 0xf00, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_MDMPNG, 21, 1024, 0, 1, 0x1e, 0x0, 0, 0, 1, 1, 0),
SCT_ENTRY(LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0),
};
static int sdm845_qcom_llcc_probe(struct platform_device *pdev)
{
return qcom_llcc_probe(pdev, sdm845_data, ARRAY_SIZE(sdm845_data));
}
static const struct of_device_id sdm845_qcom_llcc_of_match[] = {
{ .compatible = "qcom,sdm845-llcc", },
{ }
};
static struct platform_driver sdm845_qcom_llcc_driver = {
.driver = {
.name = "sdm845-llcc",
.of_match_table = sdm845_qcom_llcc_of_match,
},
.probe = sdm845_qcom_llcc_probe,
};
module_platform_driver(sdm845_qcom_llcc_driver);
MODULE_DESCRIPTION("QCOM sdm845 LLCC driver");
MODULE_LICENSE("GPL v2");


@@ -0,0 +1,338 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
*
*/
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_device.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/soc/qcom/llcc-qcom.h>
#define ACTIVATE BIT(0)
#define DEACTIVATE BIT(1)
#define ACT_CTRL_OPCODE_ACTIVATE BIT(0)
#define ACT_CTRL_OPCODE_DEACTIVATE BIT(1)
#define ACT_CTRL_ACT_TRIG BIT(0)
#define ACT_CTRL_OPCODE_SHIFT 0x01
#define ATTR1_PROBE_TARGET_WAYS_SHIFT 0x02
#define ATTR1_FIXED_SIZE_SHIFT 0x03
#define ATTR1_PRIORITY_SHIFT 0x04
#define ATTR1_MAX_CAP_SHIFT 0x10
#define ATTR0_RES_WAYS_MASK GENMASK(11, 0)
#define ATTR0_BONUS_WAYS_MASK GENMASK(27, 16)
#define ATTR0_BONUS_WAYS_SHIFT 0x10
#define LLCC_STATUS_READ_DELAY 100
#define CACHE_LINE_SIZE_SHIFT 6
#define LLCC_COMMON_STATUS0 0x0003000c
#define LLCC_LB_CNT_MASK GENMASK(31, 28)
#define LLCC_LB_CNT_SHIFT 28
#define MAX_CAP_TO_BYTES(n) (n * SZ_1K)
#define LLCC_TRP_ACT_CTRLn(n) (n * SZ_4K)
#define LLCC_TRP_STATUSn(n) (4 + n * SZ_4K)
#define LLCC_TRP_ATTR0_CFGn(n) (0x21000 + SZ_8 * n)
#define LLCC_TRP_ATTR1_CFGn(n) (0x21004 + SZ_8 * n)
#define BANK_OFFSET_STRIDE 0x80000
static struct llcc_drv_data *drv_data;
static const struct regmap_config llcc_regmap_config = {
.reg_bits = 32,
.reg_stride = 4,
.val_bits = 32,
.fast_io = true,
};
/**
* llcc_slice_getd - get llcc slice descriptor
* @uid: usecase_id for the client
*
* A pointer to the llcc slice descriptor will be returned on success and
* an error pointer is returned on failure
*/
struct llcc_slice_desc *llcc_slice_getd(u32 uid)
{
const struct llcc_slice_config *cfg;
struct llcc_slice_desc *desc;
u32 sz, count;
cfg = drv_data->cfg;
sz = drv_data->cfg_size;
for (count = 0; cfg && count < sz; count++, cfg++)
if (cfg->usecase_id == uid)
break;
if (count == sz || !cfg)
return ERR_PTR(-ENODEV);
desc = kzalloc(sizeof(*desc), GFP_KERNEL);
if (!desc)
return ERR_PTR(-ENOMEM);
desc->slice_id = cfg->slice_id;
desc->slice_size = cfg->max_cap;
return desc;
}
EXPORT_SYMBOL_GPL(llcc_slice_getd);
/**
* llcc_slice_putd - release the llcc slice descriptor
* @desc: Pointer to llcc slice descriptor
*/
void llcc_slice_putd(struct llcc_slice_desc *desc)
{
kfree(desc);
}
EXPORT_SYMBOL_GPL(llcc_slice_putd);
static int llcc_update_act_ctrl(u32 sid,
u32 act_ctrl_reg_val, u32 status)
{
u32 act_ctrl_reg;
u32 status_reg;
u32 slice_status;
int ret;
act_ctrl_reg = drv_data->bcast_off + LLCC_TRP_ACT_CTRLn(sid);
status_reg = drv_data->bcast_off + LLCC_TRP_STATUSn(sid);
/* Set the ACTIVE trigger */
act_ctrl_reg_val |= ACT_CTRL_ACT_TRIG;
ret = regmap_write(drv_data->regmap, act_ctrl_reg, act_ctrl_reg_val);
if (ret)
return ret;
/* Clear the ACTIVE trigger */
act_ctrl_reg_val &= ~ACT_CTRL_ACT_TRIG;
ret = regmap_write(drv_data->regmap, act_ctrl_reg, act_ctrl_reg_val);
if (ret)
return ret;
ret = regmap_read_poll_timeout(drv_data->regmap, status_reg,
slice_status, !(slice_status & status),
0, LLCC_STATUS_READ_DELAY);
return ret;
}
/**
* llcc_slice_activate - Activate the llcc slice
* @desc: Pointer to llcc slice descriptor
*
* A value of zero will be returned on success and a negative errno will
* be returned in error cases
*/
int llcc_slice_activate(struct llcc_slice_desc *desc)
{
int ret;
u32 act_ctrl_val;
mutex_lock(&drv_data->lock);
if (test_bit(desc->slice_id, drv_data->bitmap)) {
mutex_unlock(&drv_data->lock);
return 0;
}
act_ctrl_val = ACT_CTRL_OPCODE_ACTIVATE << ACT_CTRL_OPCODE_SHIFT;
ret = llcc_update_act_ctrl(desc->slice_id, act_ctrl_val,
DEACTIVATE);
if (ret) {
mutex_unlock(&drv_data->lock);
return ret;
}
__set_bit(desc->slice_id, drv_data->bitmap);
mutex_unlock(&drv_data->lock);
return ret;
}
EXPORT_SYMBOL_GPL(llcc_slice_activate);
/**
* llcc_slice_deactivate - Deactivate the llcc slice
* @desc: Pointer to llcc slice descriptor
*
* A value of zero will be returned on success and a negative errno will
* be returned in error cases
*/
int llcc_slice_deactivate(struct llcc_slice_desc *desc)
{
u32 act_ctrl_val;
int ret;
mutex_lock(&drv_data->lock);
if (!test_bit(desc->slice_id, drv_data->bitmap)) {
mutex_unlock(&drv_data->lock);
return 0;
}
act_ctrl_val = ACT_CTRL_OPCODE_DEACTIVATE << ACT_CTRL_OPCODE_SHIFT;
ret = llcc_update_act_ctrl(desc->slice_id, act_ctrl_val,
ACTIVATE);
if (ret) {
mutex_unlock(&drv_data->lock);
return ret;
}
__clear_bit(desc->slice_id, drv_data->bitmap);
mutex_unlock(&drv_data->lock);
return ret;
}
EXPORT_SYMBOL_GPL(llcc_slice_deactivate);
/**
* llcc_get_slice_id - return the slice id
* @desc: Pointer to llcc slice descriptor
*/
int llcc_get_slice_id(struct llcc_slice_desc *desc)
{
return desc->slice_id;
}
EXPORT_SYMBOL_GPL(llcc_get_slice_id);
/**
* llcc_get_slice_size - return the slice size
* @desc: Pointer to llcc slice descriptor
*/
size_t llcc_get_slice_size(struct llcc_slice_desc *desc)
{
return desc->slice_size;
}
EXPORT_SYMBOL_GPL(llcc_get_slice_size);
static int qcom_llcc_cfg_program(struct platform_device *pdev)
{
int i;
u32 attr1_cfg;
u32 attr0_cfg;
u32 attr1_val;
u32 attr0_val;
u32 max_cap_cacheline;
u32 sz;
int ret;
const struct llcc_slice_config *llcc_table;
struct llcc_slice_desc desc;
u32 bcast_off = drv_data->bcast_off;
sz = drv_data->cfg_size;
llcc_table = drv_data->cfg;
for (i = 0; i < sz; i++) {
attr1_cfg = bcast_off +
LLCC_TRP_ATTR1_CFGn(llcc_table[i].slice_id);
attr0_cfg = bcast_off +
LLCC_TRP_ATTR0_CFGn(llcc_table[i].slice_id);
attr1_val = llcc_table[i].cache_mode;
attr1_val |= llcc_table[i].probe_target_ways <<
ATTR1_PROBE_TARGET_WAYS_SHIFT;
attr1_val |= llcc_table[i].fixed_size <<
ATTR1_FIXED_SIZE_SHIFT;
attr1_val |= llcc_table[i].priority <<
ATTR1_PRIORITY_SHIFT;
max_cap_cacheline = MAX_CAP_TO_BYTES(llcc_table[i].max_cap);
/*
 * LLCC instances can vary for each target.
 * The SW writes to the broadcast register, which gets propagated
 * to each llcc instance (llcc0,.. llccN).
 * Since the size of the memory is divided equally amongst the
 * llcc instances, we need to configure the max cap accordingly.
 */
max_cap_cacheline = max_cap_cacheline / drv_data->num_banks;
max_cap_cacheline >>= CACHE_LINE_SIZE_SHIFT;
attr1_val |= max_cap_cacheline << ATTR1_MAX_CAP_SHIFT;
attr0_val = llcc_table[i].res_ways & ATTR0_RES_WAYS_MASK;
attr0_val |= llcc_table[i].bonus_ways << ATTR0_BONUS_WAYS_SHIFT;
ret = regmap_write(drv_data->regmap, attr1_cfg, attr1_val);
if (ret)
return ret;
ret = regmap_write(drv_data->regmap, attr0_cfg, attr0_val);
if (ret)
return ret;
if (llcc_table[i].activate_on_init) {
desc.slice_id = llcc_table[i].slice_id;
ret = llcc_slice_activate(&desc);
}
}
return ret;
}
int qcom_llcc_probe(struct platform_device *pdev,
const struct llcc_slice_config *llcc_cfg, u32 sz)
{
u32 num_banks;
struct device *dev = &pdev->dev;
struct resource *res;
void __iomem *base;
int ret, i;
drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
if (!drv_data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(base))
return PTR_ERR(base);
drv_data->regmap = devm_regmap_init_mmio(dev, base,
&llcc_regmap_config);
if (IS_ERR(drv_data->regmap))
return PTR_ERR(drv_data->regmap);
ret = regmap_read(drv_data->regmap, LLCC_COMMON_STATUS0,
&num_banks);
if (ret)
return ret;
num_banks &= LLCC_LB_CNT_MASK;
num_banks >>= LLCC_LB_CNT_SHIFT;
drv_data->num_banks = num_banks;
for (i = 0; i < sz; i++)
if (llcc_cfg[i].slice_id > drv_data->max_slices)
drv_data->max_slices = llcc_cfg[i].slice_id;
drv_data->offsets = devm_kcalloc(dev, num_banks, sizeof(u32),
GFP_KERNEL);
if (!drv_data->offsets)
return -ENOMEM;
for (i = 0; i < num_banks; i++)
drv_data->offsets[i] = i * BANK_OFFSET_STRIDE;
drv_data->bcast_off = num_banks * BANK_OFFSET_STRIDE;
drv_data->bitmap = devm_kcalloc(dev,
BITS_TO_LONGS(drv_data->max_slices), sizeof(unsigned long),
GFP_KERNEL);
if (!drv_data->bitmap)
return -ENOMEM;
drv_data->cfg = llcc_cfg;
drv_data->cfg_size = sz;
mutex_init(&drv_data->lock);
platform_set_drvdata(pdev, drv_data);
return qcom_llcc_cfg_program(pdev);
}
EXPORT_SYMBOL_GPL(qcom_llcc_probe);
MODULE_LICENSE("GPL v2");


@@ -184,6 +184,7 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
device_initialize(&rmtfs_mem->dev);
rmtfs_mem->dev.parent = &pdev->dev;
rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups;
rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
rmtfs_mem->base = devm_memremap(&rmtfs_mem->dev, rmtfs_mem->addr,
rmtfs_mem->size, MEMREMAP_WC);
@@ -206,8 +207,6 @@ static int qcom_rmtfs_mem_probe(struct platform_device *pdev)
goto put_device;
}
rmtfs_mem->dev.release = qcom_rmtfs_mem_release_device;
ret = of_property_read_u32(node, "qcom,vmid", &vmid);
if (ret < 0 && ret != -EINVAL) {
dev_err(&pdev->dev, "failed to parse qcom,vmid\n");


@@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#ifndef __RPM_INTERNAL_H__
#define __RPM_INTERNAL_H__
#include <linux/bitmap.h>
#include <soc/qcom/tcs.h>
#define TCS_TYPE_NR 4
#define MAX_CMDS_PER_TCS 16
#define MAX_TCS_PER_TYPE 3
#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
#define MAX_TCS_SLOTS (MAX_CMDS_PER_TCS * MAX_TCS_PER_TYPE)
struct rsc_drv;
/**
* struct tcs_group: group of Trigger Command Sets (TCS) to send state requests
* to the controller
*
* @drv: the controller
* @type: type of the TCS in this group - active, sleep, wake
* @mask: mask of the TCSes relative to all the TCSes in the RSC
* @offset: start of the TCS group relative to the TCSes in the RSC
* @num_tcs: number of TCSes in this type
* @ncpt: number of commands in each TCS
* @lock: lock for synchronizing writes to this TCS
* @req: requests that are sent from the TCS
* @cmd_cache: flattened cache of cmds in sleep/wake TCS
* @slots: indicates which entries of @cmd_cache are occupied
*/
struct tcs_group {
struct rsc_drv *drv;
int type;
u32 mask;
u32 offset;
int num_tcs;
int ncpt;
spinlock_t lock;
const struct tcs_request *req[MAX_TCS_PER_TYPE];
u32 *cmd_cache;
DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
};
/**
* struct rpmh_request: the message to be sent to rpmh-rsc
*
* @msg: the request
* @cmd: the payload that will be part of the @msg
* @completion: triggered when request is done
* @dev: the device making the request
* @err: err return from the controller
* @needs_free: check to free dynamically allocated request object
*/
struct rpmh_request {
struct tcs_request msg;
struct tcs_cmd cmd[MAX_RPMH_PAYLOAD];
struct completion *completion;
const struct device *dev;
int err;
bool needs_free;
};
/**
* struct rpmh_ctrlr: our representation of the controller
*
* @cache: the list of cached requests
* @cache_lock: synchronize access to the cache data
* @dirty: was the cache updated since flush
* @batch_cache: Cache sleep and wake requests sent as batch
*/
struct rpmh_ctrlr {
struct list_head cache;
spinlock_t cache_lock;
bool dirty;
struct list_head batch_cache;
};
/**
* struct rsc_drv: the Direct Resource Voter (DRV) of the
* Resource State Coordinator controller (RSC)
*
* @name: controller identifier
* @tcs_base: start address of the TCS registers in this controller
* @id: instance id in the controller (Direct Resource Voter)
* @num_tcs: number of TCSes in this DRV
* @tcs: TCS groups
* @tcs_in_use: s/w state of the TCS
* @lock: synchronize state of the controller
* @client: handle to the DRV's client.
*/
struct rsc_drv {
const char *name;
void __iomem *tcs_base;
int id;
int num_tcs;
struct tcs_group tcs[TCS_TYPE_NR];
DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
spinlock_t lock;
struct rpmh_ctrlr client;
};
int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
const struct tcs_request *msg);
int rpmh_rsc_invalidate(struct rsc_drv *drv);
void rpmh_tx_done(const struct tcs_request *msg, int r);
#endif /* __RPM_INTERNAL_H__ */

drivers/soc/qcom/rpmh-rsc.c (new file, 693 lines)

@@ -0,0 +1,693 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME
#include <linux/atomic.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <soc/qcom/cmd-db.h>
#include <soc/qcom/tcs.h>
#include <dt-bindings/soc/qcom,rpmh-rsc.h>
#include "rpmh-internal.h"
#define CREATE_TRACE_POINTS
#include "trace-rpmh.h"
#define RSC_DRV_TCS_OFFSET 672
#define RSC_DRV_CMD_OFFSET 20
/* DRV Configuration Information Register */
#define DRV_PRNT_CHLD_CONFIG 0x0C
#define DRV_NUM_TCS_MASK 0x3F
#define DRV_NUM_TCS_SHIFT 6
#define DRV_NCPT_MASK 0x1F
#define DRV_NCPT_SHIFT 27
/* Register offsets */
#define RSC_DRV_IRQ_ENABLE 0x00
#define RSC_DRV_IRQ_STATUS 0x04
#define RSC_DRV_IRQ_CLEAR 0x08
#define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10
#define RSC_DRV_CONTROL 0x14
#define RSC_DRV_STATUS 0x18
#define RSC_DRV_CMD_ENABLE 0x1C
#define RSC_DRV_CMD_MSGID 0x30
#define RSC_DRV_CMD_ADDR 0x34
#define RSC_DRV_CMD_DATA 0x38
#define RSC_DRV_CMD_STATUS 0x3C
#define RSC_DRV_CMD_RESP_DATA 0x40
#define TCS_AMC_MODE_ENABLE BIT(16)
#define TCS_AMC_MODE_TRIGGER BIT(24)
/* TCS CMD register bit mask */
#define CMD_MSGID_LEN 8
#define CMD_MSGID_RESP_REQ BIT(8)
#define CMD_MSGID_WRITE BIT(16)
#define CMD_STATUS_ISSUED BIT(8)
#define CMD_STATUS_COMPL BIT(16)
static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id)
{
return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
RSC_DRV_CMD_OFFSET * cmd_id);
}
static void write_tcs_cmd(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id,
u32 data)
{
writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id +
RSC_DRV_CMD_OFFSET * cmd_id);
}
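/*
 * A worked example of the layout assumed above: command registers repeat
 * every RSC_DRV_CMD_OFFSET (20) bytes within a TCS, and TCSes repeat every
 * RSC_DRV_TCS_OFFSET (672) bytes, so e.g. RSC_DRV_CMD_ADDR for tcs_id = 1
 * and cmd_id = 2 is at tcs_base + 0x34 + 672 * 1 + 20 * 2.
 */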
static void write_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, u32 data)
{
writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id);
}
static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int tcs_id,
u32 data)
{
writel(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id);
for (;;) {
if (data == readl(drv->tcs_base + reg +
RSC_DRV_TCS_OFFSET * tcs_id))
break;
udelay(1);
}
}
static bool tcs_is_free(struct rsc_drv *drv, int tcs_id)
{
return !test_bit(tcs_id, drv->tcs_in_use) &&
read_tcs_reg(drv, RSC_DRV_STATUS, tcs_id, 0);
}
static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type)
{
return &drv->tcs[type];
}
static int tcs_invalidate(struct rsc_drv *drv, int type)
{
int m;
struct tcs_group *tcs;
tcs = get_tcs_of_type(drv, type);
spin_lock(&tcs->lock);
if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) {
spin_unlock(&tcs->lock);
return 0;
}
for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
if (!tcs_is_free(drv, m)) {
spin_unlock(&tcs->lock);
return -EAGAIN;
}
write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0);
}
bitmap_zero(tcs->slots, MAX_TCS_SLOTS);
spin_unlock(&tcs->lock);
return 0;
}
/**
* rpmh_rsc_invalidate - Invalidate sleep and wake TCSes
*
* @drv: the RSC controller
*/
int rpmh_rsc_invalidate(struct rsc_drv *drv)
{
int ret;
ret = tcs_invalidate(drv, SLEEP_TCS);
if (!ret)
ret = tcs_invalidate(drv, WAKE_TCS);
return ret;
}
static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
const struct tcs_request *msg)
{
int type, ret;
struct tcs_group *tcs;
switch (msg->state) {
case RPMH_ACTIVE_ONLY_STATE:
type = ACTIVE_TCS;
break;
case RPMH_WAKE_ONLY_STATE:
type = WAKE_TCS;
break;
case RPMH_SLEEP_STATE:
type = SLEEP_TCS;
break;
default:
return ERR_PTR(-EINVAL);
}
/*
* If we are making an active request on a RSC that does not have a
* dedicated TCS for active state use, then re-purpose a wake TCS to
* send active votes.
* NOTE: The driver must be aware that this RSC does not have a
* dedicated AMC, and therefore would invalidate the sleep and wake
* TCSes before making an active state request.
*/
tcs = get_tcs_of_type(drv, type);
if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) {
tcs = get_tcs_of_type(drv, WAKE_TCS);
if (tcs->num_tcs) {
ret = rpmh_rsc_invalidate(drv);
if (ret)
return ERR_PTR(ret);
}
}
return tcs;
}
static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv,
int tcs_id)
{
struct tcs_group *tcs;
int i;
for (i = 0; i < TCS_TYPE_NR; i++) {
tcs = &drv->tcs[i];
if (tcs->mask & BIT(tcs_id))
return tcs->req[tcs_id - tcs->offset];
}
return NULL;
}
/**
* tcs_tx_done: TX Done interrupt handler
*/
static irqreturn_t tcs_tx_done(int irq, void *p)
{
struct rsc_drv *drv = p;
int i, j, err = 0;
unsigned long irq_status;
const struct tcs_request *req;
struct tcs_cmd *cmd;
irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0);
for_each_set_bit(i, &irq_status, BITS_PER_LONG) {
req = get_req_from_tcs(drv, i);
if (!req) {
WARN_ON(1);
goto skip;
}
err = 0;
for (j = 0; j < req->num_cmds; j++) {
u32 sts;
cmd = &req->cmds[j];
sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, i, j);
if (!(sts & CMD_STATUS_ISSUED) ||
((req->wait_for_compl || cmd->wait) &&
!(sts & CMD_STATUS_COMPL))) {
pr_err("Incomplete request: %s: addr=%#x data=%#x",
drv->name, cmd->addr, cmd->data);
err = -EIO;
}
}
trace_rpmh_tx_done(drv, i, req, err);
skip:
/* Reclaim the TCS */
write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0);
write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i));
spin_lock(&drv->lock);
clear_bit(i, drv->tcs_in_use);
spin_unlock(&drv->lock);
if (req)
rpmh_tx_done(req, err);
}
return IRQ_HANDLED;
}
static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id,
const struct tcs_request *msg)
{
u32 msgid, cmd_msgid;
u32 cmd_enable = 0;
u32 cmd_complete;
struct tcs_cmd *cmd;
int i, j;
cmd_msgid = CMD_MSGID_LEN;
cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0;
cmd_msgid |= CMD_MSGID_WRITE;
cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
for (i = 0, j = cmd_id; i < msg->num_cmds; i++, j++) {
cmd = &msg->cmds[i];
cmd_enable |= BIT(j);
cmd_complete |= cmd->wait << j;
msgid = cmd_msgid;
msgid |= cmd->wait ? CMD_MSGID_RESP_REQ : 0;
write_tcs_cmd(drv, RSC_DRV_CMD_MSGID, tcs_id, j, msgid);
write_tcs_cmd(drv, RSC_DRV_CMD_ADDR, tcs_id, j, cmd->addr);
write_tcs_cmd(drv, RSC_DRV_CMD_DATA, tcs_id, j, cmd->data);
trace_rpmh_send_msg(drv, tcs_id, j, msgid, cmd);
}
write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, cmd_complete);
cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable);
}
static void __tcs_trigger(struct rsc_drv *drv, int tcs_id)
{
u32 enable;
/*
* HW req: Clear the DRV_CONTROL and enable TCS again
* While clearing ensure that the AMC mode trigger is cleared
* and then the mode enable is cleared.
*/
enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0);
enable &= ~TCS_AMC_MODE_TRIGGER;
write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
enable &= ~TCS_AMC_MODE_ENABLE;
write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
/* Enable the AMC mode on the TCS and then trigger the TCS */
enable = TCS_AMC_MODE_ENABLE;
write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
enable |= TCS_AMC_MODE_TRIGGER;
write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable);
}
static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs,
const struct tcs_request *msg)
{
unsigned long curr_enabled;
u32 addr;
int i, j, k;
int tcs_id = tcs->offset;
for (i = 0; i < tcs->num_tcs; i++, tcs_id++) {
if (tcs_is_free(drv, tcs_id))
continue;
curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0);
for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) {
addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, tcs_id, j);
for (k = 0; k < msg->num_cmds; k++) {
if (addr == msg->cmds[k].addr)
return -EBUSY;
}
}
}
return 0;
}
static int find_free_tcs(struct tcs_group *tcs)
{
int i;
for (i = 0; i < tcs->num_tcs; i++) {
if (tcs_is_free(tcs->drv, tcs->offset + i))
return tcs->offset + i;
}
return -EBUSY;
}
static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
{
struct tcs_group *tcs;
int tcs_id;
unsigned long flags;
int ret;
tcs = get_tcs_for_msg(drv, msg);
if (IS_ERR(tcs))
return PTR_ERR(tcs);
spin_lock_irqsave(&tcs->lock, flags);
spin_lock(&drv->lock);
/*
* The h/w does not like if we send a request to the same address,
* when one is already in-flight or being processed.
*/
ret = check_for_req_inflight(drv, tcs, msg);
if (ret) {
spin_unlock(&drv->lock);
goto done_write;
}
tcs_id = find_free_tcs(tcs);
if (tcs_id < 0) {
ret = tcs_id;
spin_unlock(&drv->lock);
goto done_write;
}
tcs->req[tcs_id - tcs->offset] = msg;
set_bit(tcs_id, drv->tcs_in_use);
spin_unlock(&drv->lock);
__tcs_buffer_write(drv, tcs_id, 0, msg);
__tcs_trigger(drv, tcs_id);
done_write:
spin_unlock_irqrestore(&tcs->lock, flags);
return ret;
}
/**
* rpmh_rsc_send_data: Validate the incoming message and write to the
* appropriate TCS block.
*
* @drv: the controller
* @msg: the data to be sent
*
* Return: 0 on success, -EINVAL on error.
* Note: This call blocks until valid data is written to the TCS.
*/
int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
{
int ret;
if (!msg || !msg->cmds || !msg->num_cmds ||
msg->num_cmds > MAX_RPMH_PAYLOAD) {
WARN_ON(1);
return -EINVAL;
}
do {
ret = tcs_write(drv, msg);
if (ret == -EBUSY) {
pr_info_ratelimited("TCS Busy, retrying RPMH message send: addr=%#x\n",
msg->cmds[0].addr);
udelay(10);
}
} while (ret == -EBUSY);
return ret;
}
static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
int len)
{
int i, j;
/* Check for already cached commands */
for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
if (tcs->cmd_cache[i] != cmd[0].addr)
continue;
if (i + len >= tcs->num_tcs * tcs->ncpt)
goto seq_err;
for (j = 0; j < len; j++) {
if (tcs->cmd_cache[i + j] != cmd[j].addr)
goto seq_err;
}
return i;
}
return -ENODATA;
seq_err:
WARN(1, "Message does not match previous sequence.\n");
return -EINVAL;
}
static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
int *tcs_id, int *cmd_id)
{
int slot, offset;
int i = 0;
/* Find if we already have the msg in our TCS */
slot = find_match(tcs, msg->cmds, msg->num_cmds);
if (slot >= 0)
goto copy_data;
/* Do over, until we can fit the full payload in a TCS */
do {
slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
i, msg->num_cmds, 0);
if (slot == tcs->num_tcs * tcs->ncpt)
return -ENOMEM;
i += tcs->ncpt;
} while (slot + msg->num_cmds - 1 >= i);
copy_data:
bitmap_set(tcs->slots, slot, msg->num_cmds);
/* Copy the addresses of the resources over to the slots */
for (i = 0; i < msg->num_cmds; i++)
tcs->cmd_cache[slot + i] = msg->cmds[i].addr;
offset = slot / tcs->ncpt;
*tcs_id = offset + tcs->offset;
*cmd_id = slot % tcs->ncpt;
return 0;
}
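/*
 * A worked example of the slot -> (tcs_id, cmd_id) mapping above: with
 * ncpt = 16 commands per TCS and tcs->offset = 3, slot 18 yields
 * tcs_id = 18 / 16 + 3 = 4 and cmd_id = 18 % 16 = 2.
 */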
static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
{
struct tcs_group *tcs;
int tcs_id = 0, cmd_id = 0;
unsigned long flags;
int ret;
tcs = get_tcs_for_msg(drv, msg);
if (IS_ERR(tcs))
return PTR_ERR(tcs);
spin_lock_irqsave(&tcs->lock, flags);
/* find the TCS id and the command in the TCS to write to */
ret = find_slots(tcs, msg, &tcs_id, &cmd_id);
if (!ret)
__tcs_buffer_write(drv, tcs_id, cmd_id, msg);
spin_unlock_irqrestore(&tcs->lock, flags);
return ret;
}
/**
* rpmh_rsc_write_ctrl_data: Write request to the controller
*
* @drv: the controller
* @msg: the data to be written to the controller
*
* There is no response returned for writing the request to the controller.
*/
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
{
if (!msg || !msg->cmds || !msg->num_cmds ||
msg->num_cmds > MAX_RPMH_PAYLOAD) {
pr_err("Payload error\n");
return -EINVAL;
}
/* Data sent to this API will not be sent immediately */
if (msg->state == RPMH_ACTIVE_ONLY_STATE)
return -EINVAL;
return tcs_ctrl_write(drv, msg);
}
static int rpmh_probe_tcs_config(struct platform_device *pdev,
struct rsc_drv *drv)
{
struct tcs_type_config {
u32 type;
u32 n;
} tcs_cfg[TCS_TYPE_NR] = { { 0 } };
struct device_node *dn = pdev->dev.of_node;
u32 config, max_tcs, ncpt, offset;
int i, ret, n, st = 0;
struct tcs_group *tcs;
struct resource *res;
void __iomem *base;
char drv_id[10] = {0};
snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(base))
return PTR_ERR(base);
ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset);
if (ret)
return ret;
drv->tcs_base = base + offset;
config = readl_relaxed(base + DRV_PRNT_CHLD_CONFIG);
max_tcs = config;
max_tcs &= DRV_NUM_TCS_MASK << (DRV_NUM_TCS_SHIFT * drv->id);
max_tcs = max_tcs >> (DRV_NUM_TCS_SHIFT * drv->id);
ncpt = config & (DRV_NCPT_MASK << DRV_NCPT_SHIFT);
ncpt = ncpt >> DRV_NCPT_SHIFT;
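	/*
	 * A worked decode of the above: each DRV's TCS count is the 6-bit
	 * field at bit 6 * drv->id of the config word (bits [17:12] for
	 * drv-id 2), and ncpt, the number of commands per TCS, is the
	 * 5-bit field at bits [31:27].
	 */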
n = of_property_count_u32_elems(dn, "qcom,tcs-config");
if (n != 2 * TCS_TYPE_NR)
return -EINVAL;
for (i = 0; i < TCS_TYPE_NR; i++) {
ret = of_property_read_u32_index(dn, "qcom,tcs-config",
i * 2, &tcs_cfg[i].type);
if (ret)
return ret;
if (tcs_cfg[i].type >= TCS_TYPE_NR)
return -EINVAL;
ret = of_property_read_u32_index(dn, "qcom,tcs-config",
i * 2 + 1, &tcs_cfg[i].n);
if (ret)
return ret;
if (tcs_cfg[i].n > MAX_TCS_PER_TYPE)
return -EINVAL;
}
for (i = 0; i < TCS_TYPE_NR; i++) {
tcs = &drv->tcs[tcs_cfg[i].type];
if (tcs->drv)
return -EINVAL;
tcs->drv = drv;
tcs->type = tcs_cfg[i].type;
tcs->num_tcs = tcs_cfg[i].n;
tcs->ncpt = ncpt;
spin_lock_init(&tcs->lock);
if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
continue;
if (st + tcs->num_tcs > max_tcs ||
st + tcs->num_tcs >= BITS_PER_BYTE * sizeof(tcs->mask))
return -EINVAL;
tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
tcs->offset = st;
st += tcs->num_tcs;
/*
* Allocate memory to cache sleep and wake requests to
* avoid reading TCS register memory.
*/
if (tcs->type == ACTIVE_TCS)
continue;
tcs->cmd_cache = devm_kcalloc(&pdev->dev,
tcs->num_tcs * ncpt, sizeof(u32),
GFP_KERNEL);
if (!tcs->cmd_cache)
return -ENOMEM;
}
drv->num_tcs = st;
return 0;
}
static int rpmh_rsc_probe(struct platform_device *pdev)
{
struct device_node *dn = pdev->dev.of_node;
struct rsc_drv *drv;
int ret, irq;
/*
* Even though RPMh doesn't directly use cmd-db, all of its children
* do. To avoid adding this check to our children we'll do it now.
*/
ret = cmd_db_ready();
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev, "Command DB not available (%d)\n",
ret);
return ret;
}
drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
if (!drv)
return -ENOMEM;
ret = of_property_read_u32(dn, "qcom,drv-id", &drv->id);
if (ret)
return ret;
drv->name = of_get_property(dn, "label", NULL);
if (!drv->name)
drv->name = dev_name(&pdev->dev);
ret = rpmh_probe_tcs_config(pdev, drv);
if (ret)
return ret;
spin_lock_init(&drv->lock);
bitmap_zero(drv->tcs_in_use, MAX_TCS_NR);
irq = platform_get_irq(pdev, drv->id);
if (irq < 0)
return irq;
ret = devm_request_irq(&pdev->dev, irq, tcs_tx_done,
IRQF_TRIGGER_HIGH | IRQF_NO_SUSPEND,
drv->name, drv);
if (ret)
return ret;
/* Enable the active TCS to send requests immediately */
write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, drv->tcs[ACTIVE_TCS].mask);
spin_lock_init(&drv->client.cache_lock);
INIT_LIST_HEAD(&drv->client.cache);
INIT_LIST_HEAD(&drv->client.batch_cache);
dev_set_drvdata(&pdev->dev, drv);
return devm_of_platform_populate(&pdev->dev);
}
static const struct of_device_id rpmh_drv_match[] = {
{ .compatible = "qcom,rpmh-rsc", },
{ }
};
static struct platform_driver rpmh_driver = {
.probe = rpmh_rsc_probe,
.driver = {
.name = "rpmh",
.of_match_table = rpmh_drv_match,
},
};
static int __init rpmh_driver_init(void)
{
return platform_driver_register(&rpmh_driver);
}
arch_initcall(rpmh_driver_init);

drivers/soc/qcom/rpmh.c (new file, 513 lines)

@@ -0,0 +1,513 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <soc/qcom/rpmh.h>
#include "rpmh-internal.h"
#define RPMH_TIMEOUT_MS msecs_to_jiffies(10000)
#define DEFINE_RPMH_MSG_ONSTACK(dev, s, q, name) \
struct rpmh_request name = { \
.msg = { \
.state = s, \
.cmds = name.cmd, \
.num_cmds = 0, \
.wait_for_compl = true, \
}, \
.cmd = { { 0 } }, \
.completion = q, \
.dev = dev, \
.needs_free = false, \
}
#define ctrlr_to_drv(ctrlr) container_of(ctrlr, struct rsc_drv, client)
/**
* struct cache_req: the request object for caching
*
* @addr: the address of the resource
* @sleep_val: the sleep vote
* @wake_val: the wake vote
* @list: linked list obj
*/
struct cache_req {
u32 addr;
u32 sleep_val;
u32 wake_val;
struct list_head list;
};
/**
* struct batch_cache_req - An entry in our batch cache
*
* @list: linked list obj
* @count: number of messages
* @rpm_msgs: the messages
*/
struct batch_cache_req {
struct list_head list;
int count;
struct rpmh_request rpm_msgs[];
};
static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
{
struct rsc_drv *drv = dev_get_drvdata(dev->parent);
return &drv->client;
}
void rpmh_tx_done(const struct tcs_request *msg, int r)
{
struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
msg);
struct completion *compl = rpm_msg->completion;
rpm_msg->err = r;
if (r)
dev_err(rpm_msg->dev, "RPMH TX fail in msg addr=%#x, err=%d\n",
rpm_msg->msg.cmds[0].addr, r);
if (!compl)
goto exit;
/* Signal the blocking thread we are done */
complete(compl);
exit:
if (rpm_msg->needs_free)
kfree(rpm_msg);
}
static struct cache_req *__find_req(struct rpmh_ctrlr *ctrlr, u32 addr)
{
struct cache_req *p, *req = NULL;
list_for_each_entry(p, &ctrlr->cache, list) {
if (p->addr == addr) {
req = p;
break;
}
}
return req;
}
static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
enum rpmh_state state,
struct tcs_cmd *cmd)
{
struct cache_req *req;
unsigned long flags;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
req = __find_req(ctrlr, cmd->addr);
if (req)
goto existing;
req = kzalloc(sizeof(*req), GFP_ATOMIC);
if (!req) {
req = ERR_PTR(-ENOMEM);
goto unlock;
}
req->addr = cmd->addr;
req->sleep_val = req->wake_val = UINT_MAX;
INIT_LIST_HEAD(&req->list);
list_add_tail(&req->list, &ctrlr->cache);
existing:
switch (state) {
case RPMH_ACTIVE_ONLY_STATE:
if (req->sleep_val != UINT_MAX)
req->wake_val = cmd->data;
break;
case RPMH_WAKE_ONLY_STATE:
req->wake_val = cmd->data;
break;
case RPMH_SLEEP_STATE:
req->sleep_val = cmd->data;
break;
default:
break;
}
ctrlr->dirty = true;
unlock:
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
return req;
}
/**
* __rpmh_write: Cache and send the RPMH request
*
* @dev: The device making the request
* @state: Active/Sleep request type
* @rpm_msg: The data that needs to be sent (cmds).
*
* Cache the RPMH request and send if the state is ACTIVE_ONLY.
* SLEEP/WAKE_ONLY requests are not sent to the controller at
* this time. Use rpmh_flush() to send them to the controller.
*/
static int __rpmh_write(const struct device *dev, enum rpmh_state state,
struct rpmh_request *rpm_msg)
{
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
int ret = -EINVAL;
struct cache_req *req;
int i;
rpm_msg->msg.state = state;
/* Cache the request in our store and link the payload */
for (i = 0; i < rpm_msg->msg.num_cmds; i++) {
req = cache_rpm_request(ctrlr, state, &rpm_msg->msg.cmds[i]);
if (IS_ERR(req))
return PTR_ERR(req);
}
rpm_msg->msg.state = state;
if (state == RPMH_ACTIVE_ONLY_STATE) {
WARN_ON(irqs_disabled());
ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg);
} else {
ret = rpmh_rsc_write_ctrl_data(ctrlr_to_drv(ctrlr),
&rpm_msg->msg);
/* Clean up our call by spoofing tx_done */
rpmh_tx_done(&rpm_msg->msg, ret);
}
return ret;
}
static int __fill_rpmh_msg(struct rpmh_request *req, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n)
{
if (!cmd || !n || n > MAX_RPMH_PAYLOAD)
return -EINVAL;
memcpy(req->cmd, cmd, n * sizeof(*cmd));
req->msg.state = state;
req->msg.cmds = req->cmd;
req->msg.num_cmds = n;
return 0;
}
/**
* rpmh_write_async: Write a set of RPMH commands
*
* @dev: The device making the request
* @state: Active/sleep set
* @cmd: The payload data
* @n: The number of elements in payload
*
* Write a set of RPMH commands; the order of commands is maintained
* and they will be sent as a single shot.
*/
int rpmh_write_async(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n)
{
struct rpmh_request *rpm_msg;
int ret;
rpm_msg = kzalloc(sizeof(*rpm_msg), GFP_ATOMIC);
if (!rpm_msg)
return -ENOMEM;
rpm_msg->needs_free = true;
ret = __fill_rpmh_msg(rpm_msg, state, cmd, n);
if (ret) {
kfree(rpm_msg);
return ret;
}
return __rpmh_write(dev, state, rpm_msg);
}
EXPORT_SYMBOL(rpmh_write_async);
/**
* rpmh_write: Write a set of RPMH commands and block until response
*
* @dev: The device making the request
* @state: Active/sleep set
* @cmd: The payload data
* @n: The number of elements in @cmd
*
* May sleep. Do not call from atomic contexts.
*/
int rpmh_write(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n)
{
DECLARE_COMPLETION_ONSTACK(compl);
DEFINE_RPMH_MSG_ONSTACK(dev, state, &compl, rpm_msg);
int ret;
if (!cmd || !n || n > MAX_RPMH_PAYLOAD)
return -EINVAL;
memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
rpm_msg.msg.num_cmds = n;
ret = __rpmh_write(dev, state, &rpm_msg);
if (ret)
return ret;
ret = wait_for_completion_timeout(&compl, RPMH_TIMEOUT_MS);
WARN_ON(!ret);
return (ret > 0) ? 0 : -ETIMEDOUT;
}
EXPORT_SYMBOL(rpmh_write);
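/*
 * A sketch of typical usage (the resource address and data values are
 * invented): an active vote is sent to the hardware immediately, while a
 * sleep vote is staged and reaches the controller via rpmh_flush().
 *
 *	struct tcs_cmd cmd = { .addr = 0x30010, .data = 0x1 };
 *	int ret;
 *
 *	ret = rpmh_write(dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1);
 *	cmd.data = 0x0;
 *	ret = rpmh_write(dev, RPMH_SLEEP_STATE, &cmd, 1);
 */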
static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
{
unsigned long flags;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_add_tail(&req->list, &ctrlr->batch_cache);
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}
static int flush_batch(struct rpmh_ctrlr *ctrlr)
{
struct batch_cache_req *req;
const struct rpmh_request *rpm_msg;
unsigned long flags;
int ret = 0;
int i;
/* Send Sleep/Wake requests to the controller, expect no response */
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry(req, &ctrlr->batch_cache, list) {
for (i = 0; i < req->count; i++) {
rpm_msg = req->rpm_msgs + i;
ret = rpmh_rsc_write_ctrl_data(ctrlr_to_drv(ctrlr),
&rpm_msg->msg);
if (ret)
break;
}
}
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
return ret;
}
static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
{
struct batch_cache_req *req, *tmp;
unsigned long flags;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
kfree(req);
INIT_LIST_HEAD(&ctrlr->batch_cache);
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}
/**
* rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
* batch to finish.
*
* @dev: the device making the request
* @state: Active/sleep set
* @cmd: The payload data
* @n: The array of count of elements in each batch, 0 terminated.
*
* Write a request to the RSC controller without caching. If the request
* state is ACTIVE, then the requests are treated as completion requests
* and sent to the controller immediately. The function waits until all the
* commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
* request is sent as fire-n-forget and no ack is expected.
*
* May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
*/
int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 *n)
{
struct batch_cache_req *req;
struct rpmh_request *rpm_msgs;
DECLARE_COMPLETION_ONSTACK(compl);
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
unsigned long time_left;
int count = 0;
int ret, i, j;
if (!cmd || !n)
return -EINVAL;
while (n[count] > 0)
count++;
if (!count)
return -EINVAL;
req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
GFP_ATOMIC);
if (!req)
return -ENOMEM;
req->count = count;
rpm_msgs = req->rpm_msgs;
for (i = 0; i < count; i++) {
__fill_rpmh_msg(rpm_msgs + i, state, cmd, n[i]);
cmd += n[i];
}
if (state != RPMH_ACTIVE_ONLY_STATE) {
cache_batch(ctrlr, req);
return 0;
}
for (i = 0; i < count; i++) {
rpm_msgs[i].completion = &compl;
ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
if (ret) {
pr_err("Error(%d) sending RPMH message addr=%#x\n",
ret, rpm_msgs[i].msg.cmds[0].addr);
for (j = i; j < count; j++)
rpmh_tx_done(&rpm_msgs[j].msg, ret);
break;
}
}
time_left = RPMH_TIMEOUT_MS;
for (i = 0; i < count; i++) {
time_left = wait_for_completion_timeout(&compl, time_left);
if (!time_left) {
/*
* Better hope they never finish because they'll signal
* the completion on our stack and that's bad once
* we've returned from the function.
*/
WARN_ON(1);
ret = -ETIMEDOUT;
goto exit;
}
}
exit:
kfree(req);
return ret;
}
EXPORT_SYMBOL(rpmh_write_batch);
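/*
 * A sketch of a batch call (values invented): two batches of two and one
 * commands respectively; the count array is zero-terminated, as described
 * in the kernel-doc above.
 *
 *	struct tcs_cmd cmds[3] = { ... };
 *	u32 n[] = { 2, 1, 0 };
 *
 *	ret = rpmh_write_batch(dev, RPMH_ACTIVE_ONLY_STATE, cmds, n);
 */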
static int is_req_valid(struct cache_req *req)
{
return (req->sleep_val != UINT_MAX &&
req->wake_val != UINT_MAX &&
req->sleep_val != req->wake_val);
}
static int send_single(const struct device *dev, enum rpmh_state state,
u32 addr, u32 data)
{
DEFINE_RPMH_MSG_ONSTACK(dev, state, NULL, rpm_msg);
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
/* Wake sets are always complete and sleep sets are not */
rpm_msg.msg.wait_for_compl = (state == RPMH_WAKE_ONLY_STATE);
rpm_msg.cmd[0].addr = addr;
rpm_msg.cmd[0].data = data;
rpm_msg.msg.num_cmds = 1;
return rpmh_rsc_write_ctrl_data(ctrlr_to_drv(ctrlr), &rpm_msg.msg);
}
/**
* rpmh_flush: Flushes the buffered active and sleep sets to TCS
*
* @dev: The device making the request
*
* Return: -EBUSY if the controller is busy, probably waiting on a response
* to a RPMH request sent earlier.
*
* This function is always called from the sleep code from the last CPU
* that is powering down the entire system. Since no other RPMH API would be
* executing at this time, it is safe to run lockless.
*/
int rpmh_flush(const struct device *dev)
{
struct cache_req *p;
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
int ret;
if (!ctrlr->dirty) {
pr_debug("Skipping flush, TCS has latest data.\n");
return 0;
}
/* First flush the cached batch requests */
ret = flush_batch(ctrlr);
if (ret)
return ret;
/*
* Nobody else should be calling this function other than system PM,
* hence we can run without locks.
*/
list_for_each_entry(p, &ctrlr->cache, list) {
if (!is_req_valid(p)) {
pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
__func__, p->addr, p->sleep_val, p->wake_val);
continue;
}
ret = send_single(dev, RPMH_SLEEP_STATE, p->addr, p->sleep_val);
if (ret)
return ret;
ret = send_single(dev, RPMH_WAKE_ONLY_STATE,
p->addr, p->wake_val);
if (ret)
return ret;
}
ctrlr->dirty = false;
return 0;
}
EXPORT_SYMBOL(rpmh_flush);
/**
* rpmh_invalidate: Invalidate all sleep and wake sets
*
* @dev: The device making the request
*
* Invalidate the sleep and wake values in the TCS blocks.
*/
int rpmh_invalidate(const struct device *dev)
{
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
int ret;
invalidate_batch(ctrlr);
ctrlr->dirty = true;
do {
ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
} while (ret == -EAGAIN);
return ret;
}
EXPORT_SYMBOL(rpmh_invalidate);


@@ -364,11 +364,6 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem,
end = phdr_to_last_uncached_entry(phdr);
cached = phdr_to_last_cached_entry(phdr);
if (smem->global_partition) {
dev_err(smem->dev, "Already found the global partition\n");
return -EINVAL;
}
while (hdr < end) {
if (hdr->canary != SMEM_PRIVATE_CANARY)
goto bad_canary;
@@ -736,6 +731,11 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem)
bool found = false;
int i;
if (smem->global_partition) {
dev_err(smem->dev, "Already found the global partition\n");
return -EINVAL;
}
ptable = qcom_smem_get_ptable(smem);
if (IS_ERR(ptable))
return PTR_ERR(ptable);


@@ -0,0 +1,82 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#if !defined(_TRACE_RPMH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_RPMH_H
#undef TRACE_SYSTEM
#define TRACE_SYSTEM rpmh
#include <linux/tracepoint.h>
#include "rpmh-internal.h"
TRACE_EVENT(rpmh_tx_done,
TP_PROTO(struct rsc_drv *d, int m, const struct tcs_request *r, int e),
TP_ARGS(d, m, r, e),
TP_STRUCT__entry(
__string(name, d->name)
__field(int, m)
__field(u32, addr)
__field(u32, data)
__field(int, err)
),
TP_fast_assign(
__assign_str(name, d->name);
__entry->m = m;
__entry->addr = r->cmds[0].addr;
__entry->data = r->cmds[0].data;
__entry->err = e;
),
TP_printk("%s: ack: tcs-m: %d addr: %#x data: %#x errno: %d",
__get_str(name), __entry->m, __entry->addr, __entry->data,
__entry->err)
);
TRACE_EVENT(rpmh_send_msg,
TP_PROTO(struct rsc_drv *d, int m, int n, u32 h,
const struct tcs_cmd *c),
TP_ARGS(d, m, n, h, c),
TP_STRUCT__entry(
__string(name, d->name)
__field(int, m)
__field(int, n)
__field(u32, hdr)
__field(u32, addr)
__field(u32, data)
__field(bool, wait)
),
TP_fast_assign(
__assign_str(name, d->name);
__entry->m = m;
__entry->n = n;
__entry->hdr = h;
__entry->addr = c->addr;
__entry->data = c->data;
__entry->wait = c->wait;
),
TP_printk("%s: send-msg: tcs(m): %d cmd(n): %d msgid: %#x addr: %#x data: %#x complete: %d",
__get_str(name), __entry->m, __entry->n, __entry->hdr,
__entry->addr, __entry->data, __entry->wait)
);
#endif /* _TRACE_RPMH_H */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace-rpmh
#include <trace/define_trace.h>


@@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#ifndef __DT_QCOM_RPMH_RSC_H__
#define __DT_QCOM_RPMH_RSC_H__
#define SLEEP_TCS 0
#define WAKE_TCS 1
#define ACTIVE_TCS 2
#define CONTROL_TCS 3
#endif /* __DT_QCOM_RPMH_RSC_H__ */


@@ -87,6 +87,10 @@ static inline int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr,
static inline int
qcom_scm_pas_auth_and_reset(u32 peripheral) { return -ENODEV; }
static inline int qcom_scm_pas_shutdown(u32 peripheral) { return -ENODEV; }
static inline int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
unsigned int *src,
struct qcom_scm_vmperm *newvm,
int dest_cnt) { return -ENODEV; }
static inline void qcom_scm_cpu_power_down(u32 flags) {}
static inline u32 qcom_scm_get_version(void) { return 0; }
static inline u32


@ -0,0 +1,180 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
*
*/
#include <linux/platform_device.h>
#ifndef __LLCC_QCOM__
#define __LLCC_QCOM__
#define LLCC_CPUSS 1
#define LLCC_VIDSC0 2
#define LLCC_VIDSC1 3
#define LLCC_ROTATOR 4
#define LLCC_VOICE 5
#define LLCC_AUDIO 6
#define LLCC_MDMHPGRW 7
#define LLCC_MDM 8
#define LLCC_CMPT 10
#define LLCC_GPUHTW 11
#define LLCC_GPU 12
#define LLCC_MMUHWT 13
#define LLCC_CMPTDMA 15
#define LLCC_DISP 16
#define LLCC_VIDFW 17
#define LLCC_MDMHPFX 20
#define LLCC_MDMPNG 21
#define LLCC_AUDHW 22
/**
* llcc_slice_desc - Cache slice descriptor
* @slice_id: llcc slice id
* @slice_size: Size allocated for the llcc slice
*/
struct llcc_slice_desc {
u32 slice_id;
size_t slice_size;
};
/**
* llcc_slice_config - Data associated with the llcc slice
* @usecase_id: usecase id for which the llcc slice is used
* @slice_id: llcc slice id assigned to each slice
* @max_cap: maximum capacity of the llcc slice
* @priority: priority of the llcc slice
 * @fixed_size: when set, the llcc slice cannot grow beyond its configured size
* @bonus_ways: bonus ways associated with llcc slice
* @res_ways: reserved ways associated with llcc slice
* @cache_mode: mode of the llcc slice
* @probe_target_ways: Probe only reserved and bonus ways on a cache miss
* @dis_cap_alloc: Disable capacity based allocation
* @retain_on_pc: Retain through power collapse
* @activate_on_init: activate the slice on init
*/
struct llcc_slice_config {
u32 usecase_id;
u32 slice_id;
u32 max_cap;
u32 priority;
bool fixed_size;
u32 bonus_ways;
u32 res_ways;
u32 cache_mode;
u32 probe_target_ways;
bool dis_cap_alloc;
bool retain_on_pc;
bool activate_on_init;
};
/**
* llcc_drv_data - Data associated with the llcc driver
* @regmap: regmap associated with the llcc device
* @cfg: pointer to the data structure for slice configuration
* @lock: mutex associated with each slice
* @cfg_size: size of the config data table
* @max_slices: max slices as read from device tree
* @bcast_off: Offset of the broadcast bank
* @num_banks: Number of llcc banks
* @bitmap: Bit map to track the active slice ids
* @offsets: Pointer to the bank offsets array
*/
struct llcc_drv_data {
struct regmap *regmap;
const struct llcc_slice_config *cfg;
struct mutex lock;
u32 cfg_size;
u32 max_slices;
u32 bcast_off;
u32 num_banks;
unsigned long *bitmap;
u32 *offsets;
};
#if IS_ENABLED(CONFIG_QCOM_LLCC)
/**
* llcc_slice_getd - get llcc slice descriptor
* @uid: usecase_id of the client
*/
struct llcc_slice_desc *llcc_slice_getd(u32 uid);
/**
 * llcc_slice_putd - put (release) the llcc slice descriptor
* @desc: Pointer to llcc slice descriptor
*/
void llcc_slice_putd(struct llcc_slice_desc *desc);
/**
* llcc_get_slice_id - get slice id
* @desc: Pointer to llcc slice descriptor
*/
int llcc_get_slice_id(struct llcc_slice_desc *desc);
/**
* llcc_get_slice_size - llcc slice size
* @desc: Pointer to llcc slice descriptor
*/
size_t llcc_get_slice_size(struct llcc_slice_desc *desc);
/**
* llcc_slice_activate - Activate the llcc slice
* @desc: Pointer to llcc slice descriptor
*/
int llcc_slice_activate(struct llcc_slice_desc *desc);
/**
* llcc_slice_deactivate - Deactivate the llcc slice
* @desc: Pointer to llcc slice descriptor
*/
int llcc_slice_deactivate(struct llcc_slice_desc *desc);
/**
* qcom_llcc_probe - program the sct table
* @pdev: platform device pointer
* @table: soc sct table
* @sz: Size of the config table
*/
int qcom_llcc_probe(struct platform_device *pdev,
const struct llcc_slice_config *table, u32 sz);
#else
static inline struct llcc_slice_desc *llcc_slice_getd(u32 uid)
{
return NULL;
}
static inline void llcc_slice_putd(struct llcc_slice_desc *desc)
{
}
static inline int llcc_get_slice_id(struct llcc_slice_desc *desc)
{
return -EINVAL;
}
static inline size_t llcc_get_slice_size(struct llcc_slice_desc *desc)
{
return 0;
}
static inline int llcc_slice_activate(struct llcc_slice_desc *desc)
{
return -EINVAL;
}
static inline int llcc_slice_deactivate(struct llcc_slice_desc *desc)
{
return -EINVAL;
}
static inline int qcom_llcc_probe(struct platform_device *pdev,
const struct llcc_slice_config *table, u32 sz)
{
return -ENODEV;
}
static inline int qcom_llcc_remove(struct platform_device *pdev)
{
return -ENODEV;
}
#endif
#endif
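For reference, a hedged sketch of how a client might drive this API end to end, assuming the header lands as <linux/soc/qcom/llcc-qcom.h> and using LLCC_GPU purely as an example usecase id:

#include <linux/err.h>
#include <linux/printk.h>
#include <linux/soc/qcom/llcc-qcom.h>	/* assumed install path of this header */

static int claim_gpu_llcc_slice(void)
{
	struct llcc_slice_desc *slice;
	int ret;

	/* Look up the slice assigned to this usecase id. */
	slice = llcc_slice_getd(LLCC_GPU);
	if (IS_ERR_OR_NULL(slice))
		return -ENODEV;

	/* Turn the slice on; pair with llcc_slice_deactivate() later. */
	ret = llcc_slice_activate(slice);
	if (ret) {
		llcc_slice_putd(slice);
		return ret;
	}

	pr_info("llcc slice %d active, %zu bytes\n",
		llcc_get_slice_id(slice), llcc_get_slice_size(slice));
	return 0;
}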

include/soc/qcom/rpmh.h

@ -0,0 +1,51 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#ifndef __SOC_QCOM_RPMH_H__
#define __SOC_QCOM_RPMH_H__
#include <soc/qcom/tcs.h>
#include <linux/platform_device.h>
#if IS_ENABLED(CONFIG_QCOM_RPMH)
int rpmh_write(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n);
int rpmh_write_async(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n);
int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 *n);
int rpmh_flush(const struct device *dev);
int rpmh_invalidate(const struct device *dev);
#else
static inline int rpmh_write(const struct device *dev, enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n)
{ return -ENODEV; }
static inline int rpmh_write_async(const struct device *dev,
enum rpmh_state state,
const struct tcs_cmd *cmd, u32 n)
{ return -ENODEV; }
static inline int rpmh_write_batch(const struct device *dev,
enum rpmh_state state,
const struct tcs_cmd *cmd, u32 *n)
{ return -ENODEV; }
static inline int rpmh_flush(const struct device *dev)
{ return -ENODEV; }
static inline int rpmh_invalidate(const struct device *dev)
{ return -ENODEV; }
#endif /* CONFIG_QCOM_RPMH */
#endif /* __SOC_QCOM_RPMH_H__ */
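A short sketch of the active-mode path, again with placeholder addresses: rpmh_write() blocks until the accelerator acknowledges the request, while rpmh_write_async() returns once the request is queued and completion is handled internally.

#include <linux/kernel.h>
#include <soc/qcom/rpmh.h>

static int bump_perf_vote(const struct device *dev)
{
	struct tcs_cmd cmds[] = {
		{ .addr = 0x50000, .data = 0x3 },		/* hypothetical */
		{ .addr = 0x50004, .data = 0x1, .wait = 1 },	/* hypothetical */
	};

	/* Active requests are aggregated immediately; no flush needed. */
	return rpmh_write_async(dev, RPMH_ACTIVE_ONLY_STATE, cmds,
				ARRAY_SIZE(cmds));
}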

include/soc/qcom/tcs.h

@ -0,0 +1,56 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*/
#ifndef __SOC_QCOM_TCS_H__
#define __SOC_QCOM_TCS_H__
#define MAX_RPMH_PAYLOAD 16
/**
* rpmh_state: state for the request
*
* RPMH_SLEEP_STATE: State of the resource when the processor subsystem
* is powered down. There is no client using the
* resource actively.
* RPMH_WAKE_ONLY_STATE: Resume resource state to the value previously
* requested before the processor was powered down.
* RPMH_ACTIVE_ONLY_STATE: Active or AMC mode requests. Resource state
* is aggregated immediately.
*/
enum rpmh_state {
RPMH_SLEEP_STATE,
RPMH_WAKE_ONLY_STATE,
RPMH_ACTIVE_ONLY_STATE,
};
/**
* struct tcs_cmd: an individual request to RPMH.
*
 * @addr: the address of the resource: slv_id in bits [18:16], register offset in bits [15:0]
* @data: the resource state request
* @wait: wait for this request to be complete before sending the next
*/
struct tcs_cmd {
u32 addr;
u32 data;
u32 wait;
};
/**
* struct tcs_request: A set of tcs_cmds sent together in a TCS
*
* @state: state for the request.
* @wait_for_compl: wait until we get a response from the h/w accelerator
* @num_cmds: the number of @cmds in this request
* @cmds: an array of tcs_cmds
*/
struct tcs_request {
enum rpmh_state state;
u32 wait_for_compl;
u32 num_cmds;
struct tcs_cmd *cmds;
};
#endif /* __SOC_QCOM_TCS_H__ */
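Given the @addr layout above, building a command address reduces to a shift-and-mask. A hypothetical helper, not part of this series, with the field positions taken from the comment:

static inline u32 tcs_cmd_addr(u32 slv_id, u32 offset)
{
	/* slv_id in bits [18:16], register offset in bits [15:0] */
	return ((slv_id & 0x7) << 16) | (offset & 0xffff);
}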