Char/Misc drivers for 4.14-rc1
Here is the big char/misc driver update for 4.14-rc1. Lots of different
stuff in here, it's been an active development cycle for some reason.
Highlights are:

  - updated binder driver, this brings binder up to date with what
    shipped in the Android O release, plus some more changes that
    happened since then that are in the Android development trees.
  - coresight updates and fixes
  - mux driver file renames to be a bit "nicer"
  - intel_th driver updates
  - normal set of hyper-v updates and changes
  - small fpga subsystem and driver updates
  - lots of const code changes all over the driver trees
  - extcon driver updates
  - fmc driver subsystem updates
  - w1 subsystem minor reworks and new features and drivers added
  - spmi driver updates

Plus a smattering of other minor driver updates and fixes.

All of these have been in linux-next with no reported issues for a
while.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

-----BEGIN PGP SIGNATURE-----

iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCWa1+Ew8cZ3JlZ0Brcm9h
aC5jb20ACgkQMUfUDdst+yl26wCgquufNylfhxr65NbJrovduJYzRnUAniCivXg8
bePIh/JI5WxWoHK+wEbY
=hYWx
-----END PGP SIGNATURE-----

Merge tag 'char-misc-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big char/misc driver update for 4.14-rc1. Lots of
  different stuff in here, it's been an active development cycle for
  some reason. Highlights are:

   - updated binder driver, this brings binder up to date with what
     shipped in the Android O release, plus some more changes that
     happened since then that are in the Android development trees.
   - coresight updates and fixes
   - mux driver file renames to be a bit "nicer"
   - intel_th driver updates
   - normal set of hyper-v updates and changes
   - small fpga subsystem and driver updates
   - lots of const code changes all over the driver trees
   - extcon driver updates
   - fmc driver subsystem updates
   - w1 subsystem minor reworks and new features and drivers added
   - spmi driver updates

  Plus a smattering of other minor driver updates and fixes.

  All of these have been in linux-next with no reported issues for a
  while"

* tag 'char-misc-4.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (244 commits)
  ANDROID: binder: don't queue async transactions to thread.
  ANDROID: binder: don't enqueue death notifications to thread todo.
  ANDROID: binder: Don't BUG_ON(!spin_is_locked()).
  ANDROID: binder: Add BINDER_GET_NODE_DEBUG_INFO ioctl
  ANDROID: binder: push new transactions to waiting threads.
  ANDROID: binder: remove proc waitqueue
  android: binder: Add page usage in binder stats
  android: binder: fixup crash introduced by moving buffer hdr
  drivers: w1: add hwmon temp support for w1_therm
  drivers: w1: refactor w1_slave_show to make the temp reading functionality separate
  drivers: w1: add hwmon support structures
  eeprom: idt_89hpesx: Support both ACPI and OF probing
  mcb: Fix an error handling path in 'chameleon_parse_cells()'
  MCB: add support for SC31 to mcb-lpc
  mux: make device_type const
  char: virtio: constify attribute_group structures.
  Documentation/ABI: document the nvmem sysfs files
  lkdtm: fix spelling mistake: "incremeted" -> "incremented"
  perf: cs-etm: Fix ETMv4 CONFIGR entry in perf.data file
  nvmem: include linux/err.h from header
  ...
commit bafb0762cb
@@ -0,0 +1,19 @@
What:           /sys/bus/nvmem/devices/.../nvmem
Date:           July 2015
KernelVersion:  4.2
Contact:        Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Description:
                This file allows user to read/write the raw NVMEM contents.
                Permissions for write to this file depends on the nvmem
                provider configuration.

                ex:
                hexdump /sys/bus/nvmem/devices/qfprom0/nvmem

                0000000 0000 0000 0000 0000 0000 0000 0000 0000
                *
                00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
                0000000 0000 0000 0000 0000 0000 0000 0000 0000
                ...
                *
                0001000
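As an aside (not part of this series): a minimal user-space sketch that reads the raw contents documented above through the same sysfs file. The "qfprom0" device name is taken from the hexdump example and will differ between systems.

#include <stdio.h>
#include <stdlib.h>

/* Hex-dump an NVMEM provider exposed through sysfs. */
int main(void)
{
        FILE *f = fopen("/sys/bus/nvmem/devices/qfprom0/nvmem", "rb");
        unsigned char buf[16];
        size_t n, off = 0;

        if (!f) {
                perror("fopen");
                return EXIT_FAILURE;
        }
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
                printf("%07zx:", off);
                for (size_t i = 0; i < n; i++)
                        printf(" %02x", buf[i]);
                putchar('\n');
                off += n;
        }
        fclose(f);
        return 0;
}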
@@ -45,6 +45,8 @@ Contact: thunderbolt-software@lists.01.org
Description:    When a devices supports Thunderbolt secure connect it will
                have this attribute. Writing 32 byte hex string changes
                authorization to use the secure connection method instead.
                Writing an empty string clears the key and regular connection
                method can be used again.

What:           /sys/bus/thunderbolt/devices/.../device
Date:           Sep 2017
@@ -0,0 +1,8 @@
What:           /sys/bus/pci/drivers/altera-cvp/chkcfg
Date:           May 2017
Kernel Version: 4.13
Contact:        Anatolij Gustschin <agust@denx.de>
Description:
                Contains either 1 or 0 and controls if configuration
                error checking in altera-cvp driver is turned on or
                off.
@@ -3081,3 +3081,8 @@
                  1 = /dev/osd1         Second OSD Device
                  ...
                255 = /dev/osd255       256th OSD Device

 384-511 char   RESERVED FOR DYNAMIC ASSIGNMENT
                Character devices that request a dynamic allocation of major
                number will take numbers starting from 511 and downward,
                once the 234-254 range is full.
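As an aside (not part of this series): the dynamic assignment described above is what a driver receives when it does not claim a fixed major. A hedged sketch of a hypothetical character driver doing exactly that ("sketchchar" is a made-up name used only for illustration).

#include <linux/fs.h>
#include <linux/module.h>

static dev_t devt;

static int __init sketchchar_init(void)
{
        /* No fixed major requested: the kernel hands one out from the
         * dynamic range (511 downward once 234-254 is full).
         */
        int ret = alloc_chrdev_region(&devt, 0, 1, "sketchchar");

        if (ret)
                return ret;
        pr_info("sketchchar: got major %d\n", MAJOR(devt));
        return 0;
}

static void __exit sketchchar_exit(void)
{
        unregister_chrdev_region(devt, 1);
}

module_init(sketchchar_init);
module_exit(sketchchar_exit);
MODULE_LICENSE("GPL");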
@@ -34,8 +34,8 @@ its hardware characteristcs.
        - Embedded Trace Macrocell (version 4.x):
                "arm,coresight-etm4x", "arm,primecell";

        - Qualcomm Configurable Replicator (version 1.x):
                "qcom,coresight-replicator1x", "arm,primecell";
        - Coresight programmable Replicator :
                "arm,coresight-dynamic-replicator", "arm,primecell";

        - System Trace Macrocell:
                "arm,coresight-stm", "arm,primecell"; [1]
@@ -0,0 +1,24 @@
ChromeOS EC USB Type-C cable and accessories detection

On ChromeOS systems with USB Type C ports, the ChromeOS Embedded Controller is
able to detect the state of external accessories such as display adapters
or USB devices when said accessories are attached or detached.

The node for this device must be under a cros-ec node like google,cros-ec-spi
or google,cros-ec-i2c.

Required properties:
- compatible:           Should be "google,extcon-usbc-cros-ec".
- google,usb-port-id:   Specifies the USB port ID to use.

Example:
        cros-ec@0 {
                compatible = "google,cros-ec-i2c";

                ...

                extcon {
                        compatible = "google,extcon-usbc-cros-ec";
                        google,usb-port-id = <0>;
                };
        }
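As an aside (not part of this binding): once the EC extcon device above is probed, user space can observe the reported cable state through the extcon class in sysfs. A small sketch; the "extcon0" index is hypothetical and depends on probe order on the running system.

#include <stdio.h>

int main(void)
{
        char line[128];
        FILE *f = fopen("/sys/class/extcon/extcon0/state", "r");

        if (!f) {
                perror("fopen");
                return 1;
        }
        /* Prints lines such as "USB=1" while an accessory is attached. */
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}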
@@ -0,0 +1,29 @@
Altera Passive Serial SPI FPGA Manager

Altera FPGAs support a method of loading the bitstream over what is
referred to as "passive serial".
The passive serial link is not technically SPI, and might require extra
circuits in order to play nicely with other SPI slaves on the same bus.

See https://www.altera.com/literature/hb/cyc/cyc_c51013.pdf

Required properties:
- compatible: Must be one of the following:
        "altr,fpga-passive-serial",
        "altr,fpga-arria10-passive-serial"
- reg: SPI chip select of the FPGA
- nconfig-gpios: config pin (referred to as nCONFIG in the manual)
- nstat-gpios: status pin (referred to as nSTATUS in the manual)

Optional properties:
- confd-gpios: confd pin (referred to as CONF_DONE in the manual)

Example:
        fpga: fpga@0 {
                compatible = "altr,fpga-passive-serial";
                spi-max-frequency = <20000000>;
                reg = <0>;
                nconfig-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
                nstat-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
                confd-gpios = <&gpio4 12 GPIO_ACTIVE_LOW>;
        };
@@ -0,0 +1,36 @@
Xilinx LogiCORE Partial Reconfig Decoupler Softcore

The Xilinx LogiCORE Partial Reconfig Decoupler manages one or more
decouplers / fpga bridges.
The controller can decouple/disable the bridges which prevents signal
changes from passing through the bridge. The controller can also
couple / enable the bridges which allows traffic to pass through the
bridge normally.

The Driver supports only MMIO handling. A PR region can have multiple
PR Decouplers which can be handled independently or chained via decouple/
decouple_status signals.

Required properties:
- compatible            : Should contain "xlnx,pr-decoupler-1.00" followed by
                          "xlnx,pr-decoupler"
- regs                  : base address and size for decoupler module
- clocks                : input clock to IP
- clock-names           : should contain "aclk"

Optional properties:
- bridge-enable         : 0 if driver should disable bridge at startup
                          1 if driver should enable bridge at startup
                          Default is to leave bridge in current state.

See Documentation/devicetree/bindings/fpga/fpga-region.txt for generic bindings.

Example:
        fpga-bridge@100000450 {
                compatible = "xlnx,pr-decoupler-1.00",
                             "xlnx-pr-decoupler";
                regs = <0x10000045 0x10>;
                clocks = <&clkc 15>;
                clock-names = "aclk";
                bridge-enable = <0>;
        };
@@ -175,6 +175,7 @@ kosagi  Sutajio Ko-Usagi PTE Ltd.
kyo     Kyocera Corporation
lacie   LaCie
lantiq  Lantiq Semiconductor
lattice Lattice Semiconductor
lego    LEGO Systems A/S
lenovo  Lenovo Group Ltd.
lg      LG Corporation
@@ -281,6 +281,8 @@
                          capabilities of the underlying ICAP hardware
                          differ between different families. May be
                          'virtex2p', 'virtex4', or 'virtex5'.
        - compatible : should contain "xlnx,xps-hwicap-1.00.a" or
                       "xlnx,opb-hwicap-1.00.b".

      vi) Xilinx Uart 16550
@@ -83,7 +83,7 @@ by writing the name of the desired stm device there, for example:
$ echo dummy_stm.0 > /sys/class/stm_source/console/stm_source_link

For examples on how to use stm_source interface in the kernel, refer
to stm_console or stm_heartbeat drivers.
to stm_console, stm_heartbeat or stm_ftrace drivers.

Each stm_source device will need to assume a master and a range of
channels, depending on how many channels it requires. These are
@@ -107,5 +107,16 @@ console in the STP stream, create a "console" policy entry (see the
beginning of this text on how to do that). When initialized, it will
consume one channel.

stm_ftrace
==========

This is another "stm_source" device, once the stm_ftrace has been
linked with an stm device, and if "function" tracer is enabled,
function address and parent function address which Ftrace subsystem
would store into ring buffer will be exported via the stm device at
the same time.

Currently only Ftrace "function" tracer is supported.

[1] https://software.intel.com/sites/default/files/managed/d3/3c/intel-th-developer-manual.pdf
[2] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0444b/index.html
@@ -5362,10 +5362,11 @@ K:      fmc_d.*register

FPGA MANAGER FRAMEWORK
M:      Alan Tull <atull@kernel.org>
R:      Moritz Fischer <moritz.fischer@ettus.com>
R:      Moritz Fischer <mdf@kernel.org>
L:      linux-fpga@vger.kernel.org
S:      Maintained
T:      git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git
Q:      http://patchwork.kernel.org/project/linux-fpga/list/
F:      Documentation/fpga/
F:      Documentation/devicetree/bindings/fpga/
F:      drivers/fpga/
@@ -9484,6 +9485,7 @@ M:      Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
S:      Maintained
F:      drivers/nvmem/
F:      Documentation/devicetree/bindings/nvmem/
F:      Documentation/ABI/stable/sysfs-bus-nvmem
F:      include/linux/nvmem-consumer.h
F:      include/linux/nvmem-provider.h
@ -94,6 +94,15 @@
|
|||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&pinctrl_ecspi1 &pinctrl_ecspi1cs>;
|
||||
status = "okay";
|
||||
|
||||
fpga: fpga@0 {
|
||||
compatible = "altr,fpga-passive-serial";
|
||||
spi-max-frequency = <20000000>;
|
||||
reg = <0>;
|
||||
pinctrl-0 = <&pinctrl_fpgaspi>;
|
||||
nconfig-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
|
||||
nstat-gpios = <&gpio4 11 GPIO_ACTIVE_LOW>;
|
||||
};
|
||||
};
|
||||
|
||||
&ecspi3 {
|
||||
|
@ -319,6 +328,13 @@
|
|||
>;
|
||||
};
|
||||
|
||||
pinctrl_fpgaspi: fpgaspigrp {
|
||||
fsl,pins = <
|
||||
MX6QDL_PAD_KEY_ROW1__GPIO4_IO09 0x1b0b0
|
||||
MX6QDL_PAD_KEY_ROW2__GPIO4_IO11 0x1b0b0
|
||||
>;
|
||||
};
|
||||
|
||||
pinctrl_gpminand: gpminandgrp {
|
||||
fsl,pins = <
|
||||
MX6QDL_PAD_NANDF_CLE__NAND_CLE 0xb0b1
|
||||
|
|
|
@ -28,6 +28,8 @@ struct ms_hyperv_info {
|
|||
u32 features;
|
||||
u32 misc_features;
|
||||
u32 hints;
|
||||
u32 max_vp_index;
|
||||
u32 max_lp_index;
|
||||
};
|
||||
|
||||
extern struct ms_hyperv_info ms_hyperv;
|
||||
|
|
|
@ -179,9 +179,15 @@ static void __init ms_hyperv_init_platform(void)
|
|||
ms_hyperv.misc_features = cpuid_edx(HYPERV_CPUID_FEATURES);
|
||||
ms_hyperv.hints = cpuid_eax(HYPERV_CPUID_ENLIGHTMENT_INFO);
|
||||
|
||||
pr_info("HyperV: features 0x%x, hints 0x%x\n",
|
||||
pr_info("Hyper-V: features 0x%x, hints 0x%x\n",
|
||||
ms_hyperv.features, ms_hyperv.hints);
|
||||
|
||||
ms_hyperv.max_vp_index = cpuid_eax(HVCPUID_IMPLEMENTATION_LIMITS);
|
||||
ms_hyperv.max_lp_index = cpuid_ebx(HVCPUID_IMPLEMENTATION_LIMITS);
|
||||
|
||||
pr_debug("Hyper-V: max %u virtual processors, %u logical processors\n",
|
||||
ms_hyperv.max_vp_index, ms_hyperv.max_lp_index);
|
||||
|
||||
/*
|
||||
* Extract host information.
|
||||
*/
|
||||
|
@ -214,7 +220,7 @@ static void __init ms_hyperv_init_platform(void)
|
|||
rdmsrl(HV_X64_MSR_APIC_FREQUENCY, hv_lapic_frequency);
|
||||
hv_lapic_frequency = div_u64(hv_lapic_frequency, HZ);
|
||||
lapic_timer_frequency = hv_lapic_frequency;
|
||||
pr_info("HyperV: LAPIC Timer Frequency: %#x\n",
|
||||
pr_info("Hyper-V: LAPIC Timer Frequency: %#x\n",
|
||||
lapic_timer_frequency);
|
||||
}
|
||||
|
||||
|
@ -248,7 +254,7 @@ static void __init ms_hyperv_init_platform(void)
|
|||
}
|
||||
|
||||
const __refconst struct hypervisor_x86 x86_hyper_ms_hyperv = {
|
||||
.name = "Microsoft HyperV",
|
||||
.name = "Microsoft Hyper-V",
|
||||
.detect = ms_hyperv_platform,
|
||||
.init_platform = ms_hyperv_init_platform,
|
||||
};
|
||||
|
|
|
@ -242,6 +242,7 @@ EXPORT_SYMBOL_GPL(disk_map_sector_rcu);
|
|||
* Can be deleted altogether. Later.
|
||||
*
|
||||
*/
|
||||
#define BLKDEV_MAJOR_HASH_SIZE 255
|
||||
static struct blk_major_name {
|
||||
struct blk_major_name *next;
|
||||
int major;
|
||||
|
@ -259,12 +260,11 @@ void blkdev_show(struct seq_file *seqf, off_t offset)
|
|||
{
|
||||
struct blk_major_name *dp;
|
||||
|
||||
if (offset < BLKDEV_MAJOR_HASH_SIZE) {
|
||||
mutex_lock(&block_class_lock);
|
||||
for (dp = major_names[offset]; dp; dp = dp->next)
|
||||
mutex_lock(&block_class_lock);
|
||||
for (dp = major_names[major_to_index(offset)]; dp; dp = dp->next)
|
||||
if (dp->major == offset)
|
||||
seq_printf(seqf, "%3d %s\n", dp->major, dp->name);
|
||||
mutex_unlock(&block_class_lock);
|
||||
}
|
||||
mutex_unlock(&block_class_lock);
|
||||
}
|
||||
#endif /* CONFIG_PROC_FS */
|
||||
|
||||
|
@@ -309,6 +309,14 @@ int register_blkdev(unsigned int major, const char *name)
                ret = major;
        }

        if (major >= BLKDEV_MAJOR_MAX) {
                pr_err("register_blkdev: major requested (%d) is greater than the maximum (%d) for %s\n",
                       major, BLKDEV_MAJOR_MAX, name);

                ret = -EINVAL;
                goto out;
        }

        p = kmalloc(sizeof(struct blk_major_name), GFP_KERNEL);
        if (p == NULL) {
                ret = -ENOMEM;
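As an aside (not part of this series): the check added above tightens the register_blkdev() contract. A hedged sketch of a hypothetical caller; passing 0 requests a dynamically allocated major, and anything at or above BLKDEV_MAJOR_MAX now fails with -EINVAL ("sketchblk" is a made-up driver name used only for illustration).

#include <linux/fs.h>
#include <linux/module.h>

static int sketchblk_major;

static int __init sketchblk_init(void)
{
        /* 0 asks the block layer for a free major from the dynamic range. */
        sketchblk_major = register_blkdev(0, "sketchblk");
        if (sketchblk_major < 0) {
                pr_err("sketchblk: no major available (%d)\n", sketchblk_major);
                return sketchblk_major;
        }
        pr_info("sketchblk: registered with major %d\n", sketchblk_major);
        return 0;
}

static void __exit sketchblk_exit(void)
{
        unregister_blkdev(sketchblk_major, "sketchblk");
}

module_init(sketchblk_init);
module_exit(sketchblk_exit);
MODULE_LICENSE("GPL");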
@@ -22,7 +22,7 @@ config ANDROID_BINDER_IPC
config ANDROID_BINDER_DEVICES
        string "Android Binder devices"
        depends on ANDROID_BINDER_IPC
        default "binder,hwbinder"
        default "binder,hwbinder,vndbinder"
        ---help---
          Default value for the binder.devices parameter.
@@ -32,7 +32,7 @@ config ANDROID_BINDER_DEVICES
          therefore logically separated from the other devices.

config ANDROID_BINDER_IPC_32BIT
        bool
        bool "Use old (Android 4.4 and earlier) 32-bit binder API"
        depends on !64BIT && ANDROID_BINDER_IPC
        default y
        ---help---
@@ -44,6 +44,16 @@ config ANDROID_BINDER_IPC_32BIT

          Note that enabling this will break newer Android user-space.

config ANDROID_BINDER_IPC_SELFTEST
        bool "Android Binder IPC Driver Selftest"
        depends on ANDROID_BINDER_IPC
        ---help---
          This feature allows binder selftest to run.

          Binder selftest checks the allocation and free of binder buffers
          exhaustively with combinations of various buffer sizes and
          alignments.

endif # if ANDROID

endmenu
@@ -1,3 +1,4 @@
ccflags-y += -I$(src)                   # needed for trace events

obj-$(CONFIG_ANDROID_BINDER_IPC)        += binder.o
obj-$(CONFIG_ANDROID_BINDER_IPC)        += binder.o binder_alloc.o
obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
File diff suppressed because it is too large
File diff suppressed because it is too large
|
@ -0,0 +1,187 @@
|
|||
/*
|
||||
* Copyright (C) 2017 Google, Inc.
|
||||
*
|
||||
* This software is licensed under the terms of the GNU General Public
|
||||
* License version 2, as published by the Free Software Foundation, and
|
||||
* may be copied, distributed, and modified under those terms.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
*/
|
||||
|
||||
#ifndef _LINUX_BINDER_ALLOC_H
|
||||
#define _LINUX_BINDER_ALLOC_H
|
||||
|
||||
#include <linux/rbtree.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/mm.h>
|
||||
#include <linux/rtmutex.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/list_lru.h>
|
||||
|
||||
extern struct list_lru binder_alloc_lru;
|
||||
struct binder_transaction;
|
||||
|
||||
/**
|
||||
* struct binder_buffer - buffer used for binder transactions
|
||||
* @entry: entry alloc->buffers
|
||||
* @rb_node: node for allocated_buffers/free_buffers rb trees
|
||||
* @free: true if buffer is free
|
||||
* @allow_user_free: describe the second member of struct blah,
|
||||
* @async_transaction: describe the second member of struct blah,
|
||||
* @debug_id: describe the second member of struct blah,
|
||||
* @transaction: describe the second member of struct blah,
|
||||
* @target_node: describe the second member of struct blah,
|
||||
* @data_size: describe the second member of struct blah,
|
||||
* @offsets_size: describe the second member of struct blah,
|
||||
* @extra_buffers_size: describe the second member of struct blah,
|
||||
* @data:i describe the second member of struct blah,
|
||||
*
|
||||
* Bookkeeping structure for binder transaction buffers
|
||||
*/
|
||||
struct binder_buffer {
|
||||
struct list_head entry; /* free and allocated entries by address */
|
||||
struct rb_node rb_node; /* free entry by size or allocated entry */
|
||||
/* by address */
|
||||
unsigned free:1;
|
||||
unsigned allow_user_free:1;
|
||||
unsigned async_transaction:1;
|
||||
unsigned free_in_progress:1;
|
||||
unsigned debug_id:28;
|
||||
|
||||
struct binder_transaction *transaction;
|
||||
|
||||
struct binder_node *target_node;
|
||||
size_t data_size;
|
||||
size_t offsets_size;
|
||||
size_t extra_buffers_size;
|
||||
void *data;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct binder_lru_page - page object used for binder shrinker
|
||||
* @page_ptr: pointer to physical page in mmap'd space
|
||||
* @lru: entry in binder_alloc_lru
|
||||
* @alloc: binder_alloc for a proc
|
||||
*/
|
||||
struct binder_lru_page {
|
||||
struct list_head lru;
|
||||
struct page *page_ptr;
|
||||
struct binder_alloc *alloc;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct binder_alloc - per-binder proc state for binder allocator
|
||||
* @vma: vm_area_struct passed to mmap_handler
|
||||
* (invarient after mmap)
|
||||
* @tsk: tid for task that called init for this proc
|
||||
* (invariant after init)
|
||||
* @vma_vm_mm: copy of vma->vm_mm (invarient after mmap)
|
||||
* @buffer: base of per-proc address space mapped via mmap
|
||||
* @user_buffer_offset: offset between user and kernel VAs for buffer
|
||||
* @buffers: list of all buffers for this proc
|
||||
* @free_buffers: rb tree of buffers available for allocation
|
||||
* sorted by size
|
||||
* @allocated_buffers: rb tree of allocated buffers sorted by address
|
||||
* @free_async_space: VA space available for async buffers. This is
|
||||
* initialized at mmap time to 1/2 the full VA space
|
||||
* @pages: array of binder_lru_page
|
||||
* @buffer_size: size of address space specified via mmap
|
||||
* @pid: pid for associated binder_proc (invariant after init)
|
||||
*
|
||||
* Bookkeeping structure for per-proc address space management for binder
|
||||
* buffers. It is normally initialized during binder_init() and binder_mmap()
|
||||
* calls. The address space is used for both user-visible buffers and for
|
||||
* struct binder_buffer objects used to track the user buffers
|
||||
*/
|
||||
struct binder_alloc {
|
||||
struct mutex mutex;
|
||||
struct task_struct *tsk;
|
||||
struct vm_area_struct *vma;
|
||||
struct mm_struct *vma_vm_mm;
|
||||
void *buffer;
|
||||
ptrdiff_t user_buffer_offset;
|
||||
struct list_head buffers;
|
||||
struct rb_root free_buffers;
|
||||
struct rb_root allocated_buffers;
|
||||
size_t free_async_space;
|
||||
struct binder_lru_page *pages;
|
||||
size_t buffer_size;
|
||||
uint32_t buffer_free;
|
||||
int pid;
|
||||
};
|
||||
|
||||
#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST
|
||||
void binder_selftest_alloc(struct binder_alloc *alloc);
|
||||
#else
|
||||
static inline void binder_selftest_alloc(struct binder_alloc *alloc) {}
|
||||
#endif
|
||||
enum lru_status binder_alloc_free_page(struct list_head *item,
|
||||
struct list_lru_one *lru,
|
||||
spinlock_t *lock, void *cb_arg);
|
||||
extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
|
||||
size_t data_size,
|
||||
size_t offsets_size,
|
||||
size_t extra_buffers_size,
|
||||
int is_async);
|
||||
extern void binder_alloc_init(struct binder_alloc *alloc);
|
||||
void binder_alloc_shrinker_init(void);
|
||||
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
|
||||
extern struct binder_buffer *
|
||||
binder_alloc_prepare_to_free(struct binder_alloc *alloc,
|
||||
uintptr_t user_ptr);
|
||||
extern void binder_alloc_free_buf(struct binder_alloc *alloc,
|
||||
struct binder_buffer *buffer);
|
||||
extern int binder_alloc_mmap_handler(struct binder_alloc *alloc,
|
||||
struct vm_area_struct *vma);
|
||||
extern void binder_alloc_deferred_release(struct binder_alloc *alloc);
|
||||
extern int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
|
||||
extern void binder_alloc_print_allocated(struct seq_file *m,
|
||||
struct binder_alloc *alloc);
|
||||
void binder_alloc_print_pages(struct seq_file *m,
|
||||
struct binder_alloc *alloc);
|
||||
|
||||
/**
|
||||
* binder_alloc_get_free_async_space() - get free space available for async
|
||||
* @alloc: binder_alloc for this proc
|
||||
*
|
||||
* Return: the bytes remaining in the address-space for async transactions
|
||||
*/
|
||||
static inline size_t
|
||||
binder_alloc_get_free_async_space(struct binder_alloc *alloc)
|
||||
{
|
||||
size_t free_async_space;
|
||||
|
||||
mutex_lock(&alloc->mutex);
|
||||
free_async_space = alloc->free_async_space;
|
||||
mutex_unlock(&alloc->mutex);
|
||||
return free_async_space;
|
||||
}
|
||||
|
||||
/**
|
||||
* binder_alloc_get_user_buffer_offset() - get offset between kernel/user addrs
|
||||
* @alloc: binder_alloc for this proc
|
||||
*
|
||||
* Return: the offset between kernel and user-space addresses to use for
|
||||
* virtual address conversion
|
||||
*/
|
||||
static inline ptrdiff_t
|
||||
binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
|
||||
{
|
||||
/*
|
||||
* user_buffer_offset is constant if vma is set and
|
||||
* undefined if vma is not set. It is possible to
|
||||
* get here with !alloc->vma if the target process
|
||||
* is dying while a transaction is being initiated.
|
||||
* Returning the old value is ok in this case and
|
||||
* the transaction will fail.
|
||||
*/
|
||||
return alloc->user_buffer_offset;
|
||||
}
|
||||
|
||||
#endif /* _LINUX_BINDER_ALLOC_H */
|
||||
|
|
@ -0,0 +1,310 @@
|
|||
/* binder_alloc_selftest.c
|
||||
*
|
||||
* Android IPC Subsystem
|
||||
*
|
||||
* Copyright (C) 2017 Google, Inc.
|
||||
*
|
||||
* This software is licensed under the terms of the GNU General Public
|
||||
* License version 2, as published by the Free Software Foundation, and
|
||||
* may be copied, distributed, and modified under those terms.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
|
||||
#include <linux/mm_types.h>
|
||||
#include <linux/err.h>
|
||||
#include "binder_alloc.h"
|
||||
|
||||
#define BUFFER_NUM 5
|
||||
#define BUFFER_MIN_SIZE (PAGE_SIZE / 8)
|
||||
|
||||
static bool binder_selftest_run = true;
|
||||
static int binder_selftest_failures;
|
||||
static DEFINE_MUTEX(binder_selftest_lock);
|
||||
|
||||
/**
|
||||
* enum buf_end_align_type - Page alignment of a buffer
|
||||
* end with regard to the end of the previous buffer.
|
||||
*
|
||||
* In the pictures below, buf2 refers to the buffer we
|
||||
* are aligning. buf1 refers to previous buffer by addr.
|
||||
* Symbol [ means the start of a buffer, ] means the end
|
||||
* of a buffer, and | means page boundaries.
|
||||
*/
|
||||
enum buf_end_align_type {
|
||||
/**
|
||||
* @SAME_PAGE_UNALIGNED: The end of this buffer is on
|
||||
* the same page as the end of the previous buffer and
|
||||
* is not page aligned. Examples:
|
||||
* buf1 ][ buf2 ][ ...
|
||||
* buf1 ]|[ buf2 ][ ...
|
||||
*/
|
||||
SAME_PAGE_UNALIGNED = 0,
|
||||
/**
|
||||
* @SAME_PAGE_ALIGNED: When the end of the previous buffer
|
||||
* is not page aligned, the end of this buffer is on the
|
||||
* same page as the end of the previous buffer and is page
|
||||
* aligned. When the previous buffer is page aligned, the
|
||||
* end of this buffer is aligned to the next page boundary.
|
||||
* Examples:
|
||||
* buf1 ][ buf2 ]| ...
|
||||
* buf1 ]|[ buf2 ]| ...
|
||||
*/
|
||||
SAME_PAGE_ALIGNED,
|
||||
/**
|
||||
* @NEXT_PAGE_UNALIGNED: The end of this buffer is on
|
||||
* the page next to the end of the previous buffer and
|
||||
* is not page aligned. Examples:
|
||||
* buf1 ][ buf2 | buf2 ][ ...
|
||||
* buf1 ]|[ buf2 | buf2 ][ ...
|
||||
*/
|
||||
NEXT_PAGE_UNALIGNED,
|
||||
/**
|
||||
* @NEXT_PAGE_ALIGNED: The end of this buffer is on
|
||||
* the page next to the end of the previous buffer and
|
||||
* is page aligned. Examples:
|
||||
* buf1 ][ buf2 | buf2 ]| ...
|
||||
* buf1 ]|[ buf2 | buf2 ]| ...
|
||||
*/
|
||||
NEXT_PAGE_ALIGNED,
|
||||
/**
|
||||
* @NEXT_NEXT_UNALIGNED: The end of this buffer is on
|
||||
* the page that follows the page after the end of the
|
||||
* previous buffer and is not page aligned. Examples:
|
||||
* buf1 ][ buf2 | buf2 | buf2 ][ ...
|
||||
* buf1 ]|[ buf2 | buf2 | buf2 ][ ...
|
||||
*/
|
||||
NEXT_NEXT_UNALIGNED,
|
||||
LOOP_END,
|
||||
};
|
||||
|
||||
static void pr_err_size_seq(size_t *sizes, int *seq)
|
||||
{
|
||||
int i;
|
||||
|
||||
pr_err("alloc sizes: ");
|
||||
for (i = 0; i < BUFFER_NUM; i++)
|
||||
pr_cont("[%zu]", sizes[i]);
|
||||
pr_cont("\n");
|
||||
pr_err("free seq: ");
|
||||
for (i = 0; i < BUFFER_NUM; i++)
|
||||
pr_cont("[%d]", seq[i]);
|
||||
pr_cont("\n");
|
||||
}
|
||||
|
||||
static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
|
||||
struct binder_buffer *buffer,
|
||||
size_t size)
|
||||
{
|
||||
void *page_addr, *end;
|
||||
int page_index;
|
||||
|
||||
end = (void *)PAGE_ALIGN((uintptr_t)buffer->data + size);
|
||||
page_addr = buffer->data;
|
||||
for (; page_addr < end; page_addr += PAGE_SIZE) {
|
||||
page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
|
||||
if (!alloc->pages[page_index].page_ptr ||
|
||||
!list_empty(&alloc->pages[page_index].lru)) {
|
||||
pr_err("expect alloc but is %s at page index %d\n",
|
||||
alloc->pages[page_index].page_ptr ?
|
||||
"lru" : "free", page_index);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
static void binder_selftest_alloc_buf(struct binder_alloc *alloc,
|
||||
struct binder_buffer *buffers[],
|
||||
size_t *sizes, int *seq)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < BUFFER_NUM; i++) {
|
||||
buffers[i] = binder_alloc_new_buf(alloc, sizes[i], 0, 0, 0);
|
||||
if (IS_ERR(buffers[i]) ||
|
||||
!check_buffer_pages_allocated(alloc, buffers[i],
|
||||
sizes[i])) {
|
||||
pr_err_size_seq(sizes, seq);
|
||||
binder_selftest_failures++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void binder_selftest_free_buf(struct binder_alloc *alloc,
|
||||
struct binder_buffer *buffers[],
|
||||
size_t *sizes, int *seq, size_t end)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < BUFFER_NUM; i++)
|
||||
binder_alloc_free_buf(alloc, buffers[seq[i]]);
|
||||
|
||||
for (i = 0; i < end / PAGE_SIZE; i++) {
|
||||
/**
|
||||
* Error message on a free page can be false positive
|
||||
* if binder shrinker ran during binder_alloc_free_buf
|
||||
* calls above.
|
||||
*/
|
||||
if (list_empty(&alloc->pages[i].lru)) {
|
||||
pr_err_size_seq(sizes, seq);
|
||||
pr_err("expect lru but is %s at page index %d\n",
|
||||
alloc->pages[i].page_ptr ? "alloc" : "free", i);
|
||||
binder_selftest_failures++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void binder_selftest_free_page(struct binder_alloc *alloc)
|
||||
{
|
||||
int i;
|
||||
unsigned long count;
|
||||
|
||||
while ((count = list_lru_count(&binder_alloc_lru))) {
|
||||
list_lru_walk(&binder_alloc_lru, binder_alloc_free_page,
|
||||
NULL, count);
|
||||
}
|
||||
|
||||
for (i = 0; i < (alloc->buffer_size / PAGE_SIZE); i++) {
|
||||
if (alloc->pages[i].page_ptr) {
|
||||
pr_err("expect free but is %s at page index %d\n",
|
||||
list_empty(&alloc->pages[i].lru) ?
|
||||
"alloc" : "lru", i);
|
||||
binder_selftest_failures++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void binder_selftest_alloc_free(struct binder_alloc *alloc,
|
||||
size_t *sizes, int *seq, size_t end)
|
||||
{
|
||||
struct binder_buffer *buffers[BUFFER_NUM];
|
||||
|
||||
binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
|
||||
binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
|
||||
|
||||
/* Allocate from lru. */
|
||||
binder_selftest_alloc_buf(alloc, buffers, sizes, seq);
|
||||
if (list_lru_count(&binder_alloc_lru))
|
||||
pr_err("lru list should be empty but is not\n");
|
||||
|
||||
binder_selftest_free_buf(alloc, buffers, sizes, seq, end);
|
||||
binder_selftest_free_page(alloc);
|
||||
}
|
||||
|
||||
static bool is_dup(int *seq, int index, int val)
|
||||
{
|
||||
int i;
|
||||
|
||||
for (i = 0; i < index; i++) {
|
||||
if (seq[i] == val)
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/* Generate BUFFER_NUM factorial free orders. */
|
||||
static void binder_selftest_free_seq(struct binder_alloc *alloc,
|
||||
size_t *sizes, int *seq,
|
||||
int index, size_t end)
|
||||
{
|
||||
int i;
|
||||
|
||||
if (index == BUFFER_NUM) {
|
||||
binder_selftest_alloc_free(alloc, sizes, seq, end);
|
||||
return;
|
||||
}
|
||||
for (i = 0; i < BUFFER_NUM; i++) {
|
||||
if (is_dup(seq, index, i))
|
||||
continue;
|
||||
seq[index] = i;
|
||||
binder_selftest_free_seq(alloc, sizes, seq, index + 1, end);
|
||||
}
|
||||
}
|
||||
|
||||
static void binder_selftest_alloc_size(struct binder_alloc *alloc,
|
||||
size_t *end_offset)
|
||||
{
|
||||
int i;
|
||||
int seq[BUFFER_NUM] = {0};
|
||||
size_t front_sizes[BUFFER_NUM];
|
||||
size_t back_sizes[BUFFER_NUM];
|
||||
size_t last_offset, offset = 0;
|
||||
|
||||
for (i = 0; i < BUFFER_NUM; i++) {
|
||||
last_offset = offset;
|
||||
offset = end_offset[i];
|
||||
front_sizes[i] = offset - last_offset;
|
||||
back_sizes[BUFFER_NUM - i - 1] = front_sizes[i];
|
||||
}
|
||||
/*
|
||||
* Buffers share the first or last few pages.
|
||||
* Only BUFFER_NUM - 1 buffer sizes are adjustable since
|
||||
* we need one giant buffer before getting to the last page.
|
||||
*/
|
||||
back_sizes[0] += alloc->buffer_size - end_offset[BUFFER_NUM - 1];
|
||||
binder_selftest_free_seq(alloc, front_sizes, seq, 0,
|
||||
end_offset[BUFFER_NUM - 1]);
|
||||
binder_selftest_free_seq(alloc, back_sizes, seq, 0, alloc->buffer_size);
|
||||
}
|
||||
|
||||
static void binder_selftest_alloc_offset(struct binder_alloc *alloc,
|
||||
size_t *end_offset, int index)
|
||||
{
|
||||
int align;
|
||||
size_t end, prev;
|
||||
|
||||
if (index == BUFFER_NUM) {
|
||||
binder_selftest_alloc_size(alloc, end_offset);
|
||||
return;
|
||||
}
|
||||
prev = index == 0 ? 0 : end_offset[index - 1];
|
||||
end = prev;
|
||||
|
||||
BUILD_BUG_ON(BUFFER_MIN_SIZE * BUFFER_NUM >= PAGE_SIZE);
|
||||
|
||||
for (align = SAME_PAGE_UNALIGNED; align < LOOP_END; align++) {
|
||||
if (align % 2)
|
||||
end = ALIGN(end, PAGE_SIZE);
|
||||
else
|
||||
end += BUFFER_MIN_SIZE;
|
||||
end_offset[index] = end;
|
||||
binder_selftest_alloc_offset(alloc, end_offset, index + 1);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* binder_selftest_alloc() - Test alloc and free of buffer pages.
|
||||
* @alloc: Pointer to alloc struct.
|
||||
*
|
||||
* Allocate BUFFER_NUM buffers to cover all page alignment cases,
|
||||
* then free them in all orders possible. Check that pages are
|
||||
* correctly allocated, put onto lru when buffers are freed, and
|
||||
* are freed when binder_alloc_free_page is called.
|
||||
*/
|
||||
void binder_selftest_alloc(struct binder_alloc *alloc)
|
||||
{
|
||||
size_t end_offset[BUFFER_NUM];
|
||||
|
||||
if (!binder_selftest_run)
|
||||
return;
|
||||
mutex_lock(&binder_selftest_lock);
|
||||
if (!binder_selftest_run || !alloc->vma)
|
||||
goto done;
|
||||
pr_info("STARTED\n");
|
||||
binder_selftest_alloc_offset(alloc, end_offset, 0);
|
||||
binder_selftest_run = false;
|
||||
if (binder_selftest_failures > 0)
|
||||
pr_info("%d tests FAILED\n", binder_selftest_failures);
|
||||
else
|
||||
pr_info("PASSED\n");
|
||||
|
||||
done:
|
||||
mutex_unlock(&binder_selftest_lock);
|
||||
}
|
|
@ -23,7 +23,8 @@
|
|||
struct binder_buffer;
|
||||
struct binder_node;
|
||||
struct binder_proc;
|
||||
struct binder_ref;
|
||||
struct binder_alloc;
|
||||
struct binder_ref_data;
|
||||
struct binder_thread;
|
||||
struct binder_transaction;
|
||||
|
||||
|
@ -146,8 +147,8 @@ TRACE_EVENT(binder_transaction_received,
|
|||
|
||||
TRACE_EVENT(binder_transaction_node_to_ref,
|
||||
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
|
||||
struct binder_ref *ref),
|
||||
TP_ARGS(t, node, ref),
|
||||
struct binder_ref_data *rdata),
|
||||
TP_ARGS(t, node, rdata),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, debug_id)
|
||||
|
@ -160,8 +161,8 @@ TRACE_EVENT(binder_transaction_node_to_ref,
|
|||
__entry->debug_id = t->debug_id;
|
||||
__entry->node_debug_id = node->debug_id;
|
||||
__entry->node_ptr = node->ptr;
|
||||
__entry->ref_debug_id = ref->debug_id;
|
||||
__entry->ref_desc = ref->desc;
|
||||
__entry->ref_debug_id = rdata->debug_id;
|
||||
__entry->ref_desc = rdata->desc;
|
||||
),
|
||||
TP_printk("transaction=%d node=%d src_ptr=0x%016llx ==> dest_ref=%d dest_desc=%d",
|
||||
__entry->debug_id, __entry->node_debug_id,
|
||||
|
@ -170,8 +171,9 @@ TRACE_EVENT(binder_transaction_node_to_ref,
|
|||
);
|
||||
|
||||
TRACE_EVENT(binder_transaction_ref_to_node,
|
||||
TP_PROTO(struct binder_transaction *t, struct binder_ref *ref),
|
||||
TP_ARGS(t, ref),
|
||||
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
|
||||
struct binder_ref_data *rdata),
|
||||
TP_ARGS(t, node, rdata),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, debug_id)
|
||||
|
@ -182,10 +184,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
|
|||
),
|
||||
TP_fast_assign(
|
||||
__entry->debug_id = t->debug_id;
|
||||
__entry->ref_debug_id = ref->debug_id;
|
||||
__entry->ref_desc = ref->desc;
|
||||
__entry->node_debug_id = ref->node->debug_id;
|
||||
__entry->node_ptr = ref->node->ptr;
|
||||
__entry->ref_debug_id = rdata->debug_id;
|
||||
__entry->ref_desc = rdata->desc;
|
||||
__entry->node_debug_id = node->debug_id;
|
||||
__entry->node_ptr = node->ptr;
|
||||
),
|
||||
TP_printk("transaction=%d node=%d src_ref=%d src_desc=%d ==> dest_ptr=0x%016llx",
|
||||
__entry->debug_id, __entry->node_debug_id,
|
||||
|
@ -194,9 +196,10 @@ TRACE_EVENT(binder_transaction_ref_to_node,
|
|||
);
|
||||
|
||||
TRACE_EVENT(binder_transaction_ref_to_ref,
|
||||
TP_PROTO(struct binder_transaction *t, struct binder_ref *src_ref,
|
||||
struct binder_ref *dest_ref),
|
||||
TP_ARGS(t, src_ref, dest_ref),
|
||||
TP_PROTO(struct binder_transaction *t, struct binder_node *node,
|
||||
struct binder_ref_data *src_ref,
|
||||
struct binder_ref_data *dest_ref),
|
||||
TP_ARGS(t, node, src_ref, dest_ref),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(int, debug_id)
|
||||
|
@ -208,7 +211,7 @@ TRACE_EVENT(binder_transaction_ref_to_ref,
|
|||
),
|
||||
TP_fast_assign(
|
||||
__entry->debug_id = t->debug_id;
|
||||
__entry->node_debug_id = src_ref->node->debug_id;
|
||||
__entry->node_debug_id = node->debug_id;
|
||||
__entry->src_ref_debug_id = src_ref->debug_id;
|
||||
__entry->src_ref_desc = src_ref->desc;
|
||||
__entry->dest_ref_debug_id = dest_ref->debug_id;
|
||||
|
@ -268,9 +271,9 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release,
|
|||
TP_ARGS(buffer));
|
||||
|
||||
TRACE_EVENT(binder_update_page_range,
|
||||
TP_PROTO(struct binder_proc *proc, bool allocate,
|
||||
TP_PROTO(struct binder_alloc *alloc, bool allocate,
|
||||
void *start, void *end),
|
||||
TP_ARGS(proc, allocate, start, end),
|
||||
TP_ARGS(alloc, allocate, start, end),
|
||||
TP_STRUCT__entry(
|
||||
__field(int, proc)
|
||||
__field(bool, allocate)
|
||||
|
@ -278,9 +281,9 @@ TRACE_EVENT(binder_update_page_range,
|
|||
__field(size_t, size)
|
||||
),
|
||||
TP_fast_assign(
|
||||
__entry->proc = proc->pid;
|
||||
__entry->proc = alloc->pid;
|
||||
__entry->allocate = allocate;
|
||||
__entry->offset = start - proc->buffer;
|
||||
__entry->offset = start - alloc->buffer;
|
||||
__entry->size = end - start;
|
||||
),
|
||||
TP_printk("proc=%d allocate=%d offset=%zu size=%zu",
|
||||
|
@ -288,6 +291,61 @@ TRACE_EVENT(binder_update_page_range,
|
|||
__entry->offset, __entry->size)
|
||||
);
|
||||
|
||||
DECLARE_EVENT_CLASS(binder_lru_page_class,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index),
|
||||
TP_STRUCT__entry(
|
||||
__field(int, proc)
|
||||
__field(size_t, page_index)
|
||||
),
|
||||
TP_fast_assign(
|
||||
__entry->proc = alloc->pid;
|
||||
__entry->page_index = page_index;
|
||||
),
|
||||
TP_printk("proc=%d page_index=%zu",
|
||||
__entry->proc, __entry->page_index)
|
||||
);
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_alloc_lru_start,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_alloc_lru_end,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_free_lru_start,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_free_lru_end,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_alloc_page_start,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_alloc_page_end,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_unmap_user_start,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_unmap_user_end,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_unmap_kernel_start,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
DEFINE_EVENT(binder_lru_page_class, binder_unmap_kernel_end,
|
||||
TP_PROTO(const struct binder_alloc *alloc, size_t page_index),
|
||||
TP_ARGS(alloc, page_index));
|
||||
|
||||
TRACE_EVENT(binder_command,
|
||||
TP_PROTO(uint32_t cmd),
|
||||
TP_ARGS(cmd),
|
||||
|
|
|
@ -877,21 +877,21 @@ static void lcd_clear_fast_tilcd(struct charlcd *charlcd)
|
|||
spin_unlock_irq(&pprt_lock);
|
||||
}
|
||||
|
||||
static struct charlcd_ops charlcd_serial_ops = {
|
||||
static const struct charlcd_ops charlcd_serial_ops = {
|
||||
.write_cmd = lcd_write_cmd_s,
|
||||
.write_data = lcd_write_data_s,
|
||||
.clear_fast = lcd_clear_fast_s,
|
||||
.backlight = lcd_backlight,
|
||||
};
|
||||
|
||||
static struct charlcd_ops charlcd_parallel_ops = {
|
||||
static const struct charlcd_ops charlcd_parallel_ops = {
|
||||
.write_cmd = lcd_write_cmd_p8,
|
||||
.write_data = lcd_write_data_p8,
|
||||
.clear_fast = lcd_clear_fast_p8,
|
||||
.backlight = lcd_backlight,
|
||||
};
|
||||
|
||||
static struct charlcd_ops charlcd_tilcd_ops = {
|
||||
static const struct charlcd_ops charlcd_tilcd_ops = {
|
||||
.write_cmd = lcd_write_cmd_tilcd,
|
||||
.write_data = lcd_write_data_tilcd,
|
||||
.clear_fast = lcd_clear_fast_tilcd,
|
||||
|
|
|
@ -67,7 +67,7 @@ static char *applicom_pci_devnames[] = {
|
|||
"PCI2000PFB"
|
||||
};
|
||||
|
||||
static struct pci_device_id applicom_pci_tbl[] = {
|
||||
static const struct pci_device_id applicom_pci_tbl[] = {
|
||||
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCIGENERIC) },
|
||||
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCI2000IBS_CAN) },
|
||||
{ PCI_VDEVICE(APPLICOM, PCI_DEVICE_ID_APPLICOM_PCI2000PFB) },
|
||||
|
|
|
@ -128,10 +128,11 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
|
|||
{
|
||||
int bRC = -EIO;
|
||||
unsigned short usAX, usBX, usCX, usDX, usDI, usSI;
|
||||
unsigned short ausDspBases[] = { 0x0030, 0x4E30, 0x8E30, 0xCE30, 0x0130, 0x0350, 0x0070, 0x0DB0 };
|
||||
unsigned short ausUartBases[] = { 0x03F8, 0x02F8, 0x03E8, 0x02E8 };
|
||||
unsigned short numDspBases = 8;
|
||||
unsigned short numUartBases = 4;
|
||||
static const unsigned short ausDspBases[] = {
|
||||
0x0030, 0x4E30, 0x8E30, 0xCE30,
|
||||
0x0130, 0x0350, 0x0070, 0x0DB0 };
|
||||
static const unsigned short ausUartBases[] = {
|
||||
0x03F8, 0x02F8, 0x03E8, 0x02E8 };
|
||||
|
||||
PRINTK_1(TRACE_SMAPI, "smapi::smapi_query_DSP_cfg entry\n");
|
||||
|
||||
|
@ -148,7 +149,7 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
|
|||
pSettings->bDSPEnabled = ((usCX & 0x0001) != 0);
|
||||
pSettings->usDspIRQ = usSI & 0x00FF;
|
||||
pSettings->usDspDMA = (usSI & 0xFF00) >> 8;
|
||||
if ((usDI & 0x00FF) < numDspBases) {
|
||||
if ((usDI & 0x00FF) < ARRAY_SIZE(ausDspBases)) {
|
||||
pSettings->usDspBaseIO = ausDspBases[usDI & 0x00FF];
|
||||
} else {
|
||||
pSettings->usDspBaseIO = 0;
|
||||
|
@ -176,7 +177,7 @@ int smapi_query_DSP_cfg(SMAPI_DSP_SETTINGS * pSettings)
|
|||
|
||||
pSettings->bModemEnabled = ((usCX & 0x0001) != 0);
|
||||
pSettings->usUartIRQ = usSI & 0x000F;
|
||||
if (((usSI & 0xFF00) >> 8) < numUartBases) {
|
||||
if (((usSI & 0xFF00) >> 8) < ARRAY_SIZE(ausUartBases)) {
|
||||
pSettings->usUartBaseIO = ausUartBases[(usSI & 0xFF00) >> 8];
|
||||
} else {
|
||||
pSettings->usUartBaseIO = 0;
|
||||
|
@ -205,15 +206,16 @@ int smapi_set_DSP_cfg(void)
|
|||
int bRC = -EIO;
|
||||
int i;
|
||||
unsigned short usAX, usBX, usCX, usDX, usDI, usSI;
|
||||
unsigned short ausDspBases[] = { 0x0030, 0x4E30, 0x8E30, 0xCE30, 0x0130, 0x0350, 0x0070, 0x0DB0 };
|
||||
unsigned short ausUartBases[] = { 0x03F8, 0x02F8, 0x03E8, 0x02E8 };
|
||||
unsigned short ausDspIrqs[] = { 5, 7, 10, 11, 15 };
|
||||
unsigned short ausUartIrqs[] = { 3, 4 };
|
||||
static const unsigned short ausDspBases[] = {
|
||||
0x0030, 0x4E30, 0x8E30, 0xCE30,
|
||||
0x0130, 0x0350, 0x0070, 0x0DB0 };
|
||||
static const unsigned short ausUartBases[] = {
|
||||
0x03F8, 0x02F8, 0x03E8, 0x02E8 };
|
||||
static const unsigned short ausDspIrqs[] = {
|
||||
5, 7, 10, 11, 15 };
|
||||
static const unsigned short ausUartIrqs[] = {
|
||||
3, 4 };
|
||||
|
||||
unsigned short numDspBases = 8;
|
||||
unsigned short numUartBases = 4;
|
||||
unsigned short numDspIrqs = 5;
|
||||
unsigned short numUartIrqs = 2;
|
||||
unsigned short dspio_index = 0, uartio_index = 0;
|
||||
|
||||
PRINTK_5(TRACE_SMAPI,
|
||||
|
@ -221,11 +223,11 @@ int smapi_set_DSP_cfg(void)
|
|||
mwave_3780i_irq, mwave_3780i_io, mwave_uart_irq, mwave_uart_io);
|
||||
|
||||
if (mwave_3780i_io) {
|
||||
for (i = 0; i < numDspBases; i++) {
|
||||
for (i = 0; i < ARRAY_SIZE(ausDspBases); i++) {
|
||||
if (mwave_3780i_io == ausDspBases[i])
|
||||
break;
|
||||
}
|
||||
if (i == numDspBases) {
|
||||
if (i == ARRAY_SIZE(ausDspBases)) {
|
||||
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_3780i_io address %x. Aborting.\n", mwave_3780i_io);
|
||||
return bRC;
|
||||
}
|
||||
|
@ -233,22 +235,22 @@ int smapi_set_DSP_cfg(void)
|
|||
}
|
||||
|
||||
if (mwave_3780i_irq) {
|
||||
for (i = 0; i < numDspIrqs; i++) {
|
||||
for (i = 0; i < ARRAY_SIZE(ausDspIrqs); i++) {
|
||||
if (mwave_3780i_irq == ausDspIrqs[i])
|
||||
break;
|
||||
}
|
||||
if (i == numDspIrqs) {
|
||||
if (i == ARRAY_SIZE(ausDspIrqs)) {
|
||||
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_3780i_irq %x. Aborting.\n", mwave_3780i_irq);
|
||||
return bRC;
|
||||
}
|
||||
}
|
||||
|
||||
if (mwave_uart_io) {
|
||||
for (i = 0; i < numUartBases; i++) {
|
||||
for (i = 0; i < ARRAY_SIZE(ausUartBases); i++) {
|
||||
if (mwave_uart_io == ausUartBases[i])
|
||||
break;
|
||||
}
|
||||
if (i == numUartBases) {
|
||||
if (i == ARRAY_SIZE(ausUartBases)) {
|
||||
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_uart_io address %x. Aborting.\n", mwave_uart_io);
|
||||
return bRC;
|
||||
}
|
||||
|
@ -257,11 +259,11 @@ int smapi_set_DSP_cfg(void)
|
|||
|
||||
|
||||
if (mwave_uart_irq) {
|
||||
for (i = 0; i < numUartIrqs; i++) {
|
||||
for (i = 0; i < ARRAY_SIZE(ausUartIrqs); i++) {
|
||||
if (mwave_uart_irq == ausUartIrqs[i])
|
||||
break;
|
||||
}
|
||||
if (i == numUartIrqs) {
|
||||
if (i == ARRAY_SIZE(ausUartIrqs)) {
|
||||
PRINTK_ERROR(KERN_ERR_MWAVE "smapi::smapi_set_DSP_cfg: Error: Invalid mwave_uart_irq %x. Aborting.\n", mwave_uart_irq);
|
||||
return bRC;
|
||||
}
|
||||
|
|
|
@ -101,9 +101,6 @@ static DEFINE_IDA(ida_index);
|
|||
#define PP_BUFFER_SIZE 1024
|
||||
#define PARDEVICE_MAX 8
|
||||
|
||||
/* ROUND_UP macro from fs/select.c */
|
||||
#define ROUND_UP(x,y) (((x)+(y)-1)/(y))
|
||||
|
||||
static DEFINE_MUTEX(pp_do_mutex);
|
||||
|
||||
/* define fixed sized ioctl cmd for y2038 migration */
|
||||
|
|
|
@ -766,7 +766,7 @@ static struct attribute *tlclk_sysfs_entries[] = {
|
|||
NULL
|
||||
};
|
||||
|
||||
static struct attribute_group tlclk_attribute_group = {
|
||||
static const struct attribute_group tlclk_attribute_group = {
|
||||
.name = NULL, /* put in device directory */
|
||||
.attrs = tlclk_sysfs_entries,
|
||||
};
|
||||
|
|
|
@ -1308,7 +1308,7 @@ static struct attribute *port_sysfs_entries[] = {
|
|||
NULL
|
||||
};
|
||||
|
||||
static struct attribute_group port_attribute_group = {
|
||||
static const struct attribute_group port_attribute_group = {
|
||||
.name = NULL, /* put in device directory */
|
||||
.attrs = port_sysfs_entries,
|
||||
};
|
||||
|
|
|
@ -86,8 +86,7 @@
|
|||
#include <linux/cdev.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include <asm/io.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
|
@ -222,6 +221,8 @@ static const struct config_registers v6_config_registers = {
|
|||
* hwicap_command_desync - Send a DESYNC command to the ICAP port.
|
||||
* @drvdata: a pointer to the drvdata.
|
||||
*
|
||||
* Returns: '0' on success and failure value on error
|
||||
*
|
||||
* This command desynchronizes the ICAP After this command, a
|
||||
* bitstream containing a NULL packet, followed by a SYNCH packet is
|
||||
* required before the ICAP will recognize commands.
|
||||
|
@ -251,10 +252,12 @@ static int hwicap_command_desync(struct hwicap_drvdata *drvdata)
|
|||
* hwicap_get_configuration_register - Query a configuration register.
|
||||
* @drvdata: a pointer to the drvdata.
|
||||
* @reg: a constant which represents the configuration
|
||||
* register value to be returned.
|
||||
* Examples: XHI_IDCODE, XHI_FLR.
|
||||
* register value to be returned.
|
||||
* Examples: XHI_IDCODE, XHI_FLR.
|
||||
* @reg_data: returns the value of the register.
|
||||
*
|
||||
* Returns: '0' on success and failure value on error
|
||||
*
|
||||
* Sends a query packet to the ICAP and then receives the response.
|
||||
* The icap is left in Synched state.
|
||||
*/
|
||||
|
@ -320,7 +323,8 @@ static int hwicap_initialize_hwicap(struct hwicap_drvdata *drvdata)
|
|||
dev_dbg(drvdata->dev, "initializing\n");
|
||||
|
||||
/* Abort any current transaction, to make sure we have the
|
||||
* ICAP in a good state. */
|
||||
* ICAP in a good state.
|
||||
*/
|
||||
dev_dbg(drvdata->dev, "Reset...\n");
|
||||
drvdata->config->reset(drvdata);
|
||||
|
||||
|
@ -632,7 +636,6 @@ static int hwicap_setup(struct device *dev, int id,
|
|||
|
||||
drvdata = kzalloc(sizeof(struct hwicap_drvdata), GFP_KERNEL);
|
||||
if (!drvdata) {
|
||||
dev_err(dev, "Couldn't allocate device private record\n");
|
||||
retval = -ENOMEM;
|
||||
goto failed0;
|
||||
}
|
||||
|
@ -759,20 +762,20 @@ static int hwicap_of_probe(struct platform_device *op,
|
|||
id = of_get_property(op->dev.of_node, "port-number", NULL);
|
||||
|
||||
/* It's most likely that we're using V4, if the family is not
|
||||
specified */
|
||||
* specified
|
||||
*/
|
||||
regs = &v4_config_registers;
|
||||
family = of_get_property(op->dev.of_node, "xlnx,family", NULL);
|
||||
|
||||
if (family) {
|
||||
if (!strcmp(family, "virtex2p")) {
|
||||
if (!strcmp(family, "virtex2p"))
|
||||
regs = &v2_config_registers;
|
||||
} else if (!strcmp(family, "virtex4")) {
|
||||
else if (!strcmp(family, "virtex4"))
|
||||
regs = &v4_config_registers;
|
||||
} else if (!strcmp(family, "virtex5")) {
|
||||
else if (!strcmp(family, "virtex5"))
|
||||
regs = &v5_config_registers;
|
||||
} else if (!strcmp(family, "virtex6")) {
|
||||
else if (!strcmp(family, "virtex6"))
|
||||
regs = &v6_config_registers;
|
||||
}
|
||||
}
|
||||
return hwicap_setup(&op->dev, id ? *id : -1, &res, config,
|
||||
regs);
|
||||
|
@ -802,20 +805,20 @@ static int hwicap_drv_probe(struct platform_device *pdev)
|
|||
return -ENODEV;
|
||||
|
||||
/* It's most likely that we're using V4, if the family is not
|
||||
specified */
|
||||
* specified
|
||||
*/
|
||||
regs = &v4_config_registers;
|
||||
family = pdev->dev.platform_data;
|
||||
|
||||
if (family) {
|
||||
if (!strcmp(family, "virtex2p")) {
|
||||
if (!strcmp(family, "virtex2p"))
|
||||
regs = &v2_config_registers;
|
||||
} else if (!strcmp(family, "virtex4")) {
|
||||
else if (!strcmp(family, "virtex4"))
|
||||
regs = &v4_config_registers;
|
||||
} else if (!strcmp(family, "virtex5")) {
|
||||
else if (!strcmp(family, "virtex5"))
|
||||
regs = &v5_config_registers;
|
||||
} else if (!strcmp(family, "virtex6")) {
|
||||
else if (!strcmp(family, "virtex6"))
|
||||
regs = &v6_config_registers;
|
||||
}
|
||||
}
|
||||
|
||||
return hwicap_setup(&pdev->dev, pdev->id, res,
|
||||
|
|
|
@ -62,11 +62,13 @@ struct hwicap_drvdata {
|
|||
|
||||
struct hwicap_driver_config {
|
||||
/* Read configuration data given by size into the data buffer.
|
||||
Return 0 if successful. */
|
||||
* Return 0 if successful.
|
||||
*/
|
||||
int (*get_configuration)(struct hwicap_drvdata *drvdata, u32 *data,
|
||||
u32 size);
|
||||
/* Write configuration data given by size from the data buffer.
|
||||
Return 0 if successful. */
|
||||
* Return 0 if successful.
|
||||
*/
|
||||
int (*set_configuration)(struct hwicap_drvdata *drvdata, u32 *data,
|
||||
u32 size);
|
||||
/* Get the status register, bit pattern given by:
|
||||
|
@ -193,11 +195,12 @@ struct config_registers {
|
|||
* hwicap_type_1_read - Generates a Type 1 read packet header.
|
||||
* @reg: is the address of the register to be read back.
|
||||
*
|
||||
* Return:
|
||||
* Generates a Type 1 read packet header, which is used to indirectly
|
||||
* read registers in the configuration logic. This packet must then
|
||||
* be sent through the icap device, and a return packet received with
|
||||
* the information.
|
||||
**/
|
||||
*/
|
||||
static inline u32 hwicap_type_1_read(u32 reg)
|
||||
{
|
||||
return (XHI_TYPE_1 << XHI_TYPE_SHIFT) |
|
||||
|
@ -208,7 +211,9 @@ static inline u32 hwicap_type_1_read(u32 reg)
|
|||
/**
|
||||
* hwicap_type_1_write - Generates a Type 1 write packet header
|
||||
* @reg: is the address of the register to be read back.
|
||||
**/
|
||||
*
|
||||
* Return: Type 1 write packet header
|
||||
*/
|
||||
static inline u32 hwicap_type_1_write(u32 reg)
|
||||
{
|
||||
return (XHI_TYPE_1 << XHI_TYPE_SHIFT) |
|
||||
|
|
|
@@ -150,4 +150,11 @@ config EXTCON_USB_GPIO
          Say Y here to enable GPIO based USB cable detection extcon support.
          Used typically if GPIO is used for USB ID pin detection.

config EXTCON_USBC_CROS_EC
        tristate "ChromeOS Embedded Controller EXTCON support"
        depends on MFD_CROS_EC
        help
          Say Y here to enable USB Type C cable detection extcon support when
          using Chrome OS EC based USB Type-C ports.

endif
@@ -20,3 +20,4 @@ obj-$(CONFIG_EXTCON_QCOM_SPMI_MISC)     += extcon-qcom-spmi-misc.o
obj-$(CONFIG_EXTCON_RT8973A)            += extcon-rt8973a.o
obj-$(CONFIG_EXTCON_SM5502)             += extcon-sm5502.o
obj-$(CONFIG_EXTCON_USB_GPIO)           += extcon-usb-gpio.o
obj-$(CONFIG_EXTCON_USBC_CROS_EC)       += extcon-usbc-cros-ec.o
@@ -1,5 +1,5 @@
/*
 * drivers/extcon/devres.c - EXTCON device's resource management
 * drivers/extcon/devres.c - EXTCON device's resource management
 *
 * Copyright (C) 2016 Samsung Electronics
 * Author: Chanwoo Choi <cw00.choi@samsung.com>

@@ -59,10 +59,9 @@ static void devm_extcon_dev_notifier_all_unreg(struct device *dev, void *res)

/**
 * devm_extcon_dev_allocate - Allocate managed extcon device
 * @dev: device owning the extcon device being created
 * @supported_cable: Array of supported extcon ending with EXTCON_NONE.
 *                   If supported_cable is NULL, cable name related APIs
 *                   are disabled.
 * @dev: the device owning the extcon device being created
 * @supported_cable: the array of the supported external connectors
 *                   ending with EXTCON_NONE.
 *
 * This function manages automatically the memory of extcon device using device
 * resource management and simplify the control of freeing the memory of extcon

@@ -97,8 +96,8 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_allocate);

/**
 * devm_extcon_dev_free() - Resource-managed extcon_dev_unregister()
 * @dev: device the extcon belongs to
 * @edev: the extcon device to unregister
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device to be freed
 *
 * Free the memory that is allocated with devm_extcon_dev_allocate()
 * function.

@@ -112,10 +111,9 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_free);

/**
 * devm_extcon_dev_register() - Resource-managed extcon_dev_register()
 * @dev: device to allocate extcon device
 * @edev: the new extcon device to register
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device to be registered
 *
 * Managed extcon_dev_register() function. If extcon device is attached with
 * this function, that extcon device is automatically unregistered on driver
 * detach. Internally this function calls extcon_dev_register() function.
 * To get more information, refer that function.

@@ -149,8 +147,8 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_register);

/**
 * devm_extcon_dev_unregister() - Resource-managed extcon_dev_unregister()
 * @dev: device the extcon belongs to
 * @edev: the extcon device to unregister
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device to unregistered
 *
 * Unregister extcon device that is registered with devm_extcon_dev_register()
 * function.

@@ -164,10 +162,10 @@ EXPORT_SYMBOL_GPL(devm_extcon_dev_unregister);

/**
 * devm_extcon_register_notifier() - Resource-managed extcon_register_notifier()
 * @dev: device to allocate extcon device
 * @edev: the extcon device that has the external connecotr.
 * @id: the unique id of each external connector in extcon enumeration.
 * @nb: a notifier block to be registered.
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device
 * @id: the unique id among the extcon enumeration
 * @nb: a notifier block to be registered
 *
 * This function manages automatically the notifier of extcon device using
 * device resource management and simplify the control of unregistering

@@ -208,10 +206,10 @@ EXPORT_SYMBOL(devm_extcon_register_notifier);
/**
 * devm_extcon_unregister_notifier()
 *			- Resource-managed extcon_unregister_notifier()
 * @dev: device to allocate extcon device
 * @edev: the extcon device that has the external connecotr.
 * @id: the unique id of each external connector in extcon enumeration.
 * @nb: a notifier block to be registered.
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device
 * @id: the unique id among the extcon enumeration
 * @nb: a notifier block to be registered
 */
void devm_extcon_unregister_notifier(struct device *dev,
				struct extcon_dev *edev, unsigned int id,

@@ -225,9 +223,9 @@ EXPORT_SYMBOL(devm_extcon_unregister_notifier);
/**
 * devm_extcon_register_notifier_all()
 *		- Resource-managed extcon_register_notifier_all()
 * @dev: device to allocate extcon device
 * @edev: the extcon device that has the external connecotr.
 * @nb: a notifier block to be registered.
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device
 * @nb: a notifier block to be registered
 *
 * This function manages automatically the notifier of extcon device using
 * device resource management and simplify the control of unregistering

@@ -263,9 +261,9 @@ EXPORT_SYMBOL(devm_extcon_register_notifier_all);
/**
 * devm_extcon_unregister_notifier_all()
 *		- Resource-managed extcon_unregister_notifier_all()
 * @dev: device to allocate extcon device
 * @edev: the extcon device that has the external connecotr.
 * @nb: a notifier block to be registered.
 * @dev: the device owning the extcon device being created
 * @edev: the extcon device
 * @nb: a notifier block to be registered
 */
void devm_extcon_unregister_notifier_all(struct device *dev,
					struct extcon_dev *edev,

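The devm_* kernel-doc updates above describe the provider-side extcon API. As a rough illustration only (the foo_* names are hypothetical and not part of this merge), a provider built on these helpers looks something like:

	#include <linux/err.h>
	#include <linux/extcon.h>
	#include <linux/platform_device.h>

	/* Hypothetical provider sketch; error handling trimmed. */
	static const unsigned int foo_cables[] = {
		EXTCON_USB,
		EXTCON_NONE,
	};

	static int foo_probe(struct platform_device *pdev)
	{
		struct extcon_dev *edev;
		int ret;

		edev = devm_extcon_dev_allocate(&pdev->dev, foo_cables);
		if (IS_ERR(edev))
			return PTR_ERR(edev);

		ret = devm_extcon_dev_register(&pdev->dev, edev);
		if (ret)
			return ret;

		/* Report attach/detach later with a notification. */
		extcon_set_state_sync(edev, EXTCON_USB, true);
		return 0;
	}

Both the allocation and the registration are device-managed, so no explicit cleanup is needed on driver detach.
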
@@ -171,7 +171,7 @@ static int int3496_remove(struct platform_device *pdev)
	return 0;
}

static struct acpi_device_id int3496_acpi_match[] = {
static const struct acpi_device_id int3496_acpi_match[] = {
	{ "INT3496" },
	{ }
};

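The one-line change above only marks the ACPI match table const; the table is never written at runtime, so it can live in read-only data. The pattern, with a hypothetical device ID, is simply:

	#include <linux/mod_devicetable.h>
	#include <linux/module.h>

	/* Hypothetical ID; shown only to illustrate the constified table. */
	static const struct acpi_device_id foo_acpi_match[] = {
		{ "ABCD0001" },
		{ }
	};
	MODULE_DEVICE_TABLE(acpi, foo_acpi_match);
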
@@ -811,9 +811,8 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
		 */
		extcon_set_state_sync(info->edev, EXTCON_CHG_USB_DCP,
					attached);
		if (!cable_attached)
			extcon_set_state_sync(info->edev,
					EXTCON_DISP_MHL, cable_attached);
		extcon_set_state_sync(info->edev, EXTCON_DISP_MHL,
					cable_attached);
		break;
	}

@ -0,0 +1,417 @@
|
|||
/**
|
||||
* drivers/extcon/extcon-usbc-cros-ec - ChromeOS Embedded Controller extcon
|
||||
*
|
||||
* Copyright (C) 2017 Google, Inc
|
||||
* Author: Benson Leung <bleung@chromium.org>
|
||||
*
|
||||
* This software is licensed under the terms of the GNU General Public
|
||||
* License version 2, as published by the Free Software Foundation, and
|
||||
* may be copied, distributed, and modified under those terms.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*/
|
||||
|
||||
#include <linux/extcon.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/cros_ec.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/notifier.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/sched.h>
|
||||
|
||||
struct cros_ec_extcon_info {
|
||||
struct device *dev;
|
||||
struct extcon_dev *edev;
|
||||
|
||||
int port_id;
|
||||
|
||||
struct cros_ec_device *ec;
|
||||
|
||||
struct notifier_block notifier;
|
||||
|
||||
bool dp; /* DisplayPort enabled */
|
||||
bool mux; /* SuperSpeed (usb3) enabled */
|
||||
unsigned int power_type;
|
||||
};
|
||||
|
||||
static const unsigned int usb_type_c_cable[] = {
|
||||
EXTCON_DISP_DP,
|
||||
EXTCON_NONE,
|
||||
};
|
||||
|
||||
/**
|
||||
* cros_ec_pd_command() - Send a command to the EC.
|
||||
* @info: pointer to struct cros_ec_extcon_info
|
||||
* @command: EC command
|
||||
* @version: EC command version
|
||||
* @outdata: EC command output data
|
||||
* @outsize: Size of outdata
|
||||
* @indata: EC command input data
|
||||
* @insize: Size of indata
|
||||
*
|
||||
* Return: 0 on success, <0 on failure.
|
||||
*/
|
||||
static int cros_ec_pd_command(struct cros_ec_extcon_info *info,
|
||||
unsigned int command,
|
||||
unsigned int version,
|
||||
void *outdata,
|
||||
unsigned int outsize,
|
||||
void *indata,
|
||||
unsigned int insize)
|
||||
{
|
||||
struct cros_ec_command *msg;
|
||||
int ret;
|
||||
|
||||
msg = kzalloc(sizeof(*msg) + max(outsize, insize), GFP_KERNEL);
|
||||
if (!msg)
|
||||
return -ENOMEM;
|
||||
|
||||
msg->version = version;
|
||||
msg->command = command;
|
||||
msg->outsize = outsize;
|
||||
msg->insize = insize;
|
||||
|
||||
if (outsize)
|
||||
memcpy(msg->data, outdata, outsize);
|
||||
|
||||
ret = cros_ec_cmd_xfer_status(info->ec, msg);
|
||||
if (ret >= 0 && insize)
|
||||
memcpy(indata, msg->data, insize);
|
||||
|
||||
kfree(msg);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* cros_ec_usb_get_power_type() - Get power type info about PD device attached
|
||||
* to given port.
|
||||
* @info: pointer to struct cros_ec_extcon_info
|
||||
*
|
||||
* Return: power type on success, <0 on failure.
|
||||
*/
|
||||
static int cros_ec_usb_get_power_type(struct cros_ec_extcon_info *info)
|
||||
{
|
||||
struct ec_params_usb_pd_power_info req;
|
||||
struct ec_response_usb_pd_power_info resp;
|
||||
int ret;
|
||||
|
||||
req.port = info->port_id;
|
||||
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_POWER_INFO, 0,
|
||||
&req, sizeof(req), &resp, sizeof(resp));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return resp.type;
|
||||
}
|
||||
|
||||
/**
|
||||
* cros_ec_usb_get_pd_mux_state() - Get PD mux state for given port.
|
||||
* @info: pointer to struct cros_ec_extcon_info
|
||||
*
|
||||
* Return: PD mux state on success, <0 on failure.
|
||||
*/
|
||||
static int cros_ec_usb_get_pd_mux_state(struct cros_ec_extcon_info *info)
|
||||
{
|
||||
struct ec_params_usb_pd_mux_info req;
|
||||
struct ec_response_usb_pd_mux_info resp;
|
||||
int ret;
|
||||
|
||||
req.port = info->port_id;
|
||||
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_MUX_INFO, 0,
|
||||
&req, sizeof(req),
|
||||
&resp, sizeof(resp));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return resp.flags;
|
||||
}
|
||||
|
||||
/**
|
||||
* cros_ec_usb_get_role() - Get role info about possible PD device attached to a
|
||||
* given port.
|
||||
* @info: pointer to struct cros_ec_extcon_info
|
||||
* @polarity: pointer to cable polarity (return value)
|
||||
*
|
||||
* Return: role info on success, -ENOTCONN if no cable is connected, <0 on
|
||||
* failure.
|
||||
*/
|
||||
static int cros_ec_usb_get_role(struct cros_ec_extcon_info *info,
|
||||
bool *polarity)
|
||||
{
|
||||
struct ec_params_usb_pd_control pd_control;
|
||||
struct ec_response_usb_pd_control_v1 resp;
|
||||
int ret;
|
||||
|
||||
pd_control.port = info->port_id;
|
||||
pd_control.role = USB_PD_CTRL_ROLE_NO_CHANGE;
|
||||
pd_control.mux = USB_PD_CTRL_MUX_NO_CHANGE;
|
||||
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_CONTROL, 1,
|
||||
&pd_control, sizeof(pd_control),
|
||||
&resp, sizeof(resp));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (!(resp.enabled & PD_CTRL_RESP_ENABLED_CONNECTED))
|
||||
return -ENOTCONN;
|
||||
|
||||
*polarity = resp.polarity;
|
||||
|
||||
return resp.role;
|
||||
}
|
||||
|
||||
/**
|
||||
* cros_ec_pd_get_num_ports() - Get number of EC charge ports.
|
||||
* @info: pointer to struct cros_ec_extcon_info
|
||||
*
|
||||
* Return: number of ports on success, <0 on failure.
|
||||
*/
|
||||
static int cros_ec_pd_get_num_ports(struct cros_ec_extcon_info *info)
|
||||
{
|
||||
struct ec_response_usb_pd_ports resp;
|
||||
int ret;
|
||||
|
||||
ret = cros_ec_pd_command(info, EC_CMD_USB_PD_PORTS,
|
||||
0, NULL, 0, &resp, sizeof(resp));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return resp.num_ports;
|
||||
}
|
||||
|
||||
static int extcon_cros_ec_detect_cable(struct cros_ec_extcon_info *info,
|
||||
bool force)
|
||||
{
|
||||
struct device *dev = info->dev;
|
||||
int role, power_type;
|
||||
bool polarity = false;
|
||||
bool dp = false;
|
||||
bool mux = false;
|
||||
bool hpd = false;
|
||||
|
||||
power_type = cros_ec_usb_get_power_type(info);
|
||||
if (power_type < 0) {
|
||||
dev_err(dev, "failed getting power type err = %d\n",
|
||||
power_type);
|
||||
return power_type;
|
||||
}
|
||||
|
||||
role = cros_ec_usb_get_role(info, &polarity);
|
||||
if (role < 0) {
|
||||
if (role != -ENOTCONN) {
|
||||
dev_err(dev, "failed getting role err = %d\n", role);
|
||||
return role;
|
||||
}
|
||||
} else {
|
||||
int pd_mux_state;
|
||||
|
||||
pd_mux_state = cros_ec_usb_get_pd_mux_state(info);
|
||||
if (pd_mux_state < 0)
|
||||
pd_mux_state = USB_PD_MUX_USB_ENABLED;
|
||||
|
||||
dp = pd_mux_state & USB_PD_MUX_DP_ENABLED;
|
||||
mux = pd_mux_state & USB_PD_MUX_USB_ENABLED;
|
||||
hpd = pd_mux_state & USB_PD_MUX_HPD_IRQ;
|
||||
}
|
||||
|
||||
if (force || info->dp != dp || info->mux != mux ||
|
||||
info->power_type != power_type) {
|
||||
|
||||
info->dp = dp;
|
||||
info->mux = mux;
|
||||
info->power_type = power_type;
|
||||
|
||||
extcon_set_state(info->edev, EXTCON_DISP_DP, dp);
|
||||
|
||||
extcon_set_property(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_USB_TYPEC_POLARITY,
|
||||
(union extcon_property_value)(int)polarity);
|
||||
extcon_set_property(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_USB_SS,
|
||||
(union extcon_property_value)(int)mux);
|
||||
extcon_set_property(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_DISP_HPD,
|
||||
(union extcon_property_value)(int)hpd);
|
||||
|
||||
extcon_sync(info->edev, EXTCON_DISP_DP);
|
||||
|
||||
} else if (hpd) {
|
||||
extcon_set_property(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_DISP_HPD,
|
||||
(union extcon_property_value)(int)hpd);
|
||||
extcon_sync(info->edev, EXTCON_DISP_DP);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int extcon_cros_ec_event(struct notifier_block *nb,
|
||||
unsigned long queued_during_suspend,
|
||||
void *_notify)
|
||||
{
|
||||
struct cros_ec_extcon_info *info;
|
||||
struct cros_ec_device *ec;
|
||||
u32 host_event;
|
||||
|
||||
info = container_of(nb, struct cros_ec_extcon_info, notifier);
|
||||
ec = info->ec;
|
||||
|
||||
host_event = cros_ec_get_host_event(ec);
|
||||
if (host_event & (EC_HOST_EVENT_MASK(EC_HOST_EVENT_PD_MCU) |
|
||||
EC_HOST_EVENT_MASK(EC_HOST_EVENT_USB_MUX))) {
|
||||
extcon_cros_ec_detect_cable(info, false);
|
||||
return NOTIFY_OK;
|
||||
}
|
||||
|
||||
return NOTIFY_DONE;
|
||||
}
|
||||
|
||||
static int extcon_cros_ec_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct cros_ec_extcon_info *info;
|
||||
struct cros_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
|
||||
struct device *dev = &pdev->dev;
|
||||
struct device_node *np = dev->of_node;
|
||||
int numports, ret;
|
||||
|
||||
info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
|
||||
if (!info)
|
||||
return -ENOMEM;
|
||||
|
||||
info->dev = dev;
|
||||
info->ec = ec;
|
||||
|
||||
if (np) {
|
||||
u32 port;
|
||||
|
||||
ret = of_property_read_u32(np, "google,usb-port-id", &port);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "Missing google,usb-port-id property\n");
|
||||
return ret;
|
||||
}
|
||||
info->port_id = port;
|
||||
} else {
|
||||
info->port_id = pdev->id;
|
||||
}
|
||||
|
||||
numports = cros_ec_pd_get_num_ports(info);
|
||||
if (numports < 0) {
|
||||
dev_err(dev, "failed getting number of ports! ret = %d\n",
|
||||
numports);
|
||||
return numports;
|
||||
}
|
||||
|
||||
if (info->port_id >= numports) {
|
||||
dev_err(dev, "This system only supports %d ports\n", numports);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
info->edev = devm_extcon_dev_allocate(dev, usb_type_c_cable);
|
||||
if (IS_ERR(info->edev)) {
|
||||
dev_err(dev, "failed to allocate extcon device\n");
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
ret = devm_extcon_dev_register(dev, info->edev);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to register extcon device\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_USB_TYPEC_POLARITY);
|
||||
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_USB_SS);
|
||||
extcon_set_property_capability(info->edev, EXTCON_DISP_DP,
|
||||
EXTCON_PROP_DISP_HPD);
|
||||
|
||||
platform_set_drvdata(pdev, info);
|
||||
|
||||
/* Get PD events from the EC */
|
||||
info->notifier.notifier_call = extcon_cros_ec_event;
|
||||
ret = blocking_notifier_chain_register(&info->ec->event_notifier,
|
||||
&info->notifier);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to register notifier\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* Perform initial detection */
|
||||
ret = extcon_cros_ec_detect_cable(info, true);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to detect initial cable state\n");
|
||||
goto unregister_notifier;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
unregister_notifier:
|
||||
blocking_notifier_chain_unregister(&info->ec->event_notifier,
|
||||
&info->notifier);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int extcon_cros_ec_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct cros_ec_extcon_info *info = platform_get_drvdata(pdev);
|
||||
|
||||
blocking_notifier_chain_unregister(&info->ec->event_notifier,
|
||||
&info->notifier);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
static int extcon_cros_ec_suspend(struct device *dev)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int extcon_cros_ec_resume(struct device *dev)
|
||||
{
|
||||
int ret;
|
||||
struct cros_ec_extcon_info *info = dev_get_drvdata(dev);
|
||||
|
||||
ret = extcon_cros_ec_detect_cable(info, true);
|
||||
if (ret < 0)
|
||||
dev_err(dev, "failed to detect cable state on resume\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct dev_pm_ops extcon_cros_ec_dev_pm_ops = {
|
||||
SET_SYSTEM_SLEEP_PM_OPS(extcon_cros_ec_suspend, extcon_cros_ec_resume)
|
||||
};
|
||||
|
||||
#define DEV_PM_OPS (&extcon_cros_ec_dev_pm_ops)
|
||||
#else
|
||||
#define DEV_PM_OPS NULL
|
||||
#endif /* CONFIG_PM_SLEEP */
|
||||
|
||||
#ifdef CONFIG_OF
|
||||
static const struct of_device_id extcon_cros_ec_of_match[] = {
|
||||
{ .compatible = "google,extcon-usbc-cros-ec" },
|
||||
{ /* sentinel */ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, extcon_cros_ec_of_match);
|
||||
#endif /* CONFIG_OF */
|
||||
|
||||
static struct platform_driver extcon_cros_ec_driver = {
|
||||
.driver = {
|
||||
.name = "extcon-usbc-cros-ec",
|
||||
.of_match_table = of_match_ptr(extcon_cros_ec_of_match),
|
||||
.pm = DEV_PM_OPS,
|
||||
},
|
||||
.remove = extcon_cros_ec_remove,
|
||||
.probe = extcon_cros_ec_probe,
|
||||
};
|
||||
|
||||
module_platform_driver(extcon_cros_ec_driver);
|
||||
|
||||
MODULE_DESCRIPTION("ChromeOS Embedded Controller extcon driver");
|
||||
MODULE_AUTHOR("Benson Leung <bleung@chromium.org>");
|
||||
MODULE_LICENSE("GPL");
|
|
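For context on how the states and properties set by the new extcon-usbc-cros-ec driver above are consumed: a consumer would typically look up the extcon device through its "extcon" phandle and query it, roughly as in the hedged sketch below (foo_check_dp and the phandle index are illustrative, not taken from this merge):

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/extcon.h>

	static int foo_check_dp(struct device *dev)
	{
		struct extcon_dev *edev;
		union extcon_property_value prop;
		int ret;

		edev = extcon_get_edev_by_phandle(dev, 0);
		if (IS_ERR(edev))
			return PTR_ERR(edev);

		if (extcon_get_state(edev, EXTCON_DISP_DP) > 0) {
			ret = extcon_get_property(edev, EXTCON_DISP_DP,
						  EXTCON_PROP_USB_TYPEC_POLARITY,
						  &prop);
			if (!ret)
				dev_info(dev, "DP attached, polarity %d\n",
					 prop.intval);
		}

		return 0;
	}
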
@ -1,7 +1,5 @@
|
|||
/*
|
||||
* drivers/extcon/extcon.c - External Connector (extcon) framework.
|
||||
*
|
||||
* External connector (extcon) class driver
|
||||
* drivers/extcon/extcon.c - External Connector (extcon) framework.
|
||||
*
|
||||
* Copyright (C) 2015 Samsung Electronics
|
||||
* Author: Chanwoo Choi <cw00.choi@samsung.com>
|
||||
|
@ -37,7 +35,6 @@
|
|||
#include "extcon.h"
|
||||
|
||||
#define SUPPORTED_CABLE_MAX 32
|
||||
#define CABLE_NAME_MAX 30
|
||||
|
||||
struct __extcon_info {
|
||||
unsigned int type;
|
||||
|
@ -200,13 +197,13 @@ struct __extcon_info {
|
|||
};
|
||||
|
||||
/**
|
||||
* struct extcon_cable - An internal data for each cable of extcon device.
|
||||
* @edev: The extcon device
|
||||
* @cable_index: Index of this cable in the edev
|
||||
* @attr_g: Attribute group for the cable
|
||||
* struct extcon_cable - An internal data for an external connector.
|
||||
* @edev: the extcon device
|
||||
* @cable_index: the index of this cable in the edev
|
||||
* @attr_g: the attribute group for the cable
|
||||
* @attr_name: "name" sysfs entry
|
||||
* @attr_state: "state" sysfs entry
|
||||
* @attrs: Array pointing to attr_name and attr_state for attr_g
|
||||
* @attrs: the array pointing to attr_name and attr_state for attr_g
|
||||
*/
|
||||
struct extcon_cable {
|
||||
struct extcon_dev *edev;
|
||||
|
@ -234,15 +231,6 @@ static struct class *extcon_class;
|
|||
static LIST_HEAD(extcon_dev_list);
|
||||
static DEFINE_MUTEX(extcon_dev_list_lock);
|
||||
|
||||
/**
|
||||
* check_mutually_exclusive - Check if new_state violates mutually_exclusive
|
||||
* condition.
|
||||
* @edev: the extcon device
|
||||
* @new_state: new cable attach status for @edev
|
||||
*
|
||||
* Returns 0 if nothing violates. Returns the index + 1 for the first
|
||||
* violated condition.
|
||||
*/
|
||||
static int check_mutually_exclusive(struct extcon_dev *edev, u32 new_state)
|
||||
{
|
||||
int i = 0;
|
||||
|
@ -417,11 +405,13 @@ static ssize_t cable_state_show(struct device *dev,
|
|||
}
|
||||
|
||||
/**
|
||||
* extcon_sync() - Synchronize the states for both the attached/detached
|
||||
* @edev: the extcon device that has the cable.
|
||||
* extcon_sync() - Synchronize the state for an external connector.
|
||||
* @edev: the extcon device
|
||||
*
|
||||
* This function send a notification to synchronize the all states of a
|
||||
* specific external connector
|
||||
* Note that this function send a notification in order to synchronize
|
||||
* the state and property of an external connector.
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_sync(struct extcon_dev *edev, unsigned int id)
|
||||
{
|
||||
|
@ -497,9 +487,11 @@ int extcon_sync(struct extcon_dev *edev, unsigned int id)
|
|||
EXPORT_SYMBOL_GPL(extcon_sync);
|
||||
|
||||
/**
|
||||
* extcon_get_state() - Get the state of a external connector.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector in extcon enumeration.
|
||||
* extcon_get_state() - Get the state of an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_get_state(struct extcon_dev *edev, const unsigned int id)
|
||||
{
|
||||
|
@ -522,20 +514,19 @@ int extcon_get_state(struct extcon_dev *edev, const unsigned int id)
|
|||
EXPORT_SYMBOL_GPL(extcon_get_state);
|
||||
|
||||
/**
|
||||
* extcon_set_state() - Set the state of a external connector.
|
||||
* without a notification.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @state: the new cable status. The default semantics is
|
||||
* true: attached / false: detached.
|
||||
* extcon_set_state() - Set the state of an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @state: the new state of an external connector.
|
||||
* the default semantics is true: attached / false: detached.
|
||||
*
|
||||
* This function only set the state of a external connector without
|
||||
* a notification. To synchronize the data of a external connector,
|
||||
* use extcon_set_state_sync() and extcon_sync().
|
||||
* Note that this function set the state of an external connector without
|
||||
* a notification. To synchronize the state of an external connector,
|
||||
* have to use extcon_set_state_sync() and extcon_sync().
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_set_state(struct extcon_dev *edev, unsigned int id,
|
||||
bool cable_state)
|
||||
int extcon_set_state(struct extcon_dev *edev, unsigned int id, bool state)
|
||||
{
|
||||
unsigned long flags;
|
||||
int index, ret = 0;
|
||||
|
@ -550,11 +541,11 @@ int extcon_set_state(struct extcon_dev *edev, unsigned int id,
|
|||
spin_lock_irqsave(&edev->lock, flags);
|
||||
|
||||
/* Check whether the external connector's state is changed. */
|
||||
if (!is_extcon_changed(edev, index, cable_state))
|
||||
if (!is_extcon_changed(edev, index, state))
|
||||
goto out;
|
||||
|
||||
if (check_mutually_exclusive(edev,
|
||||
(edev->state & ~BIT(index)) | (cable_state & BIT(index)))) {
|
||||
(edev->state & ~BIT(index)) | (state & BIT(index)))) {
|
||||
ret = -EPERM;
|
||||
goto out;
|
||||
}
|
||||
|
@ -563,11 +554,11 @@ int extcon_set_state(struct extcon_dev *edev, unsigned int id,
|
|||
* Initialize the value of extcon property before setting
|
||||
* the detached state for an external connector.
|
||||
*/
|
||||
if (!cable_state)
|
||||
if (!state)
|
||||
init_property(edev, id, index);
|
||||
|
||||
/* Update the state for a external connector. */
|
||||
if (cable_state)
|
||||
/* Update the state for an external connector. */
|
||||
if (state)
|
||||
edev->state |= BIT(index);
|
||||
else
|
||||
edev->state &= ~(BIT(index));
|
||||
|
@ -579,19 +570,18 @@ out:
|
|||
EXPORT_SYMBOL_GPL(extcon_set_state);
|
||||
|
||||
/**
|
||||
* extcon_set_state_sync() - Set the state of a external connector
|
||||
* with a notification.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @state: the new cable status. The default semantics is
|
||||
* true: attached / false: detached.
|
||||
* extcon_set_state_sync() - Set the state of an external connector with sync.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @state: the new state of external connector.
|
||||
* the default semantics is true: attached / false: detached.
|
||||
*
|
||||
* This function set the state of external connector and synchronize the data
|
||||
* by usning a notification.
|
||||
* Note that this function set the state of external connector
|
||||
* and synchronize the state by sending a notification.
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_set_state_sync(struct extcon_dev *edev, unsigned int id,
|
||||
bool cable_state)
|
||||
int extcon_set_state_sync(struct extcon_dev *edev, unsigned int id, bool state)
|
||||
{
|
||||
int ret, index;
|
||||
unsigned long flags;
|
||||
|
@ -602,12 +592,12 @@ int extcon_set_state_sync(struct extcon_dev *edev, unsigned int id,
|
|||
|
||||
/* Check whether the external connector's state is changed. */
|
||||
spin_lock_irqsave(&edev->lock, flags);
|
||||
ret = is_extcon_changed(edev, index, cable_state);
|
||||
ret = is_extcon_changed(edev, index, state);
|
||||
spin_unlock_irqrestore(&edev->lock, flags);
|
||||
if (!ret)
|
||||
return 0;
|
||||
|
||||
ret = extcon_set_state(edev, id, cable_state);
|
||||
ret = extcon_set_state(edev, id, state);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
|
@ -616,19 +606,18 @@ int extcon_set_state_sync(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_set_state_sync);
|
||||
|
||||
/**
|
||||
* extcon_get_property() - Get the property value of a specific cable.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @prop: the property id among enum extcon_property.
|
||||
* @prop_val: the pointer which store the value of property.
|
||||
* extcon_get_property() - Get the property value of an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @prop: the property id indicating an extcon property
|
||||
* @prop_val: the pointer which store the value of extcon property
|
||||
*
|
||||
* When getting the property value of external connector, the external connector
|
||||
* should be attached. If detached state, function just return 0 without
|
||||
* property value. Also, the each property should be included in the list of
|
||||
* supported properties according to the type of external connectors.
|
||||
* Note that when getting the property value of external connector,
|
||||
* the external connector should be attached. If detached state, function
|
||||
* return 0 without property value. Also, the each property should be
|
||||
* included in the list of supported properties according to extcon type.
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_get_property(struct extcon_dev *edev, unsigned int id,
|
||||
unsigned int prop,
|
||||
|
@ -698,17 +687,16 @@ int extcon_get_property(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_get_property);
|
||||
|
||||
/**
|
||||
* extcon_set_property() - Set the property value of a specific cable.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @prop: the property id among enum extcon_property.
|
||||
* @prop_val: the pointer including the new value of property.
|
||||
* extcon_set_property() - Set the property value of an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @prop: the property id indicating an extcon property
|
||||
* @prop_val: the pointer including the new value of extcon property
|
||||
*
|
||||
* The each property should be included in the list of supported properties
|
||||
* according to the type of external connectors.
|
||||
* Note that each property should be included in the list of supported
|
||||
* properties according to the extcon type.
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_set_property(struct extcon_dev *edev, unsigned int id,
|
||||
unsigned int prop,
|
||||
|
@ -766,15 +754,14 @@ int extcon_set_property(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_set_property);
|
||||
|
||||
/**
|
||||
* extcon_set_property_sync() - Set the property value of a specific cable
|
||||
with a notification.
|
||||
* @prop_val: the pointer including the new value of property.
|
||||
* extcon_set_property_sync() - Set property of an external connector with sync.
|
||||
* @prop_val: the pointer including the new value of extcon property
|
||||
*
|
||||
* When setting the property value of external connector, the external connector
|
||||
* should be attached. The each property should be included in the list of
|
||||
* supported properties according to the type of external connectors.
|
||||
* Note that when setting the property value of external connector,
|
||||
* the external connector should be attached. The each property should
|
||||
* be included in the list of supported properties according to extcon type.
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_set_property_sync(struct extcon_dev *edev, unsigned int id,
|
||||
unsigned int prop,
|
||||
|
@ -791,12 +778,11 @@ int extcon_set_property_sync(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_set_property_sync);
|
||||
|
||||
/**
|
||||
* extcon_get_property_capability() - Get the capability of property
|
||||
* of an external connector.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @prop: the property id among enum extcon_property.
|
||||
* extcon_get_property_capability() - Get the capability of the property
|
||||
* for an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @prop: the property id indicating an extcon property
|
||||
*
|
||||
* Returns 1 if the property is available or 0 if not available.
|
||||
*/
|
||||
|
@ -822,18 +808,17 @@ int extcon_get_property_capability(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_get_property_capability);
|
||||
|
||||
/**
|
||||
* extcon_set_property_capability() - Set the capability of a property
|
||||
* of an external connector.
|
||||
* @edev: the extcon device that has the cable.
|
||||
* @id: the unique id of each external connector
|
||||
* in extcon enumeration.
|
||||
* @prop: the property id among enum extcon_property.
|
||||
* extcon_set_property_capability() - Set the capability of the property
|
||||
* for an external connector.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @prop: the property id indicating an extcon property
|
||||
*
|
||||
* This function set the capability of a property for an external connector
|
||||
* to mark the bit in capability bitmap which mean the available state of
|
||||
* a property.
|
||||
* Note that this function set the capability of the property
|
||||
* for an external connector in order to mark the bit in capability
|
||||
* bitmap which mean the available state of the property.
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_set_property_capability(struct extcon_dev *edev, unsigned int id,
|
||||
unsigned int prop)
|
||||
|
@ -881,8 +866,10 @@ int extcon_set_property_capability(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_set_property_capability);
|
||||
|
||||
/**
|
||||
* extcon_get_extcon_dev() - Get the extcon device instance from the name
|
||||
* @extcon_name: The extcon name provided with extcon_dev_register()
|
||||
* extcon_get_extcon_dev() - Get the extcon device instance from the name.
|
||||
* @extcon_name: the extcon name provided with extcon_dev_register()
|
||||
*
|
||||
* Return the pointer of extcon device if success or ERR_PTR(err) if fail.
|
||||
*/
|
||||
struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
|
||||
{
|
||||
|
@ -904,15 +891,17 @@ out:
|
|||
EXPORT_SYMBOL_GPL(extcon_get_extcon_dev);
|
||||
|
||||
/**
|
||||
* extcon_register_notifier() - Register a notifiee to get notified by
|
||||
* any attach status changes from the extcon.
|
||||
* @edev: the extcon device that has the external connecotr.
|
||||
* @id: the unique id of each external connector in extcon enumeration.
|
||||
* @nb: a notifier block to be registered.
|
||||
* extcon_register_notifier() - Register a notifier block to get notified by
|
||||
* any state changes from the extcon.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @nb: a notifier block to be registered
|
||||
*
|
||||
* Note that the second parameter given to the callback of nb (val) is
|
||||
* "old_state", not the current state. The current state can be retrieved
|
||||
* by looking at the third pameter (edev pointer)'s state value.
|
||||
* the current state of an external connector and the third pameter
|
||||
* is the pointer of extcon device.
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
|
||||
struct notifier_block *nb)
|
||||
|
@ -936,10 +925,12 @@ int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_register_notifier);
|
||||
|
||||
/**
|
||||
* extcon_unregister_notifier() - Unregister a notifiee from the extcon device.
|
||||
* @edev: the extcon device that has the external connecotr.
|
||||
* @id: the unique id of each external connector in extcon enumeration.
|
||||
* @nb: a notifier block to be registered.
|
||||
* extcon_unregister_notifier() - Unregister a notifier block from the extcon.
|
||||
* @edev: the extcon device
|
||||
* @id: the unique id indicating an external connector
|
||||
* @nb: a notifier block to be registered
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_unregister_notifier(struct extcon_dev *edev, unsigned int id,
|
||||
struct notifier_block *nb)
|
||||
|
@ -963,16 +954,16 @@ int extcon_unregister_notifier(struct extcon_dev *edev, unsigned int id,
|
|||
EXPORT_SYMBOL_GPL(extcon_unregister_notifier);
|
||||
|
||||
/**
|
||||
* extcon_register_notifier_all() - Register a notifier block for all connectors
|
||||
* @edev: the extcon device that has the external connector.
|
||||
* @nb: a notifier block to be registered.
|
||||
* extcon_register_notifier_all() - Register a notifier block for all connectors.
|
||||
* @edev: the extcon device
|
||||
* @nb: a notifier block to be registered
|
||||
*
|
||||
* This function registers a notifier block in order to receive the state
|
||||
* change of all supported external connectors from extcon device.
|
||||
* Note that this function registers a notifier block in order to receive
|
||||
* the state change of all supported external connectors from extcon device.
|
||||
* And the second parameter given to the callback of nb (val) is
|
||||
* the current state and third parameter is the edev pointer.
|
||||
* the current state and the third pameter is the pointer of extcon device.
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_register_notifier_all(struct extcon_dev *edev,
|
||||
struct notifier_block *nb)
|
||||
|
@ -993,10 +984,10 @@ EXPORT_SYMBOL_GPL(extcon_register_notifier_all);
|
|||
|
||||
/**
|
||||
* extcon_unregister_notifier_all() - Unregister a notifier block from extcon.
|
||||
* @edev: the extcon device that has the external connecotr.
|
||||
* @nb: a notifier block to be registered.
|
||||
* @edev: the extcon device
|
||||
* @nb: a notifier block to be registered
|
||||
*
|
||||
* Returns 0 if success or error number if fail
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_unregister_notifier_all(struct extcon_dev *edev,
|
||||
struct notifier_block *nb)
|
||||
|
@ -1045,15 +1036,14 @@ static void dummy_sysfs_dev_release(struct device *dev)
|
|||
|
||||
/*
|
||||
* extcon_dev_allocate() - Allocate the memory of extcon device.
|
||||
* @supported_cable: Array of supported extcon ending with EXTCON_NONE.
|
||||
* If supported_cable is NULL, cable name related APIs
|
||||
* are disabled.
|
||||
* @supported_cable: the array of the supported external connectors
|
||||
* ending with EXTCON_NONE.
|
||||
*
|
||||
* This function allocates the memory for extcon device without allocating
|
||||
* memory in each extcon provider driver and initialize default setting for
|
||||
* extcon device.
|
||||
* Note that this function allocates the memory for extcon device
|
||||
* and initialize default setting for the extcon device.
|
||||
*
|
||||
* Return the pointer of extcon device if success or ERR_PTR(err) if fail
|
||||
* Returns the pointer memory of allocated extcon_dev if success
|
||||
* or ERR_PTR(err) if fail.
|
||||
*/
|
||||
struct extcon_dev *extcon_dev_allocate(const unsigned int *supported_cable)
|
||||
{
|
||||
|
@ -1074,7 +1064,7 @@ struct extcon_dev *extcon_dev_allocate(const unsigned int *supported_cable)
|
|||
|
||||
/*
|
||||
* extcon_dev_free() - Free the memory of extcon device.
|
||||
* @edev: the extcon device to free
|
||||
* @edev: the extcon device
|
||||
*/
|
||||
void extcon_dev_free(struct extcon_dev *edev)
|
||||
{
|
||||
|
@ -1083,13 +1073,18 @@ void extcon_dev_free(struct extcon_dev *edev)
|
|||
EXPORT_SYMBOL_GPL(extcon_dev_free);
|
||||
|
||||
/**
|
||||
* extcon_dev_register() - Register a new extcon device
|
||||
* @edev : the new extcon device (should be allocated before calling)
|
||||
* extcon_dev_register() - Register an new extcon device
|
||||
* @edev: the extcon device to be registered
|
||||
*
|
||||
* Among the members of edev struct, please set the "user initializing data"
|
||||
* in any case and set the "optional callbacks" if required. However, please
|
||||
* do not set the values of "internal data", which are initialized by
|
||||
* this function.
|
||||
*
|
||||
* Note that before calling this funciton, have to allocate the memory
|
||||
* of an extcon device by using the extcon_dev_allocate(). And the extcon
|
||||
* dev should include the supported_cable information.
|
||||
*
|
||||
* Returns 0 if success or error number if fail.
|
||||
*/
|
||||
int extcon_dev_register(struct extcon_dev *edev)
|
||||
{
|
||||
|
@ -1296,7 +1291,7 @@ EXPORT_SYMBOL_GPL(extcon_dev_register);
|
|||
|
||||
/**
|
||||
* extcon_dev_unregister() - Unregister the extcon device.
|
||||
* @edev: the extcon device instance to be unregistered.
|
||||
* @edev: the extcon device to be unregistered.
|
||||
*
|
||||
* Note that this does not call kfree(edev) because edev was not allocated
|
||||
* by this class.
|
||||
|
@ -1342,11 +1337,11 @@ EXPORT_SYMBOL_GPL(extcon_dev_unregister);
|
|||
|
||||
#ifdef CONFIG_OF
|
||||
/*
|
||||
* extcon_get_edev_by_phandle - Get the extcon device from devicetree
|
||||
* @dev - instance to the given device
|
||||
* @index - index into list of extcon_dev
|
||||
* extcon_get_edev_by_phandle - Get the extcon device from devicetree.
|
||||
* @dev : the instance to the given device
|
||||
* @index : the index into list of extcon_dev
|
||||
*
|
||||
* return the instance of extcon device
|
||||
* Return the pointer of extcon device if success or ERR_PTR(err) if fail.
|
||||
*/
|
||||
struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev, int index)
|
||||
{
|
||||
|
@ -1363,8 +1358,8 @@ struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev, int index)
|
|||
|
||||
node = of_parse_phandle(dev->of_node, "extcon", index);
|
||||
if (!node) {
|
||||
dev_dbg(dev, "failed to get phandle in %s node\n",
|
||||
dev->of_node->full_name);
|
||||
dev_dbg(dev, "failed to get phandle in %pOF node\n",
|
||||
dev->of_node);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
||||
|
@ -1411,8 +1406,6 @@ static void __exit extcon_class_exit(void)
|
|||
module_exit(extcon_class_exit);
|
||||
|
||||
MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
|
||||
MODULE_AUTHOR("Mike Lockwood <lockwood@android.com>");
|
||||
MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
|
||||
MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
|
||||
MODULE_DESCRIPTION("External connector (extcon) class driver");
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_DESCRIPTION("External Connector (extcon) framework");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
|
|
@@ -202,7 +202,7 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
	sec->raw_name = kasprintf(GFP_KERNEL, "%s_raw", name);
	if (!sec->raw_name) {
		err = -ENOMEM;
		goto err_iounmap;
		goto err_memunmap;
	}

	sysfs_bin_attr_init(&sec->bin_attr);

@@ -233,8 +233,8 @@ err_sysfs_remove:
	sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
err_free_raw_name:
	kfree(sec->raw_name);
err_iounmap:
	iounmap(sec->baseaddr);
err_memunmap:
	memunmap(sec->baseaddr);
	return err;
}

@@ -245,7 +245,7 @@ static int vpd_section_destroy(struct vpd_section *sec)
		kobject_put(sec->kobj);
		sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr);
		kfree(sec->raw_name);
		iounmap(sec->baseaddr);
		memunmap(sec->baseaddr);
	}

	return 0;

@@ -262,7 +262,7 @@ static int vpd_sections_init(phys_addr_t physaddr)
		return -ENOMEM;

	memcpy_fromio(&header, temp, sizeof(struct vpd_cbmem));
	iounmap(temp);
	memunmap(temp);

	if (header.magic != VPD_CBMEM_MAGIC)
		return -ENODEV;

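These hunks switch the error and teardown paths from iounmap() to memunmap(); the coreboot VPD region is ordinary RAM-backed memory, so it is presumably mapped with memremap() rather than ioremap(), and the two calls have to be paired. A minimal sketch of that pairing (names hypothetical):

	#include <linux/io.h>

	static void *foo_map(phys_addr_t phys, size_t len)
	{
		/* RAM-backed region: memremap(), not ioremap() */
		return memremap(phys, len, MEMREMAP_WB);
	}

	static void foo_unmap(void *addr)
	{
		/* ...so the release side is memunmap(), not iounmap() */
		memunmap(addr);
	}
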
@@ -6,6 +6,7 @@ fmc-y += fmc-match.o
fmc-y += fmc-sdb.o
fmc-y += fru-parse.o
fmc-y += fmc-dump.o
fmc-y += fmc-debug.o

obj-$(CONFIG_FMC_FAKEDEV) += fmc-fakedev.o
obj-$(CONFIG_FMC_TRIVIAL) += fmc-trivial.o

@@ -129,8 +129,7 @@ static int fc_probe(struct fmc_device *fmc)

	struct fc_instance *fc;

	if (fmc->op->validate)
		index = fmc->op->validate(fmc, &fc_drv);
	index = fmc_validate(fmc, &fc_drv);
	if (index < 0)
		return -EINVAL; /* not our device: invalid */

|
|
|
@ -13,6 +13,9 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/fmc.h>
|
||||
#include <linux/fmc-sdb.h>
|
||||
|
||||
#include "fmc-private.h"
|
||||
|
||||
static int fmc_check_version(unsigned long version, const char *name)
|
||||
{
|
||||
|
@ -118,6 +121,61 @@ static struct bin_attribute fmc_eeprom_attr = {
|
|||
.write = fmc_write_eeprom,
|
||||
};
|
||||
|
||||
int fmc_irq_request(struct fmc_device *fmc, irq_handler_t h,
		    char *name, int flags)
{
	if (fmc->op->irq_request)
		return fmc->op->irq_request(fmc, h, name, flags);
	return -EPERM;
}
EXPORT_SYMBOL(fmc_irq_request);

void fmc_irq_free(struct fmc_device *fmc)
{
	if (fmc->op->irq_free)
		fmc->op->irq_free(fmc);
}
EXPORT_SYMBOL(fmc_irq_free);

void fmc_irq_ack(struct fmc_device *fmc)
{
	if (likely(fmc->op->irq_ack))
		fmc->op->irq_ack(fmc);
}
EXPORT_SYMBOL(fmc_irq_ack);

int fmc_validate(struct fmc_device *fmc, struct fmc_driver *drv)
{
	if (fmc->op->validate)
		return fmc->op->validate(fmc, drv);
	return -EPERM;
}
EXPORT_SYMBOL(fmc_validate);

int fmc_gpio_config(struct fmc_device *fmc, struct fmc_gpio *gpio, int ngpio)
{
	if (fmc->op->gpio_config)
		return fmc->op->gpio_config(fmc, gpio, ngpio);
	return -EPERM;
}
EXPORT_SYMBOL(fmc_gpio_config);

int fmc_read_ee(struct fmc_device *fmc, int pos, void *d, int l)
{
	if (fmc->op->read_ee)
		return fmc->op->read_ee(fmc, pos, d, l);
	return -EPERM;
}
EXPORT_SYMBOL(fmc_read_ee);

int fmc_write_ee(struct fmc_device *fmc, int pos, const void *d, int l)
{
	if (fmc->op->write_ee)
		return fmc->op->write_ee(fmc, pos, d, l);
	return -EPERM;
}
EXPORT_SYMBOL(fmc_write_ee);

/*
 * Functions for client modules follow
 */
|
@ -141,7 +199,8 @@ EXPORT_SYMBOL(fmc_driver_unregister);
|
|||
* When a device set is registered, all eeproms must be read
|
||||
* and all FRUs must be parsed
|
||||
*/
|
||||
int fmc_device_register_n(struct fmc_device **devs, int n)
|
||||
int fmc_device_register_n_gw(struct fmc_device **devs, int n,
|
||||
struct fmc_gateware *gw)
|
||||
{
|
||||
struct fmc_device *fmc, **devarray;
|
||||
uint32_t device_id;
|
||||
|
@ -221,6 +280,21 @@ int fmc_device_register_n(struct fmc_device **devs, int n)
|
|||
else
|
||||
dev_set_name(&fmc->dev, "%s-%04x", fmc->mezzanine_name,
|
||||
device_id);
|
||||
|
||||
if (gw) {
|
||||
/*
|
||||
* The carrier already know the bitstream to load
|
||||
* for this set of FMC mezzanines.
|
||||
*/
|
||||
ret = fmc->op->reprogram_raw(fmc, NULL,
|
||||
gw->bitstream, gw->len);
|
||||
if (ret) {
|
||||
dev_warn(fmc->hwdev,
|
||||
"Invalid gateware for FMC mezzanine\n");
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
ret = device_add(&fmc->dev);
|
||||
if (ret < 0) {
|
||||
dev_err(fmc->hwdev, "Slot %i: Failed in registering "
|
||||
|
@ -234,18 +308,16 @@ int fmc_device_register_n(struct fmc_device **devs, int n)
|
|||
}
|
||||
/* This device went well, give information to the user */
|
||||
fmc_dump_eeprom(fmc);
|
||||
fmc_dump_sdb(fmc);
|
||||
fmc_debug_init(fmc);
|
||||
}
|
||||
return 0;
|
||||
|
||||
out1:
|
||||
device_del(&fmc->dev);
|
||||
out:
|
||||
fmc_free_id_info(fmc);
|
||||
put_device(&fmc->dev);
|
||||
|
||||
kfree(devarray);
|
||||
for (i--; i >= 0; i--) {
|
||||
fmc_debug_exit(devs[i]);
|
||||
sysfs_remove_bin_file(&devs[i]->dev.kobj, &fmc_eeprom_attr);
|
||||
device_del(&devs[i]->dev);
|
||||
fmc_free_id_info(devs[i]);
|
||||
|
@ -254,8 +326,20 @@ out:
|
|||
return ret;
|
||||
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_device_register_n_gw);
|
||||
|
||||
int fmc_device_register_n(struct fmc_device **devs, int n)
|
||||
{
|
||||
return fmc_device_register_n_gw(devs, n, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_device_register_n);
|
||||
|
||||
int fmc_device_register_gw(struct fmc_device *fmc, struct fmc_gateware *gw)
|
||||
{
|
||||
return fmc_device_register_n_gw(&fmc, 1, gw);
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_device_register_gw);
|
||||
|
||||
int fmc_device_register(struct fmc_device *fmc)
|
||||
{
|
||||
return fmc_device_register_n(&fmc, 1);
|
||||
|
@ -273,6 +357,7 @@ void fmc_device_unregister_n(struct fmc_device **devs, int n)
|
|||
kfree(devs[0]->devarray);
|
||||
|
||||
for (i = 0; i < n; i++) {
|
||||
fmc_debug_exit(devs[i]);
|
||||
sysfs_remove_bin_file(&devs[i]->dev.kobj, &fmc_eeprom_attr);
|
||||
device_del(&devs[i]->dev);
|
||||
fmc_free_id_info(devs[i]);
|
||||
|
|
|
@ -0,0 +1,173 @@
|
|||
/*
|
||||
* Copyright (C) 2015 CERN (www.cern.ch)
|
||||
* Author: Federico Vaga <federico.vaga@cern.ch>
|
||||
*
|
||||
* Released according to the GNU GPL, version 2 or any later version.
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/debugfs.h>
|
||||
#include <linux/seq_file.h>
|
||||
#include <asm/byteorder.h>
|
||||
|
||||
#include <linux/fmc.h>
|
||||
#include <linux/sdb.h>
|
||||
#include <linux/fmc-sdb.h>
|
||||
|
||||
#define FMC_DBG_SDB_DUMP "dump_sdb"
|
||||
|
||||
static char *__strip_trailing_space(char *buf, char *str, int len)
|
||||
{
|
||||
int i = len - 1;
|
||||
|
||||
memcpy(buf, str, len);
|
||||
buf[len] = '\0';
|
||||
while (i >= 0 && buf[i] == ' ')
|
||||
buf[i--] = '\0';
|
||||
return buf;
|
||||
}
|
||||
|
||||
#define __sdb_string(buf, field) ({ \
|
||||
BUILD_BUG_ON(sizeof(buf) < sizeof(field)); \
|
||||
__strip_trailing_space(buf, (void *)(field), sizeof(field)); \
|
||||
})
|
||||
|
||||
/**
|
||||
* We do not check seq_printf() errors because we want to see things in any case
|
||||
*/
|
||||
static void fmc_sdb_dump_recursive(struct fmc_device *fmc, struct seq_file *s,
|
||||
const struct sdb_array *arr)
|
||||
{
|
||||
unsigned long base = arr->baseaddr;
|
||||
int i, j, n = arr->len, level = arr->level;
|
||||
char tmp[64];
|
||||
|
||||
for (i = 0; i < n; i++) {
|
||||
union sdb_record *r;
|
||||
struct sdb_product *p;
|
||||
struct sdb_component *c;
|
||||
|
||||
r = &arr->record[i];
|
||||
c = &r->dev.sdb_component;
|
||||
p = &c->product;
|
||||
|
||||
for (j = 0; j < level; j++)
|
||||
seq_printf(s, " ");
|
||||
switch (r->empty.record_type) {
|
||||
case sdb_type_interconnect:
|
||||
seq_printf(s, "%08llx:%08x %.19s\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name);
|
||||
break;
|
||||
case sdb_type_device:
|
||||
seq_printf(s, "%08llx:%08x %.19s (%08llx-%08llx)\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name,
|
||||
__be64_to_cpu(c->addr_first) + base,
|
||||
__be64_to_cpu(c->addr_last) + base);
|
||||
break;
|
||||
case sdb_type_bridge:
|
||||
seq_printf(s, "%08llx:%08x %.19s (bridge: %08llx)\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name,
|
||||
__be64_to_cpu(c->addr_first) + base);
|
||||
if (IS_ERR(arr->subtree[i])) {
|
||||
seq_printf(s, "SDB: (bridge error %li)\n",
|
||||
PTR_ERR(arr->subtree[i]));
|
||||
break;
|
||||
}
|
||||
fmc_sdb_dump_recursive(fmc, s, arr->subtree[i]);
|
||||
break;
|
||||
case sdb_type_integration:
|
||||
seq_printf(s, "integration\n");
|
||||
break;
|
||||
case sdb_type_repo_url:
|
||||
seq_printf(s, "Synthesis repository: %s\n",
|
||||
__sdb_string(tmp, r->repo_url.repo_url));
|
||||
break;
|
||||
case sdb_type_synthesis:
|
||||
seq_printf(s, "Bitstream '%s' ",
|
||||
__sdb_string(tmp, r->synthesis.syn_name));
|
||||
seq_printf(s, "synthesized %08x by %s ",
|
||||
__be32_to_cpu(r->synthesis.date),
|
||||
__sdb_string(tmp, r->synthesis.user_name));
|
||||
seq_printf(s, "(%s version %x), ",
|
||||
__sdb_string(tmp, r->synthesis.tool_name),
|
||||
__be32_to_cpu(r->synthesis.tool_version));
|
||||
seq_printf(s, "commit %pm\n",
|
||||
r->synthesis.commit_id);
|
||||
break;
|
||||
case sdb_type_empty:
|
||||
seq_printf(s, "empty\n");
|
||||
break;
|
||||
default:
|
||||
seq_printf(s, "UNKNOWN TYPE 0x%02x\n",
|
||||
r->empty.record_type);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static int fmc_sdb_dump(struct seq_file *s, void *offset)
|
||||
{
|
||||
struct fmc_device *fmc = s->private;
|
||||
|
||||
if (!fmc->sdb) {
|
||||
seq_printf(s, "no SDB information\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
seq_printf(s, "FMC: %s (%s), slot %i, device %s\n", dev_name(fmc->hwdev),
|
||||
fmc->carrier_name, fmc->slot_id, dev_name(&fmc->dev));
|
||||
/* Dump SDB information */
|
||||
fmc_sdb_dump_recursive(fmc, s, fmc->sdb);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
static int fmc_sdb_dump_open(struct inode *inode, struct file *file)
|
||||
{
|
||||
struct fmc_device *fmc = inode->i_private;
|
||||
|
||||
return single_open(file, fmc_sdb_dump, fmc);
|
||||
}
|
||||
|
||||
|
||||
const struct file_operations fmc_dbgfs_sdb_dump = {
|
||||
.owner = THIS_MODULE,
|
||||
.open = fmc_sdb_dump_open,
|
||||
.read = seq_read,
|
||||
.llseek = seq_lseek,
|
||||
.release = single_release,
|
||||
};
|
||||
|
||||
int fmc_debug_init(struct fmc_device *fmc)
|
||||
{
|
||||
fmc->dbg_dir = debugfs_create_dir(dev_name(&fmc->dev), NULL);
|
||||
if (IS_ERR_OR_NULL(fmc->dbg_dir)) {
|
||||
pr_err("FMC: Cannot create debugfs\n");
|
||||
return PTR_ERR(fmc->dbg_dir);
|
||||
}
|
||||
|
||||
fmc->dbg_sdb_dump = debugfs_create_file(FMC_DBG_SDB_DUMP, 0444,
|
||||
fmc->dbg_dir, fmc,
|
||||
&fmc_dbgfs_sdb_dump);
|
||||
if (IS_ERR_OR_NULL(fmc->dbg_sdb_dump))
|
||||
pr_err("FMC: Cannot create debugfs file %s\n",
|
||||
FMC_DBG_SDB_DUMP);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void fmc_debug_exit(struct fmc_device *fmc)
|
||||
{
|
||||
if (fmc->dbg_dir)
|
||||
debugfs_remove_recursive(fmc->dbg_dir);
|
||||
}
|
|
@@ -15,8 +15,6 @@

static int fmc_must_dump_eeprom;
module_param_named(dump_eeprom, fmc_must_dump_eeprom, int, 0644);
static int fmc_must_dump_sdb;
module_param_named(dump_sdb, fmc_must_dump_sdb, int, 0644);

#define LINELEN 16

@@ -59,42 +57,3 @@ void fmc_dump_eeprom(const struct fmc_device *fmc)
	for (i = 0; i < fmc->eeprom_len; i += LINELEN, line += LINELEN)
		prev = dump_line(i, line, prev);
}

void fmc_dump_sdb(const struct fmc_device *fmc)
{
	const uint8_t *line, *prev;
	int i, len;

	if (!fmc->sdb)
		return;
	if (!fmc_must_dump_sdb)
		return;

	/* If the argument is not-zero, do simple dump (== show) */
	if (fmc_must_dump_sdb > 0)
		fmc_show_sdb_tree(fmc);

	if (fmc_must_dump_sdb == 1)
		return;

	/* If bigger than 1, dump it seriously, to help debugging */

	/*
	 * Here we should really use libsdbfs (which is designed to
	 * work in kernel space as well) , but it doesn't support
	 * directories yet, and it requires better intergration (it
	 * should be used instead of fmc-specific code).
	 *
	 * So, lazily, just dump the top-level array
	 */
	pr_info("FMC: %s (%s), slot %i, device %s\n", dev_name(fmc->hwdev),
		fmc->carrier_name, fmc->slot_id, dev_name(&fmc->dev));
	pr_info("FMC: poor dump of sdb first level:\n");

	len = fmc->sdb->len * sizeof(union sdb_record);
	line = (void *)fmc->sdb->record;
	prev = NULL;
	for (i = 0; i < len; i += LINELEN, line += LINELEN)
		prev = dump_line(i, line, prev);
	return;
}

|
|
|
@@ -63,7 +63,7 @@ int fmc_fill_id_info(struct fmc_device *fmc)
		if (!fmc->eeprom)
			return -ENOMEM;
		allocated = 1;
		ret = fmc->op->read_ee(fmc, 0, fmc->eeprom, fmc->eeprom_len);
		ret = fmc_read_ee(fmc, 0, fmc->eeprom, fmc->eeprom_len);
		if (ret < 0)
			goto out;
	}

||||
|
|
|
@@ -0,0 +1,9 @@
/*
 * Copyright (C) 2015 CERN (www.cern.ch)
 * Author: Federico Vaga <federico.vaga@cern.ch>
 *
 * Released according to the GNU GPL, version 2 or any later version.
 */

extern int fmc_debug_init(struct fmc_device *fmc);
extern void fmc_debug_exit(struct fmc_device *fmc);
|
|
@ -126,6 +126,30 @@ int fmc_free_sdb_tree(struct fmc_device *fmc)
|
|||
}
|
||||
EXPORT_SYMBOL(fmc_free_sdb_tree);
|
||||
|
||||
/* This helper calls reprogram and inizialized sdb as well */
|
||||
int fmc_reprogram_raw(struct fmc_device *fmc, struct fmc_driver *d,
|
||||
void *gw, unsigned long len, int sdb_entry)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = fmc->op->reprogram_raw(fmc, d, gw, len);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (sdb_entry < 0)
|
||||
return ret;
|
||||
|
||||
/* We are required to find SDB at a given offset */
|
||||
ret = fmc_scan_sdb_tree(fmc, sdb_entry);
|
||||
if (ret < 0) {
|
||||
dev_err(&fmc->dev, "Can't find SDB at address 0x%x\n",
|
||||
sdb_entry);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_reprogram_raw);
|
||||
|
||||
/* This helper calls reprogram and inizialized sdb as well */
|
||||
int fmc_reprogram(struct fmc_device *fmc, struct fmc_driver *d, char *gw,
|
||||
int sdb_entry)
|
||||
|
@ -145,108 +169,15 @@ int fmc_reprogram(struct fmc_device *fmc, struct fmc_driver *d, char *gw,
|
|||
sdb_entry);
|
||||
return -ENODEV;
|
||||
}
|
||||
fmc_dump_sdb(fmc);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_reprogram);
|
||||
|
||||
static char *__strip_trailing_space(char *buf, char *str, int len)
|
||||
{
|
||||
int i = len - 1;
|
||||
|
||||
memcpy(buf, str, len);
|
||||
while(i >= 0 && buf[i] == ' ')
|
||||
buf[i--] = '\0';
|
||||
return buf;
|
||||
}
|
||||
|
||||
#define __sdb_string(buf, field) ({ \
|
||||
BUILD_BUG_ON(sizeof(buf) < sizeof(field)); \
|
||||
__strip_trailing_space(buf, (void *)(field), sizeof(field)); \
|
||||
})
|
||||
|
||||
static void __fmc_show_sdb_tree(const struct fmc_device *fmc,
|
||||
const struct sdb_array *arr)
|
||||
{
|
||||
unsigned long base = arr->baseaddr;
|
||||
int i, j, n = arr->len, level = arr->level;
|
||||
char buf[64];
|
||||
|
||||
for (i = 0; i < n; i++) {
|
||||
union sdb_record *r;
|
||||
struct sdb_product *p;
|
||||
struct sdb_component *c;
|
||||
r = &arr->record[i];
|
||||
c = &r->dev.sdb_component;
|
||||
p = &c->product;
|
||||
|
||||
dev_info(&fmc->dev, "SDB: ");
|
||||
|
||||
for (j = 0; j < level; j++)
|
||||
printk(KERN_CONT " ");
|
||||
switch (r->empty.record_type) {
|
||||
case sdb_type_interconnect:
|
||||
printk(KERN_CONT "%08llx:%08x %.19s\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name);
|
||||
break;
|
||||
case sdb_type_device:
|
||||
printk(KERN_CONT "%08llx:%08x %.19s (%08llx-%08llx)\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name,
|
||||
__be64_to_cpu(c->addr_first) + base,
|
||||
__be64_to_cpu(c->addr_last) + base);
|
||||
break;
|
||||
case sdb_type_bridge:
|
||||
printk(KERN_CONT "%08llx:%08x %.19s (bridge: %08llx)\n",
|
||||
__be64_to_cpu(p->vendor_id),
|
||||
__be32_to_cpu(p->device_id),
|
||||
p->name,
|
||||
__be64_to_cpu(c->addr_first) + base);
|
||||
if (IS_ERR(arr->subtree[i])) {
|
||||
dev_info(&fmc->dev, "SDB: (bridge error %li)\n",
|
||||
PTR_ERR(arr->subtree[i]));
|
||||
break;
|
||||
}
|
||||
__fmc_show_sdb_tree(fmc, arr->subtree[i]);
|
||||
break;
|
||||
case sdb_type_integration:
|
||||
printk(KERN_CONT "integration\n");
|
||||
break;
|
||||
case sdb_type_repo_url:
|
||||
printk(KERN_CONT "Synthesis repository: %s\n",
|
||||
__sdb_string(buf, r->repo_url.repo_url));
|
||||
break;
|
||||
case sdb_type_synthesis:
|
||||
printk(KERN_CONT "Bitstream '%s' ",
|
||||
__sdb_string(buf, r->synthesis.syn_name));
|
||||
printk(KERN_CONT "synthesized %08x by %s ",
|
||||
__be32_to_cpu(r->synthesis.date),
|
||||
__sdb_string(buf, r->synthesis.user_name));
|
||||
printk(KERN_CONT "(%s version %x), ",
|
||||
__sdb_string(buf, r->synthesis.tool_name),
|
||||
__be32_to_cpu(r->synthesis.tool_version));
|
||||
printk(KERN_CONT "commit %pm\n",
|
||||
r->synthesis.commit_id);
|
||||
break;
|
||||
case sdb_type_empty:
|
||||
printk(KERN_CONT "empty\n");
|
||||
break;
|
||||
default:
|
||||
printk(KERN_CONT "UNKNOWN TYPE 0x%02x\n",
|
||||
r->empty.record_type);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void fmc_show_sdb_tree(const struct fmc_device *fmc)
|
||||
{
|
||||
if (!fmc->sdb)
|
||||
return;
|
||||
__fmc_show_sdb_tree(fmc, fmc->sdb);
|
||||
pr_err("%s: not supported anymore, use debugfs to dump SDB\n",
|
||||
__func__);
|
||||
}
|
||||
EXPORT_SYMBOL(fmc_show_sdb_tree);
|
||||
|
||||
|
|
|
@@ -24,7 +24,7 @@ static irqreturn_t t_handler(int irq, void *dev_id)
{
	struct fmc_device *fmc = dev_id;

	fmc->op->irq_ack(fmc);
	fmc_irq_ack(fmc);
	dev_info(&fmc->dev, "received irq %i\n", irq);
	return IRQ_HANDLED;
}

@@ -46,25 +46,21 @@ static int t_probe(struct fmc_device *fmc)
	int ret;
	int index = 0;

	if (fmc->op->validate)
		index = fmc->op->validate(fmc, &t_drv);
	index = fmc_validate(fmc, &t_drv);
	if (index < 0)
		return -EINVAL; /* not our device: invalid */

	ret = fmc->op->irq_request(fmc, t_handler, "fmc-trivial", IRQF_SHARED);
	ret = fmc_irq_request(fmc, t_handler, "fmc-trivial", IRQF_SHARED);
	if (ret < 0)
		return ret;
	/* ignore error code of call below, we really don't care */
	fmc->op->gpio_config(fmc, t_gpio, ARRAY_SIZE(t_gpio));
	fmc_gpio_config(fmc, t_gpio, ARRAY_SIZE(t_gpio));

	/* Reprogram, if asked to. ESRCH == no filename specified */
	ret = -ESRCH;
	if (fmc->op->reprogram)
		ret = fmc->op->reprogram(fmc, &t_drv, "");
	if (ret == -ESRCH)
	ret = fmc_reprogram(fmc, &t_drv, "", 0);
	if (ret == -EPERM) /* programming not supported */
		ret = 0;
	if (ret < 0)
		fmc->op->irq_free(fmc);
		fmc_irq_free(fmc);

	/* FIXME: reprogram LM32 too */
	return ret;

@@ -72,7 +68,7 @@ static int t_probe(struct fmc_device *fmc)

static int t_remove(struct fmc_device *fmc)
{
	fmc->op->irq_free(fmc);
	fmc_irq_free(fmc);
	return 0;
}
@@ -50,7 +50,7 @@ static int fwe_run_tlv(struct fmc_device *fmc, const struct firmware *fw,
		if (write) {
			dev_info(&fmc->dev, "write %i bytes at 0x%04x\n",
				 thislen, thisaddr);
			err = fmc->op->write_ee(fmc, thisaddr, p + 5, thislen);
			err = fmc_write_ee(fmc, thisaddr, p + 5, thislen);
		}
		if (err < 0) {
			dev_err(&fmc->dev, "write failure @0x%04x\n",

@@ -70,7 +70,7 @@ static int fwe_run_bin(struct fmc_device *fmc, const struct firmware *fw)
	int ret;

	dev_info(&fmc->dev, "programming %zi bytes\n", fw->size);
	ret = fmc->op->write_ee(fmc, 0, (void *)fw->data, fw->size);
	ret = fmc_write_ee(fmc, 0, (void *)fw->data, fw->size);
	if (ret < 0) {
		dev_info(&fmc->dev, "write_eeprom: error %i\n", ret);
		return ret;

@@ -115,8 +115,8 @@ static int fwe_probe(struct fmc_device *fmc)
			KBUILD_MODNAME);
		return -ENODEV;
	}
	if (fmc->op->validate)
		index = fmc->op->validate(fmc, &fwe_drv);

	index = fmc_validate(fmc, &fwe_drv);
	if (index < 0) {
		pr_err("%s: refusing device \"%s\"\n", KBUILD_MODNAME,
		       dev_name(dev));
@@ -31,12 +31,11 @@ static char *__fru_alloc_get_tl(struct fru_common_header *header, int nr)
{
	struct fru_type_length *tl;
	char *res;
	int len;

	tl = __fru_get_board_tl(header, nr);
	if (!tl)
		return NULL;
	len = fru_strlen(tl);

	res = fru_alloc(fru_strlen(tl) + 1);
	if (!res)
		return NULL;
@@ -2,9 +2,7 @@
# FPGA framework configuration
#

menu "FPGA Configuration Support"

config FPGA
menuconfig FPGA
	tristate "FPGA Configuration Framework"
	help
	  Say Y here if you want support for configuring FPGAs from the

@@ -26,6 +24,20 @@ config FPGA_MGR_ICE40_SPI
	help
	  FPGA manager driver support for Lattice iCE40 FPGAs over SPI.

config FPGA_MGR_ALTERA_CVP
	tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager"
	depends on PCI
	help
	  FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V
	  and Arria 10 Altera FPGAs using the CvP interface over PCIe.

config FPGA_MGR_ALTERA_PS_SPI
	tristate "Altera FPGA Passive Serial over SPI"
	depends on SPI
	help
	  FPGA manager driver support for Altera Arria/Cyclone/Stratix
	  using the passive serial interface over SPI.

config FPGA_MGR_SOCFPGA
	tristate "Altera SOCFPGA FPGA Manager"
	depends on ARCH_SOCFPGA || COMPILE_TEST

@@ -106,5 +118,3 @@ config XILINX_PR_DECOUPLER
	  being reprogrammed during partial reconfig.

endif # FPGA

endmenu
@@ -6,6 +6,8 @@
obj-$(CONFIG_FPGA) += fpga-mgr.o

# FPGA Manager Drivers
obj-$(CONFIG_FPGA_MGR_ALTERA_CVP) += altera-cvp.o
obj-$(CONFIG_FPGA_MGR_ALTERA_PS_SPI) += altera-ps-spi.o
obj-$(CONFIG_FPGA_MGR_ICE40_SPI) += ice40-spi.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o
@ -0,0 +1,500 @@
|
|||
/*
|
||||
* FPGA Manager Driver for Altera Arria/Cyclone/Stratix CvP
|
||||
*
|
||||
* Copyright (C) 2017 DENX Software Engineering
|
||||
*
|
||||
* Anatolij Gustschin <agust@denx.de>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
* the Free Software Foundation; version 2 of the License.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* Manage Altera FPGA firmware using PCIe CvP.
|
||||
* Firmware must be in binary "rbf" format.
|
||||
*/
|
||||
|
||||
#include <linux/delay.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/fpga/fpga-mgr.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/sizes.h>
|
||||
|
||||
#define CVP_BAR 0 /* BAR used for data transfer in memory mode */
|
||||
#define CVP_DUMMY_WR 244 /* dummy writes to clear CvP state machine */
|
||||
#define TIMEOUT_US 2000 /* CVP STATUS timeout for USERMODE polling */
|
||||
|
||||
/* Vendor Specific Extended Capability Registers */
|
||||
#define VSE_PCIE_EXT_CAP_ID 0x200
|
||||
#define VSE_PCIE_EXT_CAP_ID_VAL 0x000b /* 16bit */
|
||||
|
||||
#define VSE_CVP_STATUS 0x21c /* 32bit */
|
||||
#define VSE_CVP_STATUS_CFG_RDY BIT(18) /* CVP_CONFIG_READY */
|
||||
#define VSE_CVP_STATUS_CFG_ERR BIT(19) /* CVP_CONFIG_ERROR */
|
||||
#define VSE_CVP_STATUS_CVP_EN BIT(20) /* ctrl block is enabling CVP */
|
||||
#define VSE_CVP_STATUS_USERMODE BIT(21) /* USERMODE */
|
||||
#define VSE_CVP_STATUS_CFG_DONE BIT(23) /* CVP_CONFIG_DONE */
|
||||
#define VSE_CVP_STATUS_PLD_CLK_IN_USE BIT(24) /* PLD_CLK_IN_USE */
|
||||
|
||||
#define VSE_CVP_MODE_CTRL 0x220 /* 32bit */
|
||||
#define VSE_CVP_MODE_CTRL_CVP_MODE BIT(0) /* CVP (1) or normal mode (0) */
|
||||
#define VSE_CVP_MODE_CTRL_HIP_CLK_SEL BIT(1) /* PMA (1) or fabric clock (0) */
|
||||
#define VSE_CVP_MODE_CTRL_NUMCLKS_OFF 8 /* NUMCLKS bits offset */
|
||||
#define VSE_CVP_MODE_CTRL_NUMCLKS_MASK GENMASK(15, 8)
|
||||
|
||||
#define VSE_CVP_DATA 0x228 /* 32bit */
|
||||
#define VSE_CVP_PROG_CTRL 0x22c /* 32bit */
|
||||
#define VSE_CVP_PROG_CTRL_CONFIG BIT(0)
|
||||
#define VSE_CVP_PROG_CTRL_START_XFER BIT(1)
|
||||
|
||||
#define VSE_UNCOR_ERR_STATUS 0x234 /* 32bit */
|
||||
#define VSE_UNCOR_ERR_CVP_CFG_ERR BIT(5) /* CVP_CONFIG_ERROR_LATCHED */
|
||||
|
||||
#define DRV_NAME "altera-cvp"
|
||||
#define ALTERA_CVP_MGR_NAME "Altera CvP FPGA Manager"
|
||||
|
||||
/* Optional CvP config error status check for debugging */
|
||||
static bool altera_cvp_chkcfg;
|
||||
|
||||
struct altera_cvp_conf {
|
||||
struct fpga_manager *mgr;
|
||||
struct pci_dev *pci_dev;
|
||||
void __iomem *map;
|
||||
void (*write_data)(struct altera_cvp_conf *, u32);
|
||||
char mgr_name[64];
|
||||
u8 numclks;
|
||||
};
|
||||
|
||||
static enum fpga_mgr_states altera_cvp_state(struct fpga_manager *mgr)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
u32 status;
|
||||
|
||||
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &status);
|
||||
|
||||
if (status & VSE_CVP_STATUS_CFG_DONE)
|
||||
return FPGA_MGR_STATE_OPERATING;
|
||||
|
||||
if (status & VSE_CVP_STATUS_CVP_EN)
|
||||
return FPGA_MGR_STATE_POWER_UP;
|
||||
|
||||
return FPGA_MGR_STATE_UNKNOWN;
|
||||
}
|
||||
|
||||
static void altera_cvp_write_data_iomem(struct altera_cvp_conf *conf, u32 val)
|
||||
{
|
||||
writel(val, conf->map);
|
||||
}
|
||||
|
||||
static void altera_cvp_write_data_config(struct altera_cvp_conf *conf, u32 val)
|
||||
{
|
||||
pci_write_config_dword(conf->pci_dev, VSE_CVP_DATA, val);
|
||||
}
|
||||
|
||||
/* switches between CvP clock and internal clock */
|
||||
static void altera_cvp_dummy_write(struct altera_cvp_conf *conf)
|
||||
{
|
||||
unsigned int i;
|
||||
u32 val;
|
||||
|
||||
/* set 1 CVP clock cycle for every CVP Data Register Write */
|
||||
pci_read_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, &val);
|
||||
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
|
||||
val |= 1 << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
|
||||
pci_write_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
for (i = 0; i < CVP_DUMMY_WR; i++)
|
||||
conf->write_data(conf, 0); /* dummy data, could be any value */
|
||||
}
|
||||
|
||||
static int altera_cvp_wait_status(struct altera_cvp_conf *conf, u32 status_mask,
|
||||
u32 status_val, int timeout_us)
|
||||
{
|
||||
unsigned int retries;
|
||||
u32 val;
|
||||
|
||||
retries = timeout_us / 10;
|
||||
if (timeout_us % 10)
|
||||
retries++;
|
||||
|
||||
do {
|
||||
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
|
||||
if ((val & status_mask) == status_val)
|
||||
return 0;
|
||||
|
||||
/* use small usleep value to re-check and break early */
|
||||
usleep_range(10, 11);
|
||||
} while (--retries);
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
static int altera_cvp_teardown(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
struct pci_dev *pdev = conf->pci_dev;
|
||||
int ret;
|
||||
u32 val;
|
||||
|
||||
/* STEP 12 - reset START_XFER bit */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
val &= ~VSE_CVP_PROG_CTRL_START_XFER;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 13 - reset CVP_CONFIG bit */
|
||||
val &= ~VSE_CVP_PROG_CTRL_CONFIG;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/*
|
||||
* STEP 14
|
||||
* - set CVP_NUMCLKS to 1 and then issue CVP_DUMMY_WR dummy
|
||||
* writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf); /* from CVP clock to internal clock */
|
||||
|
||||
/* STEP 15 - poll CVP_CONFIG_READY bit for 0 with 10us timeout */
|
||||
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0, 10);
|
||||
if (ret)
|
||||
dev_err(&mgr->dev, "CFG_RDY == 0 timeout\n");
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int altera_cvp_write_init(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
struct pci_dev *pdev = conf->pci_dev;
|
||||
u32 iflags, val;
|
||||
int ret;
|
||||
|
||||
iflags = info ? info->flags : 0;
|
||||
|
||||
if (iflags & FPGA_MGR_PARTIAL_RECONFIG) {
|
||||
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Determine allowed clock to data ratio */
|
||||
if (iflags & FPGA_MGR_COMPRESSED_BITSTREAM)
|
||||
conf->numclks = 8; /* ratio for all compressed images */
|
||||
else if (iflags & FPGA_MGR_ENCRYPTED_BITSTREAM)
|
||||
conf->numclks = 4; /* for uncompressed and encrypted images */
|
||||
else
|
||||
conf->numclks = 1; /* for uncompressed and unencrypted images */
|
||||
|
||||
/* STEP 1 - read CVP status and check CVP_EN flag */
|
||||
pci_read_config_dword(pdev, VSE_CVP_STATUS, &val);
|
||||
if (!(val & VSE_CVP_STATUS_CVP_EN)) {
|
||||
dev_err(&mgr->dev, "CVP mode off: 0x%04x\n", val);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (val & VSE_CVP_STATUS_CFG_RDY) {
|
||||
dev_warn(&mgr->dev, "CvP already started, teardown first\n");
|
||||
ret = altera_cvp_teardown(mgr, info);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* STEP 2
|
||||
* - set HIP_CLK_SEL and CVP_MODE (must be set in the order mentioned)
|
||||
*/
|
||||
/* switch from fabric to PMA clock */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
val |= VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
/* set CVP mode */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
val |= VSE_CVP_MODE_CTRL_CVP_MODE;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
/*
|
||||
* STEP 3
|
||||
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf);
|
||||
|
||||
/* STEP 4 - set CVP_CONFIG bit */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
/* request control block to begin transfer using CVP */
|
||||
val |= VSE_CVP_PROG_CTRL_CONFIG;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 5 - poll CVP_CONFIG READY for 1 with 10us timeout */
|
||||
ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY,
|
||||
VSE_CVP_STATUS_CFG_RDY, 10);
|
||||
if (ret) {
|
||||
dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* STEP 6
|
||||
* - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
|
||||
*/
|
||||
altera_cvp_dummy_write(conf);
|
||||
|
||||
/* STEP 7 - set START_XFER */
|
||||
pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
|
||||
val |= VSE_CVP_PROG_CTRL_START_XFER;
|
||||
pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
|
||||
|
||||
/* STEP 8 - start transfer (set CVP_NUMCLKS for bitstream) */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
|
||||
val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
u32 val;
|
||||
|
||||
/* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */
|
||||
pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
|
||||
if (val & VSE_CVP_STATUS_CFG_ERR) {
|
||||
dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n",
|
||||
bytes);
|
||||
return -EPROTO;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int altera_cvp_write(struct fpga_manager *mgr, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
const u32 *data;
|
||||
size_t done, remaining;
|
||||
int status = 0;
|
||||
u32 mask;
|
||||
|
||||
/* STEP 9 - write 32-bit data from RBF file to CVP data register */
|
||||
data = (u32 *)buf;
|
||||
remaining = count;
|
||||
done = 0;
|
||||
|
||||
while (remaining >= 4) {
|
||||
conf->write_data(conf, *data++);
|
||||
done += 4;
|
||||
remaining -= 4;
|
||||
|
||||
/*
|
||||
* STEP 10 (optional) and STEP 11
|
||||
* - check error flag
|
||||
* - loop until data transfer completed
|
||||
* Config images can be huge (more than 40 MiB), so
|
||||
* only check after a new 4k data block has been written.
|
||||
* This reduces the number of checks and speeds up the
|
||||
* configuration process.
|
||||
*/
|
||||
if (altera_cvp_chkcfg && !(done % SZ_4K)) {
|
||||
status = altera_cvp_chk_error(mgr, done);
|
||||
if (status < 0)
|
||||
return status;
|
||||
}
|
||||
}
|
||||
|
||||
/* write up to 3 trailing bytes, if any */
|
||||
mask = BIT(remaining * 8) - 1;
|
||||
if (mask)
|
||||
conf->write_data(conf, *data & mask);
|
||||
|
||||
if (altera_cvp_chkcfg)
|
||||
status = altera_cvp_chk_error(mgr, count);
|
||||
|
||||
return status;
|
||||
}
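A quick worked example of the trailing-byte handling above: BIT(remaining * 8) - 1 evaluates to 0x00000000, 0x000000ff, 0x0000ffff or 0x00ffffff for 0..3 leftover bytes, so at most one extra word is written and only its valid bytes survive the mask. A standalone userspace sketch of the same arithmetic, for illustration only:

#include <stdint.h>
#include <stdio.h>

/* Reproduce the mask used for the trailing bytes in altera_cvp_write();
 * BIT(n) in the kernel is 1UL << n.
 */
int main(void)
{
	for (unsigned int remaining = 0; remaining < 4; remaining++) {
		uint32_t mask = (1u << (remaining * 8)) - 1;

		printf("remaining=%u mask=0x%08x\n", remaining, mask);
	}
	return 0;
}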
|
||||
|
||||
static int altera_cvp_write_complete(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
struct pci_dev *pdev = conf->pci_dev;
|
||||
int ret;
|
||||
u32 mask;
|
||||
u32 val;
|
||||
|
||||
ret = altera_cvp_teardown(mgr, info);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* STEP 16 - check CVP_CONFIG_ERROR_LATCHED bit */
|
||||
pci_read_config_dword(pdev, VSE_UNCOR_ERR_STATUS, &val);
|
||||
if (val & VSE_UNCOR_ERR_CVP_CFG_ERR) {
|
||||
dev_err(&mgr->dev, "detected CVP_CONFIG_ERROR_LATCHED!\n");
|
||||
return -EPROTO;
|
||||
}
|
||||
|
||||
/* STEP 17 - reset CVP_MODE and HIP_CLK_SEL bit */
|
||||
pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
|
||||
val &= ~VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
|
||||
val &= ~VSE_CVP_MODE_CTRL_CVP_MODE;
|
||||
pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
|
||||
|
||||
/* STEP 18 - poll PLD_CLK_IN_USE and USER_MODE bits */
|
||||
mask = VSE_CVP_STATUS_PLD_CLK_IN_USE | VSE_CVP_STATUS_USERMODE;
|
||||
ret = altera_cvp_wait_status(conf, mask, mask, TIMEOUT_US);
|
||||
if (ret)
|
||||
dev_err(&mgr->dev, "PLD_CLK_IN_USE|USERMODE timeout\n");
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct fpga_manager_ops altera_cvp_ops = {
|
||||
.state = altera_cvp_state,
|
||||
.write_init = altera_cvp_write_init,
|
||||
.write = altera_cvp_write,
|
||||
.write_complete = altera_cvp_write_complete,
|
||||
};
|
||||
|
||||
static ssize_t show_chkcfg(struct device_driver *dev, char *buf)
|
||||
{
|
||||
return snprintf(buf, 3, "%d\n", altera_cvp_chkcfg);
|
||||
}
|
||||
|
||||
static ssize_t store_chkcfg(struct device_driver *drv, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = kstrtobool(buf, &altera_cvp_chkcfg);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static DRIVER_ATTR(chkcfg, 0600, show_chkcfg, store_chkcfg);
|
||||
|
||||
static int altera_cvp_probe(struct pci_dev *pdev,
|
||||
const struct pci_device_id *dev_id);
|
||||
static void altera_cvp_remove(struct pci_dev *pdev);
|
||||
|
||||
#define PCI_VENDOR_ID_ALTERA 0x1172
|
||||
|
||||
static struct pci_device_id altera_cvp_id_tbl[] = {
|
||||
{ PCI_VDEVICE(ALTERA, PCI_ANY_ID) },
|
||||
{ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pci, altera_cvp_id_tbl);
|
||||
|
||||
static struct pci_driver altera_cvp_driver = {
|
||||
.name = DRV_NAME,
|
||||
.id_table = altera_cvp_id_tbl,
|
||||
.probe = altera_cvp_probe,
|
||||
.remove = altera_cvp_remove,
|
||||
};
|
||||
|
||||
static int altera_cvp_probe(struct pci_dev *pdev,
|
||||
const struct pci_device_id *dev_id)
|
||||
{
|
||||
struct altera_cvp_conf *conf;
|
||||
u16 cmd, val;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* First check if this is the expected FPGA device. PCI config
|
||||
* space access works without enabling the PCI device, memory
|
||||
* space access is enabled further down.
|
||||
*/
|
||||
pci_read_config_word(pdev, VSE_PCIE_EXT_CAP_ID, &val);
|
||||
if (val != VSE_PCIE_EXT_CAP_ID_VAL) {
|
||||
dev_err(&pdev->dev, "Wrong EXT_CAP_ID value 0x%x\n", val);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL);
|
||||
if (!conf)
|
||||
return -ENOMEM;
|
||||
|
||||
/*
|
||||
* Enable memory BAR access. We cannot use pci_enable_device() here
|
||||
* because it will make the driver unusable with FPGA devices that
|
||||
* have additional big IOMEM resources (e.g. 4GiB BARs) on 32-bit
|
||||
* platform. Such BARs will not have an assigned address range and
|
||||
* pci_enable_device() will fail, complaining about not claimed BAR,
|
||||
* even if the concerned BAR is not needed for FPGA configuration
|
||||
* at all. Thus, enable the device via PCI config space command.
|
||||
*/
|
||||
pci_read_config_word(pdev, PCI_COMMAND, &cmd);
|
||||
if (!(cmd & PCI_COMMAND_MEMORY)) {
|
||||
cmd |= PCI_COMMAND_MEMORY;
|
||||
pci_write_config_word(pdev, PCI_COMMAND, cmd);
|
||||
}
|
||||
|
||||
ret = pci_request_region(pdev, CVP_BAR, "CVP");
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "Requesting CVP BAR region failed\n");
|
||||
goto err_disable;
|
||||
}
|
||||
|
||||
conf->pci_dev = pdev;
|
||||
conf->write_data = altera_cvp_write_data_iomem;
|
||||
|
||||
conf->map = pci_iomap(pdev, CVP_BAR, 0);
|
||||
if (!conf->map) {
|
||||
dev_warn(&pdev->dev, "Mapping CVP BAR failed\n");
|
||||
conf->write_data = altera_cvp_write_data_config;
|
||||
}
|
||||
|
||||
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s @%s",
|
||||
ALTERA_CVP_MGR_NAME, pci_name(pdev));
|
||||
|
||||
ret = fpga_mgr_register(&pdev->dev, conf->mgr_name,
|
||||
&altera_cvp_ops, conf);
|
||||
if (ret)
|
||||
goto err_unmap;
|
||||
|
||||
ret = driver_create_file(&altera_cvp_driver.driver,
|
||||
&driver_attr_chkcfg);
|
||||
if (ret) {
|
||||
dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n");
|
||||
fpga_mgr_unregister(&pdev->dev);
|
||||
goto err_unmap;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_unmap:
|
||||
pci_iounmap(pdev, conf->map);
|
||||
pci_release_region(pdev, CVP_BAR);
|
||||
err_disable:
|
||||
cmd &= ~PCI_COMMAND_MEMORY;
|
||||
pci_write_config_word(pdev, PCI_COMMAND, cmd);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void altera_cvp_remove(struct pci_dev *pdev)
|
||||
{
|
||||
struct fpga_manager *mgr = pci_get_drvdata(pdev);
|
||||
struct altera_cvp_conf *conf = mgr->priv;
|
||||
u16 cmd;
|
||||
|
||||
driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
|
||||
fpga_mgr_unregister(&pdev->dev);
|
||||
pci_iounmap(pdev, conf->map);
|
||||
pci_release_region(pdev, CVP_BAR);
|
||||
pci_read_config_word(pdev, PCI_COMMAND, &cmd);
|
||||
cmd &= ~PCI_COMMAND_MEMORY;
|
||||
pci_write_config_word(pdev, PCI_COMMAND, cmd);
|
||||
}
|
||||
|
||||
module_pci_driver(altera_cvp_driver);
|
||||
|
||||
MODULE_LICENSE("GPL v2");
|
||||
MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
|
||||
MODULE_DESCRIPTION("Module to load Altera FPGA over CvP");
|
|
@ -66,7 +66,7 @@ static int alt_hps2fpga_enable_show(struct fpga_bridge *bridge)
|
|||
|
||||
/* The L3 REMAP register is write only, so keep a cached value. */
|
||||
static unsigned int l3_remap_shadow;
|
||||
static spinlock_t l3_remap_lock;
|
||||
static DEFINE_SPINLOCK(l3_remap_lock);
|
||||
|
||||
static int _alt_hps2fpga_enable_set(struct altera_hps2fpga_data *priv,
|
||||
bool enable)
|
||||
|
@ -143,9 +143,15 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
|
|||
int ret;
|
||||
|
||||
of_id = of_match_device(altera_fpga_of_match, dev);
|
||||
if (!of_id) {
|
||||
dev_err(dev, "failed to match device\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
priv = (struct altera_hps2fpga_data *)of_id->data;
|
||||
|
||||
priv->bridge_reset = of_reset_control_get_by_index(dev->of_node, 0);
|
||||
priv->bridge_reset = of_reset_control_get_exclusive_by_index(dev->of_node,
|
||||
0);
|
||||
if (IS_ERR(priv->bridge_reset)) {
|
||||
dev_err(dev, "Could not get %s reset control\n", priv->name);
|
||||
return PTR_ERR(priv->bridge_reset);
|
||||
|
@ -171,8 +177,6 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
|
|||
return -EBUSY;
|
||||
}
|
||||
|
||||
spin_lock_init(&l3_remap_lock);
|
||||
|
||||
if (!of_property_read_u32(dev->of_node, "bridge-enable", &enable)) {
|
||||
if (enable > 1) {
|
||||
dev_warn(dev, "invalid bridge-enable %u > 1\n", enable);
|
||||
|
|
|
@ -0,0 +1,308 @@
|
|||
/*
|
||||
* Altera Passive Serial SPI Driver
|
||||
*
|
||||
* Copyright (c) 2017 United Western Technologies, Corporation
|
||||
*
|
||||
* Joshua Clayton <stillcompiling@gmail.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* Manage Altera FPGA firmware that is loaded over SPI using the passive
|
||||
* serial configuration method.
|
||||
* Firmware must be in binary "rbf" format.
|
||||
* Works on Arria 10, Cyclone V and Stratix V. Should work on Cyclone series.
|
||||
* May work on other Altera FPGAs.
|
||||
*/
|
||||
|
||||
#include <linux/bitrev.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/fpga/fpga-mgr.h>
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_gpio.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/spi/spi.h>
|
||||
#include <linux/sizes.h>
|
||||
|
||||
enum altera_ps_devtype {
|
||||
CYCLONE5,
|
||||
ARRIA10,
|
||||
};
|
||||
|
||||
struct altera_ps_data {
|
||||
enum altera_ps_devtype devtype;
|
||||
int status_wait_min_us;
|
||||
int status_wait_max_us;
|
||||
int t_cfg_us;
|
||||
int t_st2ck_us;
|
||||
};
|
||||
|
||||
struct altera_ps_conf {
|
||||
struct gpio_desc *config;
|
||||
struct gpio_desc *confd;
|
||||
struct gpio_desc *status;
|
||||
struct spi_device *spi;
|
||||
const struct altera_ps_data *data;
|
||||
u32 info_flags;
|
||||
char mgr_name[64];
|
||||
};
|
||||
|
||||
/* | Arria 10 | Cyclone5 | Stratix5 |
|
||||
* t_CF2ST0 | [; 600] | [; 600] | [; 600] |ns
|
||||
* t_CFG | [2;] | [2;] | [2;] |µs
|
||||
* t_STATUS | [268; 3000] | [268; 1506] | [268; 1506] |µs
|
||||
* t_CF2ST1 | [; 3000] | [; 1506] | [; 1506] |µs
|
||||
* t_CF2CK | [3010;] | [1506;] | [1506;] |µs
|
||||
* t_ST2CK | [10;] | [2;] | [2;] |µs
|
||||
* t_CD2UM | [175; 830] | [175; 437] | [175; 437] |µs
|
||||
*/
|
||||
static struct altera_ps_data c5_data = {
|
||||
/* these values for Cyclone5 are compatible with Stratix5 */
|
||||
.devtype = CYCLONE5,
|
||||
.status_wait_min_us = 268,
|
||||
.status_wait_max_us = 1506,
|
||||
.t_cfg_us = 2,
|
||||
.t_st2ck_us = 2,
|
||||
};
|
||||
|
||||
static struct altera_ps_data a10_data = {
|
||||
.devtype = ARRIA10,
|
||||
.status_wait_min_us = 268, /* min(t_STATUS) */
|
||||
.status_wait_max_us = 3000, /* max(t_CF2ST1) */
|
||||
.t_cfg_us = 2, /* max { min(t_CFG), max(tCF2ST0) } */
|
||||
.t_st2ck_us = 10, /* min(t_ST2CK) */
|
||||
};
|
||||
|
||||
static const struct of_device_id of_ef_match[] = {
|
||||
{ .compatible = "altr,fpga-passive-serial", .data = &c5_data },
|
||||
{ .compatible = "altr,fpga-arria10-passive-serial", .data = &a10_data },
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, of_ef_match);
|
||||
|
||||
static enum fpga_mgr_states altera_ps_state(struct fpga_manager *mgr)
|
||||
{
|
||||
struct altera_ps_conf *conf = mgr->priv;
|
||||
|
||||
if (gpiod_get_value_cansleep(conf->status))
|
||||
return FPGA_MGR_STATE_RESET;
|
||||
|
||||
return FPGA_MGR_STATE_UNKNOWN;
|
||||
}
|
||||
|
||||
static inline void altera_ps_delay(int delay_us)
|
||||
{
|
||||
if (delay_us > 10)
|
||||
usleep_range(delay_us, delay_us + 5);
|
||||
else
|
||||
udelay(delay_us);
|
||||
}
|
||||
|
||||
static int altera_ps_write_init(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct altera_ps_conf *conf = mgr->priv;
|
||||
int min, max, waits;
|
||||
int i;
|
||||
|
||||
conf->info_flags = info->flags;
|
||||
|
||||
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
|
||||
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
gpiod_set_value_cansleep(conf->config, 1);
|
||||
|
||||
/* wait min reset pulse time */
|
||||
altera_ps_delay(conf->data->t_cfg_us);
|
||||
|
||||
if (!gpiod_get_value_cansleep(conf->status)) {
|
||||
dev_err(&mgr->dev, "Status pin failed to show a reset\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
gpiod_set_value_cansleep(conf->config, 0);
|
||||
|
||||
min = conf->data->status_wait_min_us;
|
||||
max = conf->data->status_wait_max_us;
|
||||
waits = max / min;
|
||||
if (max % min)
|
||||
waits++;
|
||||
|
||||
/* wait for max { max(t_STATUS), max(t_CF2ST1) } */
|
||||
for (i = 0; i < waits; i++) {
|
||||
usleep_range(min, min + 10);
|
||||
if (!gpiod_get_value_cansleep(conf->status)) {
|
||||
/* wait for min(t_ST2CK)*/
|
||||
altera_ps_delay(conf->data->t_st2ck_us);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
dev_err(&mgr->dev, "Status pin not ready.\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
static void rev_buf(char *buf, size_t len)
|
||||
{
|
||||
u32 *fw32 = (u32 *)buf;
|
||||
size_t extra_bytes = (len & 0x03);
|
||||
const u32 *fw_end = (u32 *)(buf + len - extra_bytes);
|
||||
|
||||
/* set buffer to lsb first */
|
||||
while (fw32 < fw_end) {
|
||||
*fw32 = bitrev8x4(*fw32);
|
||||
fw32++;
|
||||
}
|
||||
|
||||
if (extra_bytes) {
|
||||
buf = (char *)fw_end;
|
||||
while (extra_bytes) {
|
||||
*buf = bitrev8(*buf);
|
||||
buf++;
|
||||
extra_bytes--;
|
||||
}
|
||||
}
|
||||
}
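For readers unfamiliar with the LSB-first conversion: bitrev8() reverses the bit order within one byte and bitrev8x4() does the same independently for each byte of a 32-bit word. A standalone sketch of the equivalent logic, for illustration only (not the kernel helpers themselves):

#include <stdint.h>

/* Classic in-byte bit reversal, equivalent in effect to bitrev8(). */
static uint8_t bitrev8_sw(uint8_t b)
{
	b = (uint8_t)((b & 0xf0) >> 4 | (b & 0x0f) << 4);
	b = (uint8_t)((b & 0xcc) >> 2 | (b & 0x33) << 2);
	b = (uint8_t)((b & 0xaa) >> 1 | (b & 0x55) << 1);
	return b;
}

/* Per-byte reversal of a 32-bit word, as bitrev8x4() does for the rbf stream. */
static uint32_t bitrev8x4_sw(uint32_t x)
{
	return (uint32_t)bitrev8_sw(x & 0xff) |
	       (uint32_t)bitrev8_sw((x >> 8) & 0xff) << 8 |
	       (uint32_t)bitrev8_sw((x >> 16) & 0xff) << 16 |
	       (uint32_t)bitrev8_sw((x >> 24) & 0xff) << 24;
}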
|
||||
|
||||
static int altera_ps_write(struct fpga_manager *mgr, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
struct altera_ps_conf *conf = mgr->priv;
|
||||
const char *fw_data = buf;
|
||||
const char *fw_data_end = fw_data + count;
|
||||
|
||||
while (fw_data < fw_data_end) {
|
||||
int ret;
|
||||
size_t stride = min_t(size_t, fw_data_end - fw_data, SZ_4K);
|
||||
|
||||
if (!(conf->info_flags & FPGA_MGR_BITSTREAM_LSB_FIRST))
|
||||
rev_buf((char *)fw_data, stride);
|
||||
|
||||
ret = spi_write(conf->spi, fw_data, stride);
|
||||
if (ret) {
|
||||
dev_err(&mgr->dev, "spi error in firmware write: %d\n",
|
||||
ret);
|
||||
return ret;
|
||||
}
|
||||
fw_data += stride;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int altera_ps_write_complete(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct altera_ps_conf *conf = mgr->priv;
|
||||
const char dummy[] = {0};
|
||||
int ret;
|
||||
|
||||
if (gpiod_get_value_cansleep(conf->status)) {
|
||||
dev_err(&mgr->dev, "Error during configuration.\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
if (!IS_ERR(conf->confd)) {
|
||||
if (!gpiod_get_raw_value_cansleep(conf->confd)) {
|
||||
dev_err(&mgr->dev, "CONF_DONE is inactive!\n");
|
||||
return -EIO;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* After CONF_DONE goes high, send two additional falling edges on DCLK
|
||||
* to begin initialization and enter user mode
|
||||
*/
|
||||
ret = spi_write(conf->spi, dummy, 1);
|
||||
if (ret) {
|
||||
dev_err(&mgr->dev, "spi error during end sequence: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct fpga_manager_ops altera_ps_ops = {
|
||||
.state = altera_ps_state,
|
||||
.write_init = altera_ps_write_init,
|
||||
.write = altera_ps_write,
|
||||
.write_complete = altera_ps_write_complete,
|
||||
};
|
||||
|
||||
static int altera_ps_probe(struct spi_device *spi)
|
||||
{
|
||||
struct altera_ps_conf *conf;
|
||||
const struct of_device_id *of_id;
|
||||
|
||||
conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL);
|
||||
if (!conf)
|
||||
return -ENOMEM;
|
||||
|
||||
of_id = of_match_device(of_ef_match, &spi->dev);
|
||||
if (!of_id)
|
||||
return -ENODEV;
|
||||
|
||||
conf->data = of_id->data;
|
||||
conf->spi = spi;
|
||||
conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_HIGH);
|
||||
if (IS_ERR(conf->config)) {
|
||||
dev_err(&spi->dev, "Failed to get config gpio: %ld\n",
|
||||
PTR_ERR(conf->config));
|
||||
return PTR_ERR(conf->config);
|
||||
}
|
||||
|
||||
conf->status = devm_gpiod_get(&spi->dev, "nstat", GPIOD_IN);
|
||||
if (IS_ERR(conf->status)) {
|
||||
dev_err(&spi->dev, "Failed to get status gpio: %ld\n",
|
||||
PTR_ERR(conf->status));
|
||||
return PTR_ERR(conf->status);
|
||||
}
|
||||
|
||||
conf->confd = devm_gpiod_get(&spi->dev, "confd", GPIOD_IN);
|
||||
if (IS_ERR(conf->confd)) {
|
||||
dev_warn(&spi->dev, "Not using confd gpio: %ld\n",
|
||||
PTR_ERR(conf->confd));
|
||||
}
|
||||
|
||||
/* Register manager with unique name */
|
||||
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s %s",
|
||||
dev_driver_string(&spi->dev), dev_name(&spi->dev));
|
||||
|
||||
return fpga_mgr_register(&spi->dev, conf->mgr_name,
|
||||
&altera_ps_ops, conf);
|
||||
}
|
||||
|
||||
static int altera_ps_remove(struct spi_device *spi)
|
||||
{
|
||||
fpga_mgr_unregister(&spi->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct spi_device_id altera_ps_spi_ids[] = {
|
||||
{"cyclone-ps-spi", 0},
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(spi, altera_ps_spi_ids);
|
||||
|
||||
static struct spi_driver altera_ps_driver = {
|
||||
.driver = {
|
||||
.name = "altera-ps-spi",
|
||||
.owner = THIS_MODULE,
|
||||
.of_match_table = of_match_ptr(of_ef_match),
|
||||
},
|
||||
.id_table = altera_ps_spi_ids,
|
||||
.probe = altera_ps_probe,
|
||||
.remove = altera_ps_remove,
|
||||
};
|
||||
|
||||
module_spi_driver(altera_ps_driver)
|
||||
|
||||
MODULE_LICENSE("GPL v2");
|
||||
MODULE_AUTHOR("Joshua Clayton <stillcompiling@gmail.com>");
|
||||
MODULE_DESCRIPTION("Module to load Altera FPGA firmware over SPI");
|
|
@@ -319,8 +319,8 @@ static int child_regions_with_firmware(struct device_node *overlay)
	of_node_put(child_region);

	if (ret)
		pr_err("firmware-name not allowed in child FPGA region: %s",
		       child_region->full_name);
		pr_err("firmware-name not allowed in child FPGA region: %pOF",
		       child_region);

	return ret;
}
@@ -475,7 +475,7 @@ static ssize_t fsi_slave_sysfs_raw_write(struct file *file,
	return count;
}

static struct bin_attribute fsi_slave_raw_attr = {
static const struct bin_attribute fsi_slave_raw_attr = {
	.attr = {
		.name = "raw",
		.mode = 0600,

@@ -499,7 +499,7 @@ static ssize_t fsi_slave_sysfs_term_write(struct file *file,
	return count;
}

static struct bin_attribute fsi_slave_term_attr = {
static const struct bin_attribute fsi_slave_term_attr = {
	.attr = {
		.name = "term",
		.mode = 0200,
@@ -57,12 +57,6 @@ static int put_scom(struct scom_device *scom_dev, uint64_t value,
	int rc;
	uint32_t data;

	data = cpu_to_be32(SCOM_RESET_CMD);
	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_RESET_REG, &data,
			      sizeof(uint32_t));
	if (rc)
		return rc;

	data = cpu_to_be32((value >> 32) & 0xffffffff);
	rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA0_REG, &data,
			      sizeof(uint32_t));

@@ -186,6 +180,7 @@ static const struct file_operations scom_fops = {

static int scom_probe(struct device *dev)
{
	uint32_t data;
	struct fsi_device *fsi_dev = to_fsi_dev(dev);
	struct scom_device *scom;

@@ -202,6 +197,9 @@ static int scom_probe(struct device *dev)
	scom->mdev.parent = dev;
	list_add(&scom->link, &scom_devices);

	data = cpu_to_be32(SCOM_RESET_CMD);
	fsi_device_write(fsi_dev, SCOM_RESET_REG, &data, sizeof(uint32_t));

	return misc_register(&scom->mdev);
}
@@ -177,6 +177,11 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
		      &vmbus_connection.chn_msg_list);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	if (newchannel->rescind) {
		err = -ENODEV;
		goto error_free_gpadl;
	}

	ret = vmbus_post_msg(open_msg,
			     sizeof(struct vmbus_channel_open_channel), true);

@@ -421,6 +426,11 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,

	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	if (channel->rescind) {
		ret = -ENODEV;
		goto cleanup;
	}

	ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize -
			     sizeof(*msginfo), true);
	if (ret != 0)

@@ -494,6 +504,10 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
	list_add_tail(&info->msglistentry,
		      &vmbus_connection.chn_msg_list);
	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);

	if (channel->rescind)
		goto post_msg_err;

	ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown),
			     true);
@ -451,6 +451,12 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
|
|||
/* Make sure this is a new offer */
|
||||
mutex_lock(&vmbus_connection.channel_mutex);
|
||||
|
||||
/*
|
||||
* Now that we have acquired the channel_mutex,
|
||||
* we can release the potentially racing rescind thread.
|
||||
*/
|
||||
atomic_dec(&vmbus_connection.offer_in_progress);
|
||||
|
||||
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
|
||||
if (!uuid_le_cmp(channel->offermsg.offer.if_type,
|
||||
newchannel->offermsg.offer.if_type) &&
|
||||
|
@ -481,7 +487,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
|
|||
channel->num_sc++;
|
||||
spin_unlock_irqrestore(&channel->lock, flags);
|
||||
} else {
|
||||
atomic_dec(&vmbus_connection.offer_in_progress);
|
||||
goto err_free_chan;
|
||||
}
|
||||
}
|
||||
|
@ -510,7 +515,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
|
|||
if (!fnew) {
|
||||
if (channel->sc_creation_callback != NULL)
|
||||
channel->sc_creation_callback(newchannel);
|
||||
atomic_dec(&vmbus_connection.offer_in_progress);
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -541,7 +545,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
|
|||
goto err_deq_chan;
|
||||
}
|
||||
|
||||
atomic_dec(&vmbus_connection.offer_in_progress);
|
||||
newchannel->probe_done = true;
|
||||
return;
|
||||
|
||||
err_deq_chan:
|
||||
|
@ -882,8 +886,27 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
|
|||
channel->rescind = true;
|
||||
spin_unlock_irqrestore(&channel->lock, flags);
|
||||
|
||||
/*
|
||||
* Now that we have posted the rescind state, perform
|
||||
* rescind related cleanup.
|
||||
*/
|
||||
vmbus_rescind_cleanup(channel);
|
||||
|
||||
/*
|
||||
* Now wait for offer handling to complete.
|
||||
*/
|
||||
while (READ_ONCE(channel->probe_done) == false) {
|
||||
/*
|
||||
* We wait here until any channel offer is currently
|
||||
* being processed.
|
||||
*/
|
||||
msleep(1);
|
||||
}
|
||||
|
||||
/*
|
||||
* At this point, the rescind handling can proceed safely.
|
||||
*/
|
||||
|
||||
if (channel->device_obj) {
|
||||
if (channel->chn_rescind_callback) {
|
||||
channel->chn_rescind_callback(channel);
|
||||
|
|
|
@ -584,10 +584,6 @@ static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
|
|||
|
||||
switch (val) {
|
||||
case MEM_ONLINE:
|
||||
spin_lock_irqsave(&dm_device.ha_lock, flags);
|
||||
dm_device.num_pages_onlined += mem->nr_pages;
|
||||
spin_unlock_irqrestore(&dm_device.ha_lock, flags);
|
||||
/* Fall through */
|
||||
case MEM_CANCEL_ONLINE:
|
||||
if (dm_device.ha_waiting) {
|
||||
dm_device.ha_waiting = false;
|
||||
|
@ -644,6 +640,9 @@ static void hv_page_online_one(struct hv_hotadd_state *has, struct page *pg)
|
|||
__online_page_set_limits(pg);
|
||||
__online_page_increment_counters(pg);
|
||||
__online_page_free(pg);
|
||||
|
||||
WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock));
|
||||
dm_device.num_pages_onlined++;
|
||||
}
|
||||
|
||||
static void hv_bring_pgs_online(struct hv_hotadd_state *has,
|
||||
|
@ -1036,8 +1035,8 @@ static void process_info(struct hv_dynmem_device *dm, struct dm_info_msg *msg)
|
|||
if (info_hdr->data_size == sizeof(__u64)) {
|
||||
__u64 *max_page_count = (__u64 *)&info_hdr[1];
|
||||
|
||||
pr_info("INFO_TYPE_MAX_PAGE_CNT = %llu\n",
|
||||
*max_page_count);
|
||||
pr_info("Max. dynamic memory size: %llu MB\n",
|
||||
(*max_page_count) >> (20 - PAGE_SHIFT));
|
||||
}
|
||||
|
||||
break;
|
||||
|
@ -1656,6 +1655,7 @@ static int balloon_probe(struct hv_device *dev,
|
|||
}
|
||||
|
||||
dm_device.state = DM_INITIALIZED;
|
||||
last_post_time = jiffies;
|
||||
|
||||
return 0;
|
||||
|
||||
|
|
|
@@ -304,7 +304,7 @@ static int process_ob_ipinfo(void *in_msg, void *out_msg, int op)
				strlen((char *)in->body.kvp_ip_val.adapter_id),
				UTF16_HOST_ENDIAN,
				(wchar_t *)out->kvp_ip_val.adapter_id,
				MAX_IP_ADDR_SIZE);
				MAX_ADAPTER_ID_SIZE);
		if (len < 0)
			return len;
@ -29,6 +29,7 @@
|
|||
#include <linux/uio.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/prefetch.h>
|
||||
|
||||
#include "hyperv_vmbus.h"
|
||||
|
||||
|
@ -94,30 +95,6 @@ hv_set_next_write_location(struct hv_ring_buffer_info *ring_info,
|
|||
ring_info->ring_buffer->write_index = next_write_location;
|
||||
}
|
||||
|
||||
/* Get the next read location for the specified ring buffer. */
|
||||
static inline u32
|
||||
hv_get_next_read_location(const struct hv_ring_buffer_info *ring_info)
|
||||
{
|
||||
return ring_info->ring_buffer->read_index;
|
||||
}
|
||||
|
||||
/*
|
||||
* Get the next read location + offset for the specified ring buffer.
|
||||
* This allows the caller to skip.
|
||||
*/
|
||||
static inline u32
|
||||
hv_get_next_readlocation_withoffset(const struct hv_ring_buffer_info *ring_info,
|
||||
u32 offset)
|
||||
{
|
||||
u32 next = ring_info->ring_buffer->read_index;
|
||||
|
||||
next += offset;
|
||||
if (next >= ring_info->ring_datasize)
|
||||
next -= ring_info->ring_datasize;
|
||||
|
||||
return next;
|
||||
}
|
||||
|
||||
/* Set the next read location for the specified ring buffer. */
|
||||
static inline void
|
||||
hv_set_next_read_location(struct hv_ring_buffer_info *ring_info,
|
||||
|
@ -141,29 +118,6 @@ hv_get_ring_bufferindices(struct hv_ring_buffer_info *ring_info)
|
|||
return (u64)ring_info->ring_buffer->write_index << 32;
|
||||
}
|
||||
|
||||
/*
|
||||
* Helper routine to copy to source from ring buffer.
|
||||
* Assume there is enough room. Handles wrap-around in src case only!!
|
||||
*/
|
||||
static u32 hv_copyfrom_ringbuffer(
|
||||
const struct hv_ring_buffer_info *ring_info,
|
||||
void *dest,
|
||||
u32 destlen,
|
||||
u32 start_read_offset)
|
||||
{
|
||||
void *ring_buffer = hv_get_ring_buffer(ring_info);
|
||||
u32 ring_buffer_size = hv_get_ring_buffersize(ring_info);
|
||||
|
||||
memcpy(dest, ring_buffer + start_read_offset, destlen);
|
||||
|
||||
start_read_offset += destlen;
|
||||
if (start_read_offset >= ring_buffer_size)
|
||||
start_read_offset -= ring_buffer_size;
|
||||
|
||||
return start_read_offset;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* Helper routine to copy from source to ring buffer.
|
||||
* Assume there is enough room. Handles wrap-around in dest case only!!
|
||||
|
@ -334,33 +288,22 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
|
|||
return 0;
|
||||
}
|
||||
|
||||
static inline void
|
||||
init_cached_read_index(struct hv_ring_buffer_info *rbi)
|
||||
{
|
||||
rbi->cached_read_index = rbi->ring_buffer->read_index;
|
||||
}
|
||||
|
||||
int hv_ringbuffer_read(struct vmbus_channel *channel,
|
||||
void *buffer, u32 buflen, u32 *buffer_actual_len,
|
||||
u64 *requestid, bool raw)
|
||||
{
|
||||
u32 bytes_avail_toread;
|
||||
u32 next_read_location;
|
||||
u64 prev_indices = 0;
|
||||
struct vmpacket_descriptor desc;
|
||||
u32 offset;
|
||||
u32 packetlen;
|
||||
struct hv_ring_buffer_info *inring_info = &channel->inbound;
|
||||
struct vmpacket_descriptor *desc;
|
||||
u32 packetlen, offset;
|
||||
|
||||
if (buflen <= 0)
|
||||
if (unlikely(buflen == 0))
|
||||
return -EINVAL;
|
||||
|
||||
*buffer_actual_len = 0;
|
||||
*requestid = 0;
|
||||
|
||||
bytes_avail_toread = hv_get_bytes_to_read(inring_info);
|
||||
/* Make sure there is something to read */
|
||||
if (bytes_avail_toread < sizeof(desc)) {
|
||||
desc = hv_pkt_iter_first(channel);
|
||||
if (desc == NULL) {
|
||||
/*
|
||||
* No error is set when there is even no header, drivers are
|
||||
* supposed to analyze buffer_actual_len.
|
||||
|
@ -368,48 +311,22 @@ int hv_ringbuffer_read(struct vmbus_channel *channel,
|
|||
return 0;
|
||||
}
|
||||
|
||||
init_cached_read_index(inring_info);
|
||||
|
||||
next_read_location = hv_get_next_read_location(inring_info);
|
||||
next_read_location = hv_copyfrom_ringbuffer(inring_info, &desc,
|
||||
sizeof(desc),
|
||||
next_read_location);
|
||||
|
||||
offset = raw ? 0 : (desc.offset8 << 3);
|
||||
packetlen = (desc.len8 << 3) - offset;
|
||||
offset = raw ? 0 : (desc->offset8 << 3);
|
||||
packetlen = (desc->len8 << 3) - offset;
|
||||
*buffer_actual_len = packetlen;
|
||||
*requestid = desc.trans_id;
|
||||
*requestid = desc->trans_id;
|
||||
|
||||
if (bytes_avail_toread < packetlen + offset)
|
||||
return -EAGAIN;
|
||||
|
||||
if (packetlen > buflen)
|
||||
if (unlikely(packetlen > buflen))
|
||||
return -ENOBUFS;
|
||||
|
||||
next_read_location =
|
||||
hv_get_next_readlocation_withoffset(inring_info, offset);
|
||||
/* since ring is double mapped, only one copy is necessary */
|
||||
memcpy(buffer, (const char *)desc + offset, packetlen);
|
||||
|
||||
next_read_location = hv_copyfrom_ringbuffer(inring_info,
|
||||
buffer,
|
||||
packetlen,
|
||||
next_read_location);
|
||||
/* Advance ring index to next packet descriptor */
|
||||
__hv_pkt_iter_next(channel, desc);
|
||||
|
||||
next_read_location = hv_copyfrom_ringbuffer(inring_info,
|
||||
&prev_indices,
|
||||
sizeof(u64),
|
||||
next_read_location);
|
||||
|
||||
/*
|
||||
* Make sure all reads are done before we update the read index since
|
||||
* the writer may start writing to the read area once the read index
|
||||
* is updated.
|
||||
*/
|
||||
virt_mb();
|
||||
|
||||
/* Update the read index */
|
||||
hv_set_next_read_location(inring_info, next_read_location);
|
||||
|
||||
hv_signal_on_read(channel);
|
||||
/* Notify host of update */
|
||||
hv_pkt_iter_close(channel);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -440,14 +357,16 @@ static u32 hv_pkt_iter_avail(const struct hv_ring_buffer_info *rbi)
|
|||
struct vmpacket_descriptor *hv_pkt_iter_first(struct vmbus_channel *channel)
|
||||
{
|
||||
struct hv_ring_buffer_info *rbi = &channel->inbound;
|
||||
|
||||
/* set state for later hv_signal_on_read() */
|
||||
init_cached_read_index(rbi);
|
||||
struct vmpacket_descriptor *desc;
|
||||
|
||||
if (hv_pkt_iter_avail(rbi) < sizeof(struct vmpacket_descriptor))
|
||||
return NULL;
|
||||
|
||||
return hv_get_ring_buffer(rbi) + rbi->priv_read_index;
|
||||
desc = hv_get_ring_buffer(rbi) + rbi->priv_read_index;
|
||||
if (desc)
|
||||
prefetch((char *)desc + (desc->len8 << 3));
|
||||
|
||||
return desc;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(hv_pkt_iter_first);
|
||||
|
||||
|
@ -471,10 +390,7 @@ __hv_pkt_iter_next(struct vmbus_channel *channel,
|
|||
rbi->priv_read_index -= dsize;
|
||||
|
||||
/* more data? */
|
||||
if (hv_pkt_iter_avail(rbi) < sizeof(struct vmpacket_descriptor))
|
||||
return NULL;
|
||||
else
|
||||
return hv_get_ring_buffer(rbi) + rbi->priv_read_index;
|
||||
return hv_pkt_iter_first(channel);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(__hv_pkt_iter_next);
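Taken together, the iterator functions above replace the old copy-based read path with in-place access to the double-mapped ring. A hypothetical consumer would look roughly like this; only hv_pkt_iter_first(), __hv_pkt_iter_next() and hv_pkt_iter_close() come from the code above, while the callback skeleton and demo_consume() are illustrative:

#include <linux/hyperv.h>

static void demo_consume(const void *buf, u32 len)
{
	/* payload handling would go here */
}

/* Sketch of a vmbus onchannel_callback using the packet iterator. */
static void demo_channel_callback(void *context)
{
	struct vmbus_channel *chan = context;
	struct vmpacket_descriptor *desc;

	for (desc = hv_pkt_iter_first(chan); desc;
	     desc = __hv_pkt_iter_next(chan, desc)) {
		u32 hdrlen = desc->offset8 << 3;
		u32 datalen = (desc->len8 << 3) - hdrlen;

		demo_consume((const char *)desc + hdrlen, datalen);
	}

	/* commit the read index and signal the host if it is waiting */
	hv_pkt_iter_close(chan);
}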
|
||||
|
||||
|
@ -484,6 +400,7 @@ EXPORT_SYMBOL_GPL(__hv_pkt_iter_next);
|
|||
void hv_pkt_iter_close(struct vmbus_channel *channel)
|
||||
{
|
||||
struct hv_ring_buffer_info *rbi = &channel->inbound;
|
||||
u32 orig_write_sz = hv_get_bytes_to_write(rbi);
|
||||
|
||||
/*
|
||||
* Make sure all reads are done before we update the read index since
|
||||
|
@ -493,6 +410,40 @@ void hv_pkt_iter_close(struct vmbus_channel *channel)
|
|||
virt_rmb();
|
||||
rbi->ring_buffer->read_index = rbi->priv_read_index;
|
||||
|
||||
hv_signal_on_read(channel);
|
||||
/*
|
||||
* Issue a full memory barrier before making the signaling decision.
|
||||
* Here is the reason for having this barrier:
|
||||
* If the reading of the pend_sz (in this function)
|
||||
* were to be reordered and read before we commit the new read
|
||||
* index (in the calling function) we could
|
||||
* have a problem. If the host were to set the pending_sz after we
|
||||
* have sampled pending_sz and go to sleep before we commit the
|
||||
* read index, we could miss sending the interrupt. Issue a full
|
||||
* memory barrier to address this.
|
||||
*/
|
||||
virt_mb();
|
||||
|
||||
/* If host has disabled notifications then skip */
|
||||
if (rbi->ring_buffer->interrupt_mask)
|
||||
return;
|
||||
|
||||
if (rbi->ring_buffer->feature_bits.feat_pending_send_sz) {
|
||||
u32 pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
|
||||
|
||||
/*
|
||||
* If there was space before we began iteration,
|
||||
* then host was not blocked. Also handles case where
|
||||
* pending_sz is zero then host has nothing pending
|
||||
* and does not need to be signaled.
|
||||
*/
|
||||
if (orig_write_sz > pending_sz)
|
||||
return;
|
||||
|
||||
/* If pending write will not fit, don't give false hope. */
|
||||
if (hv_get_bytes_to_write(rbi) < pending_sz)
|
||||
return;
|
||||
}
|
||||
|
||||
vmbus_setevent(channel);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(hv_pkt_iter_close);
|
||||
|
|
|
@ -940,6 +940,9 @@ static void vmbus_chan_sched(struct hv_per_cpu_context *hv_cpu)
|
|||
if (channel->offermsg.child_relid != relid)
|
||||
continue;
|
||||
|
||||
if (channel->rescind)
|
||||
continue;
|
||||
|
||||
switch (channel->callback_mode) {
|
||||
case HV_CALL_ISR:
|
||||
vmbus_channel_isr(channel);
|
||||
|
|
|
@@ -70,13 +70,13 @@ config CORESIGHT_SOURCE_ETM4X
	  for instruction level tracing. Depending on the implemented version
	  data tracing may also be available.

config CORESIGHT_QCOM_REPLICATOR
	bool "Qualcomm CoreSight Replicator driver"
config CORESIGHT_DYNAMIC_REPLICATOR
	bool "CoreSight Programmable Replicator driver"
	depends on CORESIGHT_LINKS_AND_SINKS
	help
	  This enables support for Qualcomm CoreSight link driver. The
	  programmable ATB replicator sends the ATB trace stream from the
	  ETB/ETF to the TPIUi and ETR.
	  This enables support for dynamic CoreSight replicator link driver.
	  The programmable ATB replicator allows independent filtering of the
	  trace data based on the traceid.

config CORESIGHT_STM
	bool "CoreSight System Trace Macrocell driver"
|
|
|
@@ -14,6 +14,6 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
					coresight-etm3x-sysfs.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
					coresight-etm4x-sysfs.o
obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
obj-$(CONFIG_CORESIGHT_DYNAMIC_REPLICATOR) += coresight-dynamic-replicator.o
obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
|
|
|
@@ -667,7 +667,7 @@ static int debug_remove(struct amba_device *adev)
	return 0;
}

static struct amba_id debug_ids[] = {
static const struct amba_id debug_ids[] = {
	{	/* Debug for Cortex-A53 */
		.id	= 0x000bbd03,
		.mask	= 0x000fffff,
|
|
|
@ -95,6 +95,28 @@ static const struct coresight_ops replicator_cs_ops = {
|
|||
.link_ops = &replicator_link_ops,
|
||||
};
|
||||
|
||||
#define coresight_replicator_reg(name, offset) \
|
||||
coresight_simple_reg32(struct replicator_state, name, offset)
|
||||
|
||||
coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0);
|
||||
coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1);
|
||||
|
||||
static struct attribute *replicator_mgmt_attrs[] = {
|
||||
&dev_attr_idfilter0.attr,
|
||||
&dev_attr_idfilter1.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
static const struct attribute_group replicator_mgmt_group = {
|
||||
.attrs = replicator_mgmt_attrs,
|
||||
.name = "mgmt",
|
||||
};
|
||||
|
||||
static const struct attribute_group *replicator_groups[] = {
|
||||
&replicator_mgmt_group,
|
||||
NULL,
|
||||
};
|
||||
|
||||
static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
|
||||
{
|
||||
int ret;
|
||||
|
@ -139,11 +161,11 @@ static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
desc.ops = &replicator_cs_ops;
|
||||
desc.pdata = adev->dev.platform_data;
|
||||
desc.dev = &adev->dev;
|
||||
desc.groups = replicator_groups;
|
||||
drvdata->csdev = coresight_register(&desc);
|
||||
if (IS_ERR(drvdata->csdev))
|
||||
return PTR_ERR(drvdata->csdev);
|
||||
|
||||
dev_info(dev, "%s initialized\n", (char *)id->data);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -175,18 +197,22 @@ static const struct dev_pm_ops replicator_dev_pm_ops = {
|
|||
NULL)
|
||||
};
|
||||
|
||||
static struct amba_id replicator_ids[] = {
|
||||
static const struct amba_id replicator_ids[] = {
|
||||
{
|
||||
.id = 0x0003b909,
|
||||
.mask = 0x0003ffff,
|
||||
.data = "REPLICATOR 1.0",
|
||||
},
|
||||
{
|
||||
/* Coresight SoC-600 */
|
||||
.id = 0x000bb9ec,
|
||||
.mask = 0x000fffff,
|
||||
},
|
||||
{ 0, 0 },
|
||||
};
|
||||
|
||||
static struct amba_driver replicator_driver = {
|
||||
.drv = {
|
||||
.name = "coresight-replicator-qcom",
|
||||
.name = "coresight-dynamic-replicator",
|
||||
.pm = &replicator_dev_pm_ops,
|
||||
.suppress_bind_attrs = true,
|
||||
},
|
|
@ -200,8 +200,10 @@ static void etb_disable_hw(struct etb_drvdata *drvdata)
|
|||
|
||||
static void etb_dump_hw(struct etb_drvdata *drvdata)
|
||||
{
|
||||
bool lost = false;
|
||||
int i;
|
||||
u8 *buf_ptr;
|
||||
const u32 *barrier;
|
||||
u32 read_data, depth;
|
||||
u32 read_ptr, write_ptr;
|
||||
u32 frame_off, frame_endoff;
|
||||
|
@@ -223,20 +225,26 @@ static void etb_dump_hw(struct etb_drvdata *drvdata)
 	}
 
 	if ((readl_relaxed(drvdata->base + ETB_STATUS_REG)
-		      & ETB_STATUS_RAM_FULL) == 0)
+		      & ETB_STATUS_RAM_FULL) == 0) {
 		writel_relaxed(0x0, drvdata->base + ETB_RAM_READ_POINTER);
-	else
+	} else {
 		writel_relaxed(write_ptr, drvdata->base + ETB_RAM_READ_POINTER);
+		lost = true;
+	}
 
 	depth = drvdata->buffer_depth;
 	buf_ptr = drvdata->buf;
+	barrier = barrier_pkt;
 	for (i = 0; i < depth; i++) {
 		read_data = readl_relaxed(drvdata->base +
 					  ETB_RAM_READ_DATA_REG);
-		*buf_ptr++ = read_data >> 0;
-		*buf_ptr++ = read_data >> 8;
-		*buf_ptr++ = read_data >> 16;
-		*buf_ptr++ = read_data >> 24;
+		if (lost && *barrier) {
+			read_data = *barrier;
+			barrier++;
+		}
+
+		*(u32 *)buf_ptr = read_data;
+		buf_ptr += 4;
 	}
 
 	if (frame_off) {
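/*
 * Illustrative sketch (not part of this patch): the barrier-insertion
 * pattern used by etb_dump_hw()/etb_update_buffer() above, shown in
 * isolation. When the sink wrapped around ("lost" is true), the first
 * words handed to the decoder are replaced with barrier packets so it
 * looks for a new sync sequence before decoding the older trace that
 * follows. The helper name is hypothetical.
 */
static void __maybe_unused copy_words_with_barriers(u32 *dst, const u32 *src,
						    unsigned int words,
						    bool lost)
{
	const u32 *barrier = barrier_pkt;	/* 0x7fffffff x 4, 0-terminated */
	unsigned int i;

	for (i = 0; i < words; i++) {
		u32 word = src[i];

		if (lost && *barrier)
			word = *barrier++;
		dst[i] = word;
	}
}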
@ -353,8 +361,10 @@ static void etb_update_buffer(struct coresight_device *csdev,
|
|||
struct perf_output_handle *handle,
|
||||
void *sink_config)
|
||||
{
|
||||
bool lost = false;
|
||||
int i, cur;
|
||||
u8 *buf_ptr;
|
||||
const u32 *barrier;
|
||||
u32 read_ptr, write_ptr, capacity;
|
||||
u32 status, read_data, to_read;
|
||||
unsigned long offset;
|
||||
|
@ -366,8 +376,8 @@ static void etb_update_buffer(struct coresight_device *csdev,
|
|||
|
||||
capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
|
||||
|
||||
CS_UNLOCK(drvdata->base);
|
||||
etb_disable_hw(drvdata);
|
||||
CS_UNLOCK(drvdata->base);
|
||||
|
||||
/* unit is in words, not bytes */
|
||||
read_ptr = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER);
|
||||
|
@ -384,7 +394,7 @@ static void etb_update_buffer(struct coresight_device *csdev,
|
|||
(unsigned long)write_ptr);
|
||||
|
||||
write_ptr &= ~(ETB_FRAME_SIZE_WORDS - 1);
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
lost = true;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -395,7 +405,7 @@ static void etb_update_buffer(struct coresight_device *csdev,
|
|||
*/
|
||||
status = readl_relaxed(drvdata->base + ETB_STATUS_REG);
|
||||
if (status & ETB_STATUS_RAM_FULL) {
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
lost = true;
|
||||
to_read = capacity;
|
||||
read_ptr = write_ptr;
|
||||
} else {
|
||||
|
@ -428,22 +438,30 @@ static void etb_update_buffer(struct coresight_device *csdev,
|
|||
if (read_ptr > (drvdata->buffer_depth - 1))
|
||||
read_ptr -= drvdata->buffer_depth;
|
||||
/* let the decoder know we've skipped ahead */
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
lost = true;
|
||||
}
|
||||
|
||||
if (lost)
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
|
||||
/* finally tell HW where we want to start reading from */
|
||||
writel_relaxed(read_ptr, drvdata->base + ETB_RAM_READ_POINTER);
|
||||
|
||||
cur = buf->cur;
|
||||
offset = buf->offset;
|
||||
barrier = barrier_pkt;
|
||||
|
||||
for (i = 0; i < to_read; i += 4) {
|
||||
buf_ptr = buf->data_pages[cur] + offset;
|
||||
read_data = readl_relaxed(drvdata->base +
|
||||
ETB_RAM_READ_DATA_REG);
|
||||
*buf_ptr++ = read_data >> 0;
|
||||
*buf_ptr++ = read_data >> 8;
|
||||
*buf_ptr++ = read_data >> 16;
|
||||
*buf_ptr++ = read_data >> 24;
|
||||
if (lost && *barrier) {
|
||||
read_data = *barrier;
|
||||
barrier++;
|
||||
}
|
||||
|
||||
*(u32 *)buf_ptr = read_data;
|
||||
buf_ptr += 4;
|
||||
|
||||
offset += 4;
|
||||
if (offset >= PAGE_SIZE) {
|
||||
|
@@ -557,17 +575,17 @@ static const struct file_operations etb_fops = {
 	.llseek		= no_llseek,
 };
 
-#define coresight_etb10_simple_func(name, offset) \
-	coresight_simple_func(struct etb_drvdata, NULL, name, offset)
+#define coresight_etb10_reg(name, offset) \
+	coresight_simple_reg32(struct etb_drvdata, name, offset)
 
-coresight_etb10_simple_func(rdp, ETB_RAM_DEPTH_REG);
-coresight_etb10_simple_func(sts, ETB_STATUS_REG);
-coresight_etb10_simple_func(rrp, ETB_RAM_READ_POINTER);
-coresight_etb10_simple_func(rwp, ETB_RAM_WRITE_POINTER);
-coresight_etb10_simple_func(trg, ETB_TRG);
-coresight_etb10_simple_func(ctl, ETB_CTL_REG);
-coresight_etb10_simple_func(ffsr, ETB_FFSR);
-coresight_etb10_simple_func(ffcr, ETB_FFCR);
+coresight_etb10_reg(rdp, ETB_RAM_DEPTH_REG);
+coresight_etb10_reg(sts, ETB_STATUS_REG);
+coresight_etb10_reg(rrp, ETB_RAM_READ_POINTER);
+coresight_etb10_reg(rwp, ETB_RAM_WRITE_POINTER);
+coresight_etb10_reg(trg, ETB_TRG);
+coresight_etb10_reg(ctl, ETB_CTL_REG);
+coresight_etb10_reg(ffsr, ETB_FFSR);
+coresight_etb10_reg(ffcr, ETB_FFCR);
 
 static struct attribute *coresight_etb_mgmt_attrs[] = {
 	&dev_attr_rdp.attr,
@@ -728,7 +746,7 @@ static const struct dev_pm_ops etb_dev_pm_ops = {
 	SET_RUNTIME_PM_OPS(etb_runtime_suspend, etb_runtime_resume, NULL)
 };
 
-static struct amba_id etb_ids[] = {
+static const struct amba_id etb_ids[] = {
 	{
 		.id	= 0x0003b907,
 		.mask	= 0x0003ffff,
@@ -53,14 +53,16 @@ static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
 /* ETMv3.5/PTM's ETMCR is 'config' */
 PMU_FORMAT_ATTR(cycacc,		"config:" __stringify(ETM_OPT_CYCACC));
 PMU_FORMAT_ATTR(timestamp,	"config:" __stringify(ETM_OPT_TS));
+PMU_FORMAT_ATTR(retstack,	"config:" __stringify(ETM_OPT_RETSTK));
 
 static struct attribute *etm_config_formats_attr[] = {
 	&format_attr_cycacc.attr,
 	&format_attr_timestamp.attr,
+	&format_attr_retstack.attr,
 	NULL,
 };
 
-static struct attribute_group etm_pmu_format_group = {
+static const struct attribute_group etm_pmu_format_group = {
 	.name   = "format",
 	.attrs  = etm_config_formats_attr,
 };
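/*
 * Illustrative note (not part of this patch): with the "retstack" format
 * attribute registered above, user space can request the new option through
 * the usual perf event syntax, e.g.:
 *
 *	perf record -e cs_etm/retstack/ ...
 *
 * which sets BIT(ETM_OPT_RETSTK) in perf_event_attr.config; the per-tracer
 * parse_event_config() callbacks (see the etm3x/etm4x hunks below) translate
 * or drop the bit depending on what the hardware supports.
 */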
@@ -106,6 +106,7 @@
 #define ETMTECR1_START_STOP	BIT(25)
 /* ETMCCER - 0x1E8 */
 #define ETMCCER_TIMESTAMP	BIT(22)
+#define ETMCCER_RETSTACK	BIT(23)
 
 #define ETM_MODE_EXCLUDE	BIT(0)
 #define ETM_MODE_CYCACC		BIT(1)
@@ -1232,19 +1232,19 @@ static struct attribute *coresight_etm_attrs[] = {
 	NULL,
 };
 
-#define coresight_etm3x_simple_func(name, offset)			\
-	coresight_simple_func(struct etm_drvdata, NULL, name, offset)
+#define coresight_etm3x_reg(name, offset)				\
+	coresight_simple_reg32(struct etm_drvdata, name, offset)
 
-coresight_etm3x_simple_func(etmccr, ETMCCR);
-coresight_etm3x_simple_func(etmccer, ETMCCER);
-coresight_etm3x_simple_func(etmscr, ETMSCR);
-coresight_etm3x_simple_func(etmidr, ETMIDR);
-coresight_etm3x_simple_func(etmcr, ETMCR);
-coresight_etm3x_simple_func(etmtraceidr, ETMTRACEIDR);
-coresight_etm3x_simple_func(etmteevr, ETMTEEVR);
-coresight_etm3x_simple_func(etmtssvr, ETMTSSCR);
-coresight_etm3x_simple_func(etmtecr1, ETMTECR1);
-coresight_etm3x_simple_func(etmtecr2, ETMTECR2);
+coresight_etm3x_reg(etmccr, ETMCCR);
+coresight_etm3x_reg(etmccer, ETMCCER);
+coresight_etm3x_reg(etmscr, ETMSCR);
+coresight_etm3x_reg(etmidr, ETMIDR);
+coresight_etm3x_reg(etmcr, ETMCR);
+coresight_etm3x_reg(etmtraceidr, ETMTRACEIDR);
+coresight_etm3x_reg(etmteevr, ETMTEEVR);
+coresight_etm3x_reg(etmtssvr, ETMTSSCR);
+coresight_etm3x_reg(etmtecr1, ETMTECR1);
+coresight_etm3x_reg(etmtecr2, ETMTECR2);
 
 static struct attribute *coresight_etm_mgmt_attrs[] = {
 	&dev_attr_etmccr.attr,
@@ -243,6 +243,8 @@ void etm_set_default(struct etm_config *config)
 	}
 
 	config->ctxid_mask = 0x0;
+	/* Setting default to 1024 as per TRM recommendation */
+	config->sync_freq = 0x400;
 }
 
 void etm_config_trace_mode(struct etm_config *config)
@@ -308,7 +310,9 @@ void etm_config_trace_mode(struct etm_config *config)
 	config->addr_type[1] = ETM_ADDR_TYPE_RANGE;
 }
 
-#define ETM3X_SUPPORTED_OPTIONS (ETMCR_CYC_ACC | ETMCR_TIMESTAMP_EN)
+#define ETM3X_SUPPORTED_OPTIONS (ETMCR_CYC_ACC | \
+				 ETMCR_TIMESTAMP_EN | \
+				 ETMCR_RETURN_STACK)
 
 static int etm_parse_event_config(struct etm_drvdata *drvdata,
 				  struct perf_event *event)
@@ -339,14 +343,24 @@ static int etm_parse_event_config(struct etm_drvdata *drvdata,
 		etm_config_trace_mode(config);
 
 	/*
-	 * At this time only cycle accurate and timestamp options are
-	 * available.
+	 * At this time only cycle accurate, return stack and timestamp
+	 * options are available.
 	 */
 	if (attr->config & ~ETM3X_SUPPORTED_OPTIONS)
 		return -EINVAL;
 
 	config->ctrl = attr->config;
 
+	/*
+	 * Possible to have cores with PTM (supports ret stack) and ETM
+	 * (never has ret stack) on the same SoC. So if we have a request
+	 * for return stack that can't be honoured on this core then
+	 * clear the bit - trace will still continue normally
+	 */
+	if ((config->ctrl & ETMCR_RETURN_STACK) &&
+	    !(drvdata->etmccer & ETMCCER_RETSTACK))
+		config->ctrl &= ~ETMCR_RETURN_STACK;
+
 	return 0;
 }
 
@@ -885,7 +899,7 @@ static const struct dev_pm_ops etm_dev_pm_ops = {
 	SET_RUNTIME_PM_OPS(etm_runtime_suspend, etm_runtime_resume, NULL)
 };
 
-static struct amba_id etm_ids[] = {
+static const struct amba_id etm_ids[] = {
 	{	/* ETM 3.3 */
 		.id	= 0x0003b921,
 		.mask	= 0x0003ffff,
|
|||
return reg.data;
|
||||
}
|
||||
|
||||
#define coresight_etm4x_simple_func(name, offset) \
|
||||
coresight_simple_func(struct etmv4_drvdata, NULL, name, offset)
|
||||
#define coresight_etm4x_reg(name, offset) \
|
||||
coresight_simple_reg32(struct etmv4_drvdata, name, offset)
|
||||
|
||||
#define coresight_etm4x_cross_read(name, offset) \
|
||||
coresight_simple_func(struct etmv4_drvdata, etmv4_cross_read, \
|
||||
name, offset)
|
||||
|
||||
coresight_etm4x_simple_func(trcpdcr, TRCPDCR);
|
||||
coresight_etm4x_simple_func(trcpdsr, TRCPDSR);
|
||||
coresight_etm4x_simple_func(trclsr, TRCLSR);
|
||||
coresight_etm4x_simple_func(trcauthstatus, TRCAUTHSTATUS);
|
||||
coresight_etm4x_simple_func(trcdevid, TRCDEVID);
|
||||
coresight_etm4x_simple_func(trcdevtype, TRCDEVTYPE);
|
||||
coresight_etm4x_simple_func(trcpidr0, TRCPIDR0);
|
||||
coresight_etm4x_simple_func(trcpidr1, TRCPIDR1);
|
||||
coresight_etm4x_simple_func(trcpidr2, TRCPIDR2);
|
||||
coresight_etm4x_simple_func(trcpidr3, TRCPIDR3);
|
||||
coresight_etm4x_reg(trcpdcr, TRCPDCR);
|
||||
coresight_etm4x_reg(trcpdsr, TRCPDSR);
|
||||
coresight_etm4x_reg(trclsr, TRCLSR);
|
||||
coresight_etm4x_reg(trcauthstatus, TRCAUTHSTATUS);
|
||||
coresight_etm4x_reg(trcdevid, TRCDEVID);
|
||||
coresight_etm4x_reg(trcdevtype, TRCDEVTYPE);
|
||||
coresight_etm4x_reg(trcpidr0, TRCPIDR0);
|
||||
coresight_etm4x_reg(trcpidr1, TRCPIDR1);
|
||||
coresight_etm4x_reg(trcpidr2, TRCPIDR2);
|
||||
coresight_etm4x_reg(trcpidr3, TRCPIDR3);
|
||||
coresight_etm4x_cross_read(trcoslsr, TRCOSLSR);
|
||||
coresight_etm4x_cross_read(trcconfig, TRCCONFIGR);
|
||||
coresight_etm4x_cross_read(trctraceid, TRCTRACEIDR);
|
||||
|
|
|
@@ -224,6 +224,10 @@ static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
 	if (attr->config & BIT(ETM_OPT_TS))
 		/* bit[11], Global timestamp tracing bit */
 		config->cfg |= BIT(11);
+	/* return stack - enable if selected and supported */
+	if ((attr->config & BIT(ETM_OPT_RETSTK)) && drvdata->retstack)
+		/* bit[12], Return stack enable bit */
+		config->cfg |= BIT(12);
 
 out:
 	return ret;
@@ -1048,7 +1052,7 @@ err_arch_supported:
 	return ret;
 }
 
-static struct amba_id etm4_ids[] = {
+static const struct amba_id etm4_ids[] = {
 	{       /* ETM 4.0 - Cortex-A53 */
 		.id	= 0x000bb95d,
 		.mask	= 0x000fffff,
@@ -246,11 +246,16 @@ static const struct dev_pm_ops funnel_dev_pm_ops = {
 	SET_RUNTIME_PM_OPS(funnel_runtime_suspend, funnel_runtime_resume, NULL)
 };
 
-static struct amba_id funnel_ids[] = {
+static const struct amba_id funnel_ids[] = {
 	{
 		.id     = 0x0003b908,
 		.mask   = 0x0003ffff,
 	},
+	{
+		/* Coresight SoC-600 */
+		.id     = 0x000bb9eb,
+		.mask   = 0x000fffff,
+	},
 	{ 0, 0},
 };
 
@@ -39,23 +39,33 @@
 #define ETM_MODE_EXCL_USER	BIT(31)
 
 typedef u32 (*coresight_read_fn)(const struct device *, u32 offset);
-#define coresight_simple_func(type, func, name, offset)			\
+#define __coresight_simple_func(type, func, name, lo_off, hi_off)	\
 static ssize_t name##_show(struct device *_dev,			\
 			   struct device_attribute *attr, char *buf)	\
 {									\
 	type *drvdata = dev_get_drvdata(_dev->parent);			\
 	coresight_read_fn fn = func;					\
-	u32 val;							\
+	u64 val;							\
 	pm_runtime_get_sync(_dev->parent);				\
 	if (fn)								\
-		val = fn(_dev->parent, offset);				\
+		val = (u64)fn(_dev->parent, lo_off);			\
 	else								\
-		val = readl_relaxed(drvdata->base + offset);		\
+		val = coresight_read_reg_pair(drvdata->base,		\
+						 lo_off, hi_off);	\
 	pm_runtime_put_sync(_dev->parent);				\
-	return scnprintf(buf, PAGE_SIZE, "0x%x\n", val);		\
+	return scnprintf(buf, PAGE_SIZE, "0x%llx\n", val);		\
 }									\
 static DEVICE_ATTR_RO(name)
 
+#define coresight_simple_func(type, func, name, offset)		\
+	__coresight_simple_func(type, func, name, offset, -1)
+#define coresight_simple_reg32(type, name, offset)			\
+	__coresight_simple_func(type, NULL, name, offset, -1)
+#define coresight_simple_reg64(type, name, lo_off, hi_off)		\
+	__coresight_simple_func(type, NULL, name, lo_off, hi_off)
+
+extern const u32 barrier_pkt[5];
+
 enum etm_addr_type {
 	ETM_ADDR_TYPE_NONE,
 	ETM_ADDR_TYPE_SINGLE,
@@ -106,6 +116,25 @@ static inline void CS_UNLOCK(void __iomem *addr)
 	} while (0);
 }
 
+static inline u64
+coresight_read_reg_pair(void __iomem *addr, s32 lo_offset, s32 hi_offset)
+{
+	u64 val;
+
+	val = readl_relaxed(addr + lo_offset);
+	val |= (hi_offset < 0) ? 0 :
+	       (u64)readl_relaxed(addr + hi_offset) << 32;
+	return val;
+}
+
+static inline void coresight_write_reg_pair(void __iomem *addr, u64 val,
+						 s32 lo_offset, s32 hi_offset)
+{
+	writel_relaxed((u32)val, addr + lo_offset);
+	if (hi_offset >= 0)
+		writel_relaxed((u32)(val >> 32), addr + hi_offset);
+}
+
 void coresight_disable_path(struct list_head *path);
 int coresight_enable_path(struct list_head *path, u32 mode);
 struct coresight_device *coresight_get_sink(struct list_head *path);
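/*
 * Illustrative note (not part of this patch): with the helpers above, a
 * driver exposes a 64-bit register split across LO/HI words with a single
 * line, e.g. (as coresight-tmc.c does elsewhere in this series):
 *
 *	coresight_simple_reg64(struct tmc_drvdata, dba, TMC_DBALO, TMC_DBAHI);
 *
 * This expands to a dba_show() sysfs handler that reads both halves via
 * coresight_read_reg_pair() and prints the combined value with "0x%llx".
 * Passing -1 as hi_off (what coresight_simple_reg32 does) limits the read
 * to the single 32-bit register at lo_off.
 */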
@@ -276,7 +276,7 @@ static void stm_disable(struct coresight_device *csdev,
 		spin_unlock(&drvdata->spinlock);
 
 		/* Wait until the engine has completely stopped */
-		coresight_timeout(drvdata, STMTCSR, STMTCSR_BUSY_BIT, 0);
+		coresight_timeout(drvdata->base, STMTCSR, STMTCSR_BUSY_BIT, 0);
 
 		pm_runtime_put(drvdata->dev);
 
@ -307,7 +307,8 @@ static inline bool stm_addr_unaligned(const void *addr, u8 write_bytes)
|
|||
return ((unsigned long)addr & (write_bytes - 1));
|
||||
}
|
||||
|
||||
static void stm_send(void *addr, const void *data, u32 size, u8 write_bytes)
|
||||
static void stm_send(void __iomem *addr, const void *data,
|
||||
u32 size, u8 write_bytes)
|
||||
{
|
||||
u8 paload[8];
|
||||
|
||||
|
@ -414,7 +415,7 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
unsigned int size,
|
||||
const unsigned char *payload)
|
||||
{
|
||||
unsigned long ch_addr;
|
||||
void __iomem *ch_addr;
|
||||
struct stm_drvdata *drvdata = container_of(stm_data,
|
||||
struct stm_drvdata, stm);
|
||||
|
||||
|
@ -424,7 +425,7 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
if (channel >= drvdata->numsp)
|
||||
return -EINVAL;
|
||||
|
||||
ch_addr = (unsigned long)stm_channel_addr(drvdata, channel);
|
||||
ch_addr = stm_channel_addr(drvdata, channel);
|
||||
|
||||
flags = (flags == STP_PACKET_TIMESTAMPED) ? STM_FLAG_TIMESTAMPED : 0;
|
||||
flags |= test_bit(channel, drvdata->chs.guaranteed) ?
|
||||
|
@ -437,20 +438,20 @@ static ssize_t notrace stm_generic_packet(struct stm_data *stm_data,
|
|||
|
||||
switch (packet) {
|
||||
case STP_PACKET_FLAG:
|
||||
ch_addr |= stm_channel_off(STM_PKT_TYPE_FLAG, flags);
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_FLAG, flags);
|
||||
|
||||
/*
|
||||
* The generic STM core sets a size of '0' on flag packets.
|
||||
* As such send a flag packet of size '1' and tell the
|
||||
* core we did so.
|
||||
*/
|
||||
stm_send((void *)ch_addr, payload, 1, drvdata->write_bytes);
|
||||
stm_send(ch_addr, payload, 1, drvdata->write_bytes);
|
||||
size = 1;
|
||||
break;
|
||||
|
||||
case STP_PACKET_DATA:
|
||||
ch_addr |= stm_channel_off(STM_PKT_TYPE_DATA, flags);
|
||||
stm_send((void *)ch_addr, payload, size,
|
||||
ch_addr += stm_channel_off(STM_PKT_TYPE_DATA, flags);
|
||||
stm_send(ch_addr, payload, size,
|
||||
drvdata->write_bytes);
|
||||
break;
|
||||
|
||||
|
@ -635,21 +636,21 @@ static ssize_t traceid_store(struct device *dev,
|
|||
}
|
||||
static DEVICE_ATTR_RW(traceid);
|
||||
|
||||
#define coresight_stm_simple_func(name, offset) \
|
||||
coresight_simple_func(struct stm_drvdata, NULL, name, offset)
|
||||
#define coresight_stm_reg(name, offset) \
|
||||
coresight_simple_reg32(struct stm_drvdata, name, offset)
|
||||
|
||||
coresight_stm_simple_func(tcsr, STMTCSR);
|
||||
coresight_stm_simple_func(tsfreqr, STMTSFREQR);
|
||||
coresight_stm_simple_func(syncr, STMSYNCR);
|
||||
coresight_stm_simple_func(sper, STMSPER);
|
||||
coresight_stm_simple_func(spter, STMSPTER);
|
||||
coresight_stm_simple_func(privmaskr, STMPRIVMASKR);
|
||||
coresight_stm_simple_func(spscr, STMSPSCR);
|
||||
coresight_stm_simple_func(spmscr, STMSPMSCR);
|
||||
coresight_stm_simple_func(spfeat1r, STMSPFEAT1R);
|
||||
coresight_stm_simple_func(spfeat2r, STMSPFEAT2R);
|
||||
coresight_stm_simple_func(spfeat3r, STMSPFEAT3R);
|
||||
coresight_stm_simple_func(devid, CORESIGHT_DEVID);
|
||||
coresight_stm_reg(tcsr, STMTCSR);
|
||||
coresight_stm_reg(tsfreqr, STMTSFREQR);
|
||||
coresight_stm_reg(syncr, STMSYNCR);
|
||||
coresight_stm_reg(sper, STMSPER);
|
||||
coresight_stm_reg(spter, STMSPTER);
|
||||
coresight_stm_reg(privmaskr, STMPRIVMASKR);
|
||||
coresight_stm_reg(spscr, STMSPSCR);
|
||||
coresight_stm_reg(spmscr, STMSPMSCR);
|
||||
coresight_stm_reg(spfeat1r, STMSPFEAT1R);
|
||||
coresight_stm_reg(spfeat2r, STMSPFEAT2R);
|
||||
coresight_stm_reg(spfeat3r, STMSPFEAT3R);
|
||||
coresight_stm_reg(devid, CORESIGHT_DEVID);
|
||||
|
||||
static struct attribute *coresight_stm_attrs[] = {
|
||||
&dev_attr_hwevent_enable.attr,
|
||||
|
@ -914,7 +915,7 @@ static const struct dev_pm_ops stm_dev_pm_ops = {
|
|||
SET_RUNTIME_PM_OPS(stm_runtime_suspend, stm_runtime_resume, NULL)
|
||||
};
|
||||
|
||||
static struct amba_id stm_ids[] = {
|
||||
static const struct amba_id stm_ids[] = {
|
||||
{
|
||||
.id = 0x0003b962,
|
||||
.mask = 0x0003ffff,
|
||||
|
|
|
@ -43,17 +43,34 @@ static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
|
|||
|
||||
static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
|
||||
{
|
||||
bool lost = false;
|
||||
char *bufp;
|
||||
u32 read_data;
|
||||
const u32 *barrier;
|
||||
u32 read_data, status;
|
||||
int i;
|
||||
|
||||
/*
|
||||
* Get a hold of the status register and see if a wrap around
|
||||
* has occurred.
|
||||
*/
|
||||
status = readl_relaxed(drvdata->base + TMC_STS);
|
||||
if (status & TMC_STS_FULL)
|
||||
lost = true;
|
||||
|
||||
bufp = drvdata->buf;
|
||||
drvdata->len = 0;
|
||||
barrier = barrier_pkt;
|
||||
while (1) {
|
||||
for (i = 0; i < drvdata->memwidth; i++) {
|
||||
read_data = readl_relaxed(drvdata->base + TMC_RRD);
|
||||
if (read_data == 0xFFFFFFFF)
|
||||
return;
|
||||
|
||||
if (lost && *barrier) {
|
||||
read_data = *barrier;
|
||||
barrier++;
|
||||
}
|
||||
|
||||
memcpy(bufp, &read_data, 4);
|
||||
bufp += 4;
|
||||
drvdata->len += 4;
|
||||
|
@ -369,9 +386,11 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
struct perf_output_handle *handle,
|
||||
void *sink_config)
|
||||
{
|
||||
bool lost = false;
|
||||
int i, cur;
|
||||
const u32 *barrier;
|
||||
u32 *buf_ptr;
|
||||
u32 read_ptr, write_ptr;
|
||||
u64 read_ptr, write_ptr;
|
||||
u32 status, to_read;
|
||||
unsigned long offset;
|
||||
struct cs_buffers *buf = sink_config;
|
||||
|
@ -388,8 +407,8 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
|
||||
tmc_flush_and_stop(drvdata);
|
||||
|
||||
read_ptr = readl_relaxed(drvdata->base + TMC_RRP);
|
||||
write_ptr = readl_relaxed(drvdata->base + TMC_RWP);
|
||||
read_ptr = tmc_read_rrp(drvdata);
|
||||
write_ptr = tmc_read_rwp(drvdata);
|
||||
|
||||
/*
|
||||
* Get a hold of the status register and see if a wrap around
|
||||
|
@ -397,7 +416,7 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
*/
|
||||
status = readl_relaxed(drvdata->base + TMC_STS);
|
||||
if (status & TMC_STS_FULL) {
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
lost = true;
|
||||
to_read = drvdata->size;
|
||||
} else {
|
||||
to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->size);
|
||||
|
@ -441,18 +460,27 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
if (read_ptr > (drvdata->size - 1))
|
||||
read_ptr -= drvdata->size;
|
||||
/* Tell the HW */
|
||||
writel_relaxed(read_ptr, drvdata->base + TMC_RRP);
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
tmc_write_rrp(drvdata, read_ptr);
|
||||
lost = true;
|
||||
}
|
||||
|
||||
if (lost)
|
||||
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
|
||||
|
||||
cur = buf->cur;
|
||||
offset = buf->offset;
|
||||
barrier = barrier_pkt;
|
||||
|
||||
/* for every byte to read */
|
||||
for (i = 0; i < to_read; i += 4) {
|
||||
buf_ptr = buf->data_pages[cur] + offset;
|
||||
*buf_ptr = readl_relaxed(drvdata->base + TMC_RRD);
|
||||
|
||||
if (lost && *barrier) {
|
||||
*buf_ptr = *barrier;
|
||||
barrier++;
|
||||
}
|
||||
|
||||
offset += 4;
|
||||
if (offset >= PAGE_SIZE) {
|
||||
offset = 0;
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
|
||||
static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
|
||||
{
|
||||
u32 axictl;
|
||||
u32 axictl, sts;
|
||||
|
||||
/* Zero out the memory to help with debug */
|
||||
memset(drvdata->vaddr, 0, drvdata->size);
|
||||
|
@ -36,17 +36,29 @@ static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
|
|||
writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
|
||||
|
||||
axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
|
||||
axictl |= TMC_AXICTL_WR_BURST_16;
|
||||
writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
|
||||
axictl &= ~TMC_AXICTL_SCT_GAT_MODE;
|
||||
writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
|
||||
axictl = (axictl &
|
||||
~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) |
|
||||
TMC_AXICTL_PROT_CTL_B1;
|
||||
writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
|
||||
axictl &= ~TMC_AXICTL_CLEAR_MASK;
|
||||
axictl |= (TMC_AXICTL_PROT_CTL_B1 | TMC_AXICTL_WR_BURST_16);
|
||||
axictl |= TMC_AXICTL_AXCACHE_OS;
|
||||
|
||||
if (tmc_etr_has_cap(drvdata, TMC_ETR_AXI_ARCACHE)) {
|
||||
axictl &= ~TMC_AXICTL_ARCACHE_MASK;
|
||||
axictl |= TMC_AXICTL_ARCACHE_OS;
|
||||
}
|
||||
|
||||
writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
|
||||
tmc_write_dba(drvdata, drvdata->paddr);
|
||||
/*
|
||||
* If the TMC pointers must be programmed before the session,
|
||||
* we have to set it properly (i.e, RRP/RWP to base address and
|
||||
* STS to "not full").
|
||||
*/
|
||||
if (tmc_etr_has_cap(drvdata, TMC_ETR_SAVE_RESTORE)) {
|
||||
tmc_write_rrp(drvdata, drvdata->paddr);
|
||||
tmc_write_rwp(drvdata, drvdata->paddr);
|
||||
sts = readl_relaxed(drvdata->base + TMC_STS) & ~TMC_STS_FULL;
|
||||
writel_relaxed(sts, drvdata->base + TMC_STS);
|
||||
}
|
||||
|
||||
writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO);
|
||||
writel_relaxed(0x0, drvdata->base + TMC_DBAHI);
|
||||
writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
|
||||
TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
|
||||
TMC_FFCR_TRIGON_TRIGIN,
|
||||
|
@ -59,9 +71,12 @@ static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
|
|||
|
||||
static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
|
||||
{
|
||||
u32 rwp, val;
|
||||
const u32 *barrier;
|
||||
u32 val;
|
||||
u32 *temp;
|
||||
u64 rwp;
|
||||
|
||||
rwp = readl_relaxed(drvdata->base + TMC_RWP);
|
||||
rwp = tmc_read_rwp(drvdata);
|
||||
val = readl_relaxed(drvdata->base + TMC_STS);
|
||||
|
||||
/*
|
||||
|
@ -71,6 +86,16 @@ static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
|
|||
if (val & TMC_STS_FULL) {
|
||||
drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
|
||||
drvdata->len = drvdata->size;
|
||||
|
||||
barrier = barrier_pkt;
|
||||
temp = (u32 *)drvdata->buf;
|
||||
|
||||
while (*barrier) {
|
||||
*temp = *barrier;
|
||||
temp++;
|
||||
barrier++;
|
||||
}
|
||||
|
||||
} else {
|
||||
drvdata->buf = drvdata->vaddr;
|
||||
drvdata->len = rwp - drvdata->paddr;
|
||||
|
|
|
@ -217,20 +217,24 @@ static enum tmc_mem_intf_width tmc_get_memwidth(u32 devid)
|
|||
return memwidth;
|
||||
}
|
||||
|
||||
#define coresight_tmc_simple_func(name, offset) \
|
||||
coresight_simple_func(struct tmc_drvdata, NULL, name, offset)
|
||||
#define coresight_tmc_reg(name, offset) \
|
||||
coresight_simple_reg32(struct tmc_drvdata, name, offset)
|
||||
#define coresight_tmc_reg64(name, lo_off, hi_off) \
|
||||
coresight_simple_reg64(struct tmc_drvdata, name, lo_off, hi_off)
|
||||
|
||||
coresight_tmc_simple_func(rsz, TMC_RSZ);
|
||||
coresight_tmc_simple_func(sts, TMC_STS);
|
||||
coresight_tmc_simple_func(rrp, TMC_RRP);
|
||||
coresight_tmc_simple_func(rwp, TMC_RWP);
|
||||
coresight_tmc_simple_func(trg, TMC_TRG);
|
||||
coresight_tmc_simple_func(ctl, TMC_CTL);
|
||||
coresight_tmc_simple_func(ffsr, TMC_FFSR);
|
||||
coresight_tmc_simple_func(ffcr, TMC_FFCR);
|
||||
coresight_tmc_simple_func(mode, TMC_MODE);
|
||||
coresight_tmc_simple_func(pscr, TMC_PSCR);
|
||||
coresight_tmc_simple_func(devid, CORESIGHT_DEVID);
|
||||
coresight_tmc_reg(rsz, TMC_RSZ);
|
||||
coresight_tmc_reg(sts, TMC_STS);
|
||||
coresight_tmc_reg(trg, TMC_TRG);
|
||||
coresight_tmc_reg(ctl, TMC_CTL);
|
||||
coresight_tmc_reg(ffsr, TMC_FFSR);
|
||||
coresight_tmc_reg(ffcr, TMC_FFCR);
|
||||
coresight_tmc_reg(mode, TMC_MODE);
|
||||
coresight_tmc_reg(pscr, TMC_PSCR);
|
||||
coresight_tmc_reg(axictl, TMC_AXICTL);
|
||||
coresight_tmc_reg(devid, CORESIGHT_DEVID);
|
||||
coresight_tmc_reg64(rrp, TMC_RRP, TMC_RRPHI);
|
||||
coresight_tmc_reg64(rwp, TMC_RWP, TMC_RWPHI);
|
||||
coresight_tmc_reg64(dba, TMC_DBALO, TMC_DBAHI);
|
||||
|
||||
static struct attribute *coresight_tmc_mgmt_attrs[] = {
|
||||
&dev_attr_rsz.attr,
|
||||
|
@ -244,6 +248,8 @@ static struct attribute *coresight_tmc_mgmt_attrs[] = {
|
|||
&dev_attr_mode.attr,
|
||||
&dev_attr_pscr.attr,
|
||||
&dev_attr_devid.attr,
|
||||
&dev_attr_dba.attr,
|
||||
&dev_attr_axictl.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
@ -293,6 +299,42 @@ const struct attribute_group *coresight_tmc_groups[] = {
|
|||
NULL,
|
||||
};
|
||||
|
||||
/* Detect and initialise the capabilities of a TMC ETR */
|
||||
static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
|
||||
u32 devid, void *dev_caps)
|
||||
{
|
||||
u32 dma_mask = 0;
|
||||
|
||||
/* Set the unadvertised capabilities */
|
||||
tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps);
|
||||
|
||||
if (!(devid & TMC_DEVID_NOSCAT))
|
||||
tmc_etr_set_cap(drvdata, TMC_ETR_SG);
|
||||
|
||||
/* Check if the AXI address width is available */
|
||||
if (devid & TMC_DEVID_AXIAW_VALID)
|
||||
dma_mask = ((devid >> TMC_DEVID_AXIAW_SHIFT) &
|
||||
TMC_DEVID_AXIAW_MASK);
|
||||
|
||||
/*
|
||||
* Unless specified in the device configuration, ETR uses a 40-bit
|
||||
* AXI master in place of the embedded SRAM of ETB/ETF.
|
||||
*/
|
||||
switch (dma_mask) {
|
||||
case 32:
|
||||
case 40:
|
||||
case 44:
|
||||
case 48:
|
||||
case 52:
|
||||
dev_info(drvdata->dev, "Detected dma mask %dbits\n", dma_mask);
|
||||
break;
|
||||
default:
|
||||
dma_mask = 40;
|
||||
}
|
||||
|
||||
return dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask));
|
||||
}
|
||||
|
||||
static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
|
||||
{
|
||||
int ret = 0;
|
||||
|
@ -354,25 +396,29 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
|
|||
desc.dev = dev;
|
||||
desc.groups = coresight_tmc_groups;
|
||||
|
||||
if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
|
||||
switch (drvdata->config_type) {
|
||||
case TMC_CONFIG_TYPE_ETB:
|
||||
desc.type = CORESIGHT_DEV_TYPE_SINK;
|
||||
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
|
||||
desc.ops = &tmc_etb_cs_ops;
|
||||
} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
|
||||
break;
|
||||
case TMC_CONFIG_TYPE_ETR:
|
||||
desc.type = CORESIGHT_DEV_TYPE_SINK;
|
||||
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
|
||||
desc.ops = &tmc_etr_cs_ops;
|
||||
/*
|
||||
* ETR configuration uses a 40-bit AXI master in place of
|
||||
* the embedded SRAM of ETB/ETF.
|
||||
*/
|
||||
ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
|
||||
ret = tmc_etr_setup_caps(drvdata, devid, id->data);
|
||||
if (ret)
|
||||
goto out;
|
||||
} else {
|
||||
break;
|
||||
case TMC_CONFIG_TYPE_ETF:
|
||||
desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
|
||||
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;
|
||||
desc.ops = &tmc_etf_cs_ops;
|
||||
break;
|
||||
default:
|
||||
pr_err("%s: Unsupported TMC config\n", pdata->name);
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
drvdata->csdev = coresight_register(&desc);
|
||||
|
@ -391,11 +437,27 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static struct amba_id tmc_ids[] = {
|
||||
static const struct amba_id tmc_ids[] = {
|
||||
{
|
||||
.id = 0x0003b961,
|
||||
.mask = 0x0003ffff,
|
||||
},
|
||||
{
|
||||
/* Coresight SoC 600 TMC-ETR/ETS */
|
||||
.id = 0x000bb9e8,
|
||||
.mask = 0x000fffff,
|
||||
.data = (void *)(unsigned long)CORESIGHT_SOC_600_ETR_CAPS,
|
||||
},
|
||||
{
|
||||
/* Coresight SoC 600 TMC-ETB */
|
||||
.id = 0x000bb9e9,
|
||||
.mask = 0x000fffff,
|
||||
},
|
||||
{
|
||||
/* Coresight SoC 600 TMC-ETF */
|
||||
.id = 0x000bb9ea,
|
||||
.mask = 0x000fffff,
|
||||
},
|
||||
{ 0, 0},
|
||||
};
|
||||
|
||||
|
|
|
@ -54,11 +54,32 @@
|
|||
#define TMC_STS_TMCREADY_BIT 2
|
||||
#define TMC_STS_FULL BIT(0)
|
||||
#define TMC_STS_TRIGGERED BIT(1)
|
||||
/* TMC_AXICTL - 0x110 */
|
||||
/*
|
||||
* TMC_AXICTL - 0x110
|
||||
*
|
||||
* TMC AXICTL format for SoC-400
|
||||
* Bits [0-1] : ProtCtrlBit0-1
|
||||
* Bits [2-5] : CacheCtrlBits 0-3 (AXCACHE)
|
||||
* Bit 6 : Reserved
|
||||
* Bit 7 : ScatterGatherMode
|
||||
* Bits [8-11] : WrBurstLen
|
||||
* Bits [12-31] : Reserved.
|
||||
* TMC AXICTL format for SoC-600, as above except:
|
||||
* Bits [2-5] : AXI WCACHE
|
||||
* Bits [16-19] : AXI RCACHE
|
||||
* Bits [20-31] : Reserved
|
||||
*/
|
||||
#define TMC_AXICTL_CLEAR_MASK 0xfbf
|
||||
#define TMC_AXICTL_ARCACHE_MASK (0xf << 16)
|
||||
|
||||
#define TMC_AXICTL_PROT_CTL_B0 BIT(0)
|
||||
#define TMC_AXICTL_PROT_CTL_B1 BIT(1)
|
||||
#define TMC_AXICTL_SCT_GAT_MODE BIT(7)
|
||||
#define TMC_AXICTL_WR_BURST_16 0xF00
|
||||
/* Write-back Read and Write-allocate */
|
||||
#define TMC_AXICTL_AXCACHE_OS (0xf << 2)
|
||||
#define TMC_AXICTL_ARCACHE_OS (0xf << 16)
|
||||
|
||||
/* TMC_FFCR - 0x304 */
|
||||
#define TMC_FFCR_FLUSHMAN_BIT 6
|
||||
#define TMC_FFCR_EN_FMT BIT(0)
|
||||
|
@ -69,6 +90,12 @@
|
|||
#define TMC_FFCR_STOP_ON_FLUSH BIT(12)
|
||||
|
||||
|
||||
#define TMC_DEVID_NOSCAT BIT(24)
|
||||
|
||||
#define TMC_DEVID_AXIAW_VALID BIT(16)
|
||||
#define TMC_DEVID_AXIAW_SHIFT 17
|
||||
#define TMC_DEVID_AXIAW_MASK 0x7f
|
||||
|
||||
enum tmc_config_type {
|
||||
TMC_CONFIG_TYPE_ETB,
|
||||
TMC_CONFIG_TYPE_ETR,
|
||||
|
@ -88,6 +115,24 @@ enum tmc_mem_intf_width {
|
|||
TMC_MEM_INTF_WIDTH_256BITS = 8,
|
||||
};
|
||||
|
||||
/* TMC ETR Capability bit definitions */
|
||||
#define TMC_ETR_SG (0x1U << 0)
|
||||
/* ETR has separate read/write cache encodings */
|
||||
#define TMC_ETR_AXI_ARCACHE (0x1U << 1)
|
||||
/*
|
||||
* TMC_ETR_SAVE_RESTORE - Values of RRP/RWP/STS.Full are
|
||||
* retained when TMC leaves Disabled state, allowing us to continue
|
||||
* the tracing from a point where we stopped. This also implies that
|
||||
* the RRP/RWP/STS.Full should always be programmed to the correct
|
||||
* value. Unfortunately this is not advertised by the hardware,
|
||||
* so we have to rely on PID of the IP to detect the functionality.
|
||||
*/
|
||||
#define TMC_ETR_SAVE_RESTORE (0x1U << 2)
|
||||
|
||||
/* Coresight SoC-600 TMC-ETR unadvertised capabilities */
|
||||
#define CORESIGHT_SOC_600_ETR_CAPS \
|
||||
(TMC_ETR_SAVE_RESTORE | TMC_ETR_AXI_ARCACHE)
|
||||
|
||||
/**
|
||||
* struct tmc_drvdata - specifics associated to an TMC component
|
||||
* @base: memory mapped base address for this component.
|
||||
|
@ -104,6 +149,8 @@ enum tmc_mem_intf_width {
|
|||
* @config_type: TMC variant, must be of type @tmc_config_type.
|
||||
* @memwidth: width of the memory interface databus, in bytes.
|
||||
* @trigger_cntr: amount of words to store after a trigger.
|
||||
* @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the
|
||||
* device configuration register (DEVID)
|
||||
*/
|
||||
struct tmc_drvdata {
|
||||
void __iomem *base;
|
||||
|
@ -121,6 +168,7 @@ struct tmc_drvdata {
|
|||
enum tmc_config_type config_type;
|
||||
enum tmc_mem_intf_width memwidth;
|
||||
u32 trigger_cntr;
|
||||
u32 etr_caps;
|
||||
};
|
||||
|
||||
/* Generic functions */
|
||||
|
@@ -139,4 +187,39 @@ extern const struct coresight_ops tmc_etf_cs_ops;
 int tmc_read_prepare_etr(struct tmc_drvdata *drvdata);
 int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata);
 extern const struct coresight_ops tmc_etr_cs_ops;
+
+
+#define TMC_REG_PAIR(name, lo_off, hi_off)				\
+static inline u64							\
+tmc_read_##name(struct tmc_drvdata *drvdata)				\
+{									\
+	return coresight_read_reg_pair(drvdata->base, lo_off, hi_off);	\
+}									\
+static inline void							\
+tmc_write_##name(struct tmc_drvdata *drvdata, u64 val)			\
+{									\
+	coresight_write_reg_pair(drvdata->base, val, lo_off, hi_off);	\
+}
+
+TMC_REG_PAIR(rrp, TMC_RRP, TMC_RRPHI)
+TMC_REG_PAIR(rwp, TMC_RWP, TMC_RWPHI)
+TMC_REG_PAIR(dba, TMC_DBALO, TMC_DBAHI)
+
+/* Initialise the caps from unadvertised static capabilities of the device */
+static inline void tmc_etr_init_caps(struct tmc_drvdata *drvdata, u32 dev_caps)
+{
+	WARN_ON(drvdata->etr_caps);
+	drvdata->etr_caps = dev_caps;
+}
+
+static inline void tmc_etr_set_cap(struct tmc_drvdata *drvdata, u32 cap)
+{
+	drvdata->etr_caps |= cap;
+}
+
+static inline bool tmc_etr_has_cap(struct tmc_drvdata *drvdata, u32 cap)
+{
+	return !!(drvdata->etr_caps & cap);
+}
+
 #endif
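/*
 * Illustrative note (not part of this patch): TMC_REG_PAIR() above generates
 * tmc_read_<name>()/tmc_write_<name>() accessors, so ETR code can handle the
 * 64-bit pointers without open-coding the LO/HI split, e.g.:
 *
 *	u64 rwp = tmc_read_rwp(drvdata);
 *	tmc_write_rrp(drvdata, drvdata->paddr);
 *
 * (both used by coresight-tmc-etr.c elsewhere in this series).
 */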
||||
|
|
|
@@ -192,7 +192,7 @@ static const struct dev_pm_ops tpiu_dev_pm_ops = {
 	SET_RUNTIME_PM_OPS(tpiu_runtime_suspend, tpiu_runtime_resume, NULL)
 };
 
-static struct amba_id tpiu_ids[] = {
+static const struct amba_id tpiu_ids[] = {
 	{
 		.id	= 0x0003b912,
 		.mask	= 0x0003ffff,
@@ -201,6 +201,11 @@ static struct amba_id tpiu_ids[] = {
 		.id	= 0x0004b912,
 		.mask	= 0x0007ffff,
 	},
+	{
+		/* Coresight SoC-600 */
+		.id	= 0x000bb9e7,
+		.mask	= 0x000fffff,
+	},
 	{ 0, 0},
 };
 
|
||||
|
|
|
@@ -53,6 +53,14 @@ static DEFINE_PER_CPU(struct list_head *, tracer_path);
  */
 static struct list_head *stm_path;
 
+/*
+ * When losing synchronisation a new barrier packet needs to be inserted at the
+ * beginning of the data collected in a buffer. That way the decoder knows that
+ * it needs to look for another sync sequence.
+ */
+const u32 barrier_pkt[5] = {0x7fffffff, 0x7fffffff,
+			    0x7fffffff, 0x7fffffff, 0x0};
+
 static int coresight_id_match(struct device *dev, void *data)
 {
 	int trace_id, i_trace_id;
|
|
|
@ -101,17 +101,53 @@ out_pm:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static void intel_th_device_remove(struct intel_th_device *thdev);
|
||||
|
||||
static int intel_th_remove(struct device *dev)
|
||||
{
|
||||
struct intel_th_driver *thdrv = to_intel_th_driver(dev->driver);
|
||||
struct intel_th_device *thdev = to_intel_th_device(dev);
|
||||
struct intel_th_device *hub = to_intel_th_device(dev->parent);
|
||||
struct intel_th_device *hub = to_intel_th_hub(thdev);
|
||||
int err;
|
||||
|
||||
if (thdev->type == INTEL_TH_SWITCH) {
|
||||
struct intel_th *th = to_intel_th(hub);
|
||||
int i, lowest;
|
||||
|
||||
/* disconnect outputs */
|
||||
err = device_for_each_child(dev, thdev, intel_th_child_remove);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
/*
|
||||
* Remove outputs, that is, hub's children: they are created
|
||||
* at hub's probe time by having the hub call
|
||||
* intel_th_output_enable() for each of them.
|
||||
*/
|
||||
for (i = 0, lowest = -1; i < th->num_thdevs; i++) {
|
||||
/*
|
||||
* Move the non-output devices from higher up the
|
||||
* th->thdev[] array to lower positions to maintain
|
||||
* a contiguous array.
|
||||
*/
|
||||
if (th->thdev[i]->type != INTEL_TH_OUTPUT) {
|
||||
if (lowest >= 0) {
|
||||
th->thdev[lowest] = th->thdev[i];
|
||||
th->thdev[i] = NULL;
|
||||
++lowest;
|
||||
}
|
||||
|
||||
continue;
|
||||
}
|
||||
|
||||
if (lowest == -1)
|
||||
lowest = i;
|
||||
|
||||
intel_th_device_remove(th->thdev[i]);
|
||||
th->thdev[i] = NULL;
|
||||
}
|
||||
|
||||
th->num_thdevs = lowest;
|
||||
}
|
||||
|
||||
if (thdrv->attr_group)
|
||||
|
@ -156,21 +192,6 @@ static struct device_type intel_th_source_device_type = {
|
|||
.release = intel_th_device_release,
|
||||
};
|
||||
|
||||
static struct intel_th *to_intel_th(struct intel_th_device *thdev)
|
||||
{
|
||||
/*
|
||||
* subdevice tree is flat: if this one is not a switch, its
|
||||
* parent must be
|
||||
*/
|
||||
if (thdev->type != INTEL_TH_SWITCH)
|
||||
thdev = to_intel_th_hub(thdev);
|
||||
|
||||
if (WARN_ON_ONCE(!thdev || thdev->type != INTEL_TH_SWITCH))
|
||||
return NULL;
|
||||
|
||||
return dev_get_drvdata(thdev->dev.parent);
|
||||
}
|
||||
|
||||
static char *intel_th_output_devnode(struct device *dev, umode_t *mode,
|
||||
kuid_t *uid, kgid_t *gid)
|
||||
{
|
||||
|
@ -205,6 +226,7 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
|
|||
{
|
||||
struct intel_th_driver *thdrv =
|
||||
to_intel_th_driver_or_null(thdev->dev.driver);
|
||||
struct intel_th *th = to_intel_th(thdev);
|
||||
int ret = 0;
|
||||
|
||||
if (!thdrv)
|
||||
|
@ -215,15 +237,28 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
|
|||
|
||||
pm_runtime_get_sync(&thdev->dev);
|
||||
|
||||
if (th->activate)
|
||||
ret = th->activate(th);
|
||||
if (ret)
|
||||
goto fail_put;
|
||||
|
||||
if (thdrv->activate)
|
||||
ret = thdrv->activate(thdev);
|
||||
else
|
||||
intel_th_trace_enable(thdev);
|
||||
|
||||
if (ret) {
|
||||
pm_runtime_put(&thdev->dev);
|
||||
module_put(thdrv->driver.owner);
|
||||
}
|
||||
if (ret)
|
||||
goto fail_deactivate;
|
||||
|
||||
return 0;
|
||||
|
||||
fail_deactivate:
|
||||
if (th->deactivate)
|
||||
th->deactivate(th);
|
||||
|
||||
fail_put:
|
||||
pm_runtime_put(&thdev->dev);
|
||||
module_put(thdrv->driver.owner);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -232,6 +267,7 @@ static void intel_th_output_deactivate(struct intel_th_device *thdev)
|
|||
{
|
||||
struct intel_th_driver *thdrv =
|
||||
to_intel_th_driver_or_null(thdev->dev.driver);
|
||||
struct intel_th *th = to_intel_th(thdev);
|
||||
|
||||
if (!thdrv)
|
||||
return;
|
||||
|
@ -241,6 +277,9 @@ static void intel_th_output_deactivate(struct intel_th_device *thdev)
|
|||
else
|
||||
intel_th_trace_disable(thdev);
|
||||
|
||||
if (th->deactivate)
|
||||
th->deactivate(th);
|
||||
|
||||
pm_runtime_put(&thdev->dev);
|
||||
module_put(thdrv->driver.owner);
|
||||
}
|
||||
|
@ -326,10 +365,10 @@ intel_th_device_alloc(struct intel_th *th, unsigned int type, const char *name,
|
|||
struct device *parent;
|
||||
struct intel_th_device *thdev;
|
||||
|
||||
if (type == INTEL_TH_SWITCH)
|
||||
parent = th->dev;
|
||||
else
|
||||
if (type == INTEL_TH_OUTPUT)
|
||||
parent = &th->hub->dev;
|
||||
else
|
||||
parent = th->dev;
|
||||
|
||||
thdev = kzalloc(sizeof(*thdev) + strlen(name) + 1, GFP_KERNEL);
|
||||
if (!thdev)
|
||||
|
@ -392,13 +431,14 @@ static const struct intel_th_subdevice {
|
|||
unsigned otype;
|
||||
unsigned scrpd;
|
||||
int id;
|
||||
} intel_th_subdevices[TH_SUBDEVICE_MAX] = {
|
||||
} intel_th_subdevices[] = {
|
||||
{
|
||||
.nres = 1,
|
||||
.res = {
|
||||
{
|
||||
/* Handle TSCU from GTH driver */
|
||||
.start = REG_GTH_OFFSET,
|
||||
.end = REG_GTH_OFFSET + REG_GTH_LENGTH - 1,
|
||||
.end = REG_TSCU_OFFSET + REG_TSCU_LENGTH - 1,
|
||||
.flags = IORESOURCE_MEM,
|
||||
},
|
||||
},
|
||||
|
@ -479,6 +519,21 @@ static const struct intel_th_subdevice {
|
|||
.otype = GTH_PTI,
|
||||
.scrpd = SCRPD_PTI_IS_PRIM_DEST,
|
||||
},
|
||||
{
|
||||
.nres = 1,
|
||||
.res = {
|
||||
{
|
||||
.start = REG_PTI_OFFSET,
|
||||
.end = REG_PTI_OFFSET + REG_PTI_LENGTH - 1,
|
||||
.flags = IORESOURCE_MEM,
|
||||
},
|
||||
},
|
||||
.id = -1,
|
||||
.name = "lpp",
|
||||
.type = INTEL_TH_OUTPUT,
|
||||
.otype = GTH_LPP,
|
||||
.scrpd = SCRPD_PTI_IS_PRIM_DEST,
|
||||
},
|
||||
{
|
||||
.nres = 1,
|
||||
.res = {
|
||||
|
@ -526,98 +581,182 @@ static inline void intel_th_request_hub_module_flush(struct intel_th *th)
|
|||
}
|
||||
#endif /* CONFIG_MODULES */
|
||||
|
||||
static int intel_th_populate(struct intel_th *th, struct resource *devres,
|
||||
unsigned int ndevres, int irq)
|
||||
static struct intel_th_device *
|
||||
intel_th_subdevice_alloc(struct intel_th *th,
|
||||
const struct intel_th_subdevice *subdev)
|
||||
{
|
||||
struct intel_th_device *thdev;
|
||||
struct resource res[3];
|
||||
unsigned int req = 0;
|
||||
int src, dst, err;
|
||||
int r, err;
|
||||
|
||||
thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
|
||||
subdev->id);
|
||||
if (!thdev)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
thdev->drvdata = th->drvdata;
|
||||
|
||||
memcpy(res, subdev->res,
|
||||
sizeof(struct resource) * subdev->nres);
|
||||
|
||||
for (r = 0; r < subdev->nres; r++) {
|
||||
struct resource *devres = th->resource;
|
||||
int bar = TH_MMIO_CONFIG;
|
||||
|
||||
/*
|
||||
* Take .end == 0 to mean 'take the whole bar',
|
||||
* .start then tells us which bar it is. Default to
|
||||
* TH_MMIO_CONFIG.
|
||||
*/
|
||||
if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
|
||||
bar = res[r].start;
|
||||
res[r].start = 0;
|
||||
res[r].end = resource_size(&devres[bar]) - 1;
|
||||
}
|
||||
|
||||
if (res[r].flags & IORESOURCE_MEM) {
|
||||
res[r].start += devres[bar].start;
|
||||
res[r].end += devres[bar].start;
|
||||
|
||||
dev_dbg(th->dev, "%s:%d @ %pR\n",
|
||||
subdev->name, r, &res[r]);
|
||||
} else if (res[r].flags & IORESOURCE_IRQ) {
|
||||
res[r].start = th->irq;
|
||||
}
|
||||
}
|
||||
|
||||
err = intel_th_device_add_resources(thdev, res, subdev->nres);
|
||||
if (err) {
|
||||
put_device(&thdev->dev);
|
||||
goto fail_put_device;
|
||||
}
|
||||
|
||||
if (subdev->type == INTEL_TH_OUTPUT) {
|
||||
thdev->dev.devt = MKDEV(th->major, th->num_thdevs);
|
||||
thdev->output.type = subdev->otype;
|
||||
thdev->output.port = -1;
|
||||
thdev->output.scratchpad = subdev->scrpd;
|
||||
} else if (subdev->type == INTEL_TH_SWITCH) {
|
||||
thdev->host_mode = host_mode;
|
||||
th->hub = thdev;
|
||||
}
|
||||
|
||||
err = device_add(&thdev->dev);
|
||||
if (err) {
|
||||
put_device(&thdev->dev);
|
||||
goto fail_free_res;
|
||||
}
|
||||
|
||||
/* need switch driver to be loaded to enumerate the rest */
|
||||
if (subdev->type == INTEL_TH_SWITCH && !req) {
|
||||
err = intel_th_request_hub_module(th);
|
||||
if (!err)
|
||||
req++;
|
||||
}
|
||||
|
||||
return thdev;
|
||||
|
||||
fail_free_res:
|
||||
kfree(thdev->resource);
|
||||
|
||||
fail_put_device:
|
||||
put_device(&thdev->dev);
|
||||
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_th_output_enable() - find and enable a device for a given output type
|
||||
* @th: Intel TH instance
|
||||
* @otype: output type
|
||||
*
|
||||
* Go through the unallocated output devices, find the first one whos type
|
||||
* matches @otype and instantiate it. These devices are removed when the hub
|
||||
* device is removed, see intel_th_remove().
|
||||
*/
|
||||
int intel_th_output_enable(struct intel_th *th, unsigned int otype)
|
||||
{
|
||||
struct intel_th_device *thdev;
|
||||
int src = 0, dst = 0;
|
||||
|
||||
for (src = 0, dst = 0; dst <= th->num_thdevs; src++, dst++) {
|
||||
for (; src < ARRAY_SIZE(intel_th_subdevices); src++) {
|
||||
if (intel_th_subdevices[src].type != INTEL_TH_OUTPUT)
|
||||
continue;
|
||||
|
||||
if (intel_th_subdevices[src].otype != otype)
|
||||
continue;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
/* no unallocated matching subdevices */
|
||||
if (src == ARRAY_SIZE(intel_th_subdevices))
|
||||
return -ENODEV;
|
||||
|
||||
for (; dst < th->num_thdevs; dst++) {
|
||||
if (th->thdev[dst]->type != INTEL_TH_OUTPUT)
|
||||
continue;
|
||||
|
||||
if (th->thdev[dst]->output.type != otype)
|
||||
continue;
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
/*
|
||||
* intel_th_subdevices[src] matches our requirements and is
|
||||
* not matched in th::thdev[]
|
||||
*/
|
||||
if (dst == th->num_thdevs)
|
||||
goto found;
|
||||
}
|
||||
|
||||
return -ENODEV;
|
||||
|
||||
found:
|
||||
thdev = intel_th_subdevice_alloc(th, &intel_th_subdevices[src]);
|
||||
if (IS_ERR(thdev))
|
||||
return PTR_ERR(thdev);
|
||||
|
||||
th->thdev[th->num_thdevs++] = thdev;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(intel_th_output_enable);
|
||||
|
||||
static int intel_th_populate(struct intel_th *th)
|
||||
{
|
||||
int src;
|
||||
|
||||
/* create devices for each intel_th_subdevice */
|
||||
for (src = 0, dst = 0; src < ARRAY_SIZE(intel_th_subdevices); src++) {
|
||||
for (src = 0; src < ARRAY_SIZE(intel_th_subdevices); src++) {
|
||||
const struct intel_th_subdevice *subdev =
|
||||
&intel_th_subdevices[src];
|
||||
struct intel_th_device *thdev;
|
||||
int r;
|
||||
|
||||
/* only allow SOURCE and SWITCH devices in host mode */
|
||||
if (host_mode && subdev->type == INTEL_TH_OUTPUT)
|
||||
continue;
|
||||
|
||||
thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
|
||||
subdev->id);
|
||||
if (!thdev) {
|
||||
err = -ENOMEM;
|
||||
goto kill_subdevs;
|
||||
}
|
||||
/*
|
||||
* don't enable port OUTPUTs in this path; SWITCH enables them
|
||||
* via intel_th_output_enable()
|
||||
*/
|
||||
if (subdev->type == INTEL_TH_OUTPUT &&
|
||||
subdev->otype != GTH_NONE)
|
||||
continue;
|
||||
|
||||
memcpy(res, subdev->res,
|
||||
sizeof(struct resource) * subdev->nres);
|
||||
thdev = intel_th_subdevice_alloc(th, subdev);
|
||||
/* note: caller should free subdevices from th::thdev[] */
|
||||
if (IS_ERR(thdev))
|
||||
return PTR_ERR(thdev);
|
||||
|
||||
for (r = 0; r < subdev->nres; r++) {
|
||||
int bar = TH_MMIO_CONFIG;
|
||||
|
||||
/*
|
||||
* Take .end == 0 to mean 'take the whole bar',
|
||||
* .start then tells us which bar it is. Default to
|
||||
* TH_MMIO_CONFIG.
|
||||
*/
|
||||
if (!res[r].end && res[r].flags == IORESOURCE_MEM) {
|
||||
bar = res[r].start;
|
||||
res[r].start = 0;
|
||||
res[r].end = resource_size(&devres[bar]) - 1;
|
||||
}
|
||||
|
||||
if (res[r].flags & IORESOURCE_MEM) {
|
||||
res[r].start += devres[bar].start;
|
||||
res[r].end += devres[bar].start;
|
||||
|
||||
dev_dbg(th->dev, "%s:%d @ %pR\n",
|
||||
subdev->name, r, &res[r]);
|
||||
} else if (res[r].flags & IORESOURCE_IRQ) {
|
||||
res[r].start = irq;
|
||||
}
|
||||
}
|
||||
|
||||
err = intel_th_device_add_resources(thdev, res, subdev->nres);
|
||||
if (err) {
|
||||
put_device(&thdev->dev);
|
||||
goto kill_subdevs;
|
||||
}
|
||||
|
||||
if (subdev->type == INTEL_TH_OUTPUT) {
|
||||
thdev->dev.devt = MKDEV(th->major, dst);
|
||||
thdev->output.type = subdev->otype;
|
||||
thdev->output.port = -1;
|
||||
thdev->output.scratchpad = subdev->scrpd;
|
||||
} else if (subdev->type == INTEL_TH_SWITCH) {
|
||||
thdev->host_mode = host_mode;
|
||||
}
|
||||
|
||||
err = device_add(&thdev->dev);
|
||||
if (err) {
|
||||
put_device(&thdev->dev);
|
||||
goto kill_subdevs;
|
||||
}
|
||||
|
||||
/* need switch driver to be loaded to enumerate the rest */
|
||||
if (subdev->type == INTEL_TH_SWITCH && !req) {
|
||||
th->hub = thdev;
|
||||
err = intel_th_request_hub_module(th);
|
||||
if (!err)
|
||||
req++;
|
||||
}
|
||||
|
||||
th->thdev[dst++] = thdev;
|
||||
th->thdev[th->num_thdevs++] = thdev;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
kill_subdevs:
|
||||
for (; dst >= 0; dst--)
|
||||
intel_th_device_remove(th->thdev[dst]);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int match_devt(struct device *dev, void *data)
|
||||
|
@ -670,8 +809,8 @@ static const struct file_operations intel_th_output_fops = {
|
|||
* @irq: irq number
|
||||
*/
|
||||
struct intel_th *
|
||||
intel_th_alloc(struct device *dev, struct resource *devres,
|
||||
unsigned int ndevres, int irq)
|
||||
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
|
||||
struct resource *devres, unsigned int ndevres, int irq)
|
||||
{
|
||||
struct intel_th *th;
|
||||
int err;
|
||||
|
@ -693,6 +832,11 @@ intel_th_alloc(struct device *dev, struct resource *devres,
|
|||
goto err_ida;
|
||||
}
|
||||
th->dev = dev;
|
||||
th->drvdata = drvdata;
|
||||
|
||||
th->resource = devres;
|
||||
th->num_resources = ndevres;
|
||||
th->irq = irq;
|
||||
|
||||
dev_set_drvdata(dev, th);
|
||||
|
||||
|
@ -700,18 +844,15 @@ intel_th_alloc(struct device *dev, struct resource *devres,
|
|||
pm_runtime_put(dev);
|
||||
pm_runtime_allow(dev);
|
||||
|
||||
err = intel_th_populate(th, devres, ndevres, irq);
|
||||
if (err)
|
||||
goto err_chrdev;
|
||||
err = intel_th_populate(th);
|
||||
if (err) {
|
||||
/* free the subdevices and undo everything */
|
||||
intel_th_free(th);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
return th;
|
||||
|
||||
err_chrdev:
|
||||
pm_runtime_forbid(dev);
|
||||
|
||||
__unregister_chrdev(th->major, 0, TH_POSSIBLE_OUTPUTS,
|
||||
"intel_th/output");
|
||||
|
||||
err_ida:
|
||||
ida_simple_remove(&intel_th_ida, th->id);
|
||||
|
||||
|
@ -727,11 +868,15 @@ void intel_th_free(struct intel_th *th)
|
|||
int i;
|
||||
|
||||
intel_th_request_hub_module_flush(th);
|
||||
for (i = 0; i < TH_SUBDEVICE_MAX; i++)
|
||||
if (th->thdev[i] && th->thdev[i] != th->hub)
|
||||
intel_th_device_remove(th->thdev[i]);
|
||||
|
||||
intel_th_device_remove(th->hub);
|
||||
for (i = 0; i < th->num_thdevs; i++) {
|
||||
if (th->thdev[i] != th->hub)
|
||||
intel_th_device_remove(th->thdev[i]);
|
||||
th->thdev[i] = NULL;
|
||||
}
|
||||
|
||||
th->num_thdevs = 0;
|
||||
|
||||
pm_runtime_get_sync(th->dev);
|
||||
pm_runtime_forbid(th->dev);
|
||||
|
|
|
@ -285,16 +285,16 @@ gth_output_parm_get(struct gth_device *gth, int port, unsigned int parm)
|
|||
*/
|
||||
static int intel_th_gth_reset(struct gth_device *gth)
|
||||
{
|
||||
u32 scratchpad;
|
||||
u32 reg;
|
||||
int port, i;
|
||||
|
||||
scratchpad = ioread32(gth->base + REG_GTH_SCRPD0);
|
||||
if (scratchpad & SCRPD_DEBUGGER_IN_USE)
|
||||
reg = ioread32(gth->base + REG_GTH_SCRPD0);
|
||||
if (reg & SCRPD_DEBUGGER_IN_USE)
|
||||
return -EBUSY;
|
||||
|
||||
/* Always save/restore STH and TU registers in S0ix entry/exit */
|
||||
scratchpad |= SCRPD_STH_IS_ENABLED | SCRPD_TRIGGER_IS_ENABLED;
|
||||
iowrite32(scratchpad, gth->base + REG_GTH_SCRPD0);
|
||||
reg |= SCRPD_STH_IS_ENABLED | SCRPD_TRIGGER_IS_ENABLED;
|
||||
iowrite32(reg, gth->base + REG_GTH_SCRPD0);
|
||||
|
||||
/* output ports */
|
||||
for (port = 0; port < 8; port++) {
|
||||
|
@ -512,6 +512,15 @@ static void intel_th_gth_disable(struct intel_th_device *thdev,
|
|||
iowrite32(reg, gth->base + REG_GTH_SCRPD0);
|
||||
}
|
||||
|
||||
static void gth_tscu_resync(struct gth_device *gth)
|
||||
{
|
||||
u32 reg;
|
||||
|
||||
reg = ioread32(gth->base + REG_TSCU_TSUCTRL);
|
||||
reg &= ~TSUCTRL_CTCRESYNC;
|
||||
iowrite32(reg, gth->base + REG_TSCU_TSUCTRL);
|
||||
}
|
||||
|
||||
/**
|
||||
* intel_th_gth_enable() - enable tracing to an output device
|
||||
* @thdev: GTH device
|
||||
|
@ -524,6 +533,7 @@ static void intel_th_gth_enable(struct intel_th_device *thdev,
|
|||
struct intel_th_output *output)
|
||||
{
|
||||
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
|
||||
struct intel_th *th = to_intel_th(thdev);
|
||||
u32 scr = 0xfc0000, scrpd;
|
||||
int master;
|
||||
|
||||
|
@ -539,6 +549,9 @@ static void intel_th_gth_enable(struct intel_th_device *thdev,
|
|||
output->active = true;
|
||||
spin_unlock(>h->gth_lock);
|
||||
|
||||
if (INTEL_TH_CAP(th, tscu_enable))
|
||||
gth_tscu_resync(gth);
|
||||
|
||||
scrpd = ioread32(gth->base + REG_GTH_SCRPD0);
|
||||
scrpd |= output->scratchpad;
|
||||
iowrite32(scrpd, gth->base + REG_GTH_SCRPD0);
|
||||
|
@ -639,6 +652,7 @@ intel_th_gth_set_output(struct intel_th_device *thdev, unsigned int master)
|
|||
static int intel_th_gth_probe(struct intel_th_device *thdev)
|
||||
{
|
||||
struct device *dev = &thdev->dev;
|
||||
struct intel_th *th = dev_get_drvdata(dev->parent);
|
||||
struct gth_device *gth;
|
||||
struct resource *res;
|
||||
void __iomem *base;
|
||||
|
@ -660,6 +674,8 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
gth->base = base;
|
||||
spin_lock_init(>h->gth_lock);
|
||||
|
||||
dev_set_drvdata(dev, gth);
|
||||
|
||||
/*
|
||||
* Host mode can be signalled via SW means or via SCRPD_DEBUGGER_IN_USE
|
||||
* bit. Either way, don't reset HW in this case, and don't export any
|
||||
|
@ -667,7 +683,7 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
* drivers to ports, see intel_th_gth_assign().
|
||||
*/
|
||||
if (thdev->host_mode)
|
||||
goto done;
|
||||
return 0;
|
||||
|
||||
ret = intel_th_gth_reset(gth);
|
||||
if (ret) {
|
||||
|
@ -676,7 +692,7 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
|
||||
thdev->host_mode = true;
|
||||
|
||||
goto done;
|
||||
return 0;
|
||||
}
|
||||
|
||||
for (i = 0; i < TH_CONFIGURABLE_MASTERS + 1; i++)
|
||||
|
@ -687,6 +703,13 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
gth->output[i].index = i;
|
||||
gth->output[i].port_type =
|
||||
gth_output_parm_get(gth, i, TH_OUTPUT_PARM(port));
|
||||
if (gth->output[i].port_type == GTH_NONE)
|
||||
continue;
|
||||
|
||||
ret = intel_th_output_enable(th, gth->output[i].port_type);
|
||||
/* -ENODEV is ok, we just won't have that device enumerated */
|
||||
if (ret && ret != -ENODEV)
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (intel_th_output_attributes(gth) ||
|
||||
|
@ -698,9 +721,6 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
done:
|
||||
dev_set_drvdata(dev, gth);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@@ -55,9 +55,14 @@ enum {
 	REG_GTH_SCRPD1		= 0xe4, /* ScratchPad[1] */
 	REG_GTH_SCRPD2		= 0xe8, /* ScratchPad[2] */
 	REG_GTH_SCRPD3		= 0xec, /* ScratchPad[3] */
+	REG_TSCU_TSUCTRL	= 0x2000, /* TSCU control register */
+	REG_TSCU_TSCUSTAT	= 0x2004, /* TSCU status register */
 };
 
 /* waiting for Pipeline Empty bit(s) to assert for GTH */
 #define GTH_PLE_WAITLOOP_DEPTH	10000
 
+#define TSUCTRL_CTCRESYNC	BIT(0)
+#define TSCUSTAT_CTCSYNCING	BIT(1)
+
 #endif /* __INTEL_TH_GTH_H__ */
||||
|
|
|
@@ -47,9 +47,20 @@ struct intel_th_output {
 	bool		active;
 };
 
+/**
+ * struct intel_th_drvdata - describes hardware capabilities and quirks
+ * @tscu_enable:	device needs SW to enable time stamping unit
+ */
+struct intel_th_drvdata {
+	unsigned int	tscu_enable        : 1;
+};
+
+#define INTEL_TH_CAP(_th, _cap) ((_th)->drvdata ? (_th)->drvdata->_cap : 0)
+
 /**
  * struct intel_th_device - device on the intel_th bus
  * @dev:		device
+ * @drvdata:		hardware capabilities/quirks
  * @resource:		array of resources available to this device
  * @num_resources:	number of resources in @resource array
  * @type:		INTEL_TH_{SOURCE,OUTPUT,SWITCH}
||||
|
@ -59,11 +70,12 @@ struct intel_th_output {
|
|||
* @name: device name to match the driver
|
||||
*/
|
||||
struct intel_th_device {
|
||||
struct device dev;
|
||||
struct resource *resource;
|
||||
unsigned int num_resources;
|
||||
unsigned int type;
|
||||
int id;
|
||||
struct device dev;
|
||||
struct intel_th_drvdata *drvdata;
|
||||
struct resource *resource;
|
||||
unsigned int num_resources;
|
||||
unsigned int type;
|
||||
int id;
|
||||
|
||||
/* INTEL_TH_SWITCH specific */
|
||||
bool host_mode;
|
||||
|
@ -96,6 +108,17 @@ intel_th_device_get_resource(struct intel_th_device *thdev, unsigned int type,
|
|||
return NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* GTH, output ports configuration
|
||||
*/
|
||||
enum {
|
||||
GTH_NONE = 0,
|
||||
GTH_MSU, /* memory/usb */
|
||||
GTH_CTP, /* Common Trace Port */
|
||||
GTH_LPP, /* Low Power Path */
|
||||
GTH_PTI, /* MIPI-PTI */
|
||||
};
|
||||
|
||||
/**
|
||||
* intel_th_output_assigned() - if an output device is assigned to a switch port
|
||||
* @thdev: the output device
|
||||
|
@ -106,7 +129,8 @@ static inline bool
|
|||
intel_th_output_assigned(struct intel_th_device *thdev)
|
||||
{
|
||||
return thdev->type == INTEL_TH_OUTPUT &&
|
||||
thdev->output.port >= 0;
|
||||
(thdev->output.port >= 0 ||
|
||||
thdev->output.type == GTH_NONE);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -161,8 +185,18 @@ struct intel_th_driver {
|
|||
#define to_intel_th_driver_or_null(_d) \
|
||||
((_d) ? to_intel_th_driver(_d) : NULL)
|
||||
|
||||
/*
|
||||
* Subdevice tree structure is as follows:
|
||||
* + struct intel_th device (pci; dev_{get,set}_drvdata()
|
||||
* + struct intel_th_device INTEL_TH_SWITCH (GTH)
|
||||
* + struct intel_th_device INTEL_TH_OUTPUT (MSU, PTI)
|
||||
* + struct intel_th_device INTEL_TH_SOURCE (STH)
|
||||
*
|
||||
* In other words, INTEL_TH_OUTPUT devices are children of INTEL_TH_SWITCH;
|
||||
* INTEL_TH_SWITCH and INTEL_TH_SOURCE are children of the intel_th device.
|
||||
*/
|
||||
static inline struct intel_th_device *
|
||||
to_intel_th_hub(struct intel_th_device *thdev)
|
||||
to_intel_th_parent(struct intel_th_device *thdev)
|
||||
{
|
||||
struct device *parent = thdev->dev.parent;
|
||||
|
||||
|
@ -172,9 +206,20 @@ to_intel_th_hub(struct intel_th_device *thdev)
|
|||
return to_intel_th_device(parent);
|
||||
}
|
||||
|
||||
static inline struct intel_th *to_intel_th(struct intel_th_device *thdev)
|
||||
{
|
||||
if (thdev->type == INTEL_TH_OUTPUT)
|
||||
thdev = to_intel_th_parent(thdev);
|
||||
|
||||
if (WARN_ON_ONCE(!thdev || thdev->type == INTEL_TH_OUTPUT))
|
||||
return NULL;
|
||||
|
||||
return dev_get_drvdata(thdev->dev.parent);
|
||||
}
|
||||
|
||||
struct intel_th *
|
||||
intel_th_alloc(struct device *dev, struct resource *devres,
|
||||
unsigned int ndevres, int irq);
|
||||
intel_th_alloc(struct device *dev, struct intel_th_drvdata *drvdata,
|
||||
struct resource *devres, unsigned int ndevres, int irq);
|
||||
void intel_th_free(struct intel_th *th);
|
||||
|
||||
int intel_th_driver_register(struct intel_th_driver *thdrv);
|
||||
|
@ -184,6 +229,7 @@ int intel_th_trace_enable(struct intel_th_device *thdev);
|
|||
int intel_th_trace_disable(struct intel_th_device *thdev);
|
||||
int intel_th_set_output(struct intel_th_device *thdev,
|
||||
unsigned int master);
|
||||
int intel_th_output_enable(struct intel_th *th, unsigned int otype);
|
||||
|
||||
enum {
|
||||
TH_MMIO_CONFIG = 0,
|
||||
|
@ -191,8 +237,9 @@ enum {
|
|||
TH_MMIO_END,
|
||||
};
|
||||
|
||||
#define TH_SUBDEVICE_MAX 6
|
||||
#define TH_POSSIBLE_OUTPUTS 8
|
||||
/* Total number of possible subdevices: outputs + GTH + STH */
|
||||
#define TH_SUBDEVICE_MAX (TH_POSSIBLE_OUTPUTS + 2)
|
||||
#define TH_CONFIGURABLE_MASTERS 256
|
||||
#define TH_MSC_MAX 2
|
||||
|
||||
|
@ -201,6 +248,10 @@ enum {
|
|||
* @dev: driver core's device
|
||||
* @thdev: subdevices
|
||||
* @hub: "switch" subdevice (GTH)
|
||||
* @resource: resources of the entire controller
|
||||
* @num_thdevs: number of devices in the @thdev array
|
||||
* @num_resources: number or resources in the @resource array
|
||||
* @irq: irq number
|
||||
* @id: this Intel TH controller's device ID in the system
|
||||
* @major: device node major for output devices
|
||||
*/
|
||||
|
@ -209,6 +260,14 @@ struct intel_th {
|
|||
|
||||
struct intel_th_device *thdev[TH_SUBDEVICE_MAX];
|
||||
struct intel_th_device *hub;
|
||||
struct intel_th_drvdata *drvdata;
|
||||
|
||||
struct resource *resource;
|
||||
int (*activate)(struct intel_th *);
|
||||
void (*deactivate)(struct intel_th *);
|
||||
unsigned int num_thdevs;
|
||||
unsigned int num_resources;
|
||||
int irq;
|
||||
|
||||
int id;
|
||||
int major;
|
||||
|
@ -220,6 +279,17 @@ struct intel_th {
|
|||
#endif
|
||||
};
|
||||
|
||||
static inline struct intel_th_device *
|
||||
to_intel_th_hub(struct intel_th_device *thdev)
|
||||
{
|
||||
if (thdev->type == INTEL_TH_SWITCH)
|
||||
return thdev;
|
||||
else if (thdev->type == INTEL_TH_OUTPUT)
|
||||
return to_intel_th_parent(thdev);
|
||||
|
||||
return to_intel_th(thdev)->hub;
|
||||
}
|
||||
|
||||
/*
|
||||
* Register windows
|
||||
*/
|
||||
|
@ -228,6 +298,10 @@ enum {
|
|||
REG_GTH_OFFSET = 0x0000,
|
||||
REG_GTH_LENGTH = 0x2000,
|
||||
|
||||
/* Timestamp counter unit (TSCU) */
|
||||
REG_TSCU_OFFSET = 0x2000,
|
||||
REG_TSCU_LENGTH = 0x1000,
|
||||
|
||||
/* Software Trace Hub (STH) [0x4000..0x4fff] */
|
||||
REG_STH_OFFSET = 0x4000,
|
||||
REG_STH_LENGTH = 0x2000,
|
||||
|
@ -249,16 +323,6 @@ enum {
|
|||
REG_DCIH_LENGTH = REG_MSU_LENGTH,
|
||||
};
|
||||
|
||||
/*
|
||||
* GTH, output ports configuration
|
||||
*/
|
||||
enum {
|
||||
GTH_NONE = 0,
|
||||
GTH_MSU, /* memory/usb */
|
||||
GTH_CTP, /* Common Trace Port */
|
||||
GTH_PTI = 4, /* MIPI-PTI */
|
||||
};
|
||||
|
||||
/*
|
||||
* Scratchpad bits: tell firmware and external debuggers
|
||||
* what we are up to.
|
||||
|
|
|
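The INTEL_TH_CAP() macro added above tolerates a NULL drvdata, so glue drivers that declare no capabilities keep working unchanged. A short sketch of how a caller consumes it, using only names taken from the header hunks above:

/* Sketch: only touch the timestamp counter unit when the glue layer says so. */
static void maybe_program_tscu(struct intel_th *th)
{
	if (!INTEL_TH_CAP(th, tscu_enable))	/* NULL drvdata evaluates to 0 */
		return;

	/* ... TSCU programming would go here ... */
}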
@@ -709,17 +709,17 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
 	}
 
 	for (i = 0; i < nr_blocks; i++) {
-		win->block[i].bdesc = dma_alloc_coherent(msc_dev(msc), size,
-							 &win->block[i].addr,
-							 GFP_KERNEL);
+		win->block[i].bdesc =
+			dma_alloc_coherent(msc_dev(msc)->parent->parent, size,
+					   &win->block[i].addr, GFP_KERNEL);
+
+		if (!win->block[i].bdesc)
+			goto err_nomem;
 
 #ifdef CONFIG_X86
 		/* Set the page as uncached */
 		set_memory_uc((unsigned long)win->block[i].bdesc, 1);
 #endif
-
-		if (!win->block[i].bdesc)
-			goto err_nomem;
 	}
 
 	win->msc = msc;
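The window allocation now maps coherent memory against msc_dev(msc)->parent->parent, i.e. the ancestor device that actually masters DMA rather than the logical output device, and it checks the result before touching page attributes. A generic sketch of that rule (dma_dev is a stand-in for whichever struct device owns the DMA mapping):

#include <linux/dma-mapping.h>

/* Sketch: allocate trace blocks against the DMA-capable ancestor device and
 * bail out before any further setup if the allocation failed. */
static void *alloc_trace_block(struct device *dma_dev, size_t size,
			       dma_addr_t *addr)
{
	void *bdesc = dma_alloc_coherent(dma_dev, size, addr, GFP_KERNEL);

	if (!bdesc)
		return NULL;	/* caller unwinds the blocks allocated so far */

	return bdesc;
}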
@@ -27,9 +27,53 @@
 
 #define BAR_MASK	(BIT(TH_MMIO_CONFIG) | BIT(TH_MMIO_SW))
 
+#define PCI_REG_NPKDSC	0x80
+#define NPKDSC_TSACT	BIT(5)
+
+static int intel_th_pci_activate(struct intel_th *th)
+{
+	struct pci_dev *pdev = to_pci_dev(th->dev);
+	u32 npkdsc;
+	int err;
+
+	if (!INTEL_TH_CAP(th, tscu_enable))
+		return 0;
+
+	err = pci_read_config_dword(pdev, PCI_REG_NPKDSC, &npkdsc);
+	if (!err) {
+		npkdsc |= NPKDSC_TSACT;
+		err = pci_write_config_dword(pdev, PCI_REG_NPKDSC, npkdsc);
+	}
+
+	if (err)
+		dev_err(&pdev->dev, "failed to read NPKDSC register\n");
+
+	return err;
+}
+
+static void intel_th_pci_deactivate(struct intel_th *th)
+{
+	struct pci_dev *pdev = to_pci_dev(th->dev);
+	u32 npkdsc;
+	int err;
+
+	if (!INTEL_TH_CAP(th, tscu_enable))
+		return;
+
+	err = pci_read_config_dword(pdev, PCI_REG_NPKDSC, &npkdsc);
+	if (!err) {
+		npkdsc |= NPKDSC_TSACT;
+		err = pci_write_config_dword(pdev, PCI_REG_NPKDSC, npkdsc);
+	}
+
+	if (err)
+		dev_err(&pdev->dev, "failed to read NPKDSC register\n");
+}
+
 static int intel_th_pci_probe(struct pci_dev *pdev,
 			      const struct pci_device_id *id)
 {
+	struct intel_th_drvdata *drvdata = (void *)id->driver_data;
 	struct intel_th *th;
 	int err;
 

@@ -41,11 +85,16 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
 	if (err)
 		return err;
 
-	th = intel_th_alloc(&pdev->dev, pdev->resource,
+	th = intel_th_alloc(&pdev->dev, drvdata, pdev->resource,
 			    DEVICE_COUNT_RESOURCE, pdev->irq);
 	if (IS_ERR(th))
 		return PTR_ERR(th);
 
+	th->activate   = intel_th_pci_activate;
+	th->deactivate = intel_th_pci_deactivate;
+
+	pci_set_master(pdev);
+
 	return 0;
 }

@@ -56,6 +105,10 @@ static void intel_th_pci_remove(struct pci_dev *pdev)
 	intel_th_free(th);
 }
 
+static const struct intel_th_drvdata intel_th_2x = {
+	.tscu_enable	= 1,
+};
+
 static const struct pci_device_id intel_th_pci_id_table[] = {
 	{
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9d26),

@@ -93,7 +146,17 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
 	{
 		/* Gemini Lake */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x318e),
-		.driver_data = (kernel_ulong_t)0,
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
+	},
+	{
+		/* Cannon Lake H */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa326),
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
+	},
+	{
+		/* Cannon Lake LP */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9da6),
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
 	},
 	{ 0 },
 };
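The PCI glue stores a pointer to struct intel_th_drvdata in each id table entry's driver_data and hands it to intel_th_alloc(), so capability checks such as INTEL_TH_CAP(th, tscu_enable) resolve per device ID. A sketch of the same pattern for a hypothetical additional entry, reusing the intel_th_2x drvdata declared in the hunk above (the 0x1234 device ID is made up):

/* Hypothetical table entry: a newer part advertising the TSCU quirk via drvdata. */
static const struct pci_device_id example_id_table[] = {
	{
		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1234),	/* made-up ID */
		.driver_data = (kernel_ulong_t)&intel_th_2x,
	},
	{ 0 },
};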
@@ -1,7 +1,7 @@
 /*
  * Intel(R) Trace Hub PTI output driver
  *
- * Copyright (C) 2014-2015 Intel Corporation.
+ * Copyright (C) 2014-2016 Intel Corporation.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,

@@ -34,6 +34,8 @@ struct pti_device {
 	unsigned int		freeclk;
 	unsigned int		clkdiv;
 	unsigned int		patgen;
+	unsigned int		lpp_dest_mask;
+	unsigned int		lpp_dest;
 };
 
 /* map PTI widths to MODE settings of PTI_CTL register */

@@ -163,6 +165,7 @@ static int intel_th_pti_activate(struct intel_th_device *thdev)
 	ctl |= PTI_FCEN;
 	ctl |= pti->mode << __ffs(PTI_MODE);
 	ctl |= pti->clkdiv << __ffs(PTI_CLKDIV);
+	ctl |= pti->lpp_dest << __ffs(LPP_DEST);
 
 	iowrite32(ctl, pti->base + REG_PTI_CTL);
 

@@ -192,6 +195,15 @@ static void read_hw_config(struct pti_device *pti)
 		pti->mode = pti_width_mode(4);
 	if (!pti->clkdiv)
 		pti->clkdiv = 1;
+
+	if (pti->thdev->output.type == GTH_LPP) {
+		if (ctl & LPP_PTIPRESENT)
+			pti->lpp_dest_mask |= LPP_DEST_PTI;
+		if (ctl & LPP_BSSBPRESENT)
+			pti->lpp_dest_mask |= LPP_DEST_EXI;
+		if (ctl & LPP_DEST)
+			pti->lpp_dest = 1;
+	}
 }
 
 static int intel_th_pti_probe(struct intel_th_device *thdev)

@@ -239,10 +251,103 @@ static struct intel_th_driver intel_th_pti_driver = {
 	},
 };
 
-module_driver(intel_th_pti_driver,
-	      intel_th_driver_register,
-	      intel_th_driver_unregister);
+static const char * const lpp_dest_str[] = { "pti", "exi" };
+
+static ssize_t lpp_dest_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct pti_device *pti = dev_get_drvdata(dev);
+	ssize_t ret = 0;
+	int i;
+
+	for (i = ARRAY_SIZE(lpp_dest_str) - 1; i >= 0; i--) {
+		const char *fmt = pti->lpp_dest == i ? "[%s] " : "%s ";
+
+		if (!(pti->lpp_dest_mask & BIT(i)))
+			continue;
+
+		ret += scnprintf(buf + ret, PAGE_SIZE - ret,
+				 fmt, lpp_dest_str[i]);
+	}
+
+	if (ret)
+		buf[ret - 1] = '\n';
+
+	return ret;
+}
+
+static ssize_t lpp_dest_store(struct device *dev, struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	struct pti_device *pti = dev_get_drvdata(dev);
+	ssize_t ret = -EINVAL;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(lpp_dest_str); i++)
+		if (sysfs_streq(buf, lpp_dest_str[i]))
+			break;
+
+	if (i < ARRAY_SIZE(lpp_dest_str) && pti->lpp_dest_mask & BIT(i)) {
+		pti->lpp_dest = i;
+		ret = size;
+	}
+
+	return ret;
+}
+
+static DEVICE_ATTR_RW(lpp_dest);
+
+static struct attribute *lpp_output_attrs[] = {
+	&dev_attr_mode.attr,
+	&dev_attr_freerunning_clock.attr,
+	&dev_attr_clock_divider.attr,
+	&dev_attr_lpp_dest.attr,
+	NULL,
+};
+
+static struct attribute_group lpp_output_group = {
+	.attrs	= lpp_output_attrs,
+};
+
+static struct intel_th_driver intel_th_lpp_driver = {
+	.probe		= intel_th_pti_probe,
+	.remove		= intel_th_pti_remove,
+	.activate	= intel_th_pti_activate,
+	.deactivate	= intel_th_pti_deactivate,
+	.attr_group	= &lpp_output_group,
+	.driver	= {
+		.name	= "lpp",
+		.owner	= THIS_MODULE,
+	},
+};
+
+static int __init intel_th_pti_lpp_init(void)
+{
+	int err;
+
+	err = intel_th_driver_register(&intel_th_pti_driver);
+	if (err)
+		return err;
+
+	err = intel_th_driver_register(&intel_th_lpp_driver);
+	if (err) {
+		intel_th_driver_unregister(&intel_th_pti_driver);
+		return err;
+	}
+
+	return 0;
+}
+
+module_init(intel_th_pti_lpp_init);
+
+static void __exit intel_th_pti_lpp_exit(void)
+{
+	intel_th_driver_unregister(&intel_th_pti_driver);
+	intel_th_driver_unregister(&intel_th_lpp_driver);
+}
+
+module_exit(intel_th_pti_lpp_exit);
 
 MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Intel(R) Trace Hub PTI output driver");
+MODULE_DESCRIPTION("Intel(R) Trace Hub PTI/LPP output driver");
 MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");
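lpp_dest_show() prints every destination the hardware reports as present and brackets the currently selected one, so a read yields something like "[pti] exi"; writing one of those tokens back selects it, with sysfs_streq() ignoring the trailing newline a shell echo appends. A condensed sketch of the same bracket-marking loop outside the driver, where mask, cur and names are stand-ins for the pti_device fields:

#include <linux/kernel.h>
#include <linux/bits.h>

/* Sketch: render an "a [b] c" style selection list the way lpp_dest_show() does. */
static ssize_t render_choices(char *buf, size_t len, unsigned long mask,
			      int cur, const char * const *names, int n)
{
	ssize_t ret = 0;
	int i;

	for (i = n - 1; i >= 0; i--) {
		if (!(mask & BIT(i)))		/* skip destinations not wired up */
			continue;
		ret += scnprintf(buf + ret, len - ret,
				 cur == i ? "[%s] " : "%s ", names[i]);
	}

	if (ret)
		buf[ret - 1] = '\n';		/* replace the trailing space */

	return ret;
}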
@@ -23,7 +23,15 @@ enum {
 #define PTI_EN		BIT(0)
 #define PTI_FCEN	BIT(1)
 #define PTI_MODE	0xf0
+#define LPP_PTIPRESENT	BIT(8)
+#define LPP_BSSBPRESENT	BIT(9)
 #define PTI_CLKDIV	0x000f0000
 #define PTI_PATGENMODE	0x00f00000
+#define LPP_DEST	BIT(25)
+#define LPP_BSSBACT	BIT(30)
+#define LPP_LPPBUSY	BIT(31)
+
+#define LPP_DEST_PTI	BIT(0)
+#define LPP_DEST_EXI	BIT(1)
 
 #endif /* __INTEL_TH_STH_H__ */
@@ -566,7 +566,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
 	if (copy_from_user(&size, arg, sizeof(size)))
 		return -EFAULT;
 
-	if (size >= PATH_MAX + sizeof(*id))
+	if (size < sizeof(*id) || size >= PATH_MAX + sizeof(*id))
 		return -EINVAL;
 
 	/*
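The added lower bound matters because the handler goes on to interpret the user-supplied buffer as a fixed struct followed by an identifier string; a size smaller than the struct itself would otherwise be accepted and let the kernel read past the end of the copied data. A generic sketch of the bounds being enforced, with hdr_size and max standing in for sizeof(*id) and PATH_MAX + sizeof(*id):

#include <linux/types.h>

/* Sketch: a variable-length ioctl payload must be at least header-sized
 * and strictly smaller than header-plus-maximum-string length. */
static bool payload_size_ok(u64 size, size_t hdr_size, size_t max)
{
	return size >= hdr_size && size < max;
}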
@@ -114,6 +114,12 @@ static struct resource sc24_fpga_resource = {
 	.flags = IORESOURCE_MEM,
 };
 
+static struct resource sc31_fpga_resource = {
+	.start = 0xf000e000,
+	.end = 0xf000e000 + CHAM_HEADER_SIZE,
+	.flags = IORESOURCE_MEM,
+};
+
 static struct platform_driver mcb_lpc_driver = {
 	.driver	= {
 		.name = "mcb-lpc",

@@ -132,6 +138,15 @@ static const struct dmi_system_id mcb_lpc_dmi_table[] = {
 		.driver_data = (void *)&sc24_fpga_resource,
 		.callback = mcb_lpc_create_platform_device,
 	},
+	{
+		.ident = "SC31",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "MEN"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "14SC31"),
+		},
+		.driver_data = (void *)&sc31_fpga_resource,
+		.callback = mcb_lpc_create_platform_device,
+	},
 	{}
 };
 MODULE_DEVICE_TABLE(dmi, mcb_lpc_dmi_table);
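Each dmi_system_id entry couples a board match (vendor "MEN", product version "14SC31") with the FPGA's fixed MMIO window via driver_data, and mcb_lpc_create_platform_device() runs as the callback for whichever entry matches the running system. The driver's init path is not shown here, but tables like this are normally walked with dmi_check_system(); a minimal sketch assuming the table name from the hunk above:

#include <linux/dmi.h>
#include <linux/errno.h>

/* Sketch: walk the DMI table; each matching entry invokes its callback. */
static int __init example_lpc_init(void)
{
	if (!dmi_check_system(mcb_lpc_dmi_table))
		return -ENODEV;	/* no known carrier board found */

	return 0;
}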
@@ -182,7 +182,7 @@ int chameleon_parse_cells(struct mcb_bus *bus, phys_addr_t mapbase,
 	int num_cells = 0;
 	uint32_t dtype;
 	int bar_count;
-	int ret = 0;
+	int ret;
 	u32 hsize;
 
 	hsize = sizeof(struct chameleon_fpga_header);

@@ -210,8 +210,10 @@ int chameleon_parse_cells(struct mcb_bus *bus, phys_addr_t mapbase,
 			 header->filename);
 
 	bar_count = chameleon_get_bar(&p, mapbase, &cb);
-	if (bar_count < 0)
+	if (bar_count < 0) {
+		ret = bar_count;
 		goto free_header;
+	}
 
 	for_each_chameleon_cell(dtype, p) {
 		switch (dtype) {
@@ -60,6 +60,7 @@ lkdtm-$(CONFIG_LKDTM)		+= lkdtm_core.o
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_bugs.o
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_heap.o
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_perms.o
+lkdtm-$(CONFIG_LKDTM)		+= lkdtm_refcount.o
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_rodata_objcopy.o
 lkdtm-$(CONFIG_LKDTM)		+= lkdtm_usercopy.o
 
@@ -197,7 +197,7 @@ static struct attribute *mid_att_als[] = {
 	NULL
 };
 
-static struct attribute_group m_als_gr = {
+static const struct attribute_group m_als_gr = {
 	.name = "apds9802als",
 	.attrs = mid_att_als
 };

@@ -298,7 +298,7 @@ static UNIVERSAL_DEV_PM_OPS(apds9802als_pm_ops, apds9802als_suspend,
 #define APDS9802ALS_PM_OPS NULL
 #endif	/* CONFIG_PM */
 
-static struct i2c_device_id apds9802als_id[] = {
+static const struct i2c_device_id apds9802als_id[] = {
 	{ DRIVER_NAME, 0 },
 	{ }
 };
@@ -1051,7 +1051,7 @@ static struct attribute *sysfs_attrs_ctrl[] = {
 	NULL
 };
 
-static struct attribute_group apds990x_attribute_group[] = {
+static const struct attribute_group apds990x_attribute_group[] = {
 	{.attrs = sysfs_attrs_ctrl },
 };
 
@@ -20,6 +20,7 @@
 #include <linux/mfd/syscon.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 

@@ -51,6 +52,13 @@
 #define HICRB_ENSNP0D		BIT(14)
 #define HICRB_ENSNP1D		BIT(15)
 
+struct aspeed_lpc_snoop_model_data {
+	/* The ast2400 has bits 14 and 15 as reserved, whereas the ast2500
+	 * can use them.
+	 */
+	unsigned int has_hicrb_ensnp;
+};
+
 struct aspeed_lpc_snoop {
 	struct regmap		*regmap;
 	int			irq;

@@ -123,10 +131,13 @@ static int aspeed_lpc_snoop_config_irq(struct aspeed_lpc_snoop *lpc_snoop,
 }
 
 static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
-				   int channel, u16 lpc_port)
+				   struct device *dev,
+				   int channel, u16 lpc_port)
 {
 	int rc = 0;
 	u32 hicr5_en, snpwadr_mask, snpwadr_shift, hicrb_en;
+	const struct aspeed_lpc_snoop_model_data *model_data =
+		of_device_get_match_data(dev);
 
 	/* Create FIFO datastructure */
 	rc = kfifo_alloc(&lpc_snoop->snoop_fifo[channel],

@@ -155,7 +166,9 @@ static int aspeed_lpc_enable_snoop(struct aspeed_lpc_snoop *lpc_snoop,
 	regmap_update_bits(lpc_snoop->regmap, HICR5, hicr5_en, hicr5_en);
 	regmap_update_bits(lpc_snoop->regmap, SNPWADR, snpwadr_mask,
 			   lpc_port << snpwadr_shift);
-	regmap_update_bits(lpc_snoop->regmap, HICRB, hicrb_en, hicrb_en);
+	if (model_data->has_hicrb_ensnp)
+		regmap_update_bits(lpc_snoop->regmap, HICRB,
+				hicrb_en, hicrb_en);
 
 	return rc;
 }

@@ -213,14 +226,14 @@ static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
 	if (rc)
 		return rc;
 
-	rc = aspeed_lpc_enable_snoop(lpc_snoop, 0, port);
+	rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 0, port);
 	if (rc)
 		return rc;
 
 	/* Configuration of 2nd snoop channel port is optional */
 	if (of_property_read_u32_index(dev->of_node, "snoop-ports",
 				       1, &port) == 0) {
-		rc = aspeed_lpc_enable_snoop(lpc_snoop, 1, port);
+		rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 1, port);
 		if (rc)
 			aspeed_lpc_disable_snoop(lpc_snoop, 0);
 	}

@@ -239,8 +252,19 @@ static int aspeed_lpc_snoop_remove(struct platform_device *pdev)
 	return 0;
 }
 
+static const struct aspeed_lpc_snoop_model_data ast2400_model_data = {
+	.has_hicrb_ensnp = 0,
+};
+
+static const struct aspeed_lpc_snoop_model_data ast2500_model_data = {
+	.has_hicrb_ensnp = 1,
+};
+
 static const struct of_device_id aspeed_lpc_snoop_match[] = {
-	{ .compatible = "aspeed,ast2500-lpc-snoop" },
+	{ .compatible = "aspeed,ast2400-lpc-snoop",
+	  .data = &ast2400_model_data },
+	{ .compatible = "aspeed,ast2500-lpc-snoop",
+	  .data = &ast2500_model_data },
 	{ },
 };
 
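By attaching a model-data struct to each compatible string, the enable path can ask "does this SoC have the HICRB enable bits?" without scattering of_device_is_compatible() checks around: of_device_get_match_data() simply returns the .data pointer of whichever of_device_id entry matched. A sketch of that lookup, reusing the struct and table names from the hunks above:

#include <linux/of_device.h>

/* Sketch: fetch the per-SoC quirks registered in the of_device_id table. */
static bool snoop_has_hicrb_ensnp(struct device *dev)
{
	const struct aspeed_lpc_snoop_model_data *md =
		of_device_get_match_data(dev);

	return md && md->has_hicrb_ensnp;	/* md is NULL if no match data */
}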
@@ -1175,7 +1175,7 @@ static struct attribute *sysfs_attrs[] = {
 	NULL
 };
 
-static struct attribute_group bh1770_attribute_group = {
+static const struct attribute_group bh1770_attribute_group = {
 	.attrs = sysfs_attrs
 };
 
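These attribute_group constifications are safe because the sysfs helpers only ever read the group description: sysfs_create_group() takes a const pointer, so the data can live in read-only memory. A sketch of the registration such a const group feeds into (the kobject parameter is a stand-in for whatever object the driver registers against):

#include <linux/sysfs.h>

static struct attribute *example_attrs[] = { NULL };

static const struct attribute_group example_group = {
	.attrs = example_attrs,
};

/* Sketch: registering a const attribute_group with sysfs. */
static int example_register(struct kobject *kobj)
{
	return sysfs_create_group(kobj, &example_group);
}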
Some files were not shown because too many files have changed in this diff.