Char/Misc driver patches for 5.1-rc1

Here is the big char/misc driver patch pull request for 5.1-rc1.
 
 The largest thing by far is the new habanalabs driver for their AI
 accelerator chip.  For now it is in the drivers/misc directory but will
 probably move to a new directory soon along with other drivers of this
 type.
 
 Other than that, just the usual set of individual driver updates and
 fixes.  There's an "odd" merge in here from the DRM tree that they asked
 me to do as the MEI driver is starting to interact with the i915 driver,
 and it needed some coordination.  All of those patches have been
 properly acked by the relevant subsystem maintainers.
 
 All of these have been in linux-next with no reported issues, most for
 quite some time.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCXH+dPQ8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ym1fACgvpZAxjNzoRQJ6f06tc8ujtPk9rUAnR+tCtrZ
 9e3l7H76oe33o96Qjhor
 =8A2k
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big char/misc driver patch pull request for 5.1-rc1.

  The largest thing by far is the new habanalabs driver for their AI
  accelerator chip. For now it is in the drivers/misc directory but will
  probably move to a new directory soon along with other drivers of this
  type.

  Other than that, just the usual set of individual driver updates and
  fixes. There's an "odd" merge in here from the DRM tree that they
  asked me to do as the MEI driver is starting to interact with the i915
  driver, and it needed some coordination. All of those patches have
  been properly acked by the relevant subsystem maintainers.

  All of these have been in linux-next with no reported issues, most for
  quite some time"

* tag 'char-misc-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (219 commits)
  habanalabs: adjust Kconfig to fix build errors
  habanalabs: use %px instead of %p in error print
  habanalabs: use do_div for 64-bit divisions
  intel_th: gth: Fix an off-by-one in output unassigning
  habanalabs: fix little-endian<->cpu conversion warnings
  habanalabs: use NULL to initialize array of pointers
  habanalabs: fix little-endian<->cpu conversion warnings
  habanalabs: soft-reset device if context-switch fails
  habanalabs: print pointer using %p
  habanalabs: fix memory leak with CBs with unaligned size
  habanalabs: return correct error code on MMU mapping failure
  habanalabs: add comments in uapi/misc/habanalabs.h
  habanalabs: extend QMAN0 job timeout
  habanalabs: set DMA0 completion to SOB 1007
  habanalabs: fix validation of WREG32 to DMA completion
  habanalabs: fix mmu cache registers init
  habanalabs: disable CPU access on timeouts
  habanalabs: add MMU DRAM default page mapping
  habanalabs: Dissociate RAZWI info from event types
  misc/habanalabs: adjust Kconfig to fix build errors
  ...
Linus Torvalds 2019-03-06 14:18:59 -08:00
commit 45763bf4bc
315 changed files with 60754 additions and 1948 deletions


@ -1221,7 +1221,7 @@ S: Brazil
 N: Oded Gabbay
 E: oded.gabbay@gmail.com
-D: AMD KFD maintainer
+D: HabanaLabs and AMD KFD maintainer
 S: 12 Shraga Raphaeli
 S: Petah-Tikva, 4906418
 S: Israel


@ -146,3 +146,36 @@ KernelVersion: 4.16
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Binary file created by uio_hv_generic for ring buffer
Users: Userspace drivers
What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/intr_in_full
Date: February 2019
KernelVersion: 5.0
Contact: Michael Kelley <mikelley@microsoft.com>
Description: Number of guest to host interrupts caused by the inbound ring
buffer transitioning from full to not full while a packet is
waiting for buffer space to become available
Users: Debugging tools
What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/intr_out_empty
Date: February 2019
KernelVersion: 5.0
Contact: Michael Kelley <mikelley@microsoft.com>
Description: Number of guest to host interrupts caused by the outbound ring
buffer transitioning from empty to not empty
Users: Debugging tools
What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/out_full_first
Date: February 2019
KernelVersion: 5.0
Contact: Michael Kelley <mikelley@microsoft.com>
Description: Number of write operations that were the first to encounter an
outbound ring buffer full condition
Users: Debugging tools
What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/out_full_total
Date: February 2019
KernelVersion: 5.0
Contact: Michael Kelley <mikelley@microsoft.com>
Description: Total number of write operations that encountered an outbound
ring buffer full condition
Users: Debugging tools


@ -0,0 +1,126 @@
What: /sys/kernel/debug/habanalabs/hl<n>/addr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the device address to be used for read or write through
the PCI bar. The acceptable value is a string that starts with "0x"
What: /sys/kernel/debug/habanalabs/hl<n>/command_buffers
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with information about the currently allocated
command buffers
What: /sys/kernel/debug/habanalabs/hl<n>/command_submission
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with information about the currently active
command submissions
What: /sys/kernel/debug/habanalabs/hl<n>/command_submission_jobs
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with detailed information about each JOB (CB) of
each active command submission
What: /sys/kernel/debug/habanalabs/hl<n>/data32
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the root user to read or write directly through the
device's PCI bar. Writing to this file generates a write
transaction while reading from the file generates a read
transaction. This custom interface is needed (instead of using
the generic Linux user-space PCI mapping) because the DDR bar
is very small compared to the DDR memory and only the driver can
move the bar before and after the transaction
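The addr/data32 pair thus forms a two-step protocol: latch an address, then touch data32 to trigger the access. Below is a minimal user-space sketch of that flow; the device name hl0, the default debugfs mount point and the text-based value format are assumptions for illustration, not part of the ABI text above.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: read one 32-bit word at device address addr_hex
 * through the hl0 debugfs interface described above. */
static int hl0_read32(const char *addr_hex, char *out, size_t len)
{
	int fd;
	ssize_t n;

	/* Step 1: latch the target device address. */
	fd = open("/sys/kernel/debug/habanalabs/hl0/addr", O_WRONLY);
	if (fd < 0)
		return -1;
	if (write(fd, addr_hex, strlen(addr_hex)) < 0) {
		close(fd);
		return -1;
	}
	close(fd);

	/* Step 2: reading data32 generates the read transaction. */
	fd = open("/sys/kernel/debug/habanalabs/hl0/data32", O_RDONLY);
	if (fd < 0)
		return -1;
	n = read(fd, out, len - 1);
	close(fd);
	if (n < 0)
		return -1;
	out[n] = '\0';
	return 0;
}

int main(void)
{
	char val[32];

	/* 0x20000000 is a placeholder address, not taken from the driver. */
	if (hl0_read32("0x20000000", val, sizeof(val)) == 0)
		printf("read: %s\n", val);
	return 0;
}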
What: /sys/kernel/debug/habanalabs/hl<n>/device
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Enables the root user to set the device to a specific state.
Valid values are "disable", "enable", "suspend", "resume".
The user can read this property to see the valid values
What: /sys/kernel/debug/habanalabs/hl<n>/i2c_addr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets I2C device address for I2C transaction that is generated
by the device's CPU
What: /sys/kernel/debug/habanalabs/hl<n>/i2c_bus
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets I2C bus address for I2C transaction that is generated by
the device's CPU
What: /sys/kernel/debug/habanalabs/hl<n>/i2c_data
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Triggers an I2C transaction that is generated by the device's
CPU. Writing to this file generates a write transaction while
reading from the file generates a read transaction
What: /sys/kernel/debug/habanalabs/hl<n>/i2c_reg
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets I2C register id for I2C transaction that is generated by
the device's CPU
What: /sys/kernel/debug/habanalabs/hl<n>/led0
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the state of the first S/W led on the device
What: /sys/kernel/debug/habanalabs/hl<n>/led1
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the state of the second S/W led on the device
What: /sys/kernel/debug/habanalabs/hl<n>/led2
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the state of the third S/W led on the device
What: /sys/kernel/debug/habanalabs/hl<n>/mmu
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the hop values and physical address for a given ASID
and virtual address. The user should write the ASID and VA into
the file and then read the file to get the result.
e.g. to display info about VA 0x1000 for ASID 1 you need to do:
echo "1 0x1000" > /sys/kernel/debug/habanalabs/hl0/mmu
What: /sys/kernel/debug/habanalabs/hl<n>/set_power_state
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the PCI power state. Valid values are "1" for D0 and "2"
for D3Hot
What: /sys/kernel/debug/habanalabs/hl<n>/userptr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with information about the user pointers
(user virtual addresses) that are currently pinned and mapped
to DMA addresses
What: /sys/kernel/debug/habanalabs/hl<n>/vm
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with information about all the active virtual
address mappings per ASID


@ -3,11 +3,13 @@ Date: June 2015
 KernelVersion: 4.3
 Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description: (RW) Writes of 1 or 0 enable or disable trace output to this
-output device. Reads return current status.
+output device. Reads return current status. Requires that the
+corresponding output port driver be loaded.
 What: /sys/bus/intel_th/devices/<intel_th_id>-msc<msc-id>/port
 Date: June 2015
 KernelVersion: 4.3
 Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description: (RO) Port number, corresponding to this output device on the
-switch (GTH).
+switch (GTH) or "unassigned" if the corresponding output
+port driver is not loaded.


@ -0,0 +1,190 @@
What: /sys/class/habanalabs/hl<n>/armcp_kernel_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Linux kernel running on the device's CPU
What: /sys/class/habanalabs/hl<n>/armcp_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU
What: /sys/class/habanalabs/hl<n>/cpld_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Device's CPLD F/W
What: /sys/class/habanalabs/hl<n>/device_type
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the code name of the device according to its type.
The supported values are: "GOYA"
What: /sys/class/habanalabs/hl<n>/eeprom
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: A binary file attribute that contains the contents of the
on-board EEPROM
What: /sys/class/habanalabs/hl<n>/fuse_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the device's version from the eFuse
What: /sys/class/habanalabs/hl<n>/hard_reset
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Interface to trigger a hard-reset operation for the device.
Hard-reset will reset ALL internal components of the device
except for the PCI interface and the internal PLLs
What: /sys/class/habanalabs/hl<n>/hard_reset_cnt
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays how many times the device has undergone a hard-reset
operation since the driver was loaded
What: /sys/class/habanalabs/hl<n>/high_pll
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency for MME, TPC
and IC when the power management profile is set to "automatic".
What: /sys/class/habanalabs/hl<n>/ic_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
Interconnect fabric. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device IC clock might be set to a lower value than the
maximum. The user should read ic_clk_curr to see the actual
frequency value of the IC
What: /sys/class/habanalabs/hl<n>/ic_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the Interconnect fabric
What: /sys/class/habanalabs/hl<n>/infineon_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Device's power supply F/W code
What: /sys/class/habanalabs/hl<n>/max_power
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum power consumption of the
device in milliwatts.
What: /sys/class/habanalabs/hl<n>/mme_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
MME compute engine. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device MME clock might be set to a lower value than the
maximum. The user should read mme_clk_curr to see the actual
frequency value of the MME
What: /sys/class/habanalabs/hl<n>/mme_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the MME compute engine
What: /sys/class/habanalabs/hl<n>/pci_addr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the PCI address of the device. This is needed so the
user can open a device based on its PCI address
What: /sys/class/habanalabs/hl<n>/pm_mng_profile
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Power management profile. Values are "auto", "manual". In "auto"
mode, the driver will set the maximum clock frequency to a high
value when a user-space process opens the device's file (unless
it was already opened by another process). The driver will set
the maximum clock frequency to a low value when no user
processes have the device's file open. In "manual"
mode, the user sets the maximum clock frequency by writing to
ic_clk, mme_clk and tpc_clk
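As a rough illustration of the "manual" workflow described above (device hl0 is assumed; the clock value and its unit are placeholders, not values taken from the driver):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string to a sysfs attribute; illustrative helper only. */
static int sysfs_write(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, strlen(val));
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	char cur[32] = "";
	ssize_t n;
	int fd;

	/* Switch to manual power management so clock writes take effect. */
	sysfs_write("/sys/class/habanalabs/hl0/pm_mng_profile", "manual");

	/* Request a maximum MME clock (placeholder value). */
	sysfs_write("/sys/class/habanalabs/hl0/mme_clk", "1000000000");

	/* The device may pick a lower frequency; read back the actual one. */
	fd = open("/sys/class/habanalabs/hl0/mme_clk_curr", O_RDONLY);
	if (fd >= 0) {
		n = read(fd, cur, sizeof(cur) - 1);
		if (n > 0)
			cur[n] = '\0';
		close(fd);
		printf("mme_clk_curr: %s\n", cur);
	}
	return 0;
}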
What: /sys/class/habanalabs/hl<n>/preboot_btl_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the device's preboot F/W code
What: /sys/class/habanalabs/hl<n>/soft_reset
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Interface to trigger a soft-reset operation for the device.
Soft-reset will reset only the compute and DMA engines of the
device
What: /sys/class/habanalabs/hl<n>/soft_reset_cnt
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays how many times the device has undergone a soft-reset
operation since the driver was loaded
What: /sys/class/habanalabs/hl<n>/status
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Status of the card: "Operational", "Malfunction", "In reset".
What: /sys/class/habanalabs/hl<n>/thermal_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the Device's thermal daemon
What: /sys/class/habanalabs/hl<n>/tpc_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
TPC compute engines. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device TPC clock might be set to a lower value than the
maximum. The user should read tpc_clk_curr to see the actual
frequency value of the TPC
What: /sys/class/habanalabs/hl<n>/tpc_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the TPC compute engines
What: /sys/class/habanalabs/hl<n>/uboot_ver
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the u-boot running on the device's CPU
What: /sys/class/habanalabs/hl<n>/write_open_cnt
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the total number of user processes that currently
have the device's file open


@ -0,0 +1,27 @@
* PTN5150 CC (Configuration Channel) Logic device
PTN5150 is a small, thin, low-power CC logic chip that supports the USB Type-C
connector application with CC control logic detection and indication functions.
It is interfaced to the host controller using an I2C interface.
Required properties:
- compatible: should be "nxp,ptn5150"
- reg: specifies the I2C slave address of the device
- int-gpio: should contain a phandle and GPIO specifier for the GPIO pin
connected to the PTN5150's INTB pin.
- vbus-gpio: should contain a phandle and GPIO specifier for the GPIO pin which
is used to control VBUS.
- pinctrl-names : a pinctrl state named "default" must be defined.
- pinctrl-0 : phandle referencing pin configuration of interrupt and vbus
control.
Example:
ptn5150@1d {
compatible = "nxp,ptn5150";
reg = <0x1d>;
int-gpio = <&msmgpio 78 GPIO_ACTIVE_HIGH>;
vbus-gpio = <&msmgpio 148 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default";
pinctrl-0 = <&ptn5150_default>;
status = "okay";
};


@ -17,6 +17,7 @@ Required properties:
 represents
 Optional properties:
+- lna-supply : Separate supply for an LNA
 - enable-gpios : GPIO used to enable the device
 - timepulse-gpios : Time pulse GPIO


@ -0,0 +1,35 @@
Mediatek-based GNSS Receiver DT binding
Mediatek chipsets are used in GNSS-receiver modules produced by several
vendors and can use a UART interface.
Please see Documentation/devicetree/bindings/gnss/gnss.txt for generic
properties.
Required properties:
- compatible : Must be
"globaltop,pa6h"
- vcc-supply : Main voltage regulator (pin name: VCC)
Optional properties:
- current-speed : Default UART baud rate
- gnss-fix-gpios : GPIO used to determine device position fix state
(pin name: FIX, 3D_FIX)
- reset-gpios : GPIO used to reset the device (pin name: RESET, NRESET)
- timepulse-gpios : Time pulse GPIO (pin name: PPS1, 1PPS)
- vbackup-supply : Backup voltage regulator (pin name: VBAT, VBACKUP)
Example:
serial@1234 {
compatible = "ns16550a";
gnss {
compatible = "globaltop,pa6h";
vcc-supply = <&vcc_3v3>;
};
};


@ -12,6 +12,7 @@ Required properties:
"fastrax,uc430" "fastrax,uc430"
"linx,r4" "linx,r4"
"wi2wi,w2sg0004"
"wi2wi,w2sg0008i" "wi2wi,w2sg0008i"
"wi2wi,w2sg0084i" "wi2wi,w2sg0084i"


@ -0,0 +1,60 @@
Interconnect Provider Device Tree Bindings
=========================================
The purpose of this document is to define a common set of generic interconnect
providers/consumers properties.
= interconnect providers =
The interconnect provider binding is intended to represent the interconnect
controllers in the system. Each provider registers a set of interconnect
nodes, which expose the interconnect related capabilities of the interconnect
to consumer drivers. These capabilities can be throughput, latency, priority
etc. The consumer drivers set constraints on interconnect path (or endpoints)
depending on the use case. Interconnect providers can also be interconnect
consumers, such as in the case where two network-on-chip fabrics interface
directly.
Required properties:
- compatible : contains the interconnect provider compatible string
- #interconnect-cells : number of cells in an interconnect specifier needed to
encode the interconnect node id
Example:
snoc: interconnect@580000 {
compatible = "qcom,msm8916-snoc";
#interconnect-cells = <1>;
reg = <0x580000 0x14000>;
clock-names = "bus_clk", "bus_a_clk";
clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
<&rpmcc RPM_SMD_SNOC_A_CLK>;
};
= interconnect consumers =
The interconnect consumers are device nodes which dynamically express their
bandwidth requirements along interconnect paths they are connected to. There
can be multiple interconnect providers on a SoC and the consumer may consume
multiple paths from different providers depending on use case and the
components it has to interact with.
Required properties:
interconnects : Pairs of phandles and interconnect provider specifier to denote
the edge source and destination ports of the interconnect path.
Optional properties:
interconnect-names : List of interconnect path name strings sorted in the same
order as the interconnects property. Consumer drivers will use
interconnect-names to match interconnect paths with interconnect
specifier pairs.
Example:
sdhci@7864000 {
...
interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
interconnect-names = "sdhc-mem";
};


@ -0,0 +1,24 @@
Qualcomm SDM845 Network-On-Chip interconnect driver binding
-----------------------------------------------------------
SDM845 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must reside within
an RPMh device node pertaining to their RSC and each provider maps to a single
RPMh resource.
Required properties :
- compatible : shall contain only one of the following:
"qcom,sdm845-rsc-hlos"
- #interconnect-cells : should contain 1
Examples:
apps_rsc: rsc {
rsc_hlos: interconnect {
compatible = "qcom,sdm845-rsc-hlos";
#interconnect-cells = <1>;
};
};


@ -0,0 +1,78 @@
Qualcomm Technologies, Inc. FastRPC Driver
The FastRPC implements an IPC (Inter-Processor Communication)
mechanism that allows clients to transparently make remote method
invocations across DSP and APPS boundaries. This enables developers
to offload tasks to the DSP and free up the application processor for
other tasks.
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be "qcom,fastrpc"
- label
Usage: required
Value type: <string>
Definition: should specify the DSP domain name this fastrpc
corresponds to. Must be one of: "adsp", "mdsp", "sdsp", "cdsp"
- #address-cells
Usage: required
Value type: <u32>
Definition: Must be 1
- #size-cells
Usage: required
Value type: <u32>
Definition: Must be 0
= COMPUTE BANKS
Each subnode of the Fastrpc represents compute context banks available
on the dsp.
- All Compute context banks MUST contain the following properties:
- compatible:
Usage: required
Value type: <stringlist>
Definition: must be "qcom,fastrpc-compute-cb"
- reg
Usage: required
Value type: <u32>
Definition: Context Bank ID.
- qcom,nsessions:
Usage: Optional
Value type: <u32>
Definition: A value indicating how many sessions can share this
context bank. Defaults to 1 when this property
is not specified.
Example:
adsp-pil {
compatible = "qcom,msm8996-adsp-pil";
...
smd-edge {
label = "lpass";
fastrpc {
compatible = "qcom,fastrpc";
qcom,smd-channels = "fastrpcsmd-apps-dsp";
label = "adsp";
#address-cells = <1>;
#size-cells = <0>;
cb@1 {
compatible = "qcom,fastrpc-compute-cb";
reg = <1>;
};
cb@2 {
compatible = "qcom,fastrpc-compute-cb";
reg = <2>;
};
...
};
};
};


@ -1,7 +1,7 @@
 Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings
 This binding represents the on-chip eFuse OTP controller found on
-i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL and i.MX6SLL SoCs.
+i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ and i.MX6SLL SoCs.
 Required properties:
 - compatible: should be one of
@ -9,8 +9,10 @@ Required properties:
 "fsl,imx6sl-ocotp" (i.MX6SL), or
 "fsl,imx6sx-ocotp" (i.MX6SX),
 "fsl,imx6ul-ocotp" (i.MX6UL),
+"fsl,imx6ull-ocotp" (i.MX6ULL/ULZ),
 "fsl,imx7d-ocotp" (i.MX7D/S),
 "fsl,imx6sll-ocotp" (i.MX6SLL),
+"fsl,imx7ulp-ocotp" (i.MX7ULP),
 followed by "syscon".
 - #address-cells : Should be 1
 - #size-cells : Should be 1


@ -154,6 +154,7 @@ geniatech Geniatech, Inc.
 giantec Giantec Semiconductor, Inc.
 giantplus Giantplus Technology Co., Ltd.
 globalscale Globalscale Technologies, Inc.
+globaltop GlobalTop Technology, Inc.
 gmt Global Mixed-mode Technology, Inc.
 goodix Shenzhen Huiding Technology Co., Ltd.
 google Google, Inc.


@ -0,0 +1,17 @@
======================================
Component Helper for Aggregate Drivers
======================================
.. kernel-doc:: drivers/base/component.c
:doc: overview
API
===
.. kernel-doc:: include/linux/component.h
:internal:
.. kernel-doc:: drivers/base/component.c
:export:


@ -1,6 +1,9 @@
 .. |struct dev_pm_domain| replace:: :c:type:`struct dev_pm_domain <dev_pm_domain>`
 .. |struct generic_pm_domain| replace:: :c:type:`struct generic_pm_domain <generic_pm_domain>`
+.. _device_link:
 ============
 Device links
 ============


@ -22,6 +22,7 @@ available subsections can be seen below.
 device_connection
 dma-buf
 device_link
+component
 message-based
 sound
 frame-buffer


@ -0,0 +1,94 @@
.. SPDX-License-Identifier: GPL-2.0
=====================================
GENERIC SYSTEM INTERCONNECT SUBSYSTEM
=====================================
Introduction
------------
This framework is designed to provide a standard kernel interface to control
the settings of the interconnects on an SoC. These settings can be throughput,
latency and priority between multiple interconnected devices or functional
blocks. This can be controlled dynamically in order to save power or provide
maximum performance.
The interconnect bus is hardware with configurable parameters, which can be
set on a data path according to the requests received from various drivers.
Examples of interconnect buses are the interconnects between various
components or functional blocks in chipsets. There can be multiple interconnects
on an SoC, and they can be multi-tiered.
Below is a simplified diagram of a real-world SoC interconnect bus topology.
::
+----------------+ +----------------+
| HW Accelerator |--->| M NoC |<---------------+
+----------------+ +----------------+ |
| | +------------+
+-----+ +-------------+ V +------+ | |
| DDR | | +--------+ | PCIe | | |
+-----+ | | Slaves | +------+ | |
^ ^ | +--------+ | | C NoC |
| | V V | |
+------------------+ +------------------------+ | | +-----+
| |-->| |-->| |-->| CPU |
| |-->| |<--| | +-----+
| Mem NoC | | S NoC | +------------+
| |<--| |---------+ |
| |<--| |<------+ | | +--------+
+------------------+ +------------------------+ | | +-->| Slaves |
^ ^ ^ ^ ^ | | +--------+
| | | | | | V
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
| CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
|
+-------+
| Modem |
+-------+
Terminology
-----------
Interconnect provider is the software definition of the interconnect hardware.
The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
and Mem NoC.
Interconnect node is the software definition of the interconnect hardware
port. Each interconnect provider consists of multiple interconnect nodes,
which are connected to other SoC components including other interconnect
providers. The point on the diagram where the CPUs connect to the memory is
called an interconnect node, which belongs to the Mem NoC interconnect provider.
Interconnect endpoints are the first or the last element of the path. Every
endpoint is a node, but not every node is an endpoint.
Interconnect path is everything between two endpoints including all the nodes
that have to be traversed to reach from a source to destination node. It may
include multiple master-slave pairs across several interconnect providers.
Interconnect consumers are the entities which make use of the data paths exposed
by the providers. The consumers send requests to providers requesting various
throughput, latency and priority. Usually the consumers are device drivers that
send requests based on their needs. An example of a consumer is a video decoder
that supports various formats and image sizes.
Interconnect providers
----------------------
Interconnect provider is an entity that implements methods to initialize and
configure interconnect bus hardware. The interconnect provider drivers should
be registered with the interconnect provider core.
.. kernel-doc:: include/linux/interconnect-provider.h
Interconnect consumers
----------------------
Interconnect consumers are the clients which use the interconnect APIs to
get paths between endpoints and set their bandwidth/latency/QoS requirements
for these interconnect paths.
.. kernel-doc:: include/linux/interconnect.h
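A minimal consumer-side sketch follows, assuming the "sdhc-mem" interconnects/interconnect-names example from the DT binding earlier in this series; the driver name and bandwidth numbers are placeholders, not values from the framework documentation.
#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int sdhc_demo_probe(struct platform_device *pdev)
{
	struct icc_path *path;
	int ret;

	/* Look up the "sdhc-mem" path named in the device tree. */
	path = of_icc_get(&pdev->dev, "sdhc-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);
	if (!path)
		return 0; /* no interconnect path described; nothing to do */

	/* Request average/peak bandwidth on the path (placeholder numbers,
	 * in the framework's bandwidth units). */
	ret = icc_set_bw(path, 1000, 2000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	platform_set_drvdata(pdev, path);
	return 0;
}

static int sdhc_demo_remove(struct platform_device *pdev)
{
	struct icc_path *path = platform_get_drvdata(pdev);

	/* Drop the bandwidth request and release the path on teardown. */
	if (path) {
		icc_set_bw(path, 0, 0);
		icc_put(path);
	}
	return 0;
}

static struct platform_driver sdhc_demo_driver = {
	.probe = sdhc_demo_probe,
	.remove = sdhc_demo_remove,
	.driver = {
		.name = "sdhc-demo",
	},
};
module_platform_driver(sdhc_demo_driver);
MODULE_LICENSE("GPL");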


@ -6699,6 +6699,15 @@ F: drivers/clocksource/h8300_*.c
 F: drivers/clk/h8300/
 F: drivers/irqchip/irq-renesas-h8*.c
+HABANALABS PCI DRIVER
+M: Oded Gabbay <oded.gabbay@gmail.com>
+T: git https://github.com/HabanaAI/linux.git
+S: Supported
+F: drivers/misc/habanalabs/
+F: include/uapi/misc/habanalabs.h
+F: Documentation/ABI/testing/sysfs-driver-habanalabs
+F: Documentation/ABI/testing/debugfs-driver-habanalabs
 HACKRF MEDIA DRIVER
 M: Antti Palosaari <crope@iki.fi>
 L: linux-media@vger.kernel.org
@ -7056,7 +7065,7 @@ M: Haiyang Zhang <haiyangz@microsoft.com>
 M: Stephen Hemminger <sthemmin@microsoft.com>
 M: Sasha Levin <sashal@kernel.org>
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux.git
-L: devel@linuxdriverproject.org
+L: linux-hyperv@vger.kernel.org
 S: Supported
 F: Documentation/networking/device_drivers/microsoft/netvsc.txt
 F: arch/x86/include/asm/mshyperv.h
@ -7941,6 +7950,16 @@ L: linux-gpio@vger.kernel.org
 S: Maintained
 F: drivers/gpio/gpio-intel-mid.c
+INTERCONNECT API
+M: Georgi Djakov <georgi.djakov@linaro.org>
+S: Maintained
+F: Documentation/interconnect/
+F: Documentation/devicetree/bindings/interconnect/
+F: drivers/interconnect/
+F: include/dt-bindings/interconnect/
+F: include/linux/interconnect-provider.h
+F: include/linux/interconnect.h
 INVENSENSE MPU-3050 GYROSCOPE DRIVER
 M: Linus Walleij <linus.walleij@linaro.org>
 L: linux-iio@vger.kernel.org


@ -711,6 +711,9 @@ config HAVE_ARCH_HASH
 file which provides platform-specific implementations of some
 functions in <linux/hash.h> or fs/namei.c.
+config HAVE_ARCH_NVRAM_OPS
+bool
 config ISA_BUS_API
 def_bool ISA


@ -16,6 +16,7 @@ config ATARI
bool "Atari support" bool "Atari support"
depends on MMU depends on MMU
select MMU_MOTOROLA if MMU select MMU_MOTOROLA if MMU
select HAVE_ARCH_NVRAM_OPS
help help
This option enables support for the 68000-based Atari series of This option enables support for the 68000-based Atari series of
computers (including the TT, Falcon and Medusa). If you plan to use computers (including the TT, Falcon and Medusa). If you plan to use
@ -26,6 +27,7 @@ config MAC
bool "Macintosh support" bool "Macintosh support"
depends on MMU depends on MMU
select MMU_MOTOROLA if MMU select MMU_MOTOROLA if MMU
select HAVE_ARCH_NVRAM_OPS
help help
This option enables support for the Apple Macintosh series of This option enables support for the Apple Macintosh series of
computers (yes, there is experimental support now, at least for part computers (yes, there is experimental support now, at least for part


@ -6,3 +6,5 @@ obj-y := config.o time.o debug.o ataints.o stdma.o \
 atasound.o stram.o
 obj-$(CONFIG_ATARI_KBD_CORE) += atakeyb.o
+obj-$(CONFIG_NVRAM:m=y) += nvram.o

arch/m68k/atari/nvram.c (new file, 272 lines)

@ -0,0 +1,272 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* CMOS/NV-RAM driver for Atari. Adapted from drivers/char/nvram.c.
* Copyright (C) 1997 Roman Hodek <Roman.Hodek@informatik.uni-erlangen.de>
* idea by and with help from Richard Jelinek <rj@suse.de>
* Portions copyright (c) 2001,2002 Sun Microsystems (thockin@sun.com)
* Further contributions from Cesar Barros, Erik Gilling, Tim Hockin and
* Wim Van Sebroeck.
*/
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/mc146818rtc.h>
#include <linux/module.h>
#include <linux/nvram.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <asm/atarihw.h>
#include <asm/atariints.h>
#define NVRAM_BYTES 50
/* It is worth noting that these functions all access bytes of general
* purpose memory in the NVRAM - that is to say, they all add the
* NVRAM_FIRST_BYTE offset. Pass them offsets into NVRAM as if you did not
* know about the RTC cruft.
*/
/* Note that *all* calls to CMOS_READ and CMOS_WRITE must be done with
* rtc_lock held. Due to the index-port/data-port design of the RTC, we
* don't want two different things trying to get to it at once. (e.g. the
* periodic 11 min sync from kernel/time/ntp.c vs. this driver.)
*/
static unsigned char __nvram_read_byte(int i)
{
return CMOS_READ(NVRAM_FIRST_BYTE + i);
}
/* This races nicely with trying to read with checksum checking */
static void __nvram_write_byte(unsigned char c, int i)
{
CMOS_WRITE(c, NVRAM_FIRST_BYTE + i);
}
/* On Ataris, the checksum is over all bytes except the checksum bytes
* themselves; these are at the very end.
*/
#define ATARI_CKS_RANGE_START 0
#define ATARI_CKS_RANGE_END 47
#define ATARI_CKS_LOC 48
static int __nvram_check_checksum(void)
{
int i;
unsigned char sum = 0;
for (i = ATARI_CKS_RANGE_START; i <= ATARI_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
return (__nvram_read_byte(ATARI_CKS_LOC) == (~sum & 0xff)) &&
(__nvram_read_byte(ATARI_CKS_LOC + 1) == (sum & 0xff));
}
static void __nvram_set_checksum(void)
{
int i;
unsigned char sum = 0;
for (i = ATARI_CKS_RANGE_START; i <= ATARI_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
__nvram_write_byte(~sum, ATARI_CKS_LOC);
__nvram_write_byte(sum, ATARI_CKS_LOC + 1);
}
long atari_nvram_set_checksum(void)
{
spin_lock_irq(&rtc_lock);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
return 0;
}
long atari_nvram_initialize(void)
{
loff_t i;
spin_lock_irq(&rtc_lock);
for (i = 0; i < NVRAM_BYTES; ++i)
__nvram_write_byte(0, i);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
return 0;
}
ssize_t atari_nvram_read(char *buf, size_t count, loff_t *ppos)
{
char *p = buf;
loff_t i;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum()) {
spin_unlock_irq(&rtc_lock);
return -EIO;
}
for (i = *ppos; count > 0 && i < NVRAM_BYTES; --count, ++i, ++p)
*p = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
*ppos = i;
return p - buf;
}
ssize_t atari_nvram_write(char *buf, size_t count, loff_t *ppos)
{
char *p = buf;
loff_t i;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum()) {
spin_unlock_irq(&rtc_lock);
return -EIO;
}
for (i = *ppos; count > 0 && i < NVRAM_BYTES; --count, ++i, ++p)
__nvram_write_byte(*p, i);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
*ppos = i;
return p - buf;
}
ssize_t atari_nvram_get_size(void)
{
return NVRAM_BYTES;
}
#ifdef CONFIG_PROC_FS
static struct {
unsigned char val;
const char *name;
} boot_prefs[] = {
{ 0x80, "TOS" },
{ 0x40, "ASV" },
{ 0x20, "NetBSD (?)" },
{ 0x10, "Linux" },
{ 0x00, "unspecified" },
};
static const char * const languages[] = {
"English (US)",
"German",
"French",
"English (UK)",
"Spanish",
"Italian",
"6 (undefined)",
"Swiss (French)",
"Swiss (German)",
};
static const char * const dateformat[] = {
"MM%cDD%cYY",
"DD%cMM%cYY",
"YY%cMM%cDD",
"YY%cDD%cMM",
"4 (undefined)",
"5 (undefined)",
"6 (undefined)",
"7 (undefined)",
};
static const char * const colors[] = {
"2", "4", "16", "256", "65536", "??", "??", "??"
};
static void atari_nvram_proc_read(unsigned char *nvram, struct seq_file *seq,
void *offset)
{
int checksum;
int i;
unsigned int vmode;
spin_lock_irq(&rtc_lock);
checksum = __nvram_check_checksum();
spin_unlock_irq(&rtc_lock);
seq_printf(seq, "Checksum status : %svalid\n", checksum ? "" : "not ");
seq_puts(seq, "Boot preference : ");
for (i = ARRAY_SIZE(boot_prefs) - 1; i >= 0; --i)
if (nvram[1] == boot_prefs[i].val) {
seq_printf(seq, "%s\n", boot_prefs[i].name);
break;
}
if (i < 0)
seq_printf(seq, "0x%02x (undefined)\n", nvram[1]);
seq_printf(seq, "SCSI arbitration : %s\n",
(nvram[16] & 0x80) ? "on" : "off");
seq_puts(seq, "SCSI host ID : ");
if (nvram[16] & 0x80)
seq_printf(seq, "%d\n", nvram[16] & 7);
else
seq_puts(seq, "n/a\n");
if (!MACH_IS_FALCON)
return;
seq_puts(seq, "OS language : ");
if (nvram[6] < ARRAY_SIZE(languages))
seq_printf(seq, "%s\n", languages[nvram[6]]);
else
seq_printf(seq, "%u (undefined)\n", nvram[6]);
seq_puts(seq, "Keyboard language: ");
if (nvram[7] < ARRAY_SIZE(languages))
seq_printf(seq, "%s\n", languages[nvram[7]]);
else
seq_printf(seq, "%u (undefined)\n", nvram[7]);
seq_puts(seq, "Date format : ");
seq_printf(seq, dateformat[nvram[8] & 7],
nvram[9] ? nvram[9] : '/', nvram[9] ? nvram[9] : '/');
seq_printf(seq, ", %dh clock\n", nvram[8] & 16 ? 24 : 12);
seq_puts(seq, "Boot delay : ");
if (nvram[10] == 0)
seq_puts(seq, "default\n");
else
seq_printf(seq, "%ds%s\n", nvram[10],
nvram[10] < 8 ? ", no memory test" : "");
vmode = (nvram[14] << 8) | nvram[15];
seq_printf(seq,
"Video mode : %s colors, %d columns, %s %s monitor\n",
colors[vmode & 7], vmode & 8 ? 80 : 40,
vmode & 16 ? "VGA" : "TV", vmode & 32 ? "PAL" : "NTSC");
seq_printf(seq,
" %soverscan, compat. mode %s%s\n",
vmode & 64 ? "" : "no ", vmode & 128 ? "on" : "off",
vmode & 256 ?
(vmode & 16 ? ", line doubling" : ", half screen") : "");
}
static int nvram_proc_read(struct seq_file *seq, void *offset)
{
unsigned char contents[NVRAM_BYTES];
int i;
spin_lock_irq(&rtc_lock);
for (i = 0; i < NVRAM_BYTES; ++i)
contents[i] = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
atari_nvram_proc_read(contents, seq, offset);
return 0;
}
static int __init atari_nvram_init(void)
{
if (!(MACH_IS_ATARI && ATARIHW_PRESENT(TT_CLK)))
return -ENODEV;
if (!proc_create_single("driver/nvram", 0, NULL, nvram_proc_read)) {
pr_err("nvram: can't create /proc/driver/nvram\n");
return -ENOMEM;
}
return 0;
}
device_initcall(atari_nvram_init);
#endif /* CONFIG_PROC_FS */


@ -33,6 +33,12 @@ extern int atari_dont_touch_floppy_select;
 extern int atari_SCC_reset_done;
+extern ssize_t atari_nvram_read(char *, size_t, loff_t *);
+extern ssize_t atari_nvram_write(char *, size_t, loff_t *);
+extern ssize_t atari_nvram_get_size(void);
+extern long atari_nvram_set_checksum(void);
+extern long atari_nvram_initialize(void);
 /* convenience macros for testing machine type */
 #define MACH_IS_ST ((atari_mch_cookie >> 16) == ATARI_MCH_ST)
 #define MACH_IS_STE ((atari_mch_cookie >> 16) == ATARI_MCH_STE && \


@ -19,6 +19,10 @@ extern void mac_init_IRQ(void);
 extern void mac_irq_enable(struct irq_data *data);
 extern void mac_irq_disable(struct irq_data *data);
+extern unsigned char mac_pram_read_byte(int);
+extern void mac_pram_write_byte(unsigned char, int);
+extern ssize_t mac_pram_get_size(void);
 /*
  * Macintosh Table
  */


@ -24,6 +24,7 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/module.h>
+#include <linux/nvram.h>
 #include <linux/initrd.h>
 #include <asm/bootinfo.h>
@ -37,13 +38,14 @@
 #ifdef CONFIG_AMIGA
 #include <asm/amigahw.h>
 #endif
-#ifdef CONFIG_ATARI
 #include <asm/atarihw.h>
+#ifdef CONFIG_ATARI
 #include <asm/atari_stram.h>
 #endif
 #ifdef CONFIG_SUN3X
 #include <asm/dvma.h>
 #endif
+#include <asm/macintosh.h>
 #include <asm/natfeat.h>
 #if !FPSTATESIZE || !NR_IRQS
@ -547,3 +549,81 @@ static int __init adb_probe_sync_enable (char *str) {
 __setup("adb_sync", adb_probe_sync_enable);
 #endif /* CONFIG_ADB */
#if IS_ENABLED(CONFIG_NVRAM)
#ifdef CONFIG_MAC
static unsigned char m68k_nvram_read_byte(int addr)
{
if (MACH_IS_MAC)
return mac_pram_read_byte(addr);
return 0xff;
}
static void m68k_nvram_write_byte(unsigned char val, int addr)
{
if (MACH_IS_MAC)
mac_pram_write_byte(val, addr);
}
#endif /* CONFIG_MAC */
#ifdef CONFIG_ATARI
static ssize_t m68k_nvram_read(char *buf, size_t count, loff_t *ppos)
{
if (MACH_IS_ATARI)
return atari_nvram_read(buf, count, ppos);
else if (MACH_IS_MAC)
return nvram_read_bytes(buf, count, ppos);
return -EINVAL;
}
static ssize_t m68k_nvram_write(char *buf, size_t count, loff_t *ppos)
{
if (MACH_IS_ATARI)
return atari_nvram_write(buf, count, ppos);
else if (MACH_IS_MAC)
return nvram_write_bytes(buf, count, ppos);
return -EINVAL;
}
static long m68k_nvram_set_checksum(void)
{
if (MACH_IS_ATARI)
return atari_nvram_set_checksum();
return -EINVAL;
}
static long m68k_nvram_initialize(void)
{
if (MACH_IS_ATARI)
return atari_nvram_initialize();
return -EINVAL;
}
#endif /* CONFIG_ATARI */
static ssize_t m68k_nvram_get_size(void)
{
if (MACH_IS_ATARI)
return atari_nvram_get_size();
else if (MACH_IS_MAC)
return mac_pram_get_size();
return -ENODEV;
}
/* Atari device drivers call .read (to get checksum validation) whereas
* Mac and PowerMac device drivers just use .read_byte.
*/
const struct nvram_ops arch_nvram_ops = {
#ifdef CONFIG_MAC
.read_byte = m68k_nvram_read_byte,
.write_byte = m68k_nvram_write_byte,
#endif
#ifdef CONFIG_ATARI
.read = m68k_nvram_read,
.write = m68k_nvram_write,
.set_checksum = m68k_nvram_set_checksum,
.initialize = m68k_nvram_initialize,
#endif
.get_size = m68k_nvram_get_size,
};
EXPORT_SYMBOL(arch_nvram_ops);
#endif /* CONFIG_NVRAM */
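For illustration only (not part of the series): a sketch of how kernel code elsewhere could consume the ops exported above, assuming the declarations this series adds to <linux/nvram.h>, and using the byte-wise interface that Mac provides.
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/nvram.h>

static int __init nvram_dump_init(void)
{
	ssize_t size;
	int i;

	/* Not every arch/machine fills in every op, so check first. */
	if (!arch_nvram_ops.get_size || !arch_nvram_ops.read_byte)
		return -ENODEV;

	size = arch_nvram_ops.get_size();
	if (size <= 0)
		return -ENODEV;

	/* Byte-wise dump; an Atari caller would prefer .read so the
	 * checksum gets validated, as the comment above explains. */
	pr_info("nvram (%zd bytes):", size);
	for (i = 0; i < size; i++)
		pr_cont(" %02x", arch_nvram_ops.read_byte(i));
	pr_cont("\n");
	return 0;
}
module_init(nvram_dump_init);
MODULE_LICENSE("GPL");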


@ -36,8 +36,9 @@
static void (*rom_reset)(void); static void (*rom_reset)(void);
#if IS_ENABLED(CONFIG_NVRAM)
#ifdef CONFIG_ADB_CUDA #ifdef CONFIG_ADB_CUDA
static __u8 cuda_read_pram(int offset) static unsigned char cuda_pram_read_byte(int offset)
{ {
struct adb_request req; struct adb_request req;
@ -49,7 +50,7 @@ static __u8 cuda_read_pram(int offset)
return req.reply[3]; return req.reply[3];
} }
static void cuda_write_pram(int offset, __u8 data) static void cuda_pram_write_byte(unsigned char data, int offset)
{ {
struct adb_request req; struct adb_request req;
@ -62,29 +63,29 @@ static void cuda_write_pram(int offset, __u8 data)
#endif /* CONFIG_ADB_CUDA */ #endif /* CONFIG_ADB_CUDA */
#ifdef CONFIG_ADB_PMU #ifdef CONFIG_ADB_PMU
static __u8 pmu_read_pram(int offset) static unsigned char pmu_pram_read_byte(int offset)
{ {
struct adb_request req; struct adb_request req;
if (pmu_request(&req, NULL, 3, PMU_READ_NVRAM, if (pmu_request(&req, NULL, 3, PMU_READ_XPRAM,
(offset >> 8) & 0xFF, offset & 0xFF) < 0) offset & 0xFF, 1) < 0)
return 0; return 0;
while (!req.complete) pmu_wait_complete(&req);
pmu_poll();
return req.reply[3]; return req.reply[0];
} }
static void pmu_write_pram(int offset, __u8 data) static void pmu_pram_write_byte(unsigned char data, int offset)
{ {
struct adb_request req; struct adb_request req;
if (pmu_request(&req, NULL, 4, PMU_WRITE_NVRAM, if (pmu_request(&req, NULL, 4, PMU_WRITE_XPRAM,
(offset >> 8) & 0xFF, offset & 0xFF, data) < 0) offset & 0xFF, 1, data) < 0)
return; return;
while (!req.complete) pmu_wait_complete(&req);
pmu_poll();
} }
#endif /* CONFIG_ADB_PMU */ #endif /* CONFIG_ADB_PMU */
#endif /* CONFIG_NVRAM */
/* /*
* VIA PRAM/RTC access routines * VIA PRAM/RTC access routines
@ -93,7 +94,7 @@ static void pmu_write_pram(int offset, __u8 data)
* the RTC should be enabled. * the RTC should be enabled.
*/ */
static __u8 via_pram_readbyte(void) static __u8 via_rtc_recv(void)
{ {
int i, reg; int i, reg;
__u8 data; __u8 data;
@ -120,7 +121,7 @@ static __u8 via_pram_readbyte(void)
return data; return data;
} }
static void via_pram_writebyte(__u8 data) static void via_rtc_send(__u8 data)
{ {
int i, reg, bit; int i, reg, bit;
@ -136,6 +137,31 @@ static void via_pram_writebyte(__u8 data)
} }
} }
/*
* These values can be found in Inside Macintosh vol. III ch. 2
* which has a description of the RTC chip in the original Mac.
*/
#define RTC_FLG_READ BIT(7)
#define RTC_FLG_WRITE_PROTECT BIT(7)
#define RTC_CMD_READ(r) (RTC_FLG_READ | (r << 2))
#define RTC_CMD_WRITE(r) (r << 2)
#define RTC_REG_SECONDS_0 0
#define RTC_REG_SECONDS_1 1
#define RTC_REG_SECONDS_2 2
#define RTC_REG_SECONDS_3 3
#define RTC_REG_WRITE_PROTECT 13
/*
* Inside Mac has no information about two-byte RTC commands but
* the MAME/MESS source code has the essentials.
*/
#define RTC_REG_XPRAM 14
#define RTC_CMD_XPRAM_READ (RTC_CMD_READ(RTC_REG_XPRAM) << 8)
#define RTC_CMD_XPRAM_WRITE (RTC_CMD_WRITE(RTC_REG_XPRAM) << 8)
#define RTC_CMD_XPRAM_ARG(a) (((a & 0xE0) << 3) | ((a & 0x1F) << 2))
/* /*
* Execute a VIA PRAM/RTC command. For read commands * Execute a VIA PRAM/RTC command. For read commands
* data should point to a one-byte buffer for the * data should point to a one-byte buffer for the
@ -145,29 +171,33 @@ static void via_pram_writebyte(__u8 data)
* This function disables all interrupts while running. * This function disables all interrupts while running.
*/ */
static void via_pram_command(int command, __u8 *data) static void via_rtc_command(int command, __u8 *data)
{ {
unsigned long flags; unsigned long flags;
int is_read; int is_read;
local_irq_save(flags); local_irq_save(flags);
/* The least significant bits must be 0b01 according to Inside Mac */
command = (command & ~3) | 1;
/* Enable the RTC and make sure the strobe line is high */ /* Enable the RTC and make sure the strobe line is high */
via1[vBufB] = (via1[vBufB] | VIA1B_vRTCClk) & ~VIA1B_vRTCEnb; via1[vBufB] = (via1[vBufB] | VIA1B_vRTCClk) & ~VIA1B_vRTCEnb;
if (command & 0xFF00) { /* extended (two-byte) command */ if (command & 0xFF00) { /* extended (two-byte) command */
via_pram_writebyte((command & 0xFF00) >> 8); via_rtc_send((command & 0xFF00) >> 8);
via_pram_writebyte(command & 0xFF); via_rtc_send(command & 0xFF);
is_read = command & 0x8000; is_read = command & (RTC_FLG_READ << 8);
} else { /* one-byte command */ } else { /* one-byte command */
via_pram_writebyte(command); via_rtc_send(command);
is_read = command & 0x80; is_read = command & RTC_FLG_READ;
} }
if (is_read) { if (is_read) {
*data = via_pram_readbyte(); *data = via_rtc_recv();
} else { } else {
via_pram_writebyte(*data); via_rtc_send(*data);
} }
/* All done, disable the RTC */ /* All done, disable the RTC */
@ -177,14 +207,30 @@ static void via_pram_command(int command, __u8 *data)
local_irq_restore(flags); local_irq_restore(flags);
} }
static __u8 via_read_pram(int offset) #if IS_ENABLED(CONFIG_NVRAM)
static unsigned char via_pram_read_byte(int offset)
{ {
return 0; unsigned char temp;
via_rtc_command(RTC_CMD_XPRAM_READ | RTC_CMD_XPRAM_ARG(offset), &temp);
return temp;
} }
static void via_write_pram(int offset, __u8 data) static void via_pram_write_byte(unsigned char data, int offset)
{ {
unsigned char temp;
temp = 0x55;
via_rtc_command(RTC_CMD_WRITE(RTC_REG_WRITE_PROTECT), &temp);
temp = data;
via_rtc_command(RTC_CMD_XPRAM_WRITE | RTC_CMD_XPRAM_ARG(offset), &temp);
temp = 0x55 | RTC_FLG_WRITE_PROTECT;
via_rtc_command(RTC_CMD_WRITE(RTC_REG_WRITE_PROTECT), &temp);
} }
#endif /* CONFIG_NVRAM */
/* /*
* Return the current time in seconds since January 1, 1904. * Return the current time in seconds since January 1, 1904.
@ -201,10 +247,10 @@ static time64_t via_read_time(void)
} result, last_result; } result, last_result;
int count = 1; int count = 1;
via_pram_command(0x81, &last_result.cdata[3]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_0), &last_result.cdata[3]);
via_pram_command(0x85, &last_result.cdata[2]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_1), &last_result.cdata[2]);
via_pram_command(0x89, &last_result.cdata[1]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_2), &last_result.cdata[1]);
via_pram_command(0x8D, &last_result.cdata[0]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_3), &last_result.cdata[0]);
/* /*
* The NetBSD guys say to loop until you get the same reading * The NetBSD guys say to loop until you get the same reading
@ -212,10 +258,14 @@ static time64_t via_read_time(void)
*/ */
while (1) { while (1) {
via_pram_command(0x81, &result.cdata[3]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_0),
via_pram_command(0x85, &result.cdata[2]); &result.cdata[3]);
via_pram_command(0x89, &result.cdata[1]); via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_1),
via_pram_command(0x8D, &result.cdata[0]); &result.cdata[2]);
via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_2),
&result.cdata[1]);
via_rtc_command(RTC_CMD_READ(RTC_REG_SECONDS_3),
&result.cdata[0]);
if (result.idata == last_result.idata) if (result.idata == last_result.idata)
return (time64_t)result.idata - RTC_OFFSET; return (time64_t)result.idata - RTC_OFFSET;
@ -254,18 +304,18 @@ static void via_set_rtc_time(struct rtc_time *tm)
/* Clear the write protect bit */ /* Clear the write protect bit */
temp = 0x55; temp = 0x55;
via_pram_command(0x35, &temp); via_rtc_command(RTC_CMD_WRITE(RTC_REG_WRITE_PROTECT), &temp);
data.idata = lower_32_bits(time + RTC_OFFSET); data.idata = lower_32_bits(time + RTC_OFFSET);
via_pram_command(0x01, &data.cdata[3]); via_rtc_command(RTC_CMD_WRITE(RTC_REG_SECONDS_0), &data.cdata[3]);
via_pram_command(0x05, &data.cdata[2]); via_rtc_command(RTC_CMD_WRITE(RTC_REG_SECONDS_1), &data.cdata[2]);
via_pram_command(0x09, &data.cdata[1]); via_rtc_command(RTC_CMD_WRITE(RTC_REG_SECONDS_2), &data.cdata[1]);
via_pram_command(0x0D, &data.cdata[0]); via_rtc_command(RTC_CMD_WRITE(RTC_REG_SECONDS_3), &data.cdata[0]);
/* Set the write protect bit */ /* Set the write protect bit */
temp = 0xD5; temp = 0x55 | RTC_FLG_WRITE_PROTECT;
via_pram_command(0x35, &temp); via_rtc_command(RTC_CMD_WRITE(RTC_REG_WRITE_PROTECT), &temp);
} }
static void via_shutdown(void) static void via_shutdown(void)
@ -326,66 +376,58 @@ static void cuda_shutdown(void)
*------------------------------------------------------------------- *-------------------------------------------------------------------
*/ */
void mac_pram_read(int offset, __u8 *buffer, int len) #if IS_ENABLED(CONFIG_NVRAM)
unsigned char mac_pram_read_byte(int addr)
{ {
__u8 (*func)(int);
int i;
switch (macintosh_config->adb_type) { switch (macintosh_config->adb_type) {
case MAC_ADB_IOP: case MAC_ADB_IOP:
case MAC_ADB_II: case MAC_ADB_II:
case MAC_ADB_PB1: case MAC_ADB_PB1:
func = via_read_pram; return via_pram_read_byte(addr);
break;
#ifdef CONFIG_ADB_CUDA #ifdef CONFIG_ADB_CUDA
case MAC_ADB_EGRET: case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
func = cuda_read_pram; return cuda_pram_read_byte(addr);
break;
#endif #endif
#ifdef CONFIG_ADB_PMU #ifdef CONFIG_ADB_PMU
case MAC_ADB_PB2: case MAC_ADB_PB2:
func = pmu_read_pram; return pmu_pram_read_byte(addr);
break;
#endif #endif
default: default:
return; return 0xFF;
}
for (i = 0 ; i < len ; i++) {
buffer[i] = (*func)(offset++);
} }
} }
void mac_pram_write(int offset, __u8 *buffer, int len) void mac_pram_write_byte(unsigned char val, int addr)
{ {
void (*func)(int, __u8);
int i;
switch (macintosh_config->adb_type) { switch (macintosh_config->adb_type) {
case MAC_ADB_IOP: case MAC_ADB_IOP:
case MAC_ADB_II: case MAC_ADB_II:
case MAC_ADB_PB1: case MAC_ADB_PB1:
func = via_write_pram; via_pram_write_byte(val, addr);
break; break;
#ifdef CONFIG_ADB_CUDA #ifdef CONFIG_ADB_CUDA
case MAC_ADB_EGRET: case MAC_ADB_EGRET:
case MAC_ADB_CUDA: case MAC_ADB_CUDA:
func = cuda_write_pram; cuda_pram_write_byte(val, addr);
break; break;
#endif #endif
#ifdef CONFIG_ADB_PMU #ifdef CONFIG_ADB_PMU
case MAC_ADB_PB2: case MAC_ADB_PB2:
func = pmu_write_pram; pmu_pram_write_byte(val, addr);
break; break;
#endif #endif
default: default:
return; break;
}
for (i = 0 ; i < len ; i++) {
(*func)(offset++, buffer[i]);
} }
} }
ssize_t mac_pram_get_size(void)
{
return 256;
}
#endif /* CONFIG_NVRAM */
void mac_poweroff(void) void mac_poweroff(void)
{ {
if (oss_present) { if (oss_present) {


@ -311,6 +311,15 @@ extern void outsl (unsigned long port, const void *src, unsigned long count);
  * value for either 32 or 64 bit mode */
 #define F_EXTEND(x) ((unsigned long)((x) | (0xffffffff00000000ULL)))
+#define ioread64 ioread64
+#define ioread64be ioread64be
+#define iowrite64 iowrite64
+#define iowrite64be iowrite64be
+extern u64 ioread64(void __iomem *addr);
+extern u64 ioread64be(void __iomem *addr);
+extern void iowrite64(u64 val, void __iomem *addr);
+extern void iowrite64be(u64 val, void __iomem *addr);
 #include <asm-generic/iomap.h>
 /*


@ -48,11 +48,15 @@ struct iomap_ops {
 unsigned int (*read16be)(void __iomem *);
 unsigned int (*read32)(void __iomem *);
 unsigned int (*read32be)(void __iomem *);
+u64 (*read64)(void __iomem *);
+u64 (*read64be)(void __iomem *);
 void (*write8)(u8, void __iomem *);
 void (*write16)(u16, void __iomem *);
 void (*write16be)(u16, void __iomem *);
 void (*write32)(u32, void __iomem *);
 void (*write32be)(u32, void __iomem *);
+void (*write64)(u64, void __iomem *);
+void (*write64be)(u64, void __iomem *);
 void (*read8r)(void __iomem *, void *, unsigned long);
 void (*read16r)(void __iomem *, void *, unsigned long);
 void (*read32r)(void __iomem *, void *, unsigned long);
@ -171,6 +175,16 @@ static unsigned int iomem_read32be(void __iomem *addr)
return __raw_readl(addr); return __raw_readl(addr);
} }
static u64 iomem_read64(void __iomem *addr)
{
return readq(addr);
}
static u64 iomem_read64be(void __iomem *addr)
{
return __raw_readq(addr);
}
static void iomem_write8(u8 datum, void __iomem *addr) static void iomem_write8(u8 datum, void __iomem *addr)
{ {
writeb(datum, addr); writeb(datum, addr);
@ -196,6 +210,16 @@ static void iomem_write32be(u32 datum, void __iomem *addr)
__raw_writel(datum, addr); __raw_writel(datum, addr);
} }
static void iomem_write64(u64 datum, void __iomem *addr)
{
writeq(datum, addr);
}
static void iomem_write64be(u64 datum, void __iomem *addr)
{
__raw_writeq(datum, addr);
}
static void iomem_read8r(void __iomem *addr, void *dst, unsigned long count) static void iomem_read8r(void __iomem *addr, void *dst, unsigned long count)
{ {
while (count--) { while (count--) {
@ -250,11 +274,15 @@ static const struct iomap_ops iomem_ops = {
.read16be = iomem_read16be, .read16be = iomem_read16be,
.read32 = iomem_read32, .read32 = iomem_read32,
.read32be = iomem_read32be, .read32be = iomem_read32be,
.read64 = iomem_read64,
.read64be = iomem_read64be,
.write8 = iomem_write8, .write8 = iomem_write8,
.write16 = iomem_write16, .write16 = iomem_write16,
.write16be = iomem_write16be, .write16be = iomem_write16be,
.write32 = iomem_write32, .write32 = iomem_write32,
.write32be = iomem_write32be, .write32be = iomem_write32be,
.write64 = iomem_write64,
.write64be = iomem_write64be,
.read8r = iomem_read8r, .read8r = iomem_read8r,
.read16r = iomem_read16r, .read16r = iomem_read16r,
.read32r = iomem_read32r, .read32r = iomem_read32r,
@ -304,6 +332,20 @@ unsigned int ioread32be(void __iomem *addr)
return *((u32 *)addr); return *((u32 *)addr);
} }
u64 ioread64(void __iomem *addr)
{
if (unlikely(INDIRECT_ADDR(addr)))
return iomap_ops[ADDR_TO_REGION(addr)]->read64(addr);
return le64_to_cpup((u64 *)addr);
}
u64 ioread64be(void __iomem *addr)
{
if (unlikely(INDIRECT_ADDR(addr)))
return iomap_ops[ADDR_TO_REGION(addr)]->read64be(addr);
return *((u64 *)addr);
}
void iowrite8(u8 datum, void __iomem *addr) void iowrite8(u8 datum, void __iomem *addr)
{ {
if (unlikely(INDIRECT_ADDR(addr))) { if (unlikely(INDIRECT_ADDR(addr))) {
@ -349,6 +391,24 @@ void iowrite32be(u32 datum, void __iomem *addr)
} }
} }
void iowrite64(u64 datum, void __iomem *addr)
{
if (unlikely(INDIRECT_ADDR(addr))) {
iomap_ops[ADDR_TO_REGION(addr)]->write64(datum, addr);
} else {
*((u64 *)addr) = cpu_to_le64(datum);
}
}
void iowrite64be(u64 datum, void __iomem *addr)
{
if (unlikely(INDIRECT_ADDR(addr))) {
iomap_ops[ADDR_TO_REGION(addr)]->write64be(datum, addr);
} else {
*((u64 *)addr) = datum;
}
}
/* Repeating interfaces */ /* Repeating interfaces */
void ioread8_rep(void __iomem *addr, void *dst, unsigned long count) void ioread8_rep(void __iomem *addr, void *dst, unsigned long count)
@ -449,11 +509,15 @@ EXPORT_SYMBOL(ioread16);
EXPORT_SYMBOL(ioread16be); EXPORT_SYMBOL(ioread16be);
EXPORT_SYMBOL(ioread32); EXPORT_SYMBOL(ioread32);
EXPORT_SYMBOL(ioread32be); EXPORT_SYMBOL(ioread32be);
EXPORT_SYMBOL(ioread64);
EXPORT_SYMBOL(ioread64be);
EXPORT_SYMBOL(iowrite8); EXPORT_SYMBOL(iowrite8);
EXPORT_SYMBOL(iowrite16); EXPORT_SYMBOL(iowrite16);
EXPORT_SYMBOL(iowrite16be); EXPORT_SYMBOL(iowrite16be);
EXPORT_SYMBOL(iowrite32); EXPORT_SYMBOL(iowrite32);
EXPORT_SYMBOL(iowrite32be); EXPORT_SYMBOL(iowrite32be);
EXPORT_SYMBOL(iowrite64);
EXPORT_SYMBOL(iowrite64be);
EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread8_rep);
EXPORT_SYMBOL(ioread16_rep); EXPORT_SYMBOL(ioread16_rep);
EXPORT_SYMBOL(ioread32_rep); EXPORT_SYMBOL(ioread32_rep);


@ -179,6 +179,7 @@ config PPC
select HAVE_ARCH_KGDB select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
select HAVE_ARCH_NVRAM_OPS
select HAVE_ARCH_SECCOMP_FILTER select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK select HAVE_ARCH_TRACEHOOK
select HAVE_CBPF_JIT if !PPC64 select HAVE_CBPF_JIT if !PPC64
@ -275,11 +276,6 @@ config SYSVIPC_COMPAT
depends on COMPAT && SYSVIPC depends on COMPAT && SYSVIPC
default y default y
# All PPC32s use generic nvram driver through ppc_md
config GENERIC_NVRAM
bool
default y if PPC32
config SCHED_OMIT_FRAME_POINTER config SCHED_OMIT_FRAME_POINTER
bool bool
default y default y


@ -783,8 +783,10 @@ extern void __iounmap_at(void *ea, unsigned long size);
#define mmio_read16be(addr) readw_be(addr) #define mmio_read16be(addr) readw_be(addr)
#define mmio_read32be(addr) readl_be(addr) #define mmio_read32be(addr) readl_be(addr)
#define mmio_read64be(addr) readq_be(addr)
#define mmio_write16be(val, addr) writew_be(val, addr) #define mmio_write16be(val, addr) writew_be(val, addr)
#define mmio_write32be(val, addr) writel_be(val, addr) #define mmio_write32be(val, addr) writel_be(val, addr)
#define mmio_write64be(val, addr) writeq_be(val, addr)
#define mmio_insb(addr, dst, count) readsb(addr, dst, count) #define mmio_insb(addr, dst, count) readsb(addr, dst, count)
#define mmio_insw(addr, dst, count) readsw(addr, dst, count) #define mmio_insw(addr, dst, count) readsw(addr, dst, count)
#define mmio_insl(addr, dst, count) readsl(addr, dst, count) #define mmio_insl(addr, dst, count) readsl(addr, dst, count)


@ -78,9 +78,6 @@ extern int pmac_get_partition(int partition);
extern u8 pmac_xpram_read(int xpaddr); extern u8 pmac_xpram_read(int xpaddr);
extern void pmac_xpram_write(int xpaddr, u8 data); extern void pmac_xpram_write(int xpaddr, u8 data);
/* Synchronize NVRAM */
extern void nvram_sync(void);
/* Initialize NVRAM OS partition */ /* Initialize NVRAM OS partition */
extern int __init nvram_init_os_partition(struct nvram_os_partition *part); extern int __init nvram_init_os_partition(struct nvram_os_partition *part);
@ -98,10 +95,4 @@ extern int nvram_write_os_partition(struct nvram_os_partition *part,
unsigned int err_type, unsigned int err_type,
unsigned int error_log_cnt); unsigned int error_log_cnt);
/* Determine NVRAM size */
extern ssize_t nvram_get_size(void);
/* Normal access to NVRAM */
extern unsigned char nvram_read_byte(int i);
extern void nvram_write_byte(unsigned char c, int i);
#endif /* _ASM_POWERPC_NVRAM_H */ #endif /* _ASM_POWERPC_NVRAM_H */


@ -7,12 +7,6 @@
* 2 of the License, or (at your option) any later version. * 2 of the License, or (at your option) any later version.
* *
* /dev/nvram driver for PPC64 * /dev/nvram driver for PPC64
*
* This perhaps should live in drivers/char
*
* TODO: Split the /dev/nvram part (that one can use
* drivers/char/generic_nvram.c) from the arch & partition
* parsing code.
*/ */
#include <linux/types.h> #include <linux/types.h>
@ -714,137 +708,6 @@ static void oops_to_nvram(struct kmsg_dumper *dumper,
spin_unlock_irqrestore(&lock, flags); spin_unlock_irqrestore(&lock, flags);
} }
static loff_t dev_nvram_llseek(struct file *file, loff_t offset, int origin)
{
if (ppc_md.nvram_size == NULL)
return -ENODEV;
return generic_file_llseek_size(file, offset, origin, MAX_LFS_FILESIZE,
ppc_md.nvram_size());
}
static ssize_t dev_nvram_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
ssize_t ret;
char *tmp = NULL;
ssize_t size;
if (!ppc_md.nvram_size) {
ret = -ENODEV;
goto out;
}
size = ppc_md.nvram_size();
if (size < 0) {
ret = size;
goto out;
}
if (*ppos >= size) {
ret = 0;
goto out;
}
count = min_t(size_t, count, size - *ppos);
count = min(count, PAGE_SIZE);
tmp = kmalloc(count, GFP_KERNEL);
if (!tmp) {
ret = -ENOMEM;
goto out;
}
ret = ppc_md.nvram_read(tmp, count, ppos);
if (ret <= 0)
goto out;
if (copy_to_user(buf, tmp, ret))
ret = -EFAULT;
out:
kfree(tmp);
return ret;
}
static ssize_t dev_nvram_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
ssize_t ret;
char *tmp = NULL;
ssize_t size;
ret = -ENODEV;
if (!ppc_md.nvram_size)
goto out;
ret = 0;
size = ppc_md.nvram_size();
if (*ppos >= size || size < 0)
goto out;
count = min_t(size_t, count, size - *ppos);
count = min(count, PAGE_SIZE);
tmp = memdup_user(buf, count);
if (IS_ERR(tmp)) {
ret = PTR_ERR(tmp);
goto out;
}
ret = ppc_md.nvram_write(tmp, count, ppos);
kfree(tmp);
out:
return ret;
}
static long dev_nvram_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
switch(cmd) {
#ifdef CONFIG_PPC_PMAC
case OBSOLETE_PMAC_NVRAM_GET_OFFSET:
printk(KERN_WARNING "nvram: Using obsolete PMAC_NVRAM_GET_OFFSET ioctl\n");
/* fall through */
case IOC_NVRAM_GET_OFFSET: {
int part, offset;
if (!machine_is(powermac))
return -EINVAL;
if (copy_from_user(&part, (void __user*)arg, sizeof(part)) != 0)
return -EFAULT;
if (part < pmac_nvram_OF || part > pmac_nvram_NR)
return -EINVAL;
offset = pmac_get_partition(part);
if (offset < 0)
return offset;
if (copy_to_user((void __user*)arg, &offset, sizeof(offset)) != 0)
return -EFAULT;
return 0;
}
#endif /* CONFIG_PPC_PMAC */
default:
return -EINVAL;
}
}
static const struct file_operations nvram_fops = {
.owner = THIS_MODULE,
.llseek = dev_nvram_llseek,
.read = dev_nvram_read,
.write = dev_nvram_write,
.unlocked_ioctl = dev_nvram_ioctl,
};
static struct miscdevice nvram_dev = {
NVRAM_MINOR,
"nvram",
&nvram_fops
};
#ifdef DEBUG_NVRAM #ifdef DEBUG_NVRAM
static void __init nvram_print_partitions(char * label) static void __init nvram_print_partitions(char * label)
{ {
@ -992,6 +855,8 @@ loff_t __init nvram_create_partition(const char *name, int sig,
long size = 0; long size = 0;
int rc; int rc;
BUILD_BUG_ON(NVRAM_BLOCK_LEN != 16);
/* Convert sizes from bytes to blocks */ /* Convert sizes from bytes to blocks */
req_size = _ALIGN_UP(req_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN; req_size = _ALIGN_UP(req_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN;
min_size = _ALIGN_UP(min_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN; min_size = _ALIGN_UP(min_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN;
@ -1192,22 +1057,3 @@ int __init nvram_scan_partitions(void)
kfree(header); kfree(header);
return err; return err;
} }
static int __init nvram_init(void)
{
int rc;
BUILD_BUG_ON(NVRAM_BLOCK_LEN != 16);
if (ppc_md.nvram_size == NULL || ppc_md.nvram_size() <= 0)
return -ENODEV;
rc = misc_register(&nvram_dev);
if (rc != 0) {
printk(KERN_ERR "nvram_init: failed to register device\n");
return rc;
}
return rc;
}
device_initcall(nvram_init);


@ -17,6 +17,7 @@
#include <linux/console.h> #include <linux/console.h>
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/nvram.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/prom.h> #include <asm/prom.h>
@ -147,41 +148,6 @@ static int __init ppc_setup_l3cr(char *str)
} }
__setup("l3cr=", ppc_setup_l3cr); __setup("l3cr=", ppc_setup_l3cr);
#ifdef CONFIG_GENERIC_NVRAM
/* Generic nvram hooks used by drivers/char/gen_nvram.c */
unsigned char nvram_read_byte(int addr)
{
if (ppc_md.nvram_read_val)
return ppc_md.nvram_read_val(addr);
return 0xff;
}
EXPORT_SYMBOL(nvram_read_byte);
void nvram_write_byte(unsigned char val, int addr)
{
if (ppc_md.nvram_write_val)
ppc_md.nvram_write_val(addr, val);
}
EXPORT_SYMBOL(nvram_write_byte);
ssize_t nvram_get_size(void)
{
if (ppc_md.nvram_size)
return ppc_md.nvram_size();
return -1;
}
EXPORT_SYMBOL(nvram_get_size);
void nvram_sync(void)
{
if (ppc_md.nvram_sync)
ppc_md.nvram_sync();
}
EXPORT_SYMBOL(nvram_sync);
#endif /* CONFIG_NVRAM */
static int __init ppc_init(void) static int __init ppc_init(void)
{ {
/* clear the progress line */ /* clear the progress line */


@ -1,3 +1,3 @@
obj-y += setup.o time.o pegasos_eth.o pci.o obj-y += setup.o time.o pegasos_eth.o pci.o
obj-$(CONFIG_SMP) += smp.o obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_NVRAM) += nvram.o obj-$(CONFIG_NVRAM:m=y) += nvram.o


@ -24,7 +24,7 @@ static unsigned int nvram_size;
static unsigned char nvram_buf[4]; static unsigned char nvram_buf[4];
static DEFINE_SPINLOCK(nvram_lock); static DEFINE_SPINLOCK(nvram_lock);
static unsigned char chrp_nvram_read(int addr) static unsigned char chrp_nvram_read_val(int addr)
{ {
unsigned int done; unsigned int done;
unsigned long flags; unsigned long flags;
@ -46,7 +46,7 @@ static unsigned char chrp_nvram_read(int addr)
return ret; return ret;
} }
static void chrp_nvram_write(int addr, unsigned char val) static void chrp_nvram_write_val(int addr, unsigned char val)
{ {
unsigned int done; unsigned int done;
unsigned long flags; unsigned long flags;
@ -64,6 +64,11 @@ static void chrp_nvram_write(int addr, unsigned char val)
spin_unlock_irqrestore(&nvram_lock, flags); spin_unlock_irqrestore(&nvram_lock, flags);
} }
static ssize_t chrp_nvram_size(void)
{
return nvram_size;
}
void __init chrp_nvram_init(void) void __init chrp_nvram_init(void)
{ {
struct device_node *nvram; struct device_node *nvram;
@ -85,8 +90,9 @@ void __init chrp_nvram_init(void)
printk(KERN_INFO "CHRP nvram contains %u bytes\n", nvram_size); printk(KERN_INFO "CHRP nvram contains %u bytes\n", nvram_size);
of_node_put(nvram); of_node_put(nvram);
ppc_md.nvram_read_val = chrp_nvram_read; ppc_md.nvram_read_val = chrp_nvram_read_val;
ppc_md.nvram_write_val = chrp_nvram_write; ppc_md.nvram_write_val = chrp_nvram_write_val;
ppc_md.nvram_size = chrp_nvram_size;
return; return;
} }


@ -549,7 +549,7 @@ static void __init chrp_init_IRQ(void)
static void __init static void __init
chrp_init2(void) chrp_init2(void)
{ {
#ifdef CONFIG_NVRAM #if IS_ENABLED(CONFIG_NVRAM)
chrp_nvram_init(); chrp_nvram_init();
#endif #endif


@ -15,7 +15,5 @@ obj-$(CONFIG_PMAC_BACKLIGHT) += backlight.o
# need this to be a bool. Cheat here and pretend CONFIG_NVRAM=m is really # need this to be a bool. Cheat here and pretend CONFIG_NVRAM=m is really
# CONFIG_NVRAM=y # CONFIG_NVRAM=y
obj-$(CONFIG_NVRAM:m=y) += nvram.o obj-$(CONFIG_NVRAM:m=y) += nvram.o
# ppc64 pmac doesn't define CONFIG_NVRAM but needs nvram stuff
obj-$(CONFIG_PPC64) += nvram.o
obj-$(CONFIG_PPC32) += bootx_init.o obj-$(CONFIG_PPC32) += bootx_init.o
obj-$(CONFIG_SMP) += smp.o obj-$(CONFIG_SMP) += smp.o


@ -147,6 +147,11 @@ static ssize_t core99_nvram_size(void)
static volatile unsigned char __iomem *nvram_addr; static volatile unsigned char __iomem *nvram_addr;
static int nvram_mult; static int nvram_mult;
static ssize_t ppc32_nvram_size(void)
{
return NVRAM_SIZE;
}
static unsigned char direct_nvram_read_byte(int addr) static unsigned char direct_nvram_read_byte(int addr)
{ {
return in_8(&nvram_data[(addr & (NVRAM_SIZE - 1)) * nvram_mult]); return in_8(&nvram_data[(addr & (NVRAM_SIZE - 1)) * nvram_mult]);
@ -590,21 +595,25 @@ int __init pmac_nvram_init(void)
nvram_mult = 1; nvram_mult = 1;
ppc_md.nvram_read_val = direct_nvram_read_byte; ppc_md.nvram_read_val = direct_nvram_read_byte;
ppc_md.nvram_write_val = direct_nvram_write_byte; ppc_md.nvram_write_val = direct_nvram_write_byte;
ppc_md.nvram_size = ppc32_nvram_size;
} else if (nvram_naddrs == 1) { } else if (nvram_naddrs == 1) {
nvram_data = ioremap(r1.start, s1); nvram_data = ioremap(r1.start, s1);
nvram_mult = (s1 + NVRAM_SIZE - 1) / NVRAM_SIZE; nvram_mult = (s1 + NVRAM_SIZE - 1) / NVRAM_SIZE;
ppc_md.nvram_read_val = direct_nvram_read_byte; ppc_md.nvram_read_val = direct_nvram_read_byte;
ppc_md.nvram_write_val = direct_nvram_write_byte; ppc_md.nvram_write_val = direct_nvram_write_byte;
ppc_md.nvram_size = ppc32_nvram_size;
} else if (nvram_naddrs == 2) { } else if (nvram_naddrs == 2) {
nvram_addr = ioremap(r1.start, s1); nvram_addr = ioremap(r1.start, s1);
nvram_data = ioremap(r2.start, s2); nvram_data = ioremap(r2.start, s2);
ppc_md.nvram_read_val = indirect_nvram_read_byte; ppc_md.nvram_read_val = indirect_nvram_read_byte;
ppc_md.nvram_write_val = indirect_nvram_write_byte; ppc_md.nvram_write_val = indirect_nvram_write_byte;
ppc_md.nvram_size = ppc32_nvram_size;
} else if (nvram_naddrs == 0 && sys_ctrler == SYS_CTRLER_PMU) { } else if (nvram_naddrs == 0 && sys_ctrler == SYS_CTRLER_PMU) {
#ifdef CONFIG_ADB_PMU #ifdef CONFIG_ADB_PMU
nvram_naddrs = -1; nvram_naddrs = -1;
ppc_md.nvram_read_val = pmu_nvram_read_byte; ppc_md.nvram_read_val = pmu_nvram_read_byte;
ppc_md.nvram_write_val = pmu_nvram_write_byte; ppc_md.nvram_write_val = pmu_nvram_write_byte;
ppc_md.nvram_size = ppc32_nvram_size;
#endif /* CONFIG_ADB_PMU */ #endif /* CONFIG_ADB_PMU */
} else { } else {
printk(KERN_ERR "Incompatible type of NVRAM\n"); printk(KERN_ERR "Incompatible type of NVRAM\n");


@ -316,8 +316,7 @@ static void __init pmac_setup_arch(void)
find_via_pmu(); find_via_pmu();
smu_init(); smu_init();
#if defined(CONFIG_NVRAM) || defined(CONFIG_NVRAM_MODULE) || \ #if IS_ENABLED(CONFIG_NVRAM)
defined(CONFIG_PPC64)
pmac_nvram_init(); pmac_nvram_init();
#endif #endif
#ifdef CONFIG_PPC32 #ifdef CONFIG_PPC32


@ -68,7 +68,7 @@
long __init pmac_time_init(void) long __init pmac_time_init(void)
{ {
s32 delta = 0; s32 delta = 0;
#ifdef CONFIG_NVRAM #if defined(CONFIG_NVRAM) && defined(CONFIG_PPC32)
int dst; int dst;
delta = ((s32)pmac_xpram_read(PMAC_XPRAM_MACHINE_LOC + 0x9)) << 16; delta = ((s32)pmac_xpram_read(PMAC_XPRAM_MACHINE_LOC + 0x9)) << 16;


@ -7,8 +7,6 @@
* 2 of the License, or (at your option) any later version. * 2 of the License, or (at your option) any later version.
* *
* /dev/nvram driver for PPC64 * /dev/nvram driver for PPC64
*
* This perhaps should live in drivers/char
*/ */


@ -228,4 +228,6 @@ source "drivers/siox/Kconfig"
source "drivers/slimbus/Kconfig" source "drivers/slimbus/Kconfig"
source "drivers/interconnect/Kconfig"
endmenu endmenu


@ -186,3 +186,4 @@ obj-$(CONFIG_MULTIPLEXER) += mux/
obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/ obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/
obj-$(CONFIG_SIOX) += siox/ obj-$(CONFIG_SIOX) += siox/
obj-$(CONFIG_GNSS) += gnss/ obj-$(CONFIG_GNSS) += gnss/
obj-$(CONFIG_INTERCONNECT) += interconnect/


@ -10,7 +10,7 @@ if ANDROID
config ANDROID_BINDER_IPC config ANDROID_BINDER_IPC
bool "Android Binder IPC Driver" bool "Android Binder IPC Driver"
depends on MMU && !CPU_CACHE_VIVT depends on MMU
default n default n
---help--- ---help---
Binder is used in Android for both communication between processes, Binder is used in Android for both communication between processes,

File diff suppressed because it is too large.


@ -29,6 +29,8 @@
#include <linux/list_lru.h> #include <linux/list_lru.h>
#include <linux/ratelimit.h> #include <linux/ratelimit.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <linux/uaccess.h>
#include <linux/highmem.h>
#include "binder_alloc.h" #include "binder_alloc.h"
#include "binder_trace.h" #include "binder_trace.h"
@ -67,9 +69,8 @@ static size_t binder_alloc_buffer_size(struct binder_alloc *alloc,
struct binder_buffer *buffer) struct binder_buffer *buffer)
{ {
if (list_is_last(&buffer->entry, &alloc->buffers)) if (list_is_last(&buffer->entry, &alloc->buffers))
return (u8 *)alloc->buffer + return alloc->buffer + alloc->buffer_size - buffer->user_data;
alloc->buffer_size - (u8 *)buffer->data; return binder_buffer_next(buffer)->user_data - buffer->user_data;
return (u8 *)binder_buffer_next(buffer)->data - (u8 *)buffer->data;
} }
static void binder_insert_free_buffer(struct binder_alloc *alloc, static void binder_insert_free_buffer(struct binder_alloc *alloc,
@ -119,9 +120,9 @@ static void binder_insert_allocated_buffer_locked(
buffer = rb_entry(parent, struct binder_buffer, rb_node); buffer = rb_entry(parent, struct binder_buffer, rb_node);
BUG_ON(buffer->free); BUG_ON(buffer->free);
if (new_buffer->data < buffer->data) if (new_buffer->user_data < buffer->user_data)
p = &parent->rb_left; p = &parent->rb_left;
else if (new_buffer->data > buffer->data) else if (new_buffer->user_data > buffer->user_data)
p = &parent->rb_right; p = &parent->rb_right;
else else
BUG(); BUG();
@ -136,17 +137,17 @@ static struct binder_buffer *binder_alloc_prepare_to_free_locked(
{ {
struct rb_node *n = alloc->allocated_buffers.rb_node; struct rb_node *n = alloc->allocated_buffers.rb_node;
struct binder_buffer *buffer; struct binder_buffer *buffer;
void *kern_ptr; void __user *uptr;
kern_ptr = (void *)(user_ptr - alloc->user_buffer_offset); uptr = (void __user *)user_ptr;
while (n) { while (n) {
buffer = rb_entry(n, struct binder_buffer, rb_node); buffer = rb_entry(n, struct binder_buffer, rb_node);
BUG_ON(buffer->free); BUG_ON(buffer->free);
if (kern_ptr < buffer->data) if (uptr < buffer->user_data)
n = n->rb_left; n = n->rb_left;
else if (kern_ptr > buffer->data) else if (uptr > buffer->user_data)
n = n->rb_right; n = n->rb_right;
else { else {
/* /*
@ -186,9 +187,9 @@ struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
} }
static int binder_update_page_range(struct binder_alloc *alloc, int allocate, static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
void *start, void *end) void __user *start, void __user *end)
{ {
void *page_addr; void __user *page_addr;
unsigned long user_page_addr; unsigned long user_page_addr;
struct binder_lru_page *page; struct binder_lru_page *page;
struct vm_area_struct *vma = NULL; struct vm_area_struct *vma = NULL;
@ -263,18 +264,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
page->alloc = alloc; page->alloc = alloc;
INIT_LIST_HEAD(&page->lru); INIT_LIST_HEAD(&page->lru);
ret = map_kernel_range_noflush((unsigned long)page_addr, user_page_addr = (uintptr_t)page_addr;
PAGE_SIZE, PAGE_KERNEL,
&page->page_ptr);
flush_cache_vmap((unsigned long)page_addr,
(unsigned long)page_addr + PAGE_SIZE);
if (ret != 1) {
pr_err("%d: binder_alloc_buf failed to map page at %pK in kernel\n",
alloc->pid, page_addr);
goto err_map_kernel_failed;
}
user_page_addr =
(uintptr_t)page_addr + alloc->user_buffer_offset;
ret = vm_insert_page(vma, user_page_addr, page[0].page_ptr); ret = vm_insert_page(vma, user_page_addr, page[0].page_ptr);
if (ret) { if (ret) {
pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n", pr_err("%d: binder_alloc_buf failed to map page at %lx in userspace\n",
@ -312,8 +302,6 @@ free_range:
continue; continue;
err_vm_insert_page_failed: err_vm_insert_page_failed:
unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
err_map_kernel_failed:
__free_page(page->page_ptr); __free_page(page->page_ptr);
page->page_ptr = NULL; page->page_ptr = NULL;
err_alloc_page_failed: err_alloc_page_failed:
@ -368,8 +356,8 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
struct binder_buffer *buffer; struct binder_buffer *buffer;
size_t buffer_size; size_t buffer_size;
struct rb_node *best_fit = NULL; struct rb_node *best_fit = NULL;
void *has_page_addr; void __user *has_page_addr;
void *end_page_addr; void __user *end_page_addr;
size_t size, data_offsets_size; size_t size, data_offsets_size;
int ret; int ret;
@ -467,15 +455,15 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
"%d: binder_alloc_buf size %zd got buffer %pK size %zd\n", "%d: binder_alloc_buf size %zd got buffer %pK size %zd\n",
alloc->pid, size, buffer, buffer_size); alloc->pid, size, buffer, buffer_size);
has_page_addr = has_page_addr = (void __user *)
(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK); (((uintptr_t)buffer->user_data + buffer_size) & PAGE_MASK);
WARN_ON(n && buffer_size != size); WARN_ON(n && buffer_size != size);
end_page_addr = end_page_addr =
(void *)PAGE_ALIGN((uintptr_t)buffer->data + size); (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
if (end_page_addr > has_page_addr) if (end_page_addr > has_page_addr)
end_page_addr = has_page_addr; end_page_addr = has_page_addr;
ret = binder_update_page_range(alloc, 1, ret = binder_update_page_range(alloc, 1, (void __user *)
(void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr); PAGE_ALIGN((uintptr_t)buffer->user_data), end_page_addr);
if (ret) if (ret)
return ERR_PTR(ret); return ERR_PTR(ret);
@ -488,7 +476,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
__func__, alloc->pid); __func__, alloc->pid);
goto err_alloc_buf_struct_failed; goto err_alloc_buf_struct_failed;
} }
new_buffer->data = (u8 *)buffer->data + size; new_buffer->user_data = (u8 __user *)buffer->user_data + size;
list_add(&new_buffer->entry, &buffer->entry); list_add(&new_buffer->entry, &buffer->entry);
new_buffer->free = 1; new_buffer->free = 1;
binder_insert_free_buffer(alloc, new_buffer); binder_insert_free_buffer(alloc, new_buffer);
@ -514,8 +502,8 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
return buffer; return buffer;
err_alloc_buf_struct_failed: err_alloc_buf_struct_failed:
binder_update_page_range(alloc, 0, binder_update_page_range(alloc, 0, (void __user *)
(void *)PAGE_ALIGN((uintptr_t)buffer->data), PAGE_ALIGN((uintptr_t)buffer->user_data),
end_page_addr); end_page_addr);
return ERR_PTR(-ENOMEM); return ERR_PTR(-ENOMEM);
} }
@ -550,14 +538,15 @@ struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
return buffer; return buffer;
} }
static void *buffer_start_page(struct binder_buffer *buffer) static void __user *buffer_start_page(struct binder_buffer *buffer)
{ {
return (void *)((uintptr_t)buffer->data & PAGE_MASK); return (void __user *)((uintptr_t)buffer->user_data & PAGE_MASK);
} }
static void *prev_buffer_end_page(struct binder_buffer *buffer) static void __user *prev_buffer_end_page(struct binder_buffer *buffer)
{ {
return (void *)(((uintptr_t)(buffer->data) - 1) & PAGE_MASK); return (void __user *)
(((uintptr_t)(buffer->user_data) - 1) & PAGE_MASK);
} }
static void binder_delete_free_buffer(struct binder_alloc *alloc, static void binder_delete_free_buffer(struct binder_alloc *alloc,
@ -572,7 +561,8 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc,
to_free = false; to_free = false;
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
"%d: merge free, buffer %pK share page with %pK\n", "%d: merge free, buffer %pK share page with %pK\n",
alloc->pid, buffer->data, prev->data); alloc->pid, buffer->user_data,
prev->user_data);
} }
if (!list_is_last(&buffer->entry, &alloc->buffers)) { if (!list_is_last(&buffer->entry, &alloc->buffers)) {
@ -582,23 +572,24 @@ static void binder_delete_free_buffer(struct binder_alloc *alloc,
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
"%d: merge free, buffer %pK share page with %pK\n", "%d: merge free, buffer %pK share page with %pK\n",
alloc->pid, alloc->pid,
buffer->data, buffer->user_data,
next->data); next->user_data);
} }
} }
if (PAGE_ALIGNED(buffer->data)) { if (PAGE_ALIGNED(buffer->user_data)) {
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
"%d: merge free, buffer start %pK is page aligned\n", "%d: merge free, buffer start %pK is page aligned\n",
alloc->pid, buffer->data); alloc->pid, buffer->user_data);
to_free = false; to_free = false;
} }
if (to_free) { if (to_free) {
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
"%d: merge free, buffer %pK do not share page with %pK or %pK\n", "%d: merge free, buffer %pK do not share page with %pK or %pK\n",
alloc->pid, buffer->data, alloc->pid, buffer->user_data,
prev->data, next ? next->data : NULL); prev->user_data,
next ? next->user_data : NULL);
binder_update_page_range(alloc, 0, buffer_start_page(buffer), binder_update_page_range(alloc, 0, buffer_start_page(buffer),
buffer_start_page(buffer) + PAGE_SIZE); buffer_start_page(buffer) + PAGE_SIZE);
} }
@ -624,8 +615,8 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
BUG_ON(buffer->free); BUG_ON(buffer->free);
BUG_ON(size > buffer_size); BUG_ON(size > buffer_size);
BUG_ON(buffer->transaction != NULL); BUG_ON(buffer->transaction != NULL);
BUG_ON(buffer->data < alloc->buffer); BUG_ON(buffer->user_data < alloc->buffer);
BUG_ON(buffer->data > alloc->buffer + alloc->buffer_size); BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
if (buffer->async_transaction) { if (buffer->async_transaction) {
alloc->free_async_space += size + sizeof(struct binder_buffer); alloc->free_async_space += size + sizeof(struct binder_buffer);
@ -636,8 +627,9 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
} }
binder_update_page_range(alloc, 0, binder_update_page_range(alloc, 0,
(void *)PAGE_ALIGN((uintptr_t)buffer->data), (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data),
(void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK)); (void __user *)(((uintptr_t)
buffer->user_data + buffer_size) & PAGE_MASK));
rb_erase(&buffer->rb_node, &alloc->allocated_buffers); rb_erase(&buffer->rb_node, &alloc->allocated_buffers);
buffer->free = 1; buffer->free = 1;
@ -693,7 +685,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
struct vm_area_struct *vma) struct vm_area_struct *vma)
{ {
int ret; int ret;
struct vm_struct *area;
const char *failure_string; const char *failure_string;
struct binder_buffer *buffer; struct binder_buffer *buffer;
@ -704,28 +695,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
goto err_already_mapped; goto err_already_mapped;
} }
area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC); alloc->buffer = (void __user *)vma->vm_start;
if (area == NULL) {
ret = -ENOMEM;
failure_string = "get_vm_area";
goto err_get_vm_area_failed;
}
alloc->buffer = area->addr;
alloc->user_buffer_offset =
vma->vm_start - (uintptr_t)alloc->buffer;
mutex_unlock(&binder_alloc_mmap_lock); mutex_unlock(&binder_alloc_mmap_lock);
#ifdef CONFIG_CPU_CACHE_VIPT
if (cache_is_vipt_aliasing()) {
while (CACHE_COLOUR(
(vma->vm_start ^ (uint32_t)alloc->buffer))) {
pr_info("%s: %d %lx-%lx maps %pK bad alignment\n",
__func__, alloc->pid, vma->vm_start,
vma->vm_end, alloc->buffer);
vma->vm_start += PAGE_SIZE;
}
}
#endif
alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE, alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE,
sizeof(alloc->pages[0]), sizeof(alloc->pages[0]),
GFP_KERNEL); GFP_KERNEL);
@ -743,7 +715,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
goto err_alloc_buf_struct_failed; goto err_alloc_buf_struct_failed;
} }
buffer->data = alloc->buffer; buffer->user_data = alloc->buffer;
list_add(&buffer->entry, &alloc->buffers); list_add(&buffer->entry, &alloc->buffers);
buffer->free = 1; buffer->free = 1;
binder_insert_free_buffer(alloc, buffer); binder_insert_free_buffer(alloc, buffer);
@ -758,9 +730,7 @@ err_alloc_buf_struct_failed:
alloc->pages = NULL; alloc->pages = NULL;
err_alloc_pages_failed: err_alloc_pages_failed:
mutex_lock(&binder_alloc_mmap_lock); mutex_lock(&binder_alloc_mmap_lock);
vfree(alloc->buffer);
alloc->buffer = NULL; alloc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped: err_already_mapped:
mutex_unlock(&binder_alloc_mmap_lock); mutex_unlock(&binder_alloc_mmap_lock);
binder_alloc_debug(BINDER_DEBUG_USER_ERROR, binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
@ -806,7 +776,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
int i; int i;
for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
void *page_addr; void __user *page_addr;
bool on_lru; bool on_lru;
if (!alloc->pages[i].page_ptr) if (!alloc->pages[i].page_ptr)
@ -819,12 +789,10 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
"%s: %d: page %d at %pK %s\n", "%s: %d: page %d at %pK %s\n",
__func__, alloc->pid, i, page_addr, __func__, alloc->pid, i, page_addr,
on_lru ? "on lru" : "active"); on_lru ? "on lru" : "active");
unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
__free_page(alloc->pages[i].page_ptr); __free_page(alloc->pages[i].page_ptr);
page_count++; page_count++;
} }
kfree(alloc->pages); kfree(alloc->pages);
vfree(alloc->buffer);
} }
mutex_unlock(&alloc->mutex); mutex_unlock(&alloc->mutex);
if (alloc->vma_vm_mm) if (alloc->vma_vm_mm)
@ -839,7 +807,7 @@ static void print_binder_buffer(struct seq_file *m, const char *prefix,
struct binder_buffer *buffer) struct binder_buffer *buffer)
{ {
seq_printf(m, "%s %d: %pK size %zd:%zd:%zd %s\n", seq_printf(m, "%s %d: %pK size %zd:%zd:%zd %s\n",
prefix, buffer->debug_id, buffer->data, prefix, buffer->debug_id, buffer->user_data,
buffer->data_size, buffer->offsets_size, buffer->data_size, buffer->offsets_size,
buffer->extra_buffers_size, buffer->extra_buffers_size,
buffer->transaction ? "active" : "delivered"); buffer->transaction ? "active" : "delivered");
@ -964,7 +932,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
if (!mmget_not_zero(alloc->vma_vm_mm)) if (!mmget_not_zero(alloc->vma_vm_mm))
goto err_mmget; goto err_mmget;
mm = alloc->vma_vm_mm; mm = alloc->vma_vm_mm;
if (!down_write_trylock(&mm->mmap_sem)) if (!down_read_trylock(&mm->mmap_sem))
goto err_down_write_mmap_sem_failed; goto err_down_write_mmap_sem_failed;
} }
@ -974,19 +942,16 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
if (vma) { if (vma) {
trace_binder_unmap_user_start(alloc, index); trace_binder_unmap_user_start(alloc, index);
zap_page_range(vma, zap_page_range(vma, page_addr, PAGE_SIZE);
page_addr + alloc->user_buffer_offset,
PAGE_SIZE);
trace_binder_unmap_user_end(alloc, index); trace_binder_unmap_user_end(alloc, index);
up_write(&mm->mmap_sem); up_read(&mm->mmap_sem);
mmput(mm); mmput(mm);
} }
trace_binder_unmap_kernel_start(alloc, index); trace_binder_unmap_kernel_start(alloc, index);
unmap_kernel_range(page_addr, PAGE_SIZE);
__free_page(page->page_ptr); __free_page(page->page_ptr);
page->page_ptr = NULL; page->page_ptr = NULL;
@ -1053,3 +1018,173 @@ int binder_alloc_shrinker_init(void)
} }
return ret; return ret;
} }
/**
* check_buffer() - verify that buffer/offset is safe to access
* @alloc: binder_alloc for this proc
* @buffer: binder buffer to be accessed
* @offset: offset into @buffer data
* @bytes: bytes to access from offset
*
* Check that the @offset/@bytes are within the size of the given
* @buffer and that the buffer is currently active and not freeable.
* Offsets must also be multiples of sizeof(u32). The kernel is
* allowed to touch the buffer in two cases:
*
* 1) when the buffer is being created:
* (buffer->free == 0 && buffer->allow_user_free == 0)
* 2) when the buffer is being torn down:
* (buffer->free == 0 && buffer->transaction == NULL).
*
* Return: true if the buffer is safe to access
*/
static inline bool check_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t offset, size_t bytes)
{
size_t buffer_size = binder_alloc_buffer_size(alloc, buffer);
return buffer_size >= bytes &&
offset <= buffer_size - bytes &&
IS_ALIGNED(offset, sizeof(u32)) &&
!buffer->free &&
(!buffer->allow_user_free || !buffer->transaction);
}
/**
* binder_alloc_get_page() - get kernel pointer for given buffer offset
* @alloc: binder_alloc for this proc
* @buffer: binder buffer to be accessed
* @buffer_offset: offset into @buffer data
* @pgoffp: address to copy final page offset to
*
* Look up the struct page corresponding to the address
* at @buffer_offset into @buffer->user_data. If @pgoffp is not
* NULL, the byte-offset into the page is written there.
*
* The caller is responsible for ensuring that the offset points
* to a valid address within the @buffer and that @buffer is
* not freeable by the user. Since it can't be freed, we are
* guaranteed that the corresponding elements of @alloc->pages[]
* cannot change.
*
* Return: struct page
*/
static struct page *binder_alloc_get_page(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
pgoff_t *pgoffp)
{
binder_size_t buffer_space_offset = buffer_offset +
(buffer->user_data - alloc->buffer);
pgoff_t pgoff = buffer_space_offset & ~PAGE_MASK;
size_t index = buffer_space_offset >> PAGE_SHIFT;
struct binder_lru_page *lru_page;
lru_page = &alloc->pages[index];
*pgoffp = pgoff;
return lru_page->page_ptr;
}
/**
* binder_alloc_copy_user_to_buffer() - copy src user to tgt user
* @alloc: binder_alloc for this proc
* @buffer: binder buffer to be accessed
* @buffer_offset: offset into @buffer data
* @from: userspace pointer to source buffer
* @bytes: bytes to copy
*
* Copy bytes from source userspace to target buffer.
*
* Return: bytes remaining to be copied
*/
unsigned long
binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
const void __user *from,
size_t bytes)
{
if (!check_buffer(alloc, buffer, buffer_offset, bytes))
return bytes;
while (bytes) {
unsigned long size;
unsigned long ret;
struct page *page;
pgoff_t pgoff;
void *kptr;
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
kptr = kmap(page) + pgoff;
ret = copy_from_user(kptr, from, size);
kunmap(page);
if (ret)
return bytes - size + ret;
bytes -= size;
from += size;
buffer_offset += size;
}
return 0;
}
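A hedged sketch of how a caller could use the new helper to pull userspace data into a buffer; the caller shown is hypothetical and not part of this patch, and a nonzero return means that many bytes were left uncopied:

	/* Sketch only: copy a user payload into offset 0 of a freshly allocated buffer. */
	static int copy_payload(struct binder_alloc *alloc, struct binder_buffer *buf,
				const void __user *payload, size_t len)
	{
		/* offset must be 4-byte aligned and within the buffer (see check_buffer()) */
		if (binder_alloc_copy_user_to_buffer(alloc, buf, 0, payload, len))
			return -EFAULT;
		return 0;
	}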
static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
bool to_buffer,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *ptr,
size_t bytes)
{
/* All copies must be 32-bit aligned and 32-bit size */
BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
while (bytes) {
unsigned long size;
struct page *page;
pgoff_t pgoff;
void *tmpptr;
void *base_ptr;
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
base_ptr = kmap_atomic(page);
tmpptr = base_ptr + pgoff;
if (to_buffer)
memcpy(tmpptr, ptr, size);
else
memcpy(ptr, tmpptr, size);
/*
* kunmap_atomic() takes care of flushing the cache
* if this device has VIVT cache arch
*/
kunmap_atomic(base_ptr);
bytes -= size;
pgoff = 0;
ptr = ptr + size;
buffer_offset += size;
}
}
void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *src,
size_t bytes)
{
binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
src, bytes);
}
void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
void *dest,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
size_t bytes)
{
binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
dest, bytes);
}
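For completeness, a minimal assumed use of the kernel-side copy wrappers (not taken from the patch); offsets and sizes must stay 32-bit aligned, as the BUG_ON above enforces:

	/* Sketch only: rewrite and read back a 32-bit cookie inside a live buffer. */
	static void poke_cookie(struct binder_alloc *alloc, struct binder_buffer *buf,
				binder_size_t offset)
	{
		u32 cookie = 0xdeadbeef;

		binder_alloc_copy_to_buffer(alloc, buf, offset, &cookie, sizeof(cookie));
		binder_alloc_copy_from_buffer(alloc, &cookie, buf, offset, sizeof(cookie));
	}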


@ -22,6 +22,7 @@
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/list_lru.h> #include <linux/list_lru.h>
#include <uapi/linux/android/binder.h>
extern struct list_lru binder_alloc_lru; extern struct list_lru binder_alloc_lru;
struct binder_transaction; struct binder_transaction;
@ -39,7 +40,7 @@ struct binder_transaction;
* @data_size: size of @transaction data * @data_size: size of @transaction data
* @offsets_size: size of array of offsets * @offsets_size: size of array of offsets
* @extra_buffers_size: size of space for other objects (like sg lists) * @extra_buffers_size: size of space for other objects (like sg lists)
* @data: pointer to base of buffer space * @user_data: user pointer to base of buffer space
* *
* Bookkeeping structure for binder transaction buffers * Bookkeeping structure for binder transaction buffers
*/ */
@ -58,7 +59,7 @@ struct binder_buffer {
size_t data_size; size_t data_size;
size_t offsets_size; size_t offsets_size;
size_t extra_buffers_size; size_t extra_buffers_size;
void *data; void __user *user_data;
}; };
/** /**
@ -81,7 +82,6 @@ struct binder_lru_page {
* (invariant after init) * (invariant after init)
* @vma_vm_mm: copy of vma->vm_mm (invariant after mmap) * @vma_vm_mm: copy of vma->vm_mm (invariant after mmap)
* @buffer: base of per-proc address space mapped via mmap * @buffer: base of per-proc address space mapped via mmap
* @user_buffer_offset: offset between user and kernel VAs for buffer
* @buffers: list of all buffers for this proc * @buffers: list of all buffers for this proc
* @free_buffers: rb tree of buffers available for allocation * @free_buffers: rb tree of buffers available for allocation
* sorted by size * sorted by size
@ -102,8 +102,7 @@ struct binder_alloc {
struct mutex mutex; struct mutex mutex;
struct vm_area_struct *vma; struct vm_area_struct *vma;
struct mm_struct *vma_vm_mm; struct mm_struct *vma_vm_mm;
void *buffer; void __user *buffer;
ptrdiff_t user_buffer_offset;
struct list_head buffers; struct list_head buffers;
struct rb_root free_buffers; struct rb_root free_buffers;
struct rb_root allocated_buffers; struct rb_root allocated_buffers;
@ -162,26 +161,24 @@ binder_alloc_get_free_async_space(struct binder_alloc *alloc)
return free_async_space; return free_async_space;
} }
/** unsigned long
* binder_alloc_get_user_buffer_offset() - get offset between kernel/user addrs binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
* @alloc: binder_alloc for this proc struct binder_buffer *buffer,
* binder_size_t buffer_offset,
* Return: the offset between kernel and user-space addresses to use for const void __user *from,
* virtual address conversion size_t bytes);
*/
static inline ptrdiff_t void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc) struct binder_buffer *buffer,
{ binder_size_t buffer_offset,
/* void *src,
* user_buffer_offset is constant if vma is set and size_t bytes);
* undefined if vma is not set. It is possible to
* get here with !alloc->vma if the target process void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
* is dying while a transaction is being initiated. void *dest,
* Returning the old value is ok in this case and struct binder_buffer *buffer,
* the transaction will fail. binder_size_t buffer_offset,
*/ size_t bytes);
return alloc->user_buffer_offset;
}
#endif /* _LINUX_BINDER_ALLOC_H */ #endif /* _LINUX_BINDER_ALLOC_H */


@ -102,11 +102,12 @@ static bool check_buffer_pages_allocated(struct binder_alloc *alloc,
struct binder_buffer *buffer, struct binder_buffer *buffer,
size_t size) size_t size)
{ {
void *page_addr, *end; void __user *page_addr;
void __user *end;
int page_index; int page_index;
end = (void *)PAGE_ALIGN((uintptr_t)buffer->data + size); end = (void __user *)PAGE_ALIGN((uintptr_t)buffer->user_data + size);
page_addr = buffer->data; page_addr = buffer->user_data;
for (; page_addr < end; page_addr += PAGE_SIZE) { for (; page_addr < end; page_addr += PAGE_SIZE) {
page_index = (page_addr - alloc->buffer) / PAGE_SIZE; page_index = (page_addr - alloc->buffer) / PAGE_SIZE;
if (!alloc->pages[page_index].page_ptr || if (!alloc->pages[page_index].page_ptr ||


@ -293,7 +293,7 @@ DEFINE_EVENT(binder_buffer_class, binder_transaction_failed_buffer_release,
TRACE_EVENT(binder_update_page_range, TRACE_EVENT(binder_update_page_range,
TP_PROTO(struct binder_alloc *alloc, bool allocate, TP_PROTO(struct binder_alloc *alloc, bool allocate,
void *start, void *end), void __user *start, void __user *end),
TP_ARGS(alloc, allocate, start, end), TP_ARGS(alloc, allocate, start, end),
TP_STRUCT__entry( TP_STRUCT__entry(
__field(int, proc) __field(int, proc)


@ -16,11 +16,38 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/debugfs.h> #include <linux/debugfs.h>
/**
* DOC: overview
*
* The component helper allows drivers to collect a pile of sub-devices,
* including their bound drivers, into an aggregate driver. Various subsystems
* already provide functions to get hold of such components, e.g.
* of_clk_get_by_name(). The component helper can be used when such a
* subsystem-specific way to find a device is not available: The component
* helper fills the niche of aggregate drivers for specific hardware, where
* further standardization into a subsystem would not be practical. The common
* example is when a logical device (e.g. a DRM display driver) is spread around
* the SoC on various components (scanout engines, blending blocks, transcoders
* for various outputs and so on).
*
* The component helper also doesn't solve runtime dependencies, e.g. for system
* suspend and resume operations. See also :ref:`device links<device_link>`.
*
* Components are registered using component_add() and unregistered with
* component_del(), usually from the driver's probe and disconnect functions.
*
* Aggregate drivers first assemble a component match list of what they need
* using component_match_add(). This is then registered as an aggregate driver
* using component_master_add_with_match(), and unregistered using
* component_master_del().
*/
struct component; struct component;
struct component_match_array { struct component_match_array {
void *data; void *data;
int (*compare)(struct device *, void *); int (*compare)(struct device *, void *);
int (*compare_typed)(struct device *, int, void *);
void (*release)(struct device *, void *); void (*release)(struct device *, void *);
struct component *component; struct component *component;
bool duplicate; bool duplicate;
@ -48,6 +75,7 @@ struct component {
bool bound; bool bound;
const struct component_ops *ops; const struct component_ops *ops;
int subcomponent;
struct device *dev; struct device *dev;
}; };
@ -132,7 +160,7 @@ static struct master *__master_find(struct device *dev,
} }
static struct component *find_component(struct master *master, static struct component *find_component(struct master *master,
int (*compare)(struct device *, void *), void *compare_data) struct component_match_array *mc)
{ {
struct component *c; struct component *c;
@ -140,7 +168,11 @@ static struct component *find_component(struct master *master,
if (c->master && c->master != master) if (c->master && c->master != master)
continue; continue;
if (compare(c->dev, compare_data)) if (mc->compare && mc->compare(c->dev, mc->data))
return c;
if (mc->compare_typed &&
mc->compare_typed(c->dev, c->subcomponent, mc->data))
return c; return c;
} }
@ -166,7 +198,7 @@ static int find_components(struct master *master)
if (match->compare[i].component) if (match->compare[i].component)
continue; continue;
c = find_component(master, mc->compare, mc->data); c = find_component(master, mc);
if (!c) { if (!c) {
ret = -ENXIO; ret = -ENXIO;
break; break;
@ -301,15 +333,12 @@ static int component_match_realloc(struct device *dev,
return 0; return 0;
} }
/* static void __component_match_add(struct device *master,
* Add a component to be matched, with a release function.
*
* The match array is first created or extended if necessary.
*/
void component_match_add_release(struct device *master,
struct component_match **matchptr, struct component_match **matchptr,
void (*release)(struct device *, void *), void (*release)(struct device *, void *),
int (*compare)(struct device *, void *), void *compare_data) int (*compare)(struct device *, void *),
int (*compare_typed)(struct device *, int, void *),
void *compare_data)
{ {
struct component_match *match = *matchptr; struct component_match *match = *matchptr;
@ -341,13 +370,69 @@ void component_match_add_release(struct device *master,
} }
match->compare[match->num].compare = compare; match->compare[match->num].compare = compare;
match->compare[match->num].compare_typed = compare_typed;
match->compare[match->num].release = release; match->compare[match->num].release = release;
match->compare[match->num].data = compare_data; match->compare[match->num].data = compare_data;
match->compare[match->num].component = NULL; match->compare[match->num].component = NULL;
match->num++; match->num++;
} }
/**
* component_match_add_release - add a component match with release callback
* @master: device with the aggregate driver
* @matchptr: pointer to the list of component matches
* @release: release function for @compare_data
* @compare: compare function to match against all components
* @compare_data: opaque pointer passed to the @compare function
*
* Adds a new component match to the list stored in @matchptr, which the @master
* aggregate driver needs to function. The list of component matches pointed to
* by @matchptr must be initialized to NULL before adding the first match. This
* only matches against components added with component_add().
*
* The allocated match list in @matchptr is automatically released using devm
* actions, where upon @release will be called to free any references held by
* @compare_data, e.g. when @compare_data is a &device_node that must be
* released with of_node_put().
*
* See also component_match_add() and component_match_add_typed().
*/
void component_match_add_release(struct device *master,
struct component_match **matchptr,
void (*release)(struct device *, void *),
int (*compare)(struct device *, void *), void *compare_data)
{
__component_match_add(master, matchptr, release, compare, NULL,
compare_data);
}
EXPORT_SYMBOL(component_match_add_release); EXPORT_SYMBOL(component_match_add_release);
/**
* component_match_add_typed - add a component match for a typed component
* @master: device with the aggregate driver
* @matchptr: pointer to the list of component matches
* @compare_typed: compare function to match against all typed components
* @compare_data: opaque pointer passed to the @compare function
*
* Adds a new component match to the list stored in @matchptr, which the @master
* aggregate driver needs to function. The list of component matches pointed to
* by @matchptr must be initialized to NULL before adding the first match. This
* only matches against components added with component_add_typed().
*
* The allocated match list in @matchptr is automatically released using devm
* actions.
*
* See also component_match_add_release() and component_match_add_typed().
*/
void component_match_add_typed(struct device *master,
struct component_match **matchptr,
int (*compare_typed)(struct device *, int, void *), void *compare_data)
{
__component_match_add(master, matchptr, NULL, NULL, compare_typed,
compare_data);
}
EXPORT_SYMBOL(component_match_add_typed);
static void free_master(struct master *master) static void free_master(struct master *master)
{ {
struct component_match *match = master->match; struct component_match *match = master->match;
@ -367,6 +452,18 @@ static void free_master(struct master *master)
kfree(master); kfree(master);
} }
/**
* component_master_add_with_match - register an aggregate driver
* @dev: device with the aggregate driver
* @ops: callbacks for the aggregate driver
* @match: component match list for the aggregate driver
*
* Registers a new aggregate driver consisting of the components added to @match
* by calling one of the component_match_add() functions. Once all components in
* @match are available, it will be assembled by calling
* &component_master_ops.bind from @ops. Must be unregistered by calling
* component_master_del().
*/
int component_master_add_with_match(struct device *dev, int component_master_add_with_match(struct device *dev,
const struct component_master_ops *ops, const struct component_master_ops *ops,
struct component_match *match) struct component_match *match)
@ -403,6 +500,15 @@ int component_master_add_with_match(struct device *dev,
} }
EXPORT_SYMBOL_GPL(component_master_add_with_match); EXPORT_SYMBOL_GPL(component_master_add_with_match);
/**
* component_master_del - unregister an aggregate driver
* @dev: device with the aggregate driver
* @ops: callbacks for the aggregate driver
*
* Unregisters an aggregate driver registered with
* component_master_add_with_match(). If necessary the aggregate driver is first
* disassembled by calling &component_master_ops.unbind from @ops.
*/
void component_master_del(struct device *dev, void component_master_del(struct device *dev,
const struct component_master_ops *ops) const struct component_master_ops *ops)
{ {
@ -430,6 +536,15 @@ static void component_unbind(struct component *component,
devres_release_group(component->dev, component); devres_release_group(component->dev, component);
} }
/**
* component_unbind_all - unbind all components from an aggregate driver
* @master_dev: device with the aggregate driver
* @data: opaque pointer, passed to all components
*
* Unbinds all components from the aggregate @dev by passing @data to their
* &component_ops.unbind functions. Should be called from
* &component_master_ops.unbind.
*/
void component_unbind_all(struct device *master_dev, void *data) void component_unbind_all(struct device *master_dev, void *data)
{ {
struct master *master; struct master *master;
@ -503,6 +618,15 @@ static int component_bind(struct component *component, struct master *master,
return ret; return ret;
} }
/**
* component_bind_all - bind all components to an aggregate driver
* @master_dev: device with the aggregate driver
* @data: opaque pointer, passed to all components
*
* Binds all components to the aggregate @dev by passing @data to their
* &component_ops.bind functions. Should be called from
* &component_master_ops.bind.
*/
int component_bind_all(struct device *master_dev, void *data) int component_bind_all(struct device *master_dev, void *data)
{ {
struct master *master; struct master *master;
@ -537,7 +661,8 @@ int component_bind_all(struct device *master_dev, void *data)
} }
EXPORT_SYMBOL_GPL(component_bind_all); EXPORT_SYMBOL_GPL(component_bind_all);
int component_add(struct device *dev, const struct component_ops *ops) static int __component_add(struct device *dev, const struct component_ops *ops,
int subcomponent)
{ {
struct component *component; struct component *component;
int ret; int ret;
@ -548,6 +673,7 @@ int component_add(struct device *dev, const struct component_ops *ops)
component->ops = ops; component->ops = ops;
component->dev = dev; component->dev = dev;
component->subcomponent = subcomponent;
dev_dbg(dev, "adding component (ops %ps)\n", ops); dev_dbg(dev, "adding component (ops %ps)\n", ops);
@ -566,8 +692,66 @@ int component_add(struct device *dev, const struct component_ops *ops)
return ret < 0 ? ret : 0; return ret < 0 ? ret : 0;
} }
/**
* component_add_typed - register a component
* @dev: component device
* @ops: component callbacks
* @subcomponent: nonzero identifier for subcomponents
*
* Register a new component for @dev. Functions in @ops will be called when the
* aggregate driver is ready to bind the overall driver by calling
* component_bind_all(). See also &struct component_ops.
*
* @subcomponent must be nonzero and is used to differentiate between multiple
* components registered on the same device @dev. These components are matched
* using component_match_add_typed().
*
* The component needs to be unregistered at driver unload/disconnect by
* calling component_del().
*
* See also component_add().
*/
int component_add_typed(struct device *dev, const struct component_ops *ops,
int subcomponent)
{
if (WARN_ON(subcomponent == 0))
return -EINVAL;
return __component_add(dev, ops, subcomponent);
}
EXPORT_SYMBOL_GPL(component_add_typed);
/**
* component_add - register a component
* @dev: component device
* @ops: component callbacks
*
* Register a new component for @dev. Functions in @ops will be called when the
* aggregate driver is ready to bind the overall driver by calling
* component_bind_all(). See also &struct component_ops.
*
* The component needs to be unregistered at driver unload/disconnect by
* calling component_del().
*
* See also component_add_typed() for a variant that allows multiple different
* components on the same device.
*/
int component_add(struct device *dev, const struct component_ops *ops)
{
return __component_add(dev, ops, 0);
}
EXPORT_SYMBOL_GPL(component_add); EXPORT_SYMBOL_GPL(component_add);
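And a short sketch of the *_typed variants introduced above, for a device exposing more than one component; the names, subcomponent ids and the baz_audio_ops/baz_gfx_ops structures (ordinary struct component_ops as in the earlier sketch) are illustrative:

	#define BAZ_SUB_AUDIO	1	/* subcomponent ids must be nonzero */
	#define BAZ_SUB_GFX	2

	/* Device side: two components registered on the same struct device. */
	static int baz_probe(struct platform_device *pdev)
	{
		int ret;

		ret = component_add_typed(&pdev->dev, &baz_audio_ops, BAZ_SUB_AUDIO);
		if (ret)
			return ret;
		return component_add_typed(&pdev->dev, &baz_gfx_ops, BAZ_SUB_GFX);
	}

	/* Aggregate side: match only the audio subcomponent of a given device node. */
	static int compare_audio(struct device *dev, int subcomponent, void *data)
	{
		return dev->of_node == data && subcomponent == BAZ_SUB_AUDIO;
	}

	static void baz_master_add_matches(struct platform_device *pdev,
					   struct component_match **match,
					   struct device_node *audio_np)
	{
		component_match_add_typed(&pdev->dev, match, compare_audio, audio_np);
	}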
/**
* component_del - unregister a component
* @dev: component device
* @ops: component callbacks
*
* Unregister a component added with component_add(). If the component is bound
* into an aggregate driver, this will force the entire aggregate driver, including
* all its components, to be unbound.
*/
void component_del(struct device *dev, const struct component_ops *ops) void component_del(struct device *dev, const struct component_ops *ops)
{ {
struct component *c, *component = NULL; struct component *c, *component = NULL;


@ -244,26 +244,23 @@ source "drivers/char/hw_random/Kconfig"
config NVRAM config NVRAM
tristate "/dev/nvram support" tristate "/dev/nvram support"
depends on ATARI || X86 || GENERIC_NVRAM depends on X86 || HAVE_ARCH_NVRAM_OPS
default M68K || PPC
---help--- ---help---
If you say Y here and create a character special file /dev/nvram If you say Y here and create a character special file /dev/nvram
with major number 10 and minor number 144 using mknod ("man mknod"), with major number 10 and minor number 144 using mknod ("man mknod"),
you get read and write access to the extra bytes of non-volatile you get read and write access to the non-volatile memory.
memory in the real time clock (RTC), which is contained in every PC
and most Ataris. The actual number of bytes varies, depending on the
nvram in the system, but is usually 114 (128-14 for the RTC).
This memory is conventionally called "CMOS RAM" on PCs and "NVRAM" /dev/nvram may be used to view settings in NVRAM or to change them
on Ataris. /dev/nvram may be used to view settings there, or to (with some utility). It could also be used to frequently
change them (with some utility). It could also be used to frequently
save a few bits of very important data that may not be lost over save a few bits of very important data that may not be lost over
power-off and for which writing to disk is too insecure. Note power-off and for which writing to disk is too insecure. Note
however that most NVRAM space in a PC belongs to the BIOS and you however that most NVRAM space in a PC belongs to the BIOS and you
should NEVER idly tamper with it. See Ralf Brown's interrupt list should NEVER idly tamper with it. See Ralf Brown's interrupt list
for a guide to the use of CMOS bytes by your BIOS. for a guide to the use of CMOS bytes by your BIOS.
On Atari machines, /dev/nvram is always configured and does not need This memory is conventionally called "NVRAM" on PowerPC machines,
to be selected. "CMOS RAM" on PCs, "NVRAM" on Ataris and "PRAM" on Macintoshes.
To compile this driver as a module, choose M here: the
module will be called nvram.
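
As a hedged illustration of the user-space access the help text describes (a
sketch, not part of this patch), a small program that hex-dumps whatever the
driver exposes through /dev/nvram:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[128];
	ssize_t n, i;
	int fd = open("/dev/nvram", O_RDONLY);

	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf));	/* the driver caps this at the NVRAM size */
	for (i = 0; i < n; i++)
		printf("%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
	if (n > 0 && n % 16)
		printf("\n");
	close(fd);
	return n < 0;
}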


@ -26,11 +26,7 @@ obj-$(CONFIG_RTC) += rtc.o
obj-$(CONFIG_HPET) += hpet.o obj-$(CONFIG_HPET) += hpet.o
obj-$(CONFIG_EFI_RTC) += efirtc.o obj-$(CONFIG_EFI_RTC) += efirtc.o
obj-$(CONFIG_XILINX_HWICAP) += xilinx_hwicap/ obj-$(CONFIG_XILINX_HWICAP) += xilinx_hwicap/
ifeq ($(CONFIG_GENERIC_NVRAM),y) obj-$(CONFIG_NVRAM) += nvram.o
obj-$(CONFIG_NVRAM) += generic_nvram.o
else
obj-$(CONFIG_NVRAM) += nvram.o
endif
obj-$(CONFIG_TOSHIBA) += toshiba.o obj-$(CONFIG_TOSHIBA) += toshiba.o
obj-$(CONFIG_DS1620) += ds1620.o obj-$(CONFIG_DS1620) += ds1620.o
obj-$(CONFIG_HW_RANDOM) += hw_random/ obj-$(CONFIG_HW_RANDOM) += hw_random/


@ -32,6 +32,7 @@
#include <linux/wait.h> #include <linux/wait.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/nospec.h>
#include <asm/io.h> #include <asm/io.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
@ -386,7 +387,11 @@ static ssize_t ac_write(struct file *file, const char __user *buf, size_t count,
TicCard = st_loc.tic_des_from_pc; /* tic number to send */ TicCard = st_loc.tic_des_from_pc; /* tic number to send */
IndexCard = NumCard - 1; IndexCard = NumCard - 1;
if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO) if (IndexCard >= MAX_BOARD)
return -EINVAL;
IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
if (!apbs[IndexCard].RamIO)
return -EINVAL; return -EINVAL;
#ifdef DEBUG #ifdef DEBUG
@ -697,6 +702,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
unsigned char IndexCard; unsigned char IndexCard;
void __iomem *pmem; void __iomem *pmem;
int ret = 0; int ret = 0;
static int warncount = 10;
volatile unsigned char byte_reset_it; volatile unsigned char byte_reset_it;
struct st_ram_io *adgl; struct st_ram_io *adgl;
void __user *argp = (void __user *)arg; void __user *argp = (void __user *)arg;
@ -711,16 +717,12 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
mutex_lock(&ac_mutex); mutex_lock(&ac_mutex);
IndexCard = adgl->num_card-1; IndexCard = adgl->num_card-1;
if(cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) { if (cmd != 6 && IndexCard >= MAX_BOARD)
static int warncount = 10; goto err;
if (warncount) { IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
printk( KERN_WARNING "APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
warncount--; if (cmd != 6 && !apbs[IndexCard].RamIO)
} goto err;
kfree(adgl);
mutex_unlock(&ac_mutex);
return -EINVAL;
}
switch (cmd) { switch (cmd) {
@ -838,5 +840,16 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
kfree(adgl); kfree(adgl);
mutex_unlock(&ac_mutex); mutex_unlock(&ac_mutex);
return 0; return 0;
err:
if (warncount) {
pr_warn("APPLICOM driver IOCTL, bad board number %d\n",
(int)IndexCard + 1);
warncount--;
}
kfree(adgl);
mutex_unlock(&ac_mutex);
return -EINVAL;
} }
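
Both applicom.c hunks above follow the usual Spectre-v1 hardening pattern:
validate the user-controlled index first, then clamp it with
array_index_nospec() before it is used to index the array. A condensed sketch
of that pattern (the lookup helper is invented for illustration; apbs[] and
MAX_BOARD are the driver's own symbols):

#include <linux/nospec.h>

static int applicom_validate_index(unsigned int index)	/* hypothetical helper */
{
	if (index >= MAX_BOARD)
		return -EINVAL;
	/* keep speculative execution from using an out-of-range index */
	index = array_index_nospec(index, MAX_BOARD);
	if (!apbs[index].RamIO)
		return -EINVAL;
	return index;
}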


@ -254,27 +254,6 @@ static long efi_rtc_ioctl(struct file *file, unsigned int cmd,
return -ENOTTY; return -ENOTTY;
} }
/*
* We enforce only one user at a time here with the open/close.
* Also clear the previous interrupt data on an open, and clean
* up things on a close.
*/
static int efi_rtc_open(struct inode *inode, struct file *file)
{
/*
* nothing special to do here
* We do accept multiple open files at the same time as we
* synchronize on the per call operation.
*/
return 0;
}
static int efi_rtc_close(struct inode *inode, struct file *file)
{
return 0;
}
/* /*
* The various file operations we support. * The various file operations we support.
*/ */
@ -282,8 +261,6 @@ static int efi_rtc_close(struct inode *inode, struct file *file)
static const struct file_operations efi_rtc_fops = { static const struct file_operations efi_rtc_fops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.unlocked_ioctl = efi_rtc_ioctl, .unlocked_ioctl = efi_rtc_ioctl,
.open = efi_rtc_open,
.release = efi_rtc_close,
.llseek = no_llseek, .llseek = no_llseek,
}; };


@ -1,159 +0,0 @@
/*
* Generic /dev/nvram driver for architectures providing some
* "generic" hooks, that is :
*
* nvram_read_byte, nvram_write_byte, nvram_sync, nvram_get_size
*
* Note that an additional hook is supported for PowerMac only
* for getting the nvram "partition" informations
*
*/
#define NVRAM_VERSION "1.1"
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/fcntl.h>
#include <linux/init.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>
#include <linux/uaccess.h>
#include <asm/nvram.h>
#ifdef CONFIG_PPC_PMAC
#include <asm/machdep.h>
#endif
#define NVRAM_SIZE 8192
static DEFINE_MUTEX(nvram_mutex);
static ssize_t nvram_len;
static loff_t nvram_llseek(struct file *file, loff_t offset, int origin)
{
return generic_file_llseek_size(file, offset, origin,
MAX_LFS_FILESIZE, nvram_len);
}
static ssize_t read_nvram(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
unsigned int i;
char __user *p = buf;
if (!access_ok(buf, count))
return -EFAULT;
if (*ppos >= nvram_len)
return 0;
for (i = *ppos; count > 0 && i < nvram_len; ++i, ++p, --count)
if (__put_user(nvram_read_byte(i), p))
return -EFAULT;
*ppos = i;
return p - buf;
}
static ssize_t write_nvram(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
unsigned int i;
const char __user *p = buf;
char c;
if (!access_ok(buf, count))
return -EFAULT;
if (*ppos >= nvram_len)
return 0;
for (i = *ppos; count > 0 && i < nvram_len; ++i, ++p, --count) {
if (__get_user(c, p))
return -EFAULT;
nvram_write_byte(c, i);
}
*ppos = i;
return p - buf;
}
static int nvram_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch(cmd) {
#ifdef CONFIG_PPC_PMAC
case OBSOLETE_PMAC_NVRAM_GET_OFFSET:
printk(KERN_WARNING "nvram: Using obsolete PMAC_NVRAM_GET_OFFSET ioctl\n");
case IOC_NVRAM_GET_OFFSET: {
int part, offset;
if (!machine_is(powermac))
return -EINVAL;
if (copy_from_user(&part, (void __user*)arg, sizeof(part)) != 0)
return -EFAULT;
if (part < pmac_nvram_OF || part > pmac_nvram_NR)
return -EINVAL;
offset = pmac_get_partition(part);
if (copy_to_user((void __user*)arg, &offset, sizeof(offset)) != 0)
return -EFAULT;
break;
}
#endif /* CONFIG_PPC_PMAC */
case IOC_NVRAM_SYNC:
nvram_sync();
break;
default:
return -EINVAL;
}
return 0;
}
static long nvram_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int ret;
mutex_lock(&nvram_mutex);
ret = nvram_ioctl(file, cmd, arg);
mutex_unlock(&nvram_mutex);
return ret;
}
const struct file_operations nvram_fops = {
.owner = THIS_MODULE,
.llseek = nvram_llseek,
.read = read_nvram,
.write = write_nvram,
.unlocked_ioctl = nvram_unlocked_ioctl,
};
static struct miscdevice nvram_dev = {
NVRAM_MINOR,
"nvram",
&nvram_fops
};
int __init nvram_init(void)
{
int ret = 0;
printk(KERN_INFO "Generic non-volatile memory driver v%s\n",
NVRAM_VERSION);
ret = misc_register(&nvram_dev);
if (ret != 0)
goto out;
nvram_len = nvram_get_size();
if (nvram_len < 0)
nvram_len = NVRAM_SIZE;
out:
return ret;
}
void __exit nvram_cleanup(void)
{
misc_deregister( &nvram_dev );
}
module_init(nvram_init);
module_exit(nvram_cleanup);
MODULE_LICENSE("GPL");


@ -377,7 +377,7 @@ static __init int hpet_mmap_enable(char *str)
pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled"); pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
return 1; return 1;
} }
__setup("hpet_mmap", hpet_mmap_enable); __setup("hpet_mmap=", hpet_mmap_enable);
static int hpet_mmap(struct file *file, struct vm_area_struct *vma) static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
{ {
@ -842,7 +842,6 @@ int hpet_alloc(struct hpet_data *hdp)
struct hpet_dev *devp; struct hpet_dev *devp;
u32 i, ntimer; u32 i, ntimer;
struct hpets *hpetp; struct hpets *hpetp;
size_t siz;
struct hpet __iomem *hpet; struct hpet __iomem *hpet;
static struct hpets *last; static struct hpets *last;
unsigned long period; unsigned long period;
@ -860,10 +859,8 @@ int hpet_alloc(struct hpet_data *hdp)
return 0; return 0;
} }
siz = sizeof(struct hpets) + ((hdp->hd_nirqs - 1) * hpetp = kzalloc(struct_size(hpetp, hp_dev, hdp->hd_nirqs - 1),
sizeof(struct hpet_dev)); GFP_KERNEL);
hpetp = kzalloc(siz, GFP_KERNEL);
if (!hpetp) if (!hpetp)
return -ENOMEM; return -ENOMEM;
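
The hpet change above swaps open-coded size arithmetic for struct_size(), which
computes the allocation size of a structure ending in an array member with
overflow checking. A hedged sketch of the equivalence, using an illustrative
structure rather than hpet's own:

struct board { int id; };		/* illustrative */

struct boards {
	unsigned int	nr;
	struct board	dev[1];		/* old-style one-element trailing array */
};

static struct boards *alloc_boards(unsigned int n)
{
	struct boards *b;

	/* room for n dev entries in total; roughly
	 * sizeof(*b) + (n - 1) * sizeof(b->dev[0]), but overflow-checked */
	b = kzalloc(struct_size(b, dev, n - 1), GFP_KERNEL);
	if (b)
		b->nr = n;
	return b;
}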


@ -729,7 +729,7 @@ static long lp_ioctl(struct file *file, unsigned int cmd,
ret = lp_set_timeout32(minor, (void __user *)arg); ret = lp_set_timeout32(minor, (void __user *)arg);
break; break;
} }
/* fallthrough for 64-bit */ /* fall through - for 64-bit */
case LPSETTIMEOUT_NEW: case LPSETTIMEOUT_NEW:
ret = lp_set_timeout64(minor, (void __user *)arg); ret = lp_set_timeout64(minor, (void __user *)arg);
break; break;
@ -757,7 +757,7 @@ static long lp_compat_ioctl(struct file *file, unsigned int cmd,
ret = lp_set_timeout32(minor, (void __user *)arg); ret = lp_set_timeout32(minor, (void __user *)arg);
break; break;
} }
/* fallthrough for x32 mode */ /* fall through - for x32 mode */
case LPSETTIMEOUT_NEW: case LPSETTIMEOUT_NEW:
ret = lp_set_timeout64(minor, (void __user *)arg); ret = lp_set_timeout64(minor, (void __user *)arg);
break; break;


@ -50,6 +50,7 @@ static LIST_HEAD(soft_list);
* file operations * file operations
*/ */
static const struct file_operations mbcs_ops = { static const struct file_operations mbcs_ops = {
.owner = THIS_MODULE,
.open = mbcs_open, .open = mbcs_open,
.llseek = mbcs_sram_llseek, .llseek = mbcs_sram_llseek,
.read = mbcs_sram_read, .read = mbcs_sram_read,


@ -21,13 +21,6 @@
* ioctl(NVRAM_SETCKS) (doesn't change contents, just makes checksum valid * ioctl(NVRAM_SETCKS) (doesn't change contents, just makes checksum valid
* again; use with care!) * again; use with care!)
* *
* This file also provides some functions for other parts of the kernel that
* want to access the NVRAM: nvram_{read,write,check_checksum,set_checksum}.
* Obviously this can be used only if this driver is always configured into
* the kernel and is not a module. Since the functions are used by some Atari
* drivers, this is the case on the Atari.
*
*
* 1.1 Cesar Barros: SMP locking fixes * 1.1 Cesar Barros: SMP locking fixes
* added changelog * added changelog
* 1.2 Erik Gilling: Cobalt Networks support * 1.2 Erik Gilling: Cobalt Networks support
@ -39,64 +32,6 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/nvram.h> #include <linux/nvram.h>
#define PC 1
#define ATARI 2
/* select machine configuration */
#if defined(CONFIG_ATARI)
# define MACH ATARI
#elif defined(__i386__) || defined(__x86_64__) || defined(__arm__) /* and ?? */
# define MACH PC
#else
# error Cannot build nvram driver for this machine configuration.
#endif
#if MACH == PC
/* RTC in a PC */
#define CHECK_DRIVER_INIT() 1
/* On PCs, the checksum is built only over bytes 2..31 */
#define PC_CKS_RANGE_START 2
#define PC_CKS_RANGE_END 31
#define PC_CKS_LOC 32
#define NVRAM_BYTES (128-NVRAM_FIRST_BYTE)
#define mach_check_checksum pc_check_checksum
#define mach_set_checksum pc_set_checksum
#define mach_proc_infos pc_proc_infos
#endif
#if MACH == ATARI
/* Special parameters for RTC in Atari machines */
#include <asm/atarihw.h>
#include <asm/atariints.h>
#define RTC_PORT(x) (TT_RTC_BAS + 2*(x))
#define CHECK_DRIVER_INIT() (MACH_IS_ATARI && ATARIHW_PRESENT(TT_CLK))
#define NVRAM_BYTES 50
/* On Ataris, the checksum is over all bytes except the checksum bytes
* themselves; these are at the very end */
#define ATARI_CKS_RANGE_START 0
#define ATARI_CKS_RANGE_END 47
#define ATARI_CKS_LOC 48
#define mach_check_checksum atari_check_checksum
#define mach_set_checksum atari_set_checksum
#define mach_proc_infos atari_proc_infos
#endif
/* Note that *all* calls to CMOS_READ and CMOS_WRITE must be done with
* rtc_lock held. Due to the index-port/data-port design of the RTC, we
* don't want two different things trying to get to it at once. (e.g. the
* periodic 11 min sync from kernel/time/ntp.c vs. this driver.)
*/
#include <linux/types.h> #include <linux/types.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/miscdevice.h> #include <linux/miscdevice.h>
@ -106,28 +41,26 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/uaccess.h> #include <linux/uaccess.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/pagemap.h> #include <linux/pagemap.h>
#ifdef CONFIG_PPC
#include <asm/nvram.h>
#endif
static DEFINE_MUTEX(nvram_mutex); static DEFINE_MUTEX(nvram_mutex);
static DEFINE_SPINLOCK(nvram_state_lock); static DEFINE_SPINLOCK(nvram_state_lock);
static int nvram_open_cnt; /* #times opened */ static int nvram_open_cnt; /* #times opened */
static int nvram_open_mode; /* special open modes */ static int nvram_open_mode; /* special open modes */
static ssize_t nvram_size;
#define NVRAM_WRITE 1 /* opened for writing (exclusive) */ #define NVRAM_WRITE 1 /* opened for writing (exclusive) */
#define NVRAM_EXCL 2 /* opened with O_EXCL */ #define NVRAM_EXCL 2 /* opened with O_EXCL */
static int mach_check_checksum(void); #ifdef CONFIG_X86
static void mach_set_checksum(void);
#ifdef CONFIG_PROC_FS
static void mach_proc_infos(unsigned char *contents, struct seq_file *seq,
void *offset);
#endif
/* /*
* These functions are provided to be called internally or by other parts of * These functions are provided to be called internally or by other parts of
* the kernel. It's up to the caller to ensure correct checksum before reading * the kernel. It's up to the caller to ensure correct checksum before reading
@ -139,13 +72,20 @@ static void mach_proc_infos(unsigned char *contents, struct seq_file *seq,
* know about the RTC cruft. * know about the RTC cruft.
*/ */
unsigned char __nvram_read_byte(int i) #define NVRAM_BYTES (128 - NVRAM_FIRST_BYTE)
/* Note that *all* calls to CMOS_READ and CMOS_WRITE must be done with
* rtc_lock held. Due to the index-port/data-port design of the RTC, we
* don't want two different things trying to get to it at once. (e.g. the
* periodic 11 min sync from kernel/time/ntp.c vs. this driver.)
*/
static unsigned char __nvram_read_byte(int i)
{ {
return CMOS_READ(NVRAM_FIRST_BYTE + i); return CMOS_READ(NVRAM_FIRST_BYTE + i);
} }
EXPORT_SYMBOL(__nvram_read_byte);
unsigned char nvram_read_byte(int i) static unsigned char pc_nvram_read_byte(int i)
{ {
unsigned long flags; unsigned long flags;
unsigned char c; unsigned char c;
@ -155,16 +95,14 @@ unsigned char nvram_read_byte(int i)
spin_unlock_irqrestore(&rtc_lock, flags); spin_unlock_irqrestore(&rtc_lock, flags);
return c; return c;
} }
EXPORT_SYMBOL(nvram_read_byte);
/* This races nicely with trying to read with checksum checking (nvram_read) */ /* This races nicely with trying to read with checksum checking (nvram_read) */
void __nvram_write_byte(unsigned char c, int i) static void __nvram_write_byte(unsigned char c, int i)
{ {
CMOS_WRITE(c, NVRAM_FIRST_BYTE + i); CMOS_WRITE(c, NVRAM_FIRST_BYTE + i);
} }
EXPORT_SYMBOL(__nvram_write_byte);
void nvram_write_byte(unsigned char c, int i) static void pc_nvram_write_byte(unsigned char c, int i)
{ {
unsigned long flags; unsigned long flags;
@ -172,172 +110,266 @@ void nvram_write_byte(unsigned char c, int i)
__nvram_write_byte(c, i); __nvram_write_byte(c, i);
spin_unlock_irqrestore(&rtc_lock, flags); spin_unlock_irqrestore(&rtc_lock, flags);
} }
EXPORT_SYMBOL(nvram_write_byte);
int __nvram_check_checksum(void) /* On PCs, the checksum is built only over bytes 2..31 */
#define PC_CKS_RANGE_START 2
#define PC_CKS_RANGE_END 31
#define PC_CKS_LOC 32
static int __nvram_check_checksum(void)
{ {
return mach_check_checksum(); int i;
} unsigned short sum = 0;
EXPORT_SYMBOL(__nvram_check_checksum); unsigned short expect;
int nvram_check_checksum(void) for (i = PC_CKS_RANGE_START; i <= PC_CKS_RANGE_END; ++i)
{ sum += __nvram_read_byte(i);
unsigned long flags; expect = __nvram_read_byte(PC_CKS_LOC)<<8 |
int rv; __nvram_read_byte(PC_CKS_LOC+1);
return (sum & 0xffff) == expect;
spin_lock_irqsave(&rtc_lock, flags);
rv = __nvram_check_checksum();
spin_unlock_irqrestore(&rtc_lock, flags);
return rv;
} }
EXPORT_SYMBOL(nvram_check_checksum);
static void __nvram_set_checksum(void) static void __nvram_set_checksum(void)
{ {
mach_set_checksum(); int i;
unsigned short sum = 0;
for (i = PC_CKS_RANGE_START; i <= PC_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
__nvram_write_byte(sum >> 8, PC_CKS_LOC);
__nvram_write_byte(sum & 0xff, PC_CKS_LOC + 1);
} }
#if 0 static long pc_nvram_set_checksum(void)
void nvram_set_checksum(void)
{ {
unsigned long flags; spin_lock_irq(&rtc_lock);
spin_lock_irqsave(&rtc_lock, flags);
__nvram_set_checksum(); __nvram_set_checksum();
spin_unlock_irqrestore(&rtc_lock, flags); spin_unlock_irq(&rtc_lock);
return 0;
} }
#endif /* 0 */
static long pc_nvram_initialize(void)
{
ssize_t i;
spin_lock_irq(&rtc_lock);
for (i = 0; i < NVRAM_BYTES; ++i)
__nvram_write_byte(0, i);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
return 0;
}
static ssize_t pc_nvram_get_size(void)
{
return NVRAM_BYTES;
}
static ssize_t pc_nvram_read(char *buf, size_t count, loff_t *ppos)
{
char *p = buf;
loff_t i;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum()) {
spin_unlock_irq(&rtc_lock);
return -EIO;
}
for (i = *ppos; count > 0 && i < NVRAM_BYTES; --count, ++i, ++p)
*p = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
*ppos = i;
return p - buf;
}
static ssize_t pc_nvram_write(char *buf, size_t count, loff_t *ppos)
{
char *p = buf;
loff_t i;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum()) {
spin_unlock_irq(&rtc_lock);
return -EIO;
}
for (i = *ppos; count > 0 && i < NVRAM_BYTES; --count, ++i, ++p)
__nvram_write_byte(*p, i);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
*ppos = i;
return p - buf;
}
const struct nvram_ops arch_nvram_ops = {
.read = pc_nvram_read,
.write = pc_nvram_write,
.read_byte = pc_nvram_read_byte,
.write_byte = pc_nvram_write_byte,
.get_size = pc_nvram_get_size,
.set_checksum = pc_nvram_set_checksum,
.initialize = pc_nvram_initialize,
};
EXPORT_SYMBOL(arch_nvram_ops);
#endif /* CONFIG_X86 */
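
This arch_nvram_ops table is what the new HAVE_ARCH_NVRAM_OPS Kconfig symbol in
the drivers/char/Kconfig hunk above refers to: an architecture provides its own
table and may leave hooks it does not implement as NULL, which the ioctl
handler further down checks before calling them. A hedged sketch for a
hypothetical architecture that only supports byte-wise access (field names
taken from the x86 initializer above; the helpers are invented):

const struct nvram_ops arch_nvram_ops = {
	.read_byte	= my_arch_nvram_read_byte,	/* hypothetical helpers */
	.write_byte	= my_arch_nvram_write_byte,
	.get_size	= my_arch_nvram_get_size,
	/* .set_checksum and .initialize left NULL; the matching ioctls
	 * then fail with -ENOTTY */
};
EXPORT_SYMBOL(arch_nvram_ops);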
/*
* These are the file operations for user access to /dev/nvram
*/
static loff_t nvram_llseek(struct file *file, loff_t offset, int origin) static loff_t nvram_misc_llseek(struct file *file, loff_t offset, int origin)
{ {
return generic_file_llseek_size(file, offset, origin, MAX_LFS_FILESIZE, return generic_file_llseek_size(file, offset, origin, MAX_LFS_FILESIZE,
NVRAM_BYTES); nvram_size);
} }
static ssize_t nvram_read(struct file *file, char __user *buf, static ssize_t nvram_misc_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos) size_t count, loff_t *ppos)
{ {
unsigned char contents[NVRAM_BYTES]; char *tmp;
unsigned i = *ppos; ssize_t ret;
unsigned char *tmp;
spin_lock_irq(&rtc_lock);
if (!__nvram_check_checksum()) if (!access_ok(buf, count))
goto checksum_err;
for (tmp = contents; count-- > 0 && i < NVRAM_BYTES; ++i, ++tmp)
*tmp = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
if (copy_to_user(buf, contents, tmp - contents))
return -EFAULT; return -EFAULT;
if (*ppos >= nvram_size)
return 0;
*ppos = i; count = min_t(size_t, count, nvram_size - *ppos);
count = min_t(size_t, count, PAGE_SIZE);
return tmp - contents; tmp = kmalloc(count, GFP_KERNEL);
if (!tmp)
return -ENOMEM;
checksum_err: ret = nvram_read(tmp, count, ppos);
spin_unlock_irq(&rtc_lock); if (ret <= 0)
return -EIO; goto out;
if (copy_to_user(buf, tmp, ret)) {
*ppos -= ret;
ret = -EFAULT;
}
out:
kfree(tmp);
return ret;
} }
static ssize_t nvram_write(struct file *file, const char __user *buf, static ssize_t nvram_misc_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos) size_t count, loff_t *ppos)
{ {
unsigned char contents[NVRAM_BYTES]; char *tmp;
unsigned i = *ppos; ssize_t ret;
unsigned char *tmp;
if (i >= NVRAM_BYTES) if (!access_ok(buf, count))
return 0; /* Past EOF */
if (count > NVRAM_BYTES - i)
count = NVRAM_BYTES - i;
if (count > NVRAM_BYTES)
return -EFAULT; /* Can't happen, but prove it to gcc */
if (copy_from_user(contents, buf, count))
return -EFAULT; return -EFAULT;
if (*ppos >= nvram_size)
return 0;
spin_lock_irq(&rtc_lock); count = min_t(size_t, count, nvram_size - *ppos);
count = min_t(size_t, count, PAGE_SIZE);
if (!__nvram_check_checksum()) tmp = memdup_user(buf, count);
goto checksum_err; if (IS_ERR(tmp))
return PTR_ERR(tmp);
for (tmp = contents; count--; ++i, ++tmp) ret = nvram_write(tmp, count, ppos);
__nvram_write_byte(*tmp, i); kfree(tmp);
return ret;
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
*ppos = i;
return tmp - contents;
checksum_err:
spin_unlock_irq(&rtc_lock);
return -EIO;
} }
static long nvram_ioctl(struct file *file, unsigned int cmd, static long nvram_misc_ioctl(struct file *file, unsigned int cmd,
unsigned long arg) unsigned long arg)
{ {
int i; long ret = -ENOTTY;
switch (cmd) { switch (cmd) {
#ifdef CONFIG_PPC
case OBSOLETE_PMAC_NVRAM_GET_OFFSET:
pr_warn("nvram: Using obsolete PMAC_NVRAM_GET_OFFSET ioctl\n");
/* fall through */
case IOC_NVRAM_GET_OFFSET:
ret = -EINVAL;
#ifdef CONFIG_PPC_PMAC
if (machine_is(powermac)) {
int part, offset;
if (copy_from_user(&part, (void __user *)arg,
sizeof(part)) != 0)
return -EFAULT;
if (part < pmac_nvram_OF || part > pmac_nvram_NR)
return -EINVAL;
offset = pmac_get_partition(part);
if (offset < 0)
return -EINVAL;
if (copy_to_user((void __user *)arg,
&offset, sizeof(offset)) != 0)
return -EFAULT;
ret = 0;
}
#endif
break;
#ifdef CONFIG_PPC32
case IOC_NVRAM_SYNC:
if (ppc_md.nvram_sync != NULL) {
mutex_lock(&nvram_mutex);
ppc_md.nvram_sync();
mutex_unlock(&nvram_mutex);
}
ret = 0;
break;
#endif
#elif defined(CONFIG_X86) || defined(CONFIG_M68K)
case NVRAM_INIT: case NVRAM_INIT:
/* initialize NVRAM contents and checksum */ /* initialize NVRAM contents and checksum */
if (!capable(CAP_SYS_ADMIN)) if (!capable(CAP_SYS_ADMIN))
return -EACCES; return -EACCES;
if (arch_nvram_ops.initialize != NULL) {
mutex_lock(&nvram_mutex); mutex_lock(&nvram_mutex);
spin_lock_irq(&rtc_lock); ret = arch_nvram_ops.initialize();
for (i = 0; i < NVRAM_BYTES; ++i)
__nvram_write_byte(0, i);
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
mutex_unlock(&nvram_mutex); mutex_unlock(&nvram_mutex);
return 0; }
break;
case NVRAM_SETCKS: case NVRAM_SETCKS:
/* just set checksum, contents unchanged (maybe useful after /* just set checksum, contents unchanged (maybe useful after
* checksum garbaged somehow...) */ * checksum garbaged somehow...) */
if (!capable(CAP_SYS_ADMIN)) if (!capable(CAP_SYS_ADMIN))
return -EACCES; return -EACCES;
if (arch_nvram_ops.set_checksum != NULL) {
mutex_lock(&nvram_mutex); mutex_lock(&nvram_mutex);
spin_lock_irq(&rtc_lock); ret = arch_nvram_ops.set_checksum();
__nvram_set_checksum();
spin_unlock_irq(&rtc_lock);
mutex_unlock(&nvram_mutex); mutex_unlock(&nvram_mutex);
return 0;
default:
return -ENOTTY;
} }
break;
#endif /* CONFIG_X86 || CONFIG_M68K */
}
return ret;
} }
static int nvram_open(struct inode *inode, struct file *file) static int nvram_misc_open(struct inode *inode, struct file *file)
{ {
spin_lock(&nvram_state_lock); spin_lock(&nvram_state_lock);
/* Prevent multiple readers/writers if desired. */
if ((nvram_open_cnt && (file->f_flags & O_EXCL)) || if ((nvram_open_cnt && (file->f_flags & O_EXCL)) ||
(nvram_open_mode & NVRAM_EXCL) || (nvram_open_mode & NVRAM_EXCL)) {
((file->f_mode & FMODE_WRITE) && (nvram_open_mode & NVRAM_WRITE))) {
spin_unlock(&nvram_state_lock); spin_unlock(&nvram_state_lock);
return -EBUSY; return -EBUSY;
} }
#if defined(CONFIG_X86) || defined(CONFIG_M68K)
/* Prevent multiple writers if the set_checksum ioctl is implemented. */
if ((arch_nvram_ops.set_checksum != NULL) &&
(file->f_mode & FMODE_WRITE) && (nvram_open_mode & NVRAM_WRITE)) {
spin_unlock(&nvram_state_lock);
return -EBUSY;
}
#endif
if (file->f_flags & O_EXCL) if (file->f_flags & O_EXCL)
nvram_open_mode |= NVRAM_EXCL; nvram_open_mode |= NVRAM_EXCL;
if (file->f_mode & FMODE_WRITE) if (file->f_mode & FMODE_WRITE)
@ -349,7 +381,7 @@ static int nvram_open(struct inode *inode, struct file *file)
return 0; return 0;
} }
static int nvram_release(struct inode *inode, struct file *file) static int nvram_misc_release(struct inode *inode, struct file *file)
{ {
spin_lock(&nvram_state_lock); spin_lock(&nvram_state_lock);
@ -366,123 +398,7 @@ static int nvram_release(struct inode *inode, struct file *file)
return 0; return 0;
} }
#ifndef CONFIG_PROC_FS #if defined(CONFIG_X86) && defined(CONFIG_PROC_FS)
static int nvram_add_proc_fs(void)
{
return 0;
}
#else
static int nvram_proc_read(struct seq_file *seq, void *offset)
{
unsigned char contents[NVRAM_BYTES];
int i = 0;
spin_lock_irq(&rtc_lock);
for (i = 0; i < NVRAM_BYTES; ++i)
contents[i] = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
mach_proc_infos(contents, seq, offset);
return 0;
}
static int nvram_add_proc_fs(void)
{
if (!proc_create_single("driver/nvram", 0, NULL, nvram_proc_read))
return -ENOMEM;
return 0;
}
#endif /* CONFIG_PROC_FS */
static const struct file_operations nvram_fops = {
.owner = THIS_MODULE,
.llseek = nvram_llseek,
.read = nvram_read,
.write = nvram_write,
.unlocked_ioctl = nvram_ioctl,
.open = nvram_open,
.release = nvram_release,
};
static struct miscdevice nvram_dev = {
NVRAM_MINOR,
"nvram",
&nvram_fops
};
static int __init nvram_init(void)
{
int ret;
/* First test whether the driver should init at all */
if (!CHECK_DRIVER_INIT())
return -ENODEV;
ret = misc_register(&nvram_dev);
if (ret) {
printk(KERN_ERR "nvram: can't misc_register on minor=%d\n",
NVRAM_MINOR);
goto out;
}
ret = nvram_add_proc_fs();
if (ret) {
printk(KERN_ERR "nvram: can't create /proc/driver/nvram\n");
goto outmisc;
}
ret = 0;
printk(KERN_INFO "Non-volatile memory driver v" NVRAM_VERSION "\n");
out:
return ret;
outmisc:
misc_deregister(&nvram_dev);
goto out;
}
static void __exit nvram_cleanup_module(void)
{
remove_proc_entry("driver/nvram", NULL);
misc_deregister(&nvram_dev);
}
module_init(nvram_init);
module_exit(nvram_cleanup_module);
/*
* Machine specific functions
*/
#if MACH == PC
static int pc_check_checksum(void)
{
int i;
unsigned short sum = 0;
unsigned short expect;
for (i = PC_CKS_RANGE_START; i <= PC_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
expect = __nvram_read_byte(PC_CKS_LOC)<<8 |
__nvram_read_byte(PC_CKS_LOC+1);
return (sum & 0xffff) == expect;
}
static void pc_set_checksum(void)
{
int i;
unsigned short sum = 0;
for (i = PC_CKS_RANGE_START; i <= PC_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
__nvram_write_byte(sum >> 8, PC_CKS_LOC);
__nvram_write_byte(sum & 0xff, PC_CKS_LOC + 1);
}
#ifdef CONFIG_PROC_FS
static const char * const floppy_types[] = { static const char * const floppy_types[] = {
"none", "5.25'' 360k", "5.25'' 1.2M", "3.5'' 720k", "3.5'' 1.44M", "none", "5.25'' 360k", "5.25'' 1.2M", "3.5'' 720k", "3.5'' 1.44M",
"3.5'' 2.88M", "3.5'' 2.88M" "3.5'' 2.88M", "3.5'' 2.88M"
@ -495,7 +411,7 @@ static const char * const gfx_types[] = {
"monochrome", "monochrome",
}; };
static void pc_proc_infos(unsigned char *nvram, struct seq_file *seq, static void pc_nvram_proc_read(unsigned char *nvram, struct seq_file *seq,
void *offset) void *offset)
{ {
int checksum; int checksum;
@ -557,143 +473,76 @@ static void pc_proc_infos(unsigned char *nvram, struct seq_file *seq,
return; return;
} }
static int nvram_proc_read(struct seq_file *seq, void *offset)
{
unsigned char contents[NVRAM_BYTES];
int i = 0;
spin_lock_irq(&rtc_lock);
for (i = 0; i < NVRAM_BYTES; ++i)
contents[i] = __nvram_read_byte(i);
spin_unlock_irq(&rtc_lock);
pc_nvram_proc_read(contents, seq, offset);
return 0;
}
#endif /* CONFIG_X86 && CONFIG_PROC_FS */
static const struct file_operations nvram_misc_fops = {
.owner = THIS_MODULE,
.llseek = nvram_misc_llseek,
.read = nvram_misc_read,
.write = nvram_misc_write,
.unlocked_ioctl = nvram_misc_ioctl,
.open = nvram_misc_open,
.release = nvram_misc_release,
};
static struct miscdevice nvram_misc = {
NVRAM_MINOR,
"nvram",
&nvram_misc_fops,
};
static int __init nvram_module_init(void)
{
int ret;
nvram_size = nvram_get_size();
if (nvram_size < 0)
return nvram_size;
ret = misc_register(&nvram_misc);
if (ret) {
pr_err("nvram: can't misc_register on minor=%d\n", NVRAM_MINOR);
return ret;
}
#if defined(CONFIG_X86) && defined(CONFIG_PROC_FS)
if (!proc_create_single("driver/nvram", 0, NULL, nvram_proc_read)) {
pr_err("nvram: can't create /proc/driver/nvram\n");
misc_deregister(&nvram_misc);
return -ENOMEM;
}
#endif #endif
#endif /* MACH == PC */ pr_info("Non-volatile memory driver v" NVRAM_VERSION "\n");
return 0;
#if MACH == ATARI
static int atari_check_checksum(void)
{
int i;
unsigned char sum = 0;
for (i = ATARI_CKS_RANGE_START; i <= ATARI_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
return (__nvram_read_byte(ATARI_CKS_LOC) == (~sum & 0xff)) &&
(__nvram_read_byte(ATARI_CKS_LOC + 1) == (sum & 0xff));
} }
static void atari_set_checksum(void) static void __exit nvram_module_exit(void)
{ {
int i; #if defined(CONFIG_X86) && defined(CONFIG_PROC_FS)
unsigned char sum = 0; remove_proc_entry("driver/nvram", NULL);
for (i = ATARI_CKS_RANGE_START; i <= ATARI_CKS_RANGE_END; ++i)
sum += __nvram_read_byte(i);
__nvram_write_byte(~sum, ATARI_CKS_LOC);
__nvram_write_byte(sum, ATARI_CKS_LOC + 1);
}
#ifdef CONFIG_PROC_FS
static struct {
unsigned char val;
const char *name;
} boot_prefs[] = {
{ 0x80, "TOS" },
{ 0x40, "ASV" },
{ 0x20, "NetBSD (?)" },
{ 0x10, "Linux" },
{ 0x00, "unspecified" }
};
static const char * const languages[] = {
"English (US)",
"German",
"French",
"English (UK)",
"Spanish",
"Italian",
"6 (undefined)",
"Swiss (French)",
"Swiss (German)"
};
static const char * const dateformat[] = {
"MM%cDD%cYY",
"DD%cMM%cYY",
"YY%cMM%cDD",
"YY%cDD%cMM",
"4 (undefined)",
"5 (undefined)",
"6 (undefined)",
"7 (undefined)"
};
static const char * const colors[] = {
"2", "4", "16", "256", "65536", "??", "??", "??"
};
static void atari_proc_infos(unsigned char *nvram, struct seq_file *seq,
void *offset)
{
int checksum = nvram_check_checksum();
int i;
unsigned vmode;
seq_printf(seq, "Checksum status : %svalid\n", checksum ? "" : "not ");
seq_printf(seq, "Boot preference : ");
for (i = ARRAY_SIZE(boot_prefs) - 1; i >= 0; --i) {
if (nvram[1] == boot_prefs[i].val) {
seq_printf(seq, "%s\n", boot_prefs[i].name);
break;
}
}
if (i < 0)
seq_printf(seq, "0x%02x (undefined)\n", nvram[1]);
seq_printf(seq, "SCSI arbitration : %s\n",
(nvram[16] & 0x80) ? "on" : "off");
seq_printf(seq, "SCSI host ID : ");
if (nvram[16] & 0x80)
seq_printf(seq, "%d\n", nvram[16] & 7);
else
seq_printf(seq, "n/a\n");
/* the following entries are defined only for the Falcon */
if ((atari_mch_cookie >> 16) != ATARI_MCH_FALCON)
return;
seq_printf(seq, "OS language : ");
if (nvram[6] < ARRAY_SIZE(languages))
seq_printf(seq, "%s\n", languages[nvram[6]]);
else
seq_printf(seq, "%u (undefined)\n", nvram[6]);
seq_printf(seq, "Keyboard language: ");
if (nvram[7] < ARRAY_SIZE(languages))
seq_printf(seq, "%s\n", languages[nvram[7]]);
else
seq_printf(seq, "%u (undefined)\n", nvram[7]);
seq_printf(seq, "Date format : ");
seq_printf(seq, dateformat[nvram[8] & 7],
nvram[9] ? nvram[9] : '/', nvram[9] ? nvram[9] : '/');
seq_printf(seq, ", %dh clock\n", nvram[8] & 16 ? 24 : 12);
seq_printf(seq, "Boot delay : ");
if (nvram[10] == 0)
seq_printf(seq, "default");
else
seq_printf(seq, "%ds%s\n", nvram[10],
nvram[10] < 8 ? ", no memory test" : "");
vmode = (nvram[14] << 8) | nvram[15];
seq_printf(seq,
"Video mode : %s colors, %d columns, %s %s monitor\n",
colors[vmode & 7],
vmode & 8 ? 80 : 40,
vmode & 16 ? "VGA" : "TV", vmode & 32 ? "PAL" : "NTSC");
seq_printf(seq, " %soverscan, compat. mode %s%s\n",
vmode & 64 ? "" : "no ",
vmode & 128 ? "on" : "off",
vmode & 256 ?
(vmode & 16 ? ", line doubling" : ", half screen") : "");
return;
}
#endif #endif
misc_deregister(&nvram_misc);
}
#endif /* MACH == ATARI */ module_init(nvram_module_init);
module_exit(nvram_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_ALIAS_MISCDEV(NVRAM_MINOR); MODULE_ALIAS_MISCDEV(NVRAM_MINOR);
MODULE_ALIAS("devname:nvram");


@ -114,6 +114,14 @@ config EXTCON_PALMAS
Say Y here to enable support for USB peripheral and USB host Say Y here to enable support for USB peripheral and USB host
detection by palmas usb. detection by palmas usb.
config EXTCON_PTN5150
tristate "NXP PTN5150 CC LOGIC USB EXTCON support"
depends on I2C && GPIOLIB || COMPILE_TEST
select REGMAP_I2C
help
Say Y here to enable support for USB peripheral and USB host
detection by NXP PTN5150 CC (Configuration Channel) logic chip.
config EXTCON_QCOM_SPMI_MISC config EXTCON_QCOM_SPMI_MISC
tristate "Qualcomm USB extcon support" tristate "Qualcomm USB extcon support"
depends on ARCH_QCOM || COMPILE_TEST depends on ARCH_QCOM || COMPILE_TEST


@ -17,6 +17,7 @@ obj-$(CONFIG_EXTCON_MAX77693) += extcon-max77693.o
obj-$(CONFIG_EXTCON_MAX77843) += extcon-max77843.o obj-$(CONFIG_EXTCON_MAX77843) += extcon-max77843.o
obj-$(CONFIG_EXTCON_MAX8997) += extcon-max8997.o obj-$(CONFIG_EXTCON_MAX8997) += extcon-max8997.o
obj-$(CONFIG_EXTCON_PALMAS) += extcon-palmas.o obj-$(CONFIG_EXTCON_PALMAS) += extcon-palmas.o
obj-$(CONFIG_EXTCON_PTN5150) += extcon-ptn5150.o
obj-$(CONFIG_EXTCON_QCOM_SPMI_MISC) += extcon-qcom-spmi-misc.o obj-$(CONFIG_EXTCON_QCOM_SPMI_MISC) += extcon-qcom-spmi-misc.o
obj-$(CONFIG_EXTCON_RT8973A) += extcon-rt8973a.o obj-$(CONFIG_EXTCON_RT8973A) += extcon-rt8973a.o
obj-$(CONFIG_EXTCON_SM5502) += extcon-sm5502.o obj-$(CONFIG_EXTCON_SM5502) += extcon-sm5502.o


@ -0,0 +1,339 @@
// SPDX-License-Identifier: GPL-2.0+
//
// extcon-ptn5150.c - PTN5150 CC logic extcon driver to support USB detection
//
// Based on extcon-sm5502.c driver
// Copyright (c) 2018-2019 by Vijai Kumar K
// Author: Vijai Kumar K <vijaikumar.kanagarajan@gmail.com>
#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/extcon-provider.h>
#include <linux/gpio/consumer.h>
/* PTN5150 registers */
enum ptn5150_reg {
PTN5150_REG_DEVICE_ID = 0x01,
PTN5150_REG_CONTROL,
PTN5150_REG_INT_STATUS,
PTN5150_REG_CC_STATUS,
PTN5150_REG_CON_DET = 0x09,
PTN5150_REG_VCONN_STATUS,
PTN5150_REG_RESET,
PTN5150_REG_INT_MASK = 0x18,
PTN5150_REG_INT_REG_STATUS,
PTN5150_REG_END,
};
#define PTN5150_DFP_ATTACHED 0x1
#define PTN5150_UFP_ATTACHED 0x2
/* Define PTN5150 MASK/SHIFT constant */
#define PTN5150_REG_DEVICE_ID_VENDOR_SHIFT 0
#define PTN5150_REG_DEVICE_ID_VENDOR_MASK \
(0x3 << PTN5150_REG_DEVICE_ID_VENDOR_SHIFT)
#define PTN5150_REG_DEVICE_ID_VERSION_SHIFT 3
#define PTN5150_REG_DEVICE_ID_VERSION_MASK \
(0x1f << PTN5150_REG_DEVICE_ID_VERSION_SHIFT)
#define PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT 2
#define PTN5150_REG_CC_PORT_ATTACHMENT_MASK \
(0x7 << PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT)
#define PTN5150_REG_CC_VBUS_DETECTION_SHIFT 7
#define PTN5150_REG_CC_VBUS_DETECTION_MASK \
(0x1 << PTN5150_REG_CC_VBUS_DETECTION_SHIFT)
#define PTN5150_REG_INT_CABLE_ATTACH_SHIFT 0
#define PTN5150_REG_INT_CABLE_ATTACH_MASK \
(0x1 << PTN5150_REG_INT_CABLE_ATTACH_SHIFT)
#define PTN5150_REG_INT_CABLE_DETACH_SHIFT 1
#define PTN5150_REG_INT_CABLE_DETACH_MASK \
(0x1 << PTN5150_REG_CC_CABLE_DETACH_SHIFT)
struct ptn5150_info {
struct device *dev;
struct extcon_dev *edev;
struct i2c_client *i2c;
struct regmap *regmap;
struct gpio_desc *int_gpiod;
struct gpio_desc *vbus_gpiod;
int irq;
struct work_struct irq_work;
struct mutex mutex;
};
/* List of detectable cables */
static const unsigned int ptn5150_extcon_cable[] = {
EXTCON_USB,
EXTCON_USB_HOST,
EXTCON_NONE,
};
static const struct regmap_config ptn5150_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.max_register = PTN5150_REG_END,
};
static void ptn5150_irq_work(struct work_struct *work)
{
struct ptn5150_info *info = container_of(work,
struct ptn5150_info, irq_work);
int ret = 0;
unsigned int reg_data;
unsigned int int_status;
if (!info->edev)
return;
mutex_lock(&info->mutex);
ret = regmap_read(info->regmap, PTN5150_REG_CC_STATUS, &reg_data);
if (ret) {
dev_err(info->dev, "failed to read CC STATUS %d\n", ret);
mutex_unlock(&info->mutex);
return;
}
/* Clear interrupt. Read would clear the register */
ret = regmap_read(info->regmap, PTN5150_REG_INT_STATUS, &int_status);
if (ret) {
dev_err(info->dev, "failed to read INT STATUS %d\n", ret);
mutex_unlock(&info->mutex);
return;
}
if (int_status) {
unsigned int cable_attach;
cable_attach = int_status & PTN5150_REG_INT_CABLE_ATTACH_MASK;
if (cable_attach) {
unsigned int port_status;
unsigned int vbus;
port_status = ((reg_data &
PTN5150_REG_CC_PORT_ATTACHMENT_MASK) >>
PTN5150_REG_CC_PORT_ATTACHMENT_SHIFT);
switch (port_status) {
case PTN5150_DFP_ATTACHED:
extcon_set_state_sync(info->edev,
EXTCON_USB_HOST, false);
gpiod_set_value(info->vbus_gpiod, 0);
extcon_set_state_sync(info->edev, EXTCON_USB,
true);
break;
case PTN5150_UFP_ATTACHED:
extcon_set_state_sync(info->edev, EXTCON_USB,
false);
vbus = ((reg_data &
PTN5150_REG_CC_VBUS_DETECTION_MASK) >>
PTN5150_REG_CC_VBUS_DETECTION_SHIFT);
if (vbus)
gpiod_set_value(info->vbus_gpiod, 0);
else
gpiod_set_value(info->vbus_gpiod, 1);
extcon_set_state_sync(info->edev,
EXTCON_USB_HOST, true);
break;
default:
dev_err(info->dev,
"Unknown Port status : %x\n",
port_status);
break;
}
} else {
extcon_set_state_sync(info->edev,
EXTCON_USB_HOST, false);
extcon_set_state_sync(info->edev,
EXTCON_USB, false);
gpiod_set_value(info->vbus_gpiod, 0);
}
}
/* Clear interrupt. Read would clear the register */
ret = regmap_read(info->regmap, PTN5150_REG_INT_REG_STATUS,
&int_status);
if (ret) {
dev_err(info->dev,
"failed to read INT REG STATUS %d\n", ret);
mutex_unlock(&info->mutex);
return;
}
mutex_unlock(&info->mutex);
}
static irqreturn_t ptn5150_irq_handler(int irq, void *data)
{
struct ptn5150_info *info = data;
schedule_work(&info->irq_work);
return IRQ_HANDLED;
}
static int ptn5150_init_dev_type(struct ptn5150_info *info)
{
unsigned int reg_data, vendor_id, version_id;
int ret;
ret = regmap_read(info->regmap, PTN5150_REG_DEVICE_ID, &reg_data);
if (ret) {
dev_err(info->dev, "failed to read DEVICE_ID %d\n", ret);
return -EINVAL;
}
vendor_id = ((reg_data & PTN5150_REG_DEVICE_ID_VENDOR_MASK) >>
PTN5150_REG_DEVICE_ID_VENDOR_SHIFT);
version_id = ((reg_data & PTN5150_REG_DEVICE_ID_VERSION_MASK) >>
PTN5150_REG_DEVICE_ID_VERSION_SHIFT);
dev_info(info->dev, "Device type: version: 0x%x, vendor: 0x%x\n",
version_id, vendor_id);
/* Clear any existing interrupts */
ret = regmap_read(info->regmap, PTN5150_REG_INT_STATUS, &reg_data);
if (ret) {
dev_err(info->dev,
"failed to read PTN5150_REG_INT_STATUS %d\n",
ret);
return -EINVAL;
}
ret = regmap_read(info->regmap, PTN5150_REG_INT_REG_STATUS, &reg_data);
if (ret) {
dev_err(info->dev,
"failed to read PTN5150_REG_INT_REG_STATUS %d\n", ret);
return -EINVAL;
}
return 0;
}
static int ptn5150_i2c_probe(struct i2c_client *i2c,
const struct i2c_device_id *id)
{
struct device *dev = &i2c->dev;
struct device_node *np = i2c->dev.of_node;
struct ptn5150_info *info;
int ret;
if (!np)
return -EINVAL;
info = devm_kzalloc(&i2c->dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
i2c_set_clientdata(i2c, info);
info->dev = &i2c->dev;
info->i2c = i2c;
info->int_gpiod = devm_gpiod_get(&i2c->dev, "int", GPIOD_IN);
if (IS_ERR(info->int_gpiod)) {
dev_err(dev, "failed to get INT GPIO\n");
return PTR_ERR(info->int_gpiod);
}
info->vbus_gpiod = devm_gpiod_get(&i2c->dev, "vbus", GPIOD_IN);
if (IS_ERR(info->vbus_gpiod)) {
dev_err(dev, "failed to get VBUS GPIO\n");
return PTR_ERR(info->vbus_gpiod);
}
ret = gpiod_direction_output(info->vbus_gpiod, 0);
if (ret) {
dev_err(dev, "failed to set VBUS GPIO direction\n");
return -EINVAL;
}
mutex_init(&info->mutex);
INIT_WORK(&info->irq_work, ptn5150_irq_work);
info->regmap = devm_regmap_init_i2c(i2c, &ptn5150_regmap_config);
if (IS_ERR(info->regmap)) {
ret = PTR_ERR(info->regmap);
dev_err(info->dev, "failed to allocate register map: %d\n",
ret);
return ret;
}
if (info->int_gpiod) {
info->irq = gpiod_to_irq(info->int_gpiod);
if (info->irq < 0) {
dev_err(dev, "failed to get INTB IRQ\n");
return info->irq;
}
ret = devm_request_threaded_irq(dev, info->irq, NULL,
ptn5150_irq_handler,
IRQF_TRIGGER_FALLING |
IRQF_ONESHOT,
i2c->name, info);
if (ret < 0) {
dev_err(dev, "failed to request handler for INTB IRQ\n");
return ret;
}
}
/* Allocate extcon device */
info->edev = devm_extcon_dev_allocate(info->dev, ptn5150_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(info->dev, "failed to allocate memory for extcon\n");
return -ENOMEM;
}
/* Register extcon device */
ret = devm_extcon_dev_register(info->dev, info->edev);
if (ret) {
dev_err(info->dev, "failed to register extcon device\n");
return ret;
}
/* Initialize PTN5150 device and print vendor id and version id */
ret = ptn5150_init_dev_type(info);
if (ret)
return -EINVAL;
return 0;
}
static const struct of_device_id ptn5150_dt_match[] = {
{ .compatible = "nxp,ptn5150" },
{ },
};
MODULE_DEVICE_TABLE(of, ptn5150_dt_match);
static const struct i2c_device_id ptn5150_i2c_id[] = {
{ "ptn5150", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, ptn5150_i2c_id);
static struct i2c_driver ptn5150_i2c_driver = {
.driver = {
.name = "ptn5150",
.of_match_table = ptn5150_dt_match,
},
.probe = ptn5150_i2c_probe,
.id_table = ptn5150_i2c_id,
};
static int __init ptn5150_i2c_init(void)
{
return i2c_add_driver(&ptn5150_i2c_driver);
}
subsys_initcall(ptn5150_i2c_init);
MODULE_DESCRIPTION("NXP PTN5150 CC logic Extcon driver");
MODULE_AUTHOR("Vijai Kumar K <vijaikumar.kanagarajan@gmail.com>");
MODULE_LICENSE("GPL v2");


@ -104,7 +104,7 @@ config SOCFPGA_FPGA_BRIDGE
config ALTERA_FREEZE_BRIDGE config ALTERA_FREEZE_BRIDGE
tristate "Altera FPGA Freeze Bridge" tristate "Altera FPGA Freeze Bridge"
depends on ARCH_SOCFPGA && FPGA_BRIDGE depends on FPGA_BRIDGE && HAS_IOMEM
help help
Say Y to enable drivers for Altera FPGA Freeze bridges. A Say Y to enable drivers for Altera FPGA Freeze bridges. A
freeze bridge is a bridge that exists in the FPGA fabric to freeze bridge is a bridge that exists in the FPGA fabric to


@ -205,7 +205,7 @@ static int altera_ps_write_complete(struct fpga_manager *mgr,
struct fpga_image_info *info) struct fpga_image_info *info)
{ {
struct altera_ps_conf *conf = mgr->priv; struct altera_ps_conf *conf = mgr->priv;
const char dummy[] = {0}; static const char dummy[] = {0};
int ret; int ret;
if (gpiod_get_value_cansleep(conf->status)) { if (gpiod_get_value_cansleep(conf->status)) {


@ -15,6 +15,19 @@ if GNSS
config GNSS_SERIAL config GNSS_SERIAL
tristate tristate
config GNSS_MTK_SERIAL
tristate "Mediatek GNSS receiver support"
depends on SERIAL_DEV_BUS
select GNSS_SERIAL
help
Say Y here if you have a Mediatek-based GNSS receiver which uses a
serial interface.
To compile this driver as a module, choose M here: the module will
be called gnss-mtk.
If unsure, say N.
config GNSS_SIRF_SERIAL config GNSS_SIRF_SERIAL
tristate "SiRFstar GNSS receiver support" tristate "SiRFstar GNSS receiver support"
depends on SERIAL_DEV_BUS depends on SERIAL_DEV_BUS

View File

@ -9,6 +9,9 @@ gnss-y := core.o
obj-$(CONFIG_GNSS_SERIAL) += gnss-serial.o obj-$(CONFIG_GNSS_SERIAL) += gnss-serial.o
gnss-serial-y := serial.o gnss-serial-y := serial.o
obj-$(CONFIG_GNSS_MTK_SERIAL) += gnss-mtk.o
gnss-mtk-y := mtk.o
obj-$(CONFIG_GNSS_SIRF_SERIAL) += gnss-sirf.o obj-$(CONFIG_GNSS_SIRF_SERIAL) += gnss-sirf.o
gnss-sirf-y := sirf.o gnss-sirf-y := sirf.o


@ -334,6 +334,7 @@ static const char * const gnss_type_names[GNSS_TYPE_COUNT] = {
[GNSS_TYPE_NMEA] = "NMEA", [GNSS_TYPE_NMEA] = "NMEA",
[GNSS_TYPE_SIRF] = "SiRF", [GNSS_TYPE_SIRF] = "SiRF",
[GNSS_TYPE_UBX] = "UBX", [GNSS_TYPE_UBX] = "UBX",
[GNSS_TYPE_MTK] = "MTK",
}; };
static const char *gnss_type_name(struct gnss_device *gdev) static const char *gnss_type_name(struct gnss_device *gdev)

drivers/gnss/mtk.c (new file, 152 lines)

@ -0,0 +1,152 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Mediatek GNSS receiver driver
*
* Copyright (C) 2018 Johan Hovold <johan@kernel.org>
*/
#include <linux/errno.h>
#include <linux/gnss.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/regulator/consumer.h>
#include <linux/serdev.h>
#include "serial.h"
struct mtk_data {
struct regulator *vbackup;
struct regulator *vcc;
};
static int mtk_set_active(struct gnss_serial *gserial)
{
struct mtk_data *data = gnss_serial_get_drvdata(gserial);
int ret;
ret = regulator_enable(data->vcc);
if (ret)
return ret;
return 0;
}
static int mtk_set_standby(struct gnss_serial *gserial)
{
struct mtk_data *data = gnss_serial_get_drvdata(gserial);
int ret;
ret = regulator_disable(data->vcc);
if (ret)
return ret;
return 0;
}
static int mtk_set_power(struct gnss_serial *gserial,
enum gnss_serial_pm_state state)
{
switch (state) {
case GNSS_SERIAL_ACTIVE:
return mtk_set_active(gserial);
case GNSS_SERIAL_OFF:
case GNSS_SERIAL_STANDBY:
return mtk_set_standby(gserial);
}
return -EINVAL;
}
static const struct gnss_serial_ops mtk_gserial_ops = {
.set_power = mtk_set_power,
};
static int mtk_probe(struct serdev_device *serdev)
{
struct gnss_serial *gserial;
struct mtk_data *data;
int ret;
gserial = gnss_serial_allocate(serdev, sizeof(*data));
if (IS_ERR(gserial)) {
ret = PTR_ERR(gserial);
return ret;
}
gserial->ops = &mtk_gserial_ops;
gserial->gdev->type = GNSS_TYPE_MTK;
data = gnss_serial_get_drvdata(gserial);
data->vcc = devm_regulator_get(&serdev->dev, "vcc");
if (IS_ERR(data->vcc)) {
ret = PTR_ERR(data->vcc);
goto err_free_gserial;
}
data->vbackup = devm_regulator_get_optional(&serdev->dev, "vbackup");
if (IS_ERR(data->vbackup)) {
ret = PTR_ERR(data->vbackup);
if (ret == -ENODEV)
data->vbackup = NULL;
else
goto err_free_gserial;
}
if (data->vbackup) {
ret = regulator_enable(data->vbackup);
if (ret)
goto err_free_gserial;
}
ret = gnss_serial_register(gserial);
if (ret)
goto err_disable_vbackup;
return 0;
err_disable_vbackup:
if (data->vbackup)
regulator_disable(data->vbackup);
err_free_gserial:
gnss_serial_free(gserial);
return ret;
}
static void mtk_remove(struct serdev_device *serdev)
{
struct gnss_serial *gserial = serdev_device_get_drvdata(serdev);
struct mtk_data *data = gnss_serial_get_drvdata(gserial);
gnss_serial_deregister(gserial);
if (data->vbackup)
regulator_disable(data->vbackup);
gnss_serial_free(gserial);
};
#ifdef CONFIG_OF
static const struct of_device_id mtk_of_match[] = {
{ .compatible = "globaltop,pa6h" },
{},
};
MODULE_DEVICE_TABLE(of, mtk_of_match);
#endif
static struct serdev_device_driver mtk_driver = {
.driver = {
.name = "gnss-mtk",
.of_match_table = of_match_ptr(mtk_of_match),
.pm = &gnss_serial_pm_ops,
},
.probe = mtk_probe,
.remove = mtk_remove,
};
module_serdev_device_driver(mtk_driver);
MODULE_AUTHOR("Loys Ollivier <lollivier@baylibre.com>");
MODULE_DESCRIPTION("Mediatek GNSS receiver driver");
MODULE_LICENSE("GPL v2");


@ -25,31 +25,83 @@
#define SIRF_ON_OFF_PULSE_TIME 100 #define SIRF_ON_OFF_PULSE_TIME 100
#define SIRF_ACTIVATE_TIMEOUT 200 #define SIRF_ACTIVATE_TIMEOUT 200
#define SIRF_HIBERNATE_TIMEOUT 200 #define SIRF_HIBERNATE_TIMEOUT 200
/*
* If no data arrives for this time, we assume that the chip is off.
* REVISIT: The report cycle is configurable and can be several minutes long,
* so this will only work reliably if the report cycle is set to a reasonably
* low value. Also, power-saving settings (like sending data only on movement)
* might make things work even worse.
* A workaround might be to parse shutdown or bootup messages.
*/
#define SIRF_REPORT_CYCLE 2000
struct sirf_data { struct sirf_data {
struct gnss_device *gdev; struct gnss_device *gdev;
struct serdev_device *serdev; struct serdev_device *serdev;
speed_t speed; speed_t speed;
struct regulator *vcc; struct regulator *vcc;
struct regulator *lna;
struct gpio_desc *on_off; struct gpio_desc *on_off;
struct gpio_desc *wakeup; struct gpio_desc *wakeup;
int irq; int irq;
bool active; bool active;
struct mutex gdev_mutex;
bool open;
struct mutex serdev_mutex;
int serdev_count;
wait_queue_head_t power_wait; wait_queue_head_t power_wait;
}; };
static int sirf_serdev_open(struct sirf_data *data)
{
int ret = 0;
mutex_lock(&data->serdev_mutex);
if (++data->serdev_count == 1) {
ret = serdev_device_open(data->serdev);
if (ret) {
data->serdev_count--;
goto out_unlock;
}
serdev_device_set_baudrate(data->serdev, data->speed);
serdev_device_set_flow_control(data->serdev, false);
}
out_unlock:
mutex_unlock(&data->serdev_mutex);
return ret;
}
static void sirf_serdev_close(struct sirf_data *data)
{
mutex_lock(&data->serdev_mutex);
if (--data->serdev_count == 0)
serdev_device_close(data->serdev);
mutex_unlock(&data->serdev_mutex);
}
static int sirf_open(struct gnss_device *gdev) static int sirf_open(struct gnss_device *gdev)
{ {
struct sirf_data *data = gnss_get_drvdata(gdev); struct sirf_data *data = gnss_get_drvdata(gdev);
struct serdev_device *serdev = data->serdev; struct serdev_device *serdev = data->serdev;
int ret; int ret;
ret = serdev_device_open(serdev); mutex_lock(&data->gdev_mutex);
if (ret) data->open = true;
return ret; mutex_unlock(&data->gdev_mutex);
serdev_device_set_baudrate(serdev, data->speed); ret = sirf_serdev_open(data);
serdev_device_set_flow_control(serdev, false); if (ret) {
mutex_lock(&data->gdev_mutex);
data->open = false;
mutex_unlock(&data->gdev_mutex);
return ret;
}
ret = pm_runtime_get_sync(&serdev->dev); ret = pm_runtime_get_sync(&serdev->dev);
if (ret < 0) { if (ret < 0) {
@ -61,7 +113,11 @@ static int sirf_open(struct gnss_device *gdev)
return 0; return 0;
err_close: err_close:
serdev_device_close(serdev); sirf_serdev_close(data);
mutex_lock(&data->gdev_mutex);
data->open = false;
mutex_unlock(&data->gdev_mutex);
return ret; return ret;
} }
@ -71,9 +127,13 @@ static void sirf_close(struct gnss_device *gdev)
struct sirf_data *data = gnss_get_drvdata(gdev); struct sirf_data *data = gnss_get_drvdata(gdev);
struct serdev_device *serdev = data->serdev; struct serdev_device *serdev = data->serdev;
serdev_device_close(serdev); sirf_serdev_close(data);
pm_runtime_put(&serdev->dev); pm_runtime_put(&serdev->dev);
mutex_lock(&data->gdev_mutex);
data->open = false;
mutex_unlock(&data->gdev_mutex);
} }
static int sirf_write_raw(struct gnss_device *gdev, const unsigned char *buf, static int sirf_write_raw(struct gnss_device *gdev, const unsigned char *buf,
@ -105,8 +165,19 @@ static int sirf_receive_buf(struct serdev_device *serdev,
{ {
struct sirf_data *data = serdev_device_get_drvdata(serdev); struct sirf_data *data = serdev_device_get_drvdata(serdev);
struct gnss_device *gdev = data->gdev; struct gnss_device *gdev = data->gdev;
int ret = 0;
return gnss_insert_raw(gdev, buf, count); if (!data->wakeup && !data->active) {
data->active = true;
wake_up_interruptible(&data->power_wait);
}
mutex_lock(&data->gdev_mutex);
if (data->open)
ret = gnss_insert_raw(gdev, buf, count);
mutex_unlock(&data->gdev_mutex);
return ret;
} }
static const struct serdev_device_ops sirf_serdev_ops = { static const struct serdev_device_ops sirf_serdev_ops = {
@ -125,17 +196,45 @@ static irqreturn_t sirf_wakeup_handler(int irq, void *dev_id)
if (ret < 0) if (ret < 0)
goto out; goto out;
data->active = !!ret; data->active = ret;
wake_up_interruptible(&data->power_wait); wake_up_interruptible(&data->power_wait);
out: out:
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static int sirf_wait_for_power_state_nowakeup(struct sirf_data *data,
bool active,
unsigned long timeout)
{
int ret;
/* Wait for state change (including any shutdown messages). */
msleep(timeout);
/* Wait for data reception or timeout. */
data->active = false;
ret = wait_event_interruptible_timeout(data->power_wait,
data->active, msecs_to_jiffies(SIRF_REPORT_CYCLE));
if (ret < 0)
return ret;
if (ret > 0 && !active)
return -ETIMEDOUT;
if (ret == 0 && active)
return -ETIMEDOUT;
return 0;
}
static int sirf_wait_for_power_state(struct sirf_data *data, bool active, static int sirf_wait_for_power_state(struct sirf_data *data, bool active,
unsigned long timeout) unsigned long timeout)
{ {
int ret; int ret;
if (!data->wakeup)
return sirf_wait_for_power_state_nowakeup(data, active, timeout);
ret = wait_event_interruptible_timeout(data->power_wait, ret = wait_event_interruptible_timeout(data->power_wait,
data->active == active, msecs_to_jiffies(timeout)); data->active == active, msecs_to_jiffies(timeout));
if (ret < 0) if (ret < 0)
@ -168,21 +267,22 @@ static int sirf_set_active(struct sirf_data *data, bool active)
else else
timeout = SIRF_HIBERNATE_TIMEOUT; timeout = SIRF_HIBERNATE_TIMEOUT;
do { if (!data->wakeup) {
sirf_pulse_on_off(data); ret = sirf_serdev_open(data);
ret = sirf_wait_for_power_state(data, active, timeout); if (ret)
if (ret < 0) {
if (ret == -ETIMEDOUT)
continue;
return ret; return ret;
} }
break; do {
} while (retries--); sirf_pulse_on_off(data);
ret = sirf_wait_for_power_state(data, active, timeout);
} while (ret == -ETIMEDOUT && retries--);
if (retries < 0) if (!data->wakeup)
return -ETIMEDOUT; sirf_serdev_close(data);
if (ret)
return ret;
return 0; return 0;
} }
@ -190,21 +290,60 @@ static int sirf_set_active(struct sirf_data *data, bool active)
static int sirf_runtime_suspend(struct device *dev) static int sirf_runtime_suspend(struct device *dev)
{ {
struct sirf_data *data = dev_get_drvdata(dev); struct sirf_data *data = dev_get_drvdata(dev);
int ret2;
int ret;
if (!data->on_off) if (data->on_off)
return regulator_disable(data->vcc); ret = sirf_set_active(data, false);
else
ret = regulator_disable(data->vcc);
return sirf_set_active(data, false); if (ret)
return ret;
ret = regulator_disable(data->lna);
if (ret)
goto err_reenable;
return 0;
err_reenable:
if (data->on_off)
ret2 = sirf_set_active(data, true);
else
ret2 = regulator_enable(data->vcc);
if (ret2)
dev_err(dev,
"failed to reenable power on failed suspend: %d\n",
ret2);
return ret;
} }
static int sirf_runtime_resume(struct device *dev) static int sirf_runtime_resume(struct device *dev)
{ {
struct sirf_data *data = dev_get_drvdata(dev); struct sirf_data *data = dev_get_drvdata(dev);
int ret;
if (!data->on_off) ret = regulator_enable(data->lna);
return regulator_enable(data->vcc); if (ret)
return ret;
return sirf_set_active(data, true); if (data->on_off)
ret = sirf_set_active(data, true);
else
ret = regulator_enable(data->vcc);
if (ret)
goto err_disable_lna;
return 0;
err_disable_lna:
regulator_disable(data->lna);
return ret;
} }
static int __maybe_unused sirf_suspend(struct device *dev) static int __maybe_unused sirf_suspend(struct device *dev)
@ -275,6 +414,8 @@ static int sirf_probe(struct serdev_device *serdev)
data->serdev = serdev; data->serdev = serdev;
data->gdev = gdev; data->gdev = gdev;
mutex_init(&data->gdev_mutex);
mutex_init(&data->serdev_mutex);
init_waitqueue_head(&data->power_wait); init_waitqueue_head(&data->power_wait);
serdev_device_set_drvdata(serdev, data); serdev_device_set_drvdata(serdev, data);
@ -290,6 +431,12 @@ static int sirf_probe(struct serdev_device *serdev)
goto err_put_device; goto err_put_device;
} }
data->lna = devm_regulator_get(dev, "lna");
if (IS_ERR(data->lna)) {
ret = PTR_ERR(data->lna);
goto err_put_device;
}
data->on_off = devm_gpiod_get_optional(dev, "sirf,onoff", data->on_off = devm_gpiod_get_optional(dev, "sirf,onoff",
GPIOD_OUT_LOW); GPIOD_OUT_LOW);
if (IS_ERR(data->on_off)) if (IS_ERR(data->on_off))
@ -301,48 +448,62 @@ static int sirf_probe(struct serdev_device *serdev)
if (IS_ERR(data->wakeup))
goto err_put_device;
- /*
- * Configurations where WAKEUP has been left not connected,
- * are currently not supported.
- */
- if (!data->wakeup) {
- dev_err(dev, "no wakeup gpio specified\n");
- ret = -ENODEV;
- goto err_put_device;
- }
- }
- if (data->wakeup) {
- ret = gpiod_to_irq(data->wakeup);
- if (ret < 0)
- goto err_put_device;
- data->irq = ret;
- ret = devm_request_threaded_irq(dev, data->irq, NULL,
- sirf_wakeup_handler,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- "wakeup", data);
- if (ret)
- goto err_put_device;
- }
- if (data->on_off) {
ret = regulator_enable(data->vcc);
if (ret)
goto err_put_device;
- /* Wait for chip to boot into hibernate mode */
+ /* Wait for chip to boot into hibernate mode. */
msleep(SIRF_BOOT_DELAY);
}
+ if (data->wakeup) {
+ ret = gpiod_get_value_cansleep(data->wakeup);
+ if (ret < 0)
+ goto err_disable_vcc;
+ data->active = ret;
+ ret = gpiod_to_irq(data->wakeup);
+ if (ret < 0)
+ goto err_disable_vcc;
+ data->irq = ret;
+ ret = request_threaded_irq(data->irq, NULL, sirf_wakeup_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "wakeup", data);
+ if (ret)
+ goto err_disable_vcc;
+ }
+ if (data->on_off) {
+ if (!data->wakeup) {
+ data->active = false;
+ ret = sirf_serdev_open(data);
+ if (ret)
+ goto err_disable_vcc;
+ msleep(SIRF_REPORT_CYCLE);
+ sirf_serdev_close(data);
+ }
+ /* Force hibernate mode if already active. */
+ if (data->active) {
+ ret = sirf_set_active(data, false);
+ if (ret) {
+ dev_err(dev, "failed to set hibernate mode: %d\n",
+ ret);
+ goto err_free_irq;
+ }
+ }
+ }
if (IS_ENABLED(CONFIG_PM)) {
pm_runtime_set_suspended(dev); /* clear runtime_error flag */
pm_runtime_enable(dev);
} else {
ret = sirf_runtime_resume(dev);
if (ret < 0)
- goto err_disable_vcc;
+ goto err_free_irq;
}
ret = gnss_register_device(gdev);
@ -356,6 +517,9 @@ err_disable_rpm:
pm_runtime_disable(dev);
else
sirf_runtime_suspend(dev);
+ err_free_irq:
+ if (data->wakeup)
+ free_irq(data->irq, data);
err_disable_vcc:
if (data->on_off)
regulator_disable(data->vcc);
@ -376,6 +540,9 @@ static void sirf_remove(struct serdev_device *serdev)
else
sirf_runtime_suspend(&serdev->dev);
+ if (data->wakeup)
+ free_irq(data->irq, data);
if (data->on_off)
regulator_disable(data->vcc);
@ -386,6 +553,7 @@ static void sirf_remove(struct serdev_device *serdev)
static const struct of_device_id sirf_of_match[] = {
{ .compatible = "fastrax,uc430" },
{ .compatible = "linx,r4" },
+ { .compatible = "wi2wi,w2sg0004" },
{ .compatible = "wi2wi,w2sg0008i" },
{ .compatible = "wi2wi,w2sg0084i" },
{},
@ -984,7 +984,9 @@ void i915_audio_component_init(struct drm_i915_private *dev_priv)
{
int ret;
- ret = component_add(dev_priv->drm.dev, &i915_audio_component_bind_ops);
+ ret = component_add_typed(dev_priv->drm.dev,
+ &i915_audio_component_bind_ops,
+ I915_COMPONENT_AUDIO);
if (ret < 0) {
DRM_ERROR("failed to add audio component (%d)\n", ret);
/* continue with reduced functionality */
@ -26,6 +26,7 @@
#define _INTEL_DISPLAY_H_
#include <drm/drm_util.h>
+ #include <drm/i915_drm.h>
enum i915_gpio {
GPIOA,
@ -150,21 +151,6 @@ enum plane_id {
for ((__p) = PLANE_PRIMARY; (__p) < I915_MAX_PLANES; (__p)++) \
for_each_if((__crtc)->plane_ids_mask & BIT(__p))
- enum port {
- PORT_NONE = -1,
- PORT_A = 0,
- PORT_B,
- PORT_C,
- PORT_D,
- PORT_E,
- PORT_F,
- I915_MAX_PORTS
- };
- #define port_name(p) ((p) + 'A')
/*
* Ports identifier referenced from other drivers.
* Expected to remain stable over time
@ -5,6 +5,7 @@ config DRM_MSM
depends on ARCH_QCOM || SOC_IMX5 || (ARM && COMPILE_TEST)
depends on OF && COMMON_CLK
depends on MMU
+ depends on INTERCONNECT || !INTERCONNECT
select QCOM_MDT_LOADER if ARCH_QCOM
select REGULATOR
select DRM_KMS_HELPER
@ -2,6 +2,7 @@
/* Copyright (c) 2017-2018 The Linux Foundation. All rights reserved. */
#include <linux/clk.h>
+ #include <linux/interconnect.h>
#include <linux/pm_opp.h>
#include <soc/qcom/cmd-db.h>
@ -84,6 +85,9 @@ bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
{
+ struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ struct msm_gpu *gpu = &adreno_gpu->base;
int ret;
gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
@ -106,6 +110,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
gmu->freq = gmu->gpu_freqs[index];
+ /*
+ * Eventually we will want to scale the path vote with the frequency but
+ * for now leave it at max so that the performance is nominal.
+ */
+ icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));
}
void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
@ -705,6 +715,8 @@ out:
int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
{
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ struct msm_gpu *gpu = &adreno_gpu->base;
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
int status, ret;
@ -720,6 +732,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
if (ret)
goto out;
+ /* Set the bus quota to a reasonable value for boot */
+ icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));
a6xx_gmu_irq_enable(gmu);
/* Check to see if we are doing a cold or warm boot */
@ -760,6 +775,8 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
{
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ struct msm_gpu *gpu = &adreno_gpu->base;
struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
u32 val;
@ -806,6 +823,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
/* Tell RPMh to power off the GPU */
a6xx_rpmh_stop(gmu);
+ /* Remove the bus vote */
+ icc_set_bw(gpu->icc_path, 0, 0);
clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks);
pm_runtime_put_sync(gmu->dev);
@ -18,6 +18,7 @@
*/
#include <linux/ascii85.h>
+ #include <linux/interconnect.h>
#include <linux/kernel.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
@ -747,6 +748,11 @@ static int adreno_get_pwrlevels(struct device *dev,
DBG("fast_rate=%u, slow_rate=27000000", gpu->fast_rate);
+ /* Check for an interconnect path for the bus */
+ gpu->icc_path = of_icc_get(dev, NULL);
+ if (IS_ERR(gpu->icc_path))
+ gpu->icc_path = NULL;
return 0;
}
@ -787,10 +793,13 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
{
+ struct msm_gpu *gpu = &adreno_gpu->base;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
release_firmware(adreno_gpu->fw[i]);
+ icc_put(gpu->icc_path);
msm_gpu_cleanup(&adreno_gpu->base);
}
@ -19,6 +19,7 @@
#define __MSM_GPU_H__
#include <linux/clk.h>
+ #include <linux/interconnect.h>
#include <linux/regulator/consumer.h>
#include "msm_drv.h"
@ -118,6 +119,8 @@ struct msm_gpu {
struct clk *ebi1_clk, *core_clk, *rbbmtimer_clk;
uint32_t fast_rate;
+ struct icc_path *icc_path;
/* Hang and Inactivity Detection:
*/
#define DRM_MSM_INACTIVE_PERIOD 66 /* in ms (roughly four frames) */
@ -282,8 +282,8 @@ int vmbus_open(struct vmbus_channel *newchannel,
EXPORT_SYMBOL_GPL(vmbus_open);
/* Used for Hyper-V Socket: a guest client's connect() to the host */
- int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
- const uuid_le *shv_host_servie_id)
+ int vmbus_send_tl_connect_request(const guid_t *shv_guest_servie_id,
+ const guid_t *shv_host_servie_id)
{
struct vmbus_channel_tl_connect_request conn_msg;
int ret;
@ -141,7 +141,7 @@ static const struct vmbus_device vmbus_devs[] = {
};
static const struct {
- uuid_le guid;
+ guid_t guid;
} vmbus_unsupported_devs[] = {
{ HV_AVMA1_GUID },
{ HV_AVMA2_GUID },
@ -171,26 +171,26 @@ static void vmbus_rescind_cleanup(struct vmbus_channel *channel)
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
}
- static bool is_unsupported_vmbus_devs(const uuid_le *guid)
+ static bool is_unsupported_vmbus_devs(const guid_t *guid)
{
int i;
for (i = 0; i < ARRAY_SIZE(vmbus_unsupported_devs); i++)
- if (!uuid_le_cmp(*guid, vmbus_unsupported_devs[i].guid))
+ if (guid_equal(guid, &vmbus_unsupported_devs[i].guid))
return true;
return false;
}
static u16 hv_get_dev_type(const struct vmbus_channel *channel)
{
- const uuid_le *guid = &channel->offermsg.offer.if_type;
+ const guid_t *guid = &channel->offermsg.offer.if_type;
u16 i;
if (is_hvsock_channel(channel) || is_unsupported_vmbus_devs(guid))
return HV_UNKNOWN;
for (i = HV_IDE; i < HV_UNKNOWN; i++) {
- if (!uuid_le_cmp(*guid, vmbus_devs[i].guid))
+ if (guid_equal(guid, &vmbus_devs[i].guid))
return i;
}
pr_info("Unknown GUID: %pUl\n", guid);
@ -561,10 +561,10 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
atomic_dec(&vmbus_connection.offer_in_progress);
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
- if (!uuid_le_cmp(channel->offermsg.offer.if_type,
- newchannel->offermsg.offer.if_type) &&
- !uuid_le_cmp(channel->offermsg.offer.if_instance,
- newchannel->offermsg.offer.if_instance)) {
+ if (guid_equal(&channel->offermsg.offer.if_type,
+ &newchannel->offermsg.offer.if_type) &&
+ guid_equal(&channel->offermsg.offer.if_instance,
+ &newchannel->offermsg.offer.if_instance)) {
fnew = false;
break;
}
@ -312,8 +312,8 @@ extern const struct vmbus_channel_message_table_entry
/* General vmbus interface */
- struct hv_device *vmbus_device_create(const uuid_le *type,
- const uuid_le *instance,
+ struct hv_device *vmbus_device_create(const guid_t *type,
+ const guid_t *instance,
struct vmbus_channel *channel);
int vmbus_device_register(struct hv_device *child_device_obj);
@ -74,8 +74,10 @@ static void hv_signal_on_write(u32 old_write, struct vmbus_channel *channel)
* This is the only case we need to signal when the
* ring transitions from being empty to non-empty.
*/
- if (old_write == READ_ONCE(rbi->ring_buffer->read_index))
+ if (old_write == READ_ONCE(rbi->ring_buffer->read_index)) {
+ ++channel->intr_out_empty;
vmbus_setevent(channel);
+ }
}
/* Get the next write location for the specified ring buffer. */
@ -272,10 +274,19 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
* is empty since the read index == write index.
*/
if (bytes_avail_towrite <= totalbytes_towrite) {
+ ++channel->out_full_total;
+ if (!channel->out_full_flag) {
+ ++channel->out_full_first;
+ channel->out_full_flag = true;
+ }
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
return -EAGAIN;
}
+ channel->out_full_flag = false;
/* Write to the ring buffer */
next_write_location = hv_get_next_write_location(outring_info);
@ -530,6 +541,7 @@ void hv_pkt_iter_close(struct vmbus_channel *channel)
if (curr_write_sz <= pending_sz)
return;
+ ++channel->intr_in_full;
vmbus_setevent(channel);
}
EXPORT_SYMBOL_GPL(hv_pkt_iter_close);
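The counters added above distinguish the first failure of a "ring full" streak from the total number of failed writes. A minimal, self-contained sketch of that accounting pattern (this is not the real struct vmbus_channel, which carries many more fields; names are illustrative only):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the counters added to struct vmbus_channel. */
struct ring_out_stats {
	bool out_full_flag;       /* currently inside a "ring full" streak? */
	uint64_t out_full_first;  /* number of distinct full streaks */
	uint64_t out_full_total;  /* total number of writes that found the ring full */
};

/* Called on every failed write attempt (ring full). */
static void account_out_full(struct ring_out_stats *s)
{
	++s->out_full_total;
	if (!s->out_full_flag) {
		++s->out_full_first;  /* first failure of this streak */
		s->out_full_flag = true;
	}
}

/* Called once a write succeeds again, ending the streak. */
static void account_out_ok(struct ring_out_stats *s)
{
	s->out_full_flag = false;
}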

@ -234,7 +234,7 @@ static ssize_t server_monitor_pending_show(struct device *dev,
return -ENODEV;
return sprintf(buf, "%d\n",
channel_pending(hv_dev->channel,
- vmbus_connection.monitor_pages[1]));
+ vmbus_connection.monitor_pages[0]));
}
static DEVICE_ATTR_RO(server_monitor_pending);
@ -654,38 +654,28 @@ static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
return ret;
}
- static const uuid_le null_guid;
- static inline bool is_null_guid(const uuid_le *guid)
- {
- if (uuid_le_cmp(*guid, null_guid))
- return false;
- return true;
- }
static const struct hv_vmbus_device_id *
- hv_vmbus_dev_match(const struct hv_vmbus_device_id *id, const uuid_le *guid)
+ hv_vmbus_dev_match(const struct hv_vmbus_device_id *id, const guid_t *guid)
{
if (id == NULL)
return NULL; /* empty device table */
- for (; !is_null_guid(&id->guid); id++)
- if (!uuid_le_cmp(id->guid, *guid))
+ for (; !guid_is_null(&id->guid); id++)
+ if (guid_equal(&id->guid, guid))
return id;
return NULL;
}
static const struct hv_vmbus_device_id *
- hv_vmbus_dynid_match(struct hv_driver *drv, const uuid_le *guid)
+ hv_vmbus_dynid_match(struct hv_driver *drv, const guid_t *guid)
{
const struct hv_vmbus_device_id *id = NULL;
struct vmbus_dynid *dynid;
spin_lock(&drv->dynids.lock);
list_for_each_entry(dynid, &drv->dynids.list, node) {
- if (!uuid_le_cmp(dynid->id.guid, *guid)) {
+ if (guid_equal(&dynid->id.guid, guid)) {
id = &dynid->id;
break;
}
@ -695,9 +685,7 @@ hv_vmbus_dynid_match(struct hv_driver *drv, const uuid_le *guid)
return id;
}
- static const struct hv_vmbus_device_id vmbus_device_null = {
- .guid = NULL_UUID_LE,
- };
+ static const struct hv_vmbus_device_id vmbus_device_null;
/*
* Return a matching hv_vmbus_device_id pointer.
@ -706,7 +694,7 @@ static const struct hv_vmbus_device_id vmbus_device_null = {
static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
struct hv_device *dev)
{
- const uuid_le *guid = &dev->dev_type;
+ const guid_t *guid = &dev->dev_type;
const struct hv_vmbus_device_id *id;
/* When driver_override is set, only bind to the matching driver */
@ -726,7 +714,7 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
}
/* vmbus_add_dynid - add a new device ID to this driver and re-probe devices */
- static int vmbus_add_dynid(struct hv_driver *drv, uuid_le *guid)
+ static int vmbus_add_dynid(struct hv_driver *drv, guid_t *guid)
{
struct vmbus_dynid *dynid;
@ -764,10 +752,10 @@ static ssize_t new_id_store(struct device_driver *driver, const char *buf,
size_t count)
{
struct hv_driver *drv = drv_to_hv_drv(driver);
- uuid_le guid;
+ guid_t guid;
ssize_t retval;
- retval = uuid_le_to_bin(buf, &guid);
+ retval = guid_parse(buf, &guid);
if (retval)
return retval;
@ -791,10 +779,10 @@ static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
{
struct hv_driver *drv = drv_to_hv_drv(driver);
struct vmbus_dynid *dynid, *n;
- uuid_le guid;
+ guid_t guid;
ssize_t retval;
- retval = uuid_le_to_bin(buf, &guid);
+ retval = guid_parse(buf, &guid);
if (retval)
return retval;
@ -803,7 +791,7 @@ static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
struct hv_vmbus_device_id *id = &dynid->id;
- if (!uuid_le_cmp(id->guid, guid)) {
+ if (guid_equal(&id->guid, &guid)) {
list_del(&dynid->node);
kfree(dynid);
retval = count;
@ -1496,6 +1484,38 @@ static ssize_t channel_events_show(const struct vmbus_channel *channel, char *bu
}
static VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL);
+ static ssize_t channel_intr_in_full_show(const struct vmbus_channel *channel,
+ char *buf)
+ {
+ return sprintf(buf, "%llu\n",
+ (unsigned long long)channel->intr_in_full);
+ }
+ static VMBUS_CHAN_ATTR(intr_in_full, 0444, channel_intr_in_full_show, NULL);
+ static ssize_t channel_intr_out_empty_show(const struct vmbus_channel *channel,
+ char *buf)
+ {
+ return sprintf(buf, "%llu\n",
+ (unsigned long long)channel->intr_out_empty);
+ }
+ static VMBUS_CHAN_ATTR(intr_out_empty, 0444, channel_intr_out_empty_show, NULL);
+ static ssize_t channel_out_full_first_show(const struct vmbus_channel *channel,
+ char *buf)
+ {
+ return sprintf(buf, "%llu\n",
+ (unsigned long long)channel->out_full_first);
+ }
+ static VMBUS_CHAN_ATTR(out_full_first, 0444, channel_out_full_first_show, NULL);
+ static ssize_t channel_out_full_total_show(const struct vmbus_channel *channel,
+ char *buf)
+ {
+ return sprintf(buf, "%llu\n",
+ (unsigned long long)channel->out_full_total);
+ }
+ static VMBUS_CHAN_ATTR(out_full_total, 0444, channel_out_full_total_show, NULL);
static ssize_t subchannel_monitor_id_show(const struct vmbus_channel *channel,
char *buf)
{
@ -1521,6 +1541,10 @@ static struct attribute *vmbus_chan_attrs[] = {
&chan_attr_latency.attr,
&chan_attr_interrupts.attr,
&chan_attr_events.attr,
+ &chan_attr_intr_in_full.attr,
+ &chan_attr_intr_out_empty.attr,
+ &chan_attr_out_full_first.attr,
+ &chan_attr_out_full_total.attr,
&chan_attr_monitor_id.attr,
&chan_attr_subchannel_id.attr,
NULL
@ -1556,8 +1580,8 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
* vmbus_device_create - Creates and registers a new child device
* on the vmbus.
*/
- struct hv_device *vmbus_device_create(const uuid_le *type,
- const uuid_le *instance,
+ struct hv_device *vmbus_device_create(const guid_t *type,
+ const guid_t *instance,
struct vmbus_channel *channel)
{
struct hv_device *child_device_obj;
@ -1569,12 +1593,10 @@ struct hv_device *vmbus_device_create(const uuid_le *type,
}
child_device_obj->channel = channel;
- memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
- memcpy(&child_device_obj->dev_instance, instance,
- sizeof(uuid_le));
+ guid_copy(&child_device_obj->dev_type, type);
+ guid_copy(&child_device_obj->dev_instance, instance);
child_device_obj->vendor_id = 0x1414; /* MSFT vendor ID */
return child_device_obj;
}
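The uuid_le to guid_t conversion above follows a consistent mapping onto helpers from <linux/uuid.h>. A small hedged sketch of that mapping (the function name and the "known" argument are made up for illustration; only helpers that appear in the hunks above are used):

#include <linux/uuid.h>

/* Illustrative only: shows the helper-for-helper mapping used in the hunks above. */
static bool example_guid_lookup(const char *str, const guid_t *known)
{
	guid_t guid;

	/* uuid_le_to_bin(buf, &guid)  ->  guid_parse(buf, &guid) */
	if (guid_parse(str, &guid))
		return false;

	/* open-coded compare against a null uuid_le  ->  guid_is_null() */
	if (guid_is_null(&guid))
		return false;

	/* !uuid_le_cmp(a, b)  ->  guid_equal(&a, &b) */
	return guid_equal(&guid, known);
}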

@ -668,6 +668,10 @@ static const struct amba_id debug_ids[] = {
.id = 0x000bbd08,
.mask = 0x000fffff,
},
+ { /* Debug for Cortex-A73 */
+ .id = 0x000bbd09,
+ .mask = 0x000fffff,
+ },
{ 0, 0 },
};
@ -55,7 +55,8 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
static bool etm4_arch_supported(u8 arch)
{
- switch (arch) {
+ /* Mask out the minor version number */
+ switch (arch & 0xf0) {
case ETM_ARCH_V4:
break;
default:
@ -793,7 +793,7 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
struct stm_drvdata *drvdata;
struct resource *res = &adev->res;
struct resource ch_res;
- size_t res_size, bitmap_size;
+ size_t bitmap_size;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
@ -833,15 +833,11 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
drvdata->write_bytes = stm_fundamental_data_size(drvdata);
- if (boot_nr_channel) {
+ if (boot_nr_channel)
drvdata->numsp = boot_nr_channel;
- res_size = min((resource_size_t)(boot_nr_channel *
- BYTES_PER_CHANNEL), resource_size(res));
- } else {
+ else
drvdata->numsp = stm_num_stimulus_port(drvdata);
- res_size = min((resource_size_t)(drvdata->numsp *
- BYTES_PER_CHANNEL), resource_size(res));
- }
bitmap_size = BITS_TO_LONGS(drvdata->numsp) * sizeof(long);
guaranteed = devm_kzalloc(dev, bitmap_size, GFP_KERNEL);
@ -80,8 +80,8 @@ static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
* Skip one-level up to the real device node, if we
* are using the new bindings.
*/
- if (!of_node_cmp(parent->name, "in-ports") ||
- !of_node_cmp(parent->name, "out-ports"))
+ if (of_node_name_eq(parent, "in-ports") ||
+ of_node_name_eq(parent, "out-ports"))
parent = of_get_next_parent(parent);
return parent;
@ -422,6 +422,7 @@ static const struct intel_th_subdevice {
unsigned nres;
unsigned type;
unsigned otype;
+ bool mknode;
unsigned scrpd;
int id;
} intel_th_subdevices[] = {
@ -456,6 +457,7 @@ static const struct intel_th_subdevice {
.name = "msc",
.id = 0,
.type = INTEL_TH_OUTPUT,
+ .mknode = true,
.otype = GTH_MSU,
.scrpd = SCRPD_MEM_IS_PRIM_DEST | SCRPD_MSC0_IS_ENABLED,
},
@ -476,6 +478,7 @@ static const struct intel_th_subdevice {
.name = "msc",
.id = 1,
.type = INTEL_TH_OUTPUT,
+ .mknode = true,
.otype = GTH_MSU,
.scrpd = SCRPD_MEM_IS_PRIM_DEST | SCRPD_MSC1_IS_ENABLED,
},
@ -635,6 +638,7 @@ intel_th_subdevice_alloc(struct intel_th *th,
}
if (subdev->type == INTEL_TH_OUTPUT) {
+ if (subdev->mknode)
thdev->dev.devt = MKDEV(th->major, th->num_thdevs);
thdev->output.type = subdev->otype;
thdev->output.port = -1;
@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
{
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
int port = othdev->output.port;
+ int master;
if (thdev->host_mode)
return;
@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
othdev->output.port = -1;
othdev->output.active = false;
gth->output[port].output = NULL;
+ for (master = 0; master <= TH_CONFIGURABLE_MASTERS; master++)
+ if (gth->master[master] == port)
+ gth->master[master] = -1;
spin_unlock(&gth->gth_lock);
}
@ -272,19 +272,17 @@ static ssize_t lpp_dest_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t size)
{
struct pti_device *pti = dev_get_drvdata(dev);
- ssize_t ret = -EINVAL;
int i;
- for (i = 0; i < ARRAY_SIZE(lpp_dest_str); i++)
- if (sysfs_streq(buf, lpp_dest_str[i]))
- break;
+ i = sysfs_match_string(lpp_dest_str, buf);
+ if (i < 0)
+ return i;
- if (i < ARRAY_SIZE(lpp_dest_str) && pti->lpp_dest_mask & BIT(i)) {
- pti->lpp_dest = i;
- ret = size;
- }
+ if (!(pti->lpp_dest_mask & BIT(i)))
+ return -EINVAL;
- return ret;
+ pti->lpp_dest = i;
+ return size;
}
static DEVICE_ATTR_RW(lpp_dest);
@ -84,8 +84,12 @@ static ssize_t notrace sth_stm_packet(struct stm_data *stm_data,
/* Global packets (GERR, XSYNC, TRIG) are sent with register writes */
case STP_PACKET_GERR:
reg += 4;
+ /* fall through */
case STP_PACKET_XSYNC:
reg += 8;
+ /* fall through */
case STP_PACKET_TRIG:
if (flags & STP_PACKET_TIMESTAMPED)
reg += 4;
@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start,
;
if (i == width)
return pos;
+ /* step over [pos..pos+i) to continue search */
+ pos += i;
}
return -1;
@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
struct stm_device *stm = stmf->stm;
struct stp_policy_id *id;
char *ids[] = { NULL, NULL };
- int ret = -EINVAL;
+ int ret = -EINVAL, wlimit = 1;
u32 size;
if (stmf->output.nr_chans)
@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
if (id->__reserved_0 || id->__reserved_1)
goto err_free;
- if (id->width < 1 ||
- id->width > PAGE_SIZE / stm->data->sw_mmiosz)
+ if (stm->data->sw_mmiosz)
+ wlimit = PAGE_SIZE / stm->data->sw_mmiosz;
+ if (id->width < 1 || id->width > wlimit)
goto err_free;
ids[0] = id->id;
@ -0,0 +1,15 @@
menuconfig INTERCONNECT
tristate "On-Chip Interconnect management support"
help
Support for management of the on-chip interconnects.
This framework is designed to provide a generic interface for
managing the interconnects in a SoC.
If unsure, say no.
if INTERCONNECT
source "drivers/interconnect/qcom/Kconfig"
endif
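The help text above describes the framework in general terms. A minimal consumer sketch, based only on the API added by this series (the device, path name and bandwidth numbers are made up for illustration):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* Request a fixed bandwidth on a DT-described path, then release it. */
static int example_icc_consumer(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Looks up the "dma-mem" entry of the "interconnects" DT property. */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);
	if (!path)	/* API disabled or property missing: nothing to do */
		return 0;

	/* Values are in kBps: 100 MB/s average, 200 MB/s peak. */
	ret = icc_set_bw(path, 100000, 200000);
	if (ret)
		dev_err(dev, "failed to set interconnect bandwidth: %d\n", ret);

	icc_put(path);	/* drops the request and re-aggregates the path */
	return ret;
}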

@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
icc-core-objs := core.o
obj-$(CONFIG_INTERCONNECT) += icc-core.o
obj-$(CONFIG_INTERCONNECT_QCOM) += qcom/

drivers/interconnect/core.c (new file, 799 lines)
@ -0,0 +1,799 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Interconnect framework core driver
*
* Copyright (c) 2017-2019, Linaro Ltd.
* Author: Georgi Djakov <georgi.djakov@linaro.org>
*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/init.h>
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/overflow.h>
static DEFINE_IDR(icc_idr);
static LIST_HEAD(icc_providers);
static DEFINE_MUTEX(icc_lock);
static struct dentry *icc_debugfs_dir;
/**
* struct icc_req - constraints that are attached to each node
* @req_node: entry in list of requests for the particular @node
* @node: the interconnect node to which this constraint applies
* @dev: reference to the device that sets the constraints
* @avg_bw: an integer describing the average bandwidth in kBps
* @peak_bw: an integer describing the peak bandwidth in kBps
*/
struct icc_req {
struct hlist_node req_node;
struct icc_node *node;
struct device *dev;
u32 avg_bw;
u32 peak_bw;
};
/**
* struct icc_path - interconnect path structure
* @num_nodes: number of hops (nodes)
* @reqs: array of the requests applicable to this path of nodes
*/
struct icc_path {
size_t num_nodes;
struct icc_req reqs[];
};
static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
{
if (!n)
return;
seq_printf(s, "%-30s %12u %12u\n",
n->name, n->avg_bw, n->peak_bw);
}
static int icc_summary_show(struct seq_file *s, void *data)
{
struct icc_provider *provider;
seq_puts(s, " node avg peak\n");
seq_puts(s, "--------------------------------------------------------\n");
mutex_lock(&icc_lock);
list_for_each_entry(provider, &icc_providers, provider_list) {
struct icc_node *n;
list_for_each_entry(n, &provider->nodes, node_list) {
struct icc_req *r;
icc_summary_show_one(s, n);
hlist_for_each_entry(r, &n->req_list, req_node) {
if (!r->dev)
continue;
seq_printf(s, " %-26s %12u %12u\n",
dev_name(r->dev), r->avg_bw,
r->peak_bw);
}
}
}
mutex_unlock(&icc_lock);
return 0;
}
static int icc_summary_open(struct inode *inode, struct file *file)
{
return single_open(file, icc_summary_show, inode->i_private);
}
static const struct file_operations icc_summary_fops = {
.open = icc_summary_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static struct icc_node *node_find(const int id)
{
return idr_find(&icc_idr, id);
}
static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
ssize_t num_nodes)
{
struct icc_node *node = dst;
struct icc_path *path;
int i;
path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
if (!path)
return ERR_PTR(-ENOMEM);
path->num_nodes = num_nodes;
for (i = num_nodes - 1; i >= 0; i--) {
node->provider->users++;
hlist_add_head(&path->reqs[i].req_node, &node->req_list);
path->reqs[i].node = node;
path->reqs[i].dev = dev;
/* reference to previous node was saved during path traversal */
node = node->reverse;
}
return path;
}
static struct icc_path *path_find(struct device *dev, struct icc_node *src,
struct icc_node *dst)
{
struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
struct icc_node *n, *node = NULL;
struct list_head traverse_list;
struct list_head edge_list;
struct list_head visited_list;
size_t i, depth = 1;
bool found = false;
INIT_LIST_HEAD(&traverse_list);
INIT_LIST_HEAD(&edge_list);
INIT_LIST_HEAD(&visited_list);
list_add(&src->search_list, &traverse_list);
src->reverse = NULL;
do {
list_for_each_entry_safe(node, n, &traverse_list, search_list) {
if (node == dst) {
found = true;
list_splice_init(&edge_list, &visited_list);
list_splice_init(&traverse_list, &visited_list);
break;
}
for (i = 0; i < node->num_links; i++) {
struct icc_node *tmp = node->links[i];
if (!tmp) {
path = ERR_PTR(-ENOENT);
goto out;
}
if (tmp->is_traversed)
continue;
tmp->is_traversed = true;
tmp->reverse = node;
list_add_tail(&tmp->search_list, &edge_list);
}
}
if (found)
break;
list_splice_init(&traverse_list, &visited_list);
list_splice_init(&edge_list, &traverse_list);
/* count the hops including the source */
depth++;
} while (!list_empty(&traverse_list));
out:
/* reset the traversed state */
list_for_each_entry_reverse(n, &visited_list, search_list)
n->is_traversed = false;
if (found)
path = path_init(dev, dst, depth);
return path;
}
/*
* We want the path to honor all bandwidth requests, so the average and peak
* bandwidth requirements from each consumer are aggregated at each node.
* The aggregation is platform specific, so each platform can customize it by
* implementing its own aggregate() function.
*/
static int aggregate_requests(struct icc_node *node)
{
struct icc_provider *p = node->provider;
struct icc_req *r;
node->avg_bw = 0;
node->peak_bw = 0;
hlist_for_each_entry(r, &node->req_list, req_node)
p->aggregate(node, r->avg_bw, r->peak_bw,
&node->avg_bw, &node->peak_bw);
return 0;
}
static int apply_constraints(struct icc_path *path)
{
struct icc_node *next, *prev = NULL;
int ret = -EINVAL;
int i;
for (i = 0; i < path->num_nodes; i++) {
next = path->reqs[i].node;
/*
* Both endpoints should be valid master-slave pairs of the
* same interconnect provider that will be configured.
*/
if (!prev || next->provider != prev->provider) {
prev = next;
continue;
}
/* set the constraints */
ret = next->provider->set(prev, next);
if (ret)
goto out;
prev = next;
}
out:
return ret;
}
/* of_icc_xlate_onecell() - Translate function using a single index.
* @spec: OF phandle args to map into an interconnect node.
* @data: private data (pointer to struct icc_onecell_data)
*
* This is a generic translate function that can be used to model simple
* interconnect providers that have one device tree node and provide
* multiple interconnect nodes. A single cell is used as an index into
* an array of icc nodes specified in the icc_onecell_data struct when
* registering the provider.
*/
struct icc_node *of_icc_xlate_onecell(struct of_phandle_args *spec,
void *data)
{
struct icc_onecell_data *icc_data = data;
unsigned int idx = spec->args[0];
if (idx >= icc_data->num_nodes) {
pr_err("%s: invalid index %u\n", __func__, idx);
return ERR_PTR(-EINVAL);
}
return icc_data->nodes[idx];
}
EXPORT_SYMBOL_GPL(of_icc_xlate_onecell);
/**
* of_icc_get_from_provider() - Look-up interconnect node
* @spec: OF phandle args to use for look-up
*
* Looks for interconnect provider under the node specified by @spec and if
* found, uses xlate function of the provider to map phandle args to node.
*
* Returns a valid pointer to struct icc_node on success or ERR_PTR()
* on failure.
*/
static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
{
struct icc_node *node = ERR_PTR(-EPROBE_DEFER);
struct icc_provider *provider;
if (!spec || spec->args_count != 1)
return ERR_PTR(-EINVAL);
mutex_lock(&icc_lock);
list_for_each_entry(provider, &icc_providers, provider_list) {
if (provider->dev->of_node == spec->np)
node = provider->xlate(spec, provider->data);
if (!IS_ERR(node))
break;
}
mutex_unlock(&icc_lock);
return node;
}
/**
* of_icc_get() - get a path handle from a DT node based on name
* @dev: device pointer for the consumer device
* @name: interconnect path name
*
* This function will search for a path between two endpoints and return an
* icc_path handle on success. Use icc_put() to release constraints when they
* are not needed anymore.
* If the interconnect API is disabled, NULL is returned and the consumer
* drivers will still build. Drivers are free to handle this specifically,
* but they don't have to.
*
* Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
* when the API is disabled or the "interconnects" DT property is missing.
*/
struct icc_path *of_icc_get(struct device *dev, const char *name)
{
struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
struct icc_node *src_node, *dst_node;
struct device_node *np = NULL;
struct of_phandle_args src_args, dst_args;
int idx = 0;
int ret;
if (!dev || !dev->of_node)
return ERR_PTR(-ENODEV);
np = dev->of_node;
/*
* When the consumer DT node does not have an "interconnects" property,
* return a NULL path to skip setting constraints.
*/
if (!of_find_property(np, "interconnects", NULL))
return NULL;
/*
* We use a combination of phandle and specifier for endpoint. For now
* lets support only global ids and extend this in the future if needed
* without breaking DT compatibility.
*/
if (name) {
idx = of_property_match_string(np, "interconnect-names", name);
if (idx < 0)
return ERR_PTR(idx);
}
ret = of_parse_phandle_with_args(np, "interconnects",
"#interconnect-cells", idx * 2,
&src_args);
if (ret)
return ERR_PTR(ret);
of_node_put(src_args.np);
ret = of_parse_phandle_with_args(np, "interconnects",
"#interconnect-cells", idx * 2 + 1,
&dst_args);
if (ret)
return ERR_PTR(ret);
of_node_put(dst_args.np);
src_node = of_icc_get_from_provider(&src_args);
if (IS_ERR(src_node)) {
if (PTR_ERR(src_node) != -EPROBE_DEFER)
dev_err(dev, "error finding src node: %ld\n",
PTR_ERR(src_node));
return ERR_CAST(src_node);
}
dst_node = of_icc_get_from_provider(&dst_args);
if (IS_ERR(dst_node)) {
if (PTR_ERR(dst_node) != -EPROBE_DEFER)
dev_err(dev, "error finding dst node: %ld\n",
PTR_ERR(dst_node));
return ERR_CAST(dst_node);
}
mutex_lock(&icc_lock);
path = path_find(dev, src_node, dst_node);
if (IS_ERR(path))
dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
mutex_unlock(&icc_lock);
return path;
}
EXPORT_SYMBOL_GPL(of_icc_get);
/**
* icc_set_bw() - set bandwidth constraints on an interconnect path
* @path: reference to the path returned by icc_get()
* @avg_bw: average bandwidth in kilobytes per second
* @peak_bw: peak bandwidth in kilobytes per second
*
* This function is used by an interconnect consumer to express its own needs
* in terms of bandwidth for a previously requested path between two endpoints.
* The requests are aggregated and each node is updated accordingly. The entire
* path is locked by a mutex to ensure that the set() is completed.
* The @path can be NULL when the "interconnects" DT properties is missing,
* which will mean that no constraints will be set.
*
* Returns 0 on success, or an appropriate error code otherwise.
*/
int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
{
struct icc_node *node;
u32 old_avg, old_peak;
size_t i;
int ret;
if (!path || !path->num_nodes)
return 0;
mutex_lock(&icc_lock);
old_avg = path->reqs[0].avg_bw;
old_peak = path->reqs[0].peak_bw;
for (i = 0; i < path->num_nodes; i++) {
node = path->reqs[i].node;
/* update the consumer request for this path */
path->reqs[i].avg_bw = avg_bw;
path->reqs[i].peak_bw = peak_bw;
/* aggregate requests for this node */
aggregate_requests(node);
}
ret = apply_constraints(path);
if (ret) {
pr_debug("interconnect: error applying constraints (%d)\n",
ret);
for (i = 0; i < path->num_nodes; i++) {
node = path->reqs[i].node;
path->reqs[i].avg_bw = old_avg;
path->reqs[i].peak_bw = old_peak;
aggregate_requests(node);
}
apply_constraints(path);
}
mutex_unlock(&icc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(icc_set_bw);
/**
* icc_get() - return a handle for path between two endpoints
* @dev: the device requesting the path
* @src_id: source device port id
* @dst_id: destination device port id
*
* This function will search for a path between two endpoints and return an
* icc_path handle on success. Use icc_put() to release
* constraints when they are not needed anymore.
* If the interconnect API is disabled, NULL is returned and the consumer
* drivers will still build. Drivers are free to handle this specifically,
* but they don't have to.
*
* Return: icc_path pointer on success, ERR_PTR() on error or NULL if the
* interconnect API is disabled.
*/
struct icc_path *icc_get(struct device *dev, const int src_id, const int dst_id)
{
struct icc_node *src, *dst;
struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
mutex_lock(&icc_lock);
src = node_find(src_id);
if (!src)
goto out;
dst = node_find(dst_id);
if (!dst)
goto out;
path = path_find(dev, src, dst);
if (IS_ERR(path))
dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
out:
mutex_unlock(&icc_lock);
return path;
}
EXPORT_SYMBOL_GPL(icc_get);
/**
* icc_put() - release the reference to the icc_path
* @path: interconnect path
*
* Use this function to release the constraints on a path when the path is
* no longer needed. The constraints will be re-aggregated.
*/
void icc_put(struct icc_path *path)
{
struct icc_node *node;
size_t i;
int ret;
if (!path || WARN_ON(IS_ERR(path)))
return;
ret = icc_set_bw(path, 0, 0);
if (ret)
pr_err("%s: error (%d)\n", __func__, ret);
mutex_lock(&icc_lock);
for (i = 0; i < path->num_nodes; i++) {
node = path->reqs[i].node;
hlist_del(&path->reqs[i].req_node);
if (!WARN_ON(!node->provider->users))
node->provider->users--;
}
mutex_unlock(&icc_lock);
kfree(path);
}
EXPORT_SYMBOL_GPL(icc_put);
static struct icc_node *icc_node_create_nolock(int id)
{
struct icc_node *node;
/* check if node already exists */
node = node_find(id);
if (node)
return node;
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return ERR_PTR(-ENOMEM);
id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
if (id < 0) {
WARN(1, "%s: couldn't get idr\n", __func__);
kfree(node);
return ERR_PTR(id);
}
node->id = id;
return node;
}
/**
* icc_node_create() - create a node
* @id: node id
*
* Return: icc_node pointer on success, or ERR_PTR() on error
*/
struct icc_node *icc_node_create(int id)
{
struct icc_node *node;
mutex_lock(&icc_lock);
node = icc_node_create_nolock(id);
mutex_unlock(&icc_lock);
return node;
}
EXPORT_SYMBOL_GPL(icc_node_create);
/**
* icc_node_destroy() - destroy a node
* @id: node id
*/
void icc_node_destroy(int id)
{
struct icc_node *node;
mutex_lock(&icc_lock);
node = node_find(id);
if (node) {
idr_remove(&icc_idr, node->id);
WARN_ON(!hlist_empty(&node->req_list));
}
mutex_unlock(&icc_lock);
kfree(node);
}
EXPORT_SYMBOL_GPL(icc_node_destroy);
/**
* icc_link_create() - create a link between two nodes
* @node: source node id
* @dst_id: destination node id
*
* Create a link between two nodes. The nodes might belong to different
* interconnect providers and the @dst_id node might not exist (if the
* provider driver has not probed yet). So just create the @dst_id node
* and when the actual provider driver is probed, the rest of the node
* data is filled.
*
* Return: 0 on success, or an error code otherwise
*/
int icc_link_create(struct icc_node *node, const int dst_id)
{
struct icc_node *dst;
struct icc_node **new;
int ret = 0;
if (!node->provider)
return -EINVAL;
mutex_lock(&icc_lock);
dst = node_find(dst_id);
if (!dst) {
dst = icc_node_create_nolock(dst_id);
if (IS_ERR(dst)) {
ret = PTR_ERR(dst);
goto out;
}
}
new = krealloc(node->links,
(node->num_links + 1) * sizeof(*node->links),
GFP_KERNEL);
if (!new) {
ret = -ENOMEM;
goto out;
}
node->links = new;
node->links[node->num_links++] = dst;
out:
mutex_unlock(&icc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(icc_link_create);
/**
* icc_link_destroy() - destroy a link between two nodes
* @src: pointer to source node
* @dst: pointer to destination node
*
* Return: 0 on success, or an error code otherwise
*/
int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
{
struct icc_node **new;
size_t slot;
int ret = 0;
if (IS_ERR_OR_NULL(src))
return -EINVAL;
if (IS_ERR_OR_NULL(dst))
return -EINVAL;
mutex_lock(&icc_lock);
for (slot = 0; slot < src->num_links; slot++)
if (src->links[slot] == dst)
break;
if (WARN_ON(slot == src->num_links)) {
ret = -ENXIO;
goto out;
}
src->links[slot] = src->links[--src->num_links];
new = krealloc(src->links, src->num_links * sizeof(*src->links),
GFP_KERNEL);
if (new)
src->links = new;
out:
mutex_unlock(&icc_lock);
return ret;
}
EXPORT_SYMBOL_GPL(icc_link_destroy);
/**
* icc_node_add() - add interconnect node to interconnect provider
* @node: pointer to the interconnect node
* @provider: pointer to the interconnect provider
*/
void icc_node_add(struct icc_node *node, struct icc_provider *provider)
{
mutex_lock(&icc_lock);
node->provider = provider;
list_add_tail(&node->node_list, &provider->nodes);
mutex_unlock(&icc_lock);
}
EXPORT_SYMBOL_GPL(icc_node_add);
/**
* icc_node_del() - delete interconnect node from interconnect provider
* @node: pointer to the interconnect node
*/
void icc_node_del(struct icc_node *node)
{
mutex_lock(&icc_lock);
list_del(&node->node_list);
mutex_unlock(&icc_lock);
}
EXPORT_SYMBOL_GPL(icc_node_del);
/**
* icc_provider_add() - add a new interconnect provider
* @provider: the interconnect provider that will be added into topology
*
* Return: 0 on success, or an error code otherwise
*/
int icc_provider_add(struct icc_provider *provider)
{
if (WARN_ON(!provider->set))
return -EINVAL;
if (WARN_ON(!provider->xlate))
return -EINVAL;
mutex_lock(&icc_lock);
INIT_LIST_HEAD(&provider->nodes);
list_add_tail(&provider->provider_list, &icc_providers);
mutex_unlock(&icc_lock);
dev_dbg(provider->dev, "interconnect provider added to topology\n");
return 0;
}
EXPORT_SYMBOL_GPL(icc_provider_add);
/**
* icc_provider_del() - delete previously added interconnect provider
* @provider: the interconnect provider that will be removed from topology
*
* Return: 0 on success, or an error code otherwise
*/
int icc_provider_del(struct icc_provider *provider)
{
mutex_lock(&icc_lock);
if (provider->users) {
pr_warn("interconnect provider still has %d users\n",
provider->users);
mutex_unlock(&icc_lock);
return -EBUSY;
}
if (!list_empty(&provider->nodes)) {
pr_warn("interconnect provider still has nodes\n");
mutex_unlock(&icc_lock);
return -EBUSY;
}
list_del(&provider->provider_list);
mutex_unlock(&icc_lock);
return 0;
}
EXPORT_SYMBOL_GPL(icc_provider_del);
static int __init icc_init(void)
{
icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
debugfs_create_file("interconnect_summary", 0444,
icc_debugfs_dir, NULL, &icc_summary_fops);
return 0;
}
static void __exit icc_exit(void)
{
debugfs_remove_recursive(icc_debugfs_dir);
}
module_init(icc_init);
module_exit(icc_exit);
MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
MODULE_DESCRIPTION("Interconnect Driver Core");
MODULE_LICENSE("GPL v2");

@ -0,0 +1,13 @@
config INTERCONNECT_QCOM
bool "Qualcomm Network-on-Chip interconnect drivers"
depends on ARCH_QCOM
help
Support for Qualcomm's Network-on-Chip interconnect hardware.
config INTERCONNECT_QCOM_SDM845
tristate "Qualcomm SDM845 interconnect driver"
depends on INTERCONNECT_QCOM
depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
help
This is a driver for the Qualcomm Network-on-Chip on sdm845-based
platforms.

@ -0,0 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
qnoc-sdm845-objs := sdm845.o
obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o

@ -0,0 +1,838 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018, The Linux Foundation. All rights reserved.
*
*/
#include <asm/div64.h>
#include <dt-bindings/interconnect/qcom,sdm845.h>
#include <linux/device.h>
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/sort.h>
#include <soc/qcom/cmd-db.h>
#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>
#define BCM_TCS_CMD_COMMIT_SHFT 30
#define BCM_TCS_CMD_COMMIT_MASK 0x40000000
#define BCM_TCS_CMD_VALID_SHFT 29
#define BCM_TCS_CMD_VALID_MASK 0x20000000
#define BCM_TCS_CMD_VOTE_X_SHFT 14
#define BCM_TCS_CMD_VOTE_MASK 0x3fff
#define BCM_TCS_CMD_VOTE_Y_SHFT 0
#define BCM_TCS_CMD_VOTE_Y_MASK 0xfffc000
#define BCM_TCS_CMD(commit, valid, vote_x, vote_y) \
(((commit) << BCM_TCS_CMD_COMMIT_SHFT) | \
((valid) << BCM_TCS_CMD_VALID_SHFT) | \
((cpu_to_le32(vote_x) & \
BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) | \
((cpu_to_le32(vote_y) & \
BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT))
#define to_qcom_provider(_provider) \
container_of(_provider, struct qcom_icc_provider, provider)
struct qcom_icc_provider {
struct icc_provider provider;
struct device *dev;
struct qcom_icc_bcm **bcms;
size_t num_bcms;
};
/**
* struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager (BCM)
* @unit: divisor used to convert bytes/sec bw value to an RPMh msg
* @width: multiplier used to convert bytes/sec bw value to an RPMh msg
* @vcd: virtual clock domain that this bcm belongs to
* @reserved: reserved field
*/
struct bcm_db {
__le32 unit;
__le16 width;
u8 vcd;
u8 reserved;
};
#define SDM845_MAX_LINKS 43
#define SDM845_MAX_BCMS 30
#define SDM845_MAX_BCM_PER_NODE 2
#define SDM845_MAX_VCD 10
/**
* struct qcom_icc_node - Qualcomm specific interconnect nodes
* @name: the node name used in debugfs
* @links: an array of nodes where we can go next while traversing
* @id: a unique node identifier
* @num_links: the total number of @links
* @channels: num of channels at this node
* @buswidth: width of the interconnect between a node and the bus
* @sum_avg: current sum aggregate value of all avg bw requests
* @max_peak: current max aggregate value of all peak bw requests
* @bcms: list of bcms associated with this logical node
* @num_bcms: num of @bcms
*/
struct qcom_icc_node {
const char *name;
u16 links[SDM845_MAX_LINKS];
u16 id;
u16 num_links;
u16 channels;
u16 buswidth;
u64 sum_avg;
u64 max_peak;
struct qcom_icc_bcm *bcms[SDM845_MAX_BCM_PER_NODE];
size_t num_bcms;
};
/**
* struct qcom_icc_bcm - Qualcomm specific hardware accelerator nodes
* known as Bus Clock Manager (BCM)
* @name: the bcm node name used to fetch BCM data from command db
* @type: latency or bandwidth bcm
* @addr: address offsets used when voting to RPMH
* @vote_x: aggregated threshold values, represents sum_bw when @type is bw bcm
* @vote_y: aggregated threshold values, represents peak_bw when @type is bw bcm
* @dirty: flag used to indicate whether the bcm needs to be committed
* @keepalive: flag used to indicate whether a keepalive is required
* @aux_data: auxiliary data used when calculating threshold values and
* communicating with RPMh
* @list: used to link to other bcms when compiling lists for commit
* @num_nodes: total number of @nodes
* @nodes: list of qcom_icc_nodes that this BCM encapsulates
*/
struct qcom_icc_bcm {
const char *name;
u32 type;
u32 addr;
u64 vote_x;
u64 vote_y;
bool dirty;
bool keepalive;
struct bcm_db aux_data;
struct list_head list;
size_t num_nodes;
struct qcom_icc_node *nodes[];
};
struct qcom_icc_fabric {
struct qcom_icc_node **nodes;
size_t num_nodes;
};
struct qcom_icc_desc {
struct qcom_icc_node **nodes;
size_t num_nodes;
struct qcom_icc_bcm **bcms;
size_t num_bcms;
};
#define DEFINE_QNODE(_name, _id, _channels, _buswidth, \
_numlinks, ...) \
static struct qcom_icc_node _name = { \
.id = _id, \
.name = #_name, \
.channels = _channels, \
.buswidth = _buswidth, \
.num_links = _numlinks, \
.links = { __VA_ARGS__ }, \
}
DEFINE_QNODE(qhm_a1noc_cfg, MASTER_A1NOC_CFG, 1, 4, 1, SLAVE_SERVICE_A1NOC);
DEFINE_QNODE(qhm_qup1, MASTER_BLSP_1, 1, 4, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(qhm_tsif, MASTER_TSIF, 1, 4, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_sdc2, MASTER_SDCC_2, 1, 8, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_sdc4, MASTER_SDCC_4, 1, 8, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_ufs_card, MASTER_UFS_CARD, 1, 8, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_ufs_mem, MASTER_UFS_MEM, 1, 8, 1, SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_pcie_0, MASTER_PCIE_0, 1, 8, 1, SLAVE_ANOC_PCIE_A1NOC_SNOC);
DEFINE_QNODE(qhm_a2noc_cfg, MASTER_A2NOC_CFG, 1, 4, 1, SLAVE_SERVICE_A2NOC);
DEFINE_QNODE(qhm_qdss_bam, MASTER_QDSS_BAM, 1, 4, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qhm_qup2, MASTER_BLSP_2, 1, 4, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qnm_cnoc, MASTER_CNOC_A2NOC, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_crypto, MASTER_CRYPTO, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_ipa, MASTER_IPA, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(xm_pcie3_1, MASTER_PCIE_1, 1, 8, 1, SLAVE_ANOC_PCIE_SNOC);
DEFINE_QNODE(xm_qdss_etr, MASTER_QDSS_ETR, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(xm_usb3_0, MASTER_USB3_0, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(xm_usb3_1, MASTER_USB3_1, 1, 8, 1, SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_camnoc_hf0_uncomp, MASTER_CAMNOC_HF0_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qxm_camnoc_hf1_uncomp, MASTER_CAMNOC_HF1_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qxm_camnoc_sf_uncomp, MASTER_CAMNOC_SF_UNCOMP, 1, 32, 1, SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qhm_spdm, MASTER_SPDM, 1, 4, 1, SLAVE_CNOC_A2NOC);
DEFINE_QNODE(qhm_tic, MASTER_TIC, 1, 4, 43, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_CNOC_A2NOC, SLAVE_SERVICE_CNOC);
DEFINE_QNODE(qnm_snoc, MASTER_SNOC_CNOC, 1, 8, 42, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_SERVICE_CNOC);
DEFINE_QNODE(xm_qdss_dap, MASTER_QDSS_DAP, 1, 8, 43, SLAVE_A1NOC_CFG, SLAVE_A2NOC_CFG, SLAVE_AOP, SLAVE_AOSS, SLAVE_CAMERA_CFG, SLAVE_CLK_CTL, SLAVE_CDSP_CFG, SLAVE_RBCPR_CX_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_DCC_CFG, SLAVE_CNOC_DDRSS, SLAVE_DISPLAY_CFG, SLAVE_GLM, SLAVE_GFX3D_CFG, SLAVE_IMEM_CFG, SLAVE_IPA_CFG, SLAVE_CNOC_MNOC_CFG, SLAVE_PCIE_0_CFG, SLAVE_PCIE_1_CFG, SLAVE_PDM, SLAVE_SOUTH_PHY_CFG, SLAVE_PIMEM_CFG, SLAVE_PRNG, SLAVE_QDSS_CFG, SLAVE_BLSP_2, SLAVE_BLSP_1, SLAVE_SDCC_2, SLAVE_SDCC_4, SLAVE_SNOC_CFG, SLAVE_SPDM_WRAPPER, SLAVE_SPSS_CFG, SLAVE_TCSR, SLAVE_TLMM_NORTH, SLAVE_TLMM_SOUTH, SLAVE_TSIF, SLAVE_UFS_CARD_CFG, SLAVE_UFS_MEM_CFG, SLAVE_USB3_0, SLAVE_USB3_1, SLAVE_VENUS_CFG, SLAVE_VSENSE_CTRL_CFG, SLAVE_CNOC_A2NOC, SLAVE_SERVICE_CNOC);
DEFINE_QNODE(qhm_cnoc, MASTER_CNOC_DC_NOC, 1, 4, 2, SLAVE_LLCC_CFG, SLAVE_MEM_NOC_CFG);
DEFINE_QNODE(acm_l3, MASTER_APPSS_PROC, 1, 16, 3, SLAVE_GNOC_SNOC, SLAVE_GNOC_MEM_NOC, SLAVE_SERVICE_GNOC);
DEFINE_QNODE(pm_gnoc_cfg, MASTER_GNOC_CFG, 1, 4, 1, SLAVE_SERVICE_GNOC);
DEFINE_QNODE(llcc_mc, MASTER_LLCC, 4, 4, 1, SLAVE_EBI1);
DEFINE_QNODE(acm_tcu, MASTER_TCU_0, 1, 8, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
DEFINE_QNODE(qhm_memnoc_cfg, MASTER_MEM_NOC_CFG, 1, 4, 2, SLAVE_MSS_PROC_MS_MPU_CFG, SLAVE_SERVICE_MEM_NOC);
DEFINE_QNODE(qnm_apps, MASTER_GNOC_MEM_NOC, 2, 32, 1, SLAVE_LLCC);
DEFINE_QNODE(qnm_mnoc_hf, MASTER_MNOC_HF_MEM_NOC, 2, 32, 2, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC);
DEFINE_QNODE(qnm_mnoc_sf, MASTER_MNOC_SF_MEM_NOC, 1, 32, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
DEFINE_QNODE(qnm_snoc_gc, MASTER_SNOC_GC_MEM_NOC, 1, 8, 1, SLAVE_LLCC);
DEFINE_QNODE(qnm_snoc_sf, MASTER_SNOC_SF_MEM_NOC, 1, 16, 2, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC);
DEFINE_QNODE(qxm_gpu, MASTER_GFX3D, 2, 32, 3, SLAVE_MEM_NOC_GNOC, SLAVE_LLCC, SLAVE_MEM_NOC_SNOC);
DEFINE_QNODE(qhm_mnoc_cfg, MASTER_CNOC_MNOC_CFG, 1, 4, 1, SLAVE_SERVICE_MNOC);
DEFINE_QNODE(qxm_camnoc_hf0, MASTER_CAMNOC_HF0, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_camnoc_hf1, MASTER_CAMNOC_HF1, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_camnoc_sf, MASTER_CAMNOC_SF, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_mdp0, MASTER_MDP0, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_mdp1, MASTER_MDP1, 1, 32, 1, SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_rot, MASTER_ROTATOR, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_venus0, MASTER_VIDEO_P0, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_venus1, MASTER_VIDEO_P1, 1, 32, 1, SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_venus_arm9, MASTER_VIDEO_PROC, 1, 8, 1, SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qhm_snoc_cfg, MASTER_SNOC_CFG, 1, 4, 1, SLAVE_SERVICE_SNOC);
DEFINE_QNODE(qnm_aggre1_noc, MASTER_A1NOC_SNOC, 1, 16, 6, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_PIMEM, SLAVE_QDSS_STM);
DEFINE_QNODE(qnm_aggre2_noc, MASTER_A2NOC_SNOC, 1, 16, 9, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_PCIE_0, SLAVE_PCIE_1, SLAVE_PIMEM, SLAVE_QDSS_STM, SLAVE_TCU);
DEFINE_QNODE(qnm_gladiator_sodv, MASTER_GNOC_SNOC, 1, 8, 8, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_IMEM, SLAVE_PCIE_0, SLAVE_PCIE_1, SLAVE_PIMEM, SLAVE_QDSS_STM, SLAVE_TCU);
DEFINE_QNODE(qnm_memnoc, MASTER_MEM_NOC_SNOC, 1, 8, 5, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_IMEM, SLAVE_PIMEM, SLAVE_QDSS_STM);
DEFINE_QNODE(qnm_pcie_anoc, MASTER_ANOC_PCIE_SNOC, 1, 16, 5, SLAVE_APPSS, SLAVE_SNOC_CNOC, SLAVE_SNOC_MEM_NOC_SF, SLAVE_IMEM, SLAVE_QDSS_STM);
DEFINE_QNODE(qxm_pimem, MASTER_PIMEM, 1, 8, 2, SLAVE_SNOC_MEM_NOC_GC, SLAVE_IMEM);
DEFINE_QNODE(xm_gic, MASTER_GIC, 1, 8, 2, SLAVE_SNOC_MEM_NOC_GC, SLAVE_IMEM);
DEFINE_QNODE(qns_a1noc_snoc, SLAVE_A1NOC_SNOC, 1, 16, 1, MASTER_A1NOC_SNOC);
DEFINE_QNODE(srvc_aggre1_noc, SLAVE_SERVICE_A1NOC, 1, 4, 0);
DEFINE_QNODE(qns_pcie_a1noc_snoc, SLAVE_ANOC_PCIE_A1NOC_SNOC, 1, 16, 1, MASTER_ANOC_PCIE_SNOC);
DEFINE_QNODE(qns_a2noc_snoc, SLAVE_A2NOC_SNOC, 1, 16, 1, MASTER_A2NOC_SNOC);
DEFINE_QNODE(qns_pcie_snoc, SLAVE_ANOC_PCIE_SNOC, 1, 16, 1, MASTER_ANOC_PCIE_SNOC);
DEFINE_QNODE(srvc_aggre2_noc, SLAVE_SERVICE_A2NOC, 1, 4, 0);
DEFINE_QNODE(qns_camnoc_uncomp, SLAVE_CAMNOC_UNCOMP, 1, 32, 0);
DEFINE_QNODE(qhs_a1_noc_cfg, SLAVE_A1NOC_CFG, 1, 4, 1, MASTER_A1NOC_CFG);
DEFINE_QNODE(qhs_a2_noc_cfg, SLAVE_A2NOC_CFG, 1, 4, 1, MASTER_A2NOC_CFG);
DEFINE_QNODE(qhs_aop, SLAVE_AOP, 1, 4, 0);
DEFINE_QNODE(qhs_aoss, SLAVE_AOSS, 1, 4, 0);
DEFINE_QNODE(qhs_camera_cfg, SLAVE_CAMERA_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_clk_ctl, SLAVE_CLK_CTL, 1, 4, 0);
DEFINE_QNODE(qhs_compute_dsp_cfg, SLAVE_CDSP_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_cpr_cx, SLAVE_RBCPR_CX_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_crypto0_cfg, SLAVE_CRYPTO_0_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_dcc_cfg, SLAVE_DCC_CFG, 1, 4, 1, MASTER_CNOC_DC_NOC);
DEFINE_QNODE(qhs_ddrss_cfg, SLAVE_CNOC_DDRSS, 1, 4, 0);
DEFINE_QNODE(qhs_display_cfg, SLAVE_DISPLAY_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_glm, SLAVE_GLM, 1, 4, 0);
DEFINE_QNODE(qhs_gpuss_cfg, SLAVE_GFX3D_CFG, 1, 8, 0);
DEFINE_QNODE(qhs_imem_cfg, SLAVE_IMEM_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_ipa, SLAVE_IPA_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_mnoc_cfg, SLAVE_CNOC_MNOC_CFG, 1, 4, 1, MASTER_CNOC_MNOC_CFG);
DEFINE_QNODE(qhs_pcie0_cfg, SLAVE_PCIE_0_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_pcie_gen3_cfg, SLAVE_PCIE_1_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_pdm, SLAVE_PDM, 1, 4, 0);
DEFINE_QNODE(qhs_phy_refgen_south, SLAVE_SOUTH_PHY_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_pimem_cfg, SLAVE_PIMEM_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_prng, SLAVE_PRNG, 1, 4, 0);
DEFINE_QNODE(qhs_qdss_cfg, SLAVE_QDSS_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_qupv3_north, SLAVE_BLSP_2, 1, 4, 0);
DEFINE_QNODE(qhs_qupv3_south, SLAVE_BLSP_1, 1, 4, 0);
DEFINE_QNODE(qhs_sdc2, SLAVE_SDCC_2, 1, 4, 0);
DEFINE_QNODE(qhs_sdc4, SLAVE_SDCC_4, 1, 4, 0);
DEFINE_QNODE(qhs_snoc_cfg, SLAVE_SNOC_CFG, 1, 4, 1, MASTER_SNOC_CFG);
DEFINE_QNODE(qhs_spdm, SLAVE_SPDM_WRAPPER, 1, 4, 0);
DEFINE_QNODE(qhs_spss_cfg, SLAVE_SPSS_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_tcsr, SLAVE_TCSR, 1, 4, 0);
DEFINE_QNODE(qhs_tlmm_north, SLAVE_TLMM_NORTH, 1, 4, 0);
DEFINE_QNODE(qhs_tlmm_south, SLAVE_TLMM_SOUTH, 1, 4, 0);
DEFINE_QNODE(qhs_tsif, SLAVE_TSIF, 1, 4, 0);
DEFINE_QNODE(qhs_ufs_card_cfg, SLAVE_UFS_CARD_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_ufs_mem_cfg, SLAVE_UFS_MEM_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_usb3_0, SLAVE_USB3_0, 1, 4, 0);
DEFINE_QNODE(qhs_usb3_1, SLAVE_USB3_1, 1, 4, 0);
DEFINE_QNODE(qhs_venus_cfg, SLAVE_VENUS_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_vsense_ctrl_cfg, SLAVE_VSENSE_CTRL_CFG, 1, 4, 0);
DEFINE_QNODE(qns_cnoc_a2noc, SLAVE_CNOC_A2NOC, 1, 8, 1, MASTER_CNOC_A2NOC);
DEFINE_QNODE(srvc_cnoc, SLAVE_SERVICE_CNOC, 1, 4, 0);
DEFINE_QNODE(qhs_llcc, SLAVE_LLCC_CFG, 1, 4, 0);
DEFINE_QNODE(qhs_memnoc, SLAVE_MEM_NOC_CFG, 1, 4, 1, MASTER_MEM_NOC_CFG);
DEFINE_QNODE(qns_gladiator_sodv, SLAVE_GNOC_SNOC, 1, 8, 1, MASTER_GNOC_SNOC);
DEFINE_QNODE(qns_gnoc_memnoc, SLAVE_GNOC_MEM_NOC, 2, 32, 1, MASTER_GNOC_MEM_NOC);
DEFINE_QNODE(srvc_gnoc, SLAVE_SERVICE_GNOC, 1, 4, 0);
DEFINE_QNODE(ebi, SLAVE_EBI1, 4, 4, 0);
DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4, 0);
DEFINE_QNODE(qns_apps_io, SLAVE_MEM_NOC_GNOC, 1, 32, 0);
DEFINE_QNODE(qns_llcc, SLAVE_LLCC, 4, 16, 1, MASTER_LLCC);
DEFINE_QNODE(qns_memnoc_snoc, SLAVE_MEM_NOC_SNOC, 1, 8, 1, MASTER_MEM_NOC_SNOC);
DEFINE_QNODE(srvc_memnoc, SLAVE_SERVICE_MEM_NOC, 1, 4, 0);
DEFINE_QNODE(qns2_mem_noc, SLAVE_MNOC_SF_MEM_NOC, 1, 32, 1, MASTER_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qns_mem_noc_hf, SLAVE_MNOC_HF_MEM_NOC, 2, 32, 1, MASTER_MNOC_HF_MEM_NOC);
DEFINE_QNODE(srvc_mnoc, SLAVE_SERVICE_MNOC, 1, 4, 0);
DEFINE_QNODE(qhs_apss, SLAVE_APPSS, 1, 8, 0);
DEFINE_QNODE(qns_cnoc, SLAVE_SNOC_CNOC, 1, 8, 1, MASTER_SNOC_CNOC);
DEFINE_QNODE(qns_memnoc_gc, SLAVE_SNOC_MEM_NOC_GC, 1, 8, 1, MASTER_SNOC_GC_MEM_NOC);
DEFINE_QNODE(qns_memnoc_sf, SLAVE_SNOC_MEM_NOC_SF, 1, 16, 1, MASTER_SNOC_SF_MEM_NOC);
DEFINE_QNODE(qxs_imem, SLAVE_IMEM, 1, 8, 0);
DEFINE_QNODE(qxs_pcie, SLAVE_PCIE_0, 1, 8, 0);
DEFINE_QNODE(qxs_pcie_gen3, SLAVE_PCIE_1, 1, 8, 0);
DEFINE_QNODE(qxs_pimem, SLAVE_PIMEM, 1, 8, 0);
DEFINE_QNODE(srvc_snoc, SLAVE_SERVICE_SNOC, 1, 4, 0);
DEFINE_QNODE(xs_qdss_stm, SLAVE_QDSS_STM, 1, 4, 0);
DEFINE_QNODE(xs_sys_tcu_cfg, SLAVE_TCU, 1, 8, 0);
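/*
* The DEFINE_QNODE() entries above describe the SDM845 NoC topology: each
* master or slave port becomes a qcom_icc_node. Judging from the usage here,
* the arguments appear to be (name, node id, channels, buswidth, num_links,
* link ids...), mirroring the DEFINE_QBCM() helper below.
*/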
#define DEFINE_QBCM(_name, _bcmname, _keepalive, _numnodes, ...) \
static struct qcom_icc_bcm _name = { \
.name = _bcmname, \
.keepalive = _keepalive, \
.num_nodes = _numnodes, \
.nodes = { __VA_ARGS__ }, \
}
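/*
* Each Bus Clock Manager (BCM) aggregates the bandwidth votes of the nodes
* listed in it and is committed to RPMh as a single resource. "keepalive"
* BCMs are held at a minimum non-zero vote even when no client requests
* bandwidth (see bcm_aggregate() below).
*/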
DEFINE_QBCM(bcm_acv, "ACV", false, 1, &ebi);
DEFINE_QBCM(bcm_mc0, "MC0", true, 1, &ebi);
DEFINE_QBCM(bcm_sh0, "SH0", true, 1, &qns_llcc);
DEFINE_QBCM(bcm_mm0, "MM0", false, 1, &qns_mem_noc_hf);
DEFINE_QBCM(bcm_sh1, "SH1", false, 1, &qns_apps_io);
DEFINE_QBCM(bcm_mm1, "MM1", false, 7, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qxm_camnoc_hf0, &qxm_camnoc_hf1, &qxm_mdp0, &qxm_mdp1);
DEFINE_QBCM(bcm_sh2, "SH2", false, 1, &qns_memnoc_snoc);
DEFINE_QBCM(bcm_mm2, "MM2", false, 1, &qns2_mem_noc);
DEFINE_QBCM(bcm_sh3, "SH3", false, 1, &acm_tcu);
DEFINE_QBCM(bcm_mm3, "MM3", false, 5, &qxm_camnoc_sf, &qxm_rot, &qxm_venus0, &qxm_venus1, &qxm_venus_arm9);
DEFINE_QBCM(bcm_sh5, "SH5", false, 1, &qnm_apps);
DEFINE_QBCM(bcm_sn0, "SN0", true, 1, &qns_memnoc_sf);
DEFINE_QBCM(bcm_ce0, "CE0", false, 1, &qxm_crypto);
DEFINE_QBCM(bcm_cn0, "CN0", false, 47, &qhm_spdm, &qhm_tic, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_aop, &qhs_aoss, &qhs_camera_cfg, &qhs_clk_ctl, &qhs_compute_dsp_cfg, &qhs_cpr_cx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_pcie0_cfg, &qhs_pcie_gen3_cfg, &qhs_pdm, &qhs_phy_refgen_south, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qupv3_north, &qhs_qupv3_south, &qhs_sdc2, &qhs_sdc4, &qhs_snoc_cfg, &qhs_spdm, &qhs_spss_cfg, &qhs_tcsr, &qhs_tlmm_north, &qhs_tlmm_south, &qhs_tsif, &qhs_ufs_card_cfg, &qhs_ufs_mem_cfg, &qhs_usb3_0, &qhs_usb3_1, &qhs_venus_cfg, &qhs_vsense_ctrl_cfg, &qns_cnoc_a2noc, &srvc_cnoc);
DEFINE_QBCM(bcm_qup0, "QUP0", false, 2, &qhm_qup1, &qhm_qup2);
DEFINE_QBCM(bcm_sn1, "SN1", false, 1, &qxs_imem);
DEFINE_QBCM(bcm_sn2, "SN2", false, 1, &qns_memnoc_gc);
DEFINE_QBCM(bcm_sn3, "SN3", false, 1, &qns_cnoc);
DEFINE_QBCM(bcm_sn4, "SN4", false, 1, &qxm_pimem);
DEFINE_QBCM(bcm_sn5, "SN5", false, 1, &xs_qdss_stm);
DEFINE_QBCM(bcm_sn6, "SN6", false, 3, &qhs_apss, &srvc_snoc, &xs_sys_tcu_cfg);
DEFINE_QBCM(bcm_sn7, "SN7", false, 1, &qxs_pcie);
DEFINE_QBCM(bcm_sn8, "SN8", false, 1, &qxs_pcie_gen3);
DEFINE_QBCM(bcm_sn9, "SN9", false, 2, &srvc_aggre1_noc, &qnm_aggre1_noc);
DEFINE_QBCM(bcm_sn11, "SN11", false, 2, &srvc_aggre2_noc, &qnm_aggre2_noc);
DEFINE_QBCM(bcm_sn12, "SN12", false, 2, &qnm_gladiator_sodv, &xm_gic);
DEFINE_QBCM(bcm_sn14, "SN14", false, 1, &qnm_pcie_anoc);
DEFINE_QBCM(bcm_sn15, "SN15", false, 1, &qnm_memnoc);
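/* All nodes exposed by this provider, indexed by their MASTER_* and SLAVE_* ids */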
static struct qcom_icc_node *rsc_hlos_nodes[] = {
[MASTER_APPSS_PROC] = &acm_l3,
[MASTER_TCU_0] = &acm_tcu,
[MASTER_LLCC] = &llcc_mc,
[MASTER_GNOC_CFG] = &pm_gnoc_cfg,
[MASTER_A1NOC_CFG] = &qhm_a1noc_cfg,
[MASTER_A2NOC_CFG] = &qhm_a2noc_cfg,
[MASTER_CNOC_DC_NOC] = &qhm_cnoc,
[MASTER_MEM_NOC_CFG] = &qhm_memnoc_cfg,
[MASTER_CNOC_MNOC_CFG] = &qhm_mnoc_cfg,
[MASTER_QDSS_BAM] = &qhm_qdss_bam,
[MASTER_BLSP_1] = &qhm_qup1,
[MASTER_BLSP_2] = &qhm_qup2,
[MASTER_SNOC_CFG] = &qhm_snoc_cfg,
[MASTER_SPDM] = &qhm_spdm,
[MASTER_TIC] = &qhm_tic,
[MASTER_TSIF] = &qhm_tsif,
[MASTER_A1NOC_SNOC] = &qnm_aggre1_noc,
[MASTER_A2NOC_SNOC] = &qnm_aggre2_noc,
[MASTER_GNOC_MEM_NOC] = &qnm_apps,
[MASTER_CNOC_A2NOC] = &qnm_cnoc,
[MASTER_GNOC_SNOC] = &qnm_gladiator_sodv,
[MASTER_MEM_NOC_SNOC] = &qnm_memnoc,
[MASTER_MNOC_HF_MEM_NOC] = &qnm_mnoc_hf,
[MASTER_MNOC_SF_MEM_NOC] = &qnm_mnoc_sf,
[MASTER_ANOC_PCIE_SNOC] = &qnm_pcie_anoc,
[MASTER_SNOC_CNOC] = &qnm_snoc,
[MASTER_SNOC_GC_MEM_NOC] = &qnm_snoc_gc,
[MASTER_SNOC_SF_MEM_NOC] = &qnm_snoc_sf,
[MASTER_CAMNOC_HF0] = &qxm_camnoc_hf0,
[MASTER_CAMNOC_HF0_UNCOMP] = &qxm_camnoc_hf0_uncomp,
[MASTER_CAMNOC_HF1] = &qxm_camnoc_hf1,
[MASTER_CAMNOC_HF1_UNCOMP] = &qxm_camnoc_hf1_uncomp,
[MASTER_CAMNOC_SF] = &qxm_camnoc_sf,
[MASTER_CAMNOC_SF_UNCOMP] = &qxm_camnoc_sf_uncomp,
[MASTER_CRYPTO] = &qxm_crypto,
[MASTER_GFX3D] = &qxm_gpu,
[MASTER_IPA] = &qxm_ipa,
[MASTER_MDP0] = &qxm_mdp0,
[MASTER_MDP1] = &qxm_mdp1,
[MASTER_PIMEM] = &qxm_pimem,
[MASTER_ROTATOR] = &qxm_rot,
[MASTER_VIDEO_P0] = &qxm_venus0,
[MASTER_VIDEO_P1] = &qxm_venus1,
[MASTER_VIDEO_PROC] = &qxm_venus_arm9,
[MASTER_GIC] = &xm_gic,
[MASTER_PCIE_1] = &xm_pcie3_1,
[MASTER_PCIE_0] = &xm_pcie_0,
[MASTER_QDSS_DAP] = &xm_qdss_dap,
[MASTER_QDSS_ETR] = &xm_qdss_etr,
[MASTER_SDCC_2] = &xm_sdc2,
[MASTER_SDCC_4] = &xm_sdc4,
[MASTER_UFS_CARD] = &xm_ufs_card,
[MASTER_UFS_MEM] = &xm_ufs_mem,
[MASTER_USB3_0] = &xm_usb3_0,
[MASTER_USB3_1] = &xm_usb3_1,
[SLAVE_EBI1] = &ebi,
[SLAVE_A1NOC_CFG] = &qhs_a1_noc_cfg,
[SLAVE_A2NOC_CFG] = &qhs_a2_noc_cfg,
[SLAVE_AOP] = &qhs_aop,
[SLAVE_AOSS] = &qhs_aoss,
[SLAVE_APPSS] = &qhs_apss,
[SLAVE_CAMERA_CFG] = &qhs_camera_cfg,
[SLAVE_CLK_CTL] = &qhs_clk_ctl,
[SLAVE_CDSP_CFG] = &qhs_compute_dsp_cfg,
[SLAVE_RBCPR_CX_CFG] = &qhs_cpr_cx,
[SLAVE_CRYPTO_0_CFG] = &qhs_crypto0_cfg,
[SLAVE_DCC_CFG] = &qhs_dcc_cfg,
[SLAVE_CNOC_DDRSS] = &qhs_ddrss_cfg,
[SLAVE_DISPLAY_CFG] = &qhs_display_cfg,
[SLAVE_GLM] = &qhs_glm,
[SLAVE_GFX3D_CFG] = &qhs_gpuss_cfg,
[SLAVE_IMEM_CFG] = &qhs_imem_cfg,
[SLAVE_IPA_CFG] = &qhs_ipa,
[SLAVE_LLCC_CFG] = &qhs_llcc,
[SLAVE_MSS_PROC_MS_MPU_CFG] = &qhs_mdsp_ms_mpu_cfg,
[SLAVE_MEM_NOC_CFG] = &qhs_memnoc,
[SLAVE_CNOC_MNOC_CFG] = &qhs_mnoc_cfg,
[SLAVE_PCIE_0_CFG] = &qhs_pcie0_cfg,
[SLAVE_PCIE_1_CFG] = &qhs_pcie_gen3_cfg,
[SLAVE_PDM] = &qhs_pdm,
[SLAVE_SOUTH_PHY_CFG] = &qhs_phy_refgen_south,
[SLAVE_PIMEM_CFG] = &qhs_pimem_cfg,
[SLAVE_PRNG] = &qhs_prng,
[SLAVE_QDSS_CFG] = &qhs_qdss_cfg,
[SLAVE_BLSP_2] = &qhs_qupv3_north,
[SLAVE_BLSP_1] = &qhs_qupv3_south,
[SLAVE_SDCC_2] = &qhs_sdc2,
[SLAVE_SDCC_4] = &qhs_sdc4,
[SLAVE_SNOC_CFG] = &qhs_snoc_cfg,
[SLAVE_SPDM_WRAPPER] = &qhs_spdm,
[SLAVE_SPSS_CFG] = &qhs_spss_cfg,
[SLAVE_TCSR] = &qhs_tcsr,
[SLAVE_TLMM_NORTH] = &qhs_tlmm_north,
[SLAVE_TLMM_SOUTH] = &qhs_tlmm_south,
[SLAVE_TSIF] = &qhs_tsif,
[SLAVE_UFS_CARD_CFG] = &qhs_ufs_card_cfg,
[SLAVE_UFS_MEM_CFG] = &qhs_ufs_mem_cfg,
[SLAVE_USB3_0] = &qhs_usb3_0,
[SLAVE_USB3_1] = &qhs_usb3_1,
[SLAVE_VENUS_CFG] = &qhs_venus_cfg,
[SLAVE_VSENSE_CTRL_CFG] = &qhs_vsense_ctrl_cfg,
[SLAVE_MNOC_SF_MEM_NOC] = &qns2_mem_noc,
[SLAVE_A1NOC_SNOC] = &qns_a1noc_snoc,
[SLAVE_A2NOC_SNOC] = &qns_a2noc_snoc,
[SLAVE_MEM_NOC_GNOC] = &qns_apps_io,
[SLAVE_CAMNOC_UNCOMP] = &qns_camnoc_uncomp,
[SLAVE_SNOC_CNOC] = &qns_cnoc,
[SLAVE_CNOC_A2NOC] = &qns_cnoc_a2noc,
[SLAVE_GNOC_SNOC] = &qns_gladiator_sodv,
[SLAVE_GNOC_MEM_NOC] = &qns_gnoc_memnoc,
[SLAVE_LLCC] = &qns_llcc,
[SLAVE_MNOC_HF_MEM_NOC] = &qns_mem_noc_hf,
[SLAVE_SNOC_MEM_NOC_GC] = &qns_memnoc_gc,
[SLAVE_SNOC_MEM_NOC_SF] = &qns_memnoc_sf,
[SLAVE_MEM_NOC_SNOC] = &qns_memnoc_snoc,
[SLAVE_ANOC_PCIE_A1NOC_SNOC] = &qns_pcie_a1noc_snoc,
[SLAVE_ANOC_PCIE_SNOC] = &qns_pcie_snoc,
[SLAVE_IMEM] = &qxs_imem,
[SLAVE_PCIE_0] = &qxs_pcie,
[SLAVE_PCIE_1] = &qxs_pcie_gen3,
[SLAVE_PIMEM] = &qxs_pimem,
[SLAVE_SERVICE_A1NOC] = &srvc_aggre1_noc,
[SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
[SLAVE_SERVICE_CNOC] = &srvc_cnoc,
[SLAVE_SERVICE_GNOC] = &srvc_gnoc,
[SLAVE_SERVICE_MEM_NOC] = &srvc_memnoc,
[SLAVE_SERVICE_MNOC] = &srvc_mnoc,
[SLAVE_SERVICE_SNOC] = &srvc_snoc,
[SLAVE_QDSS_STM] = &xs_qdss_stm,
[SLAVE_TCU] = &xs_sys_tcu_cfg,
};
static struct qcom_icc_bcm *rsc_hlos_bcms[] = {
&bcm_acv,
&bcm_mc0,
&bcm_sh0,
&bcm_mm0,
&bcm_sh1,
&bcm_mm1,
&bcm_sh2,
&bcm_mm2,
&bcm_sh3,
&bcm_mm3,
&bcm_sh5,
&bcm_sn0,
&bcm_ce0,
&bcm_cn0,
&bcm_qup0,
&bcm_sn1,
&bcm_sn2,
&bcm_sn3,
&bcm_sn4,
&bcm_sn5,
&bcm_sn6,
&bcm_sn7,
&bcm_sn8,
&bcm_sn9,
&bcm_sn11,
&bcm_sn12,
&bcm_sn14,
&bcm_sn15,
};
static struct qcom_icc_desc sdm845_rsc_hlos = {
.nodes = rsc_hlos_nodes,
.num_nodes = ARRAY_SIZE(rsc_hlos_nodes),
.bcms = rsc_hlos_bcms,
.num_bcms = ARRAY_SIZE(rsc_hlos_bcms),
};
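/*
* Look up each BCM in the RPMh command database to obtain its address and
* aux data (unit, width, vcd), then record the BCM in every node it covers
* so that node-level bandwidth updates can mark the BCM dirty.
*/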
static int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev)
{
struct qcom_icc_node *qn;
const struct bcm_db *data;
size_t data_count;
int i;
bcm->addr = cmd_db_read_addr(bcm->name);
if (!bcm->addr) {
dev_err(dev, "%s could not find RPMh address\n",
bcm->name);
return -EINVAL;
}
data = cmd_db_read_aux_data(bcm->name, &data_count);
if (IS_ERR(data)) {
dev_err(dev, "%s command db read error (%ld)\n",
bcm->name, PTR_ERR(data));
return PTR_ERR(data);
}
if (!data_count) {
dev_err(dev, "%s command db missing or partial aux data\n",
bcm->name);
return -EINVAL;
}
bcm->aux_data.unit = le32_to_cpu(data->unit);
bcm->aux_data.width = le16_to_cpu(data->width);
bcm->aux_data.vcd = data->vcd;
bcm->aux_data.reserved = data->reserved;
/*
* Link Qnodes to their respective BCMs
*/
for (i = 0; i < bcm->num_nodes; i++) {
qn = bcm->nodes[i];
qn->bcms[qn->num_bcms] = bcm;
qn->num_bcms++;
}
return 0;
}
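/*
* Pack one BCM vote into a TCS command: vote_x (average) and vote_y (peak)
* are clamped to BCM_TCS_CMD_VOTE_MASK and encoded together with the commit
* and valid bits via BCM_TCS_CMD(). A command whose votes are both zero is
* flagged as not valid.
*/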
static void tcs_cmd_gen(struct tcs_cmd *cmd, u64 vote_x, u64 vote_y,
u32 addr, bool commit)
{
bool valid = true;
if (!cmd)
return;
if (vote_x == 0 && vote_y == 0)
valid = false;
if (vote_x > BCM_TCS_CMD_VOTE_MASK)
vote_x = BCM_TCS_CMD_VOTE_MASK;
if (vote_y > BCM_TCS_CMD_VOTE_MASK)
vote_y = BCM_TCS_CMD_VOTE_MASK;
cmd->addr = addr;
cmd->data = BCM_TCS_CMD(commit, valid, vote_x, vote_y);
/*
* Set the wait-for-completion flag on commands that need to complete
* before the next command is sent.
*/
if (commit)
cmd->wait = true;
}
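/*
* Flatten a VCD-sorted list of dirty BCMs into an array of TCS commands,
* filling n[] with the number of commands in each batch sent to RPMh.
*/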
static void tcs_list_gen(struct list_head *bcm_list,
struct tcs_cmd tcs_list[SDM845_MAX_VCD],
int n[SDM845_MAX_VCD])
{
struct qcom_icc_bcm *bcm;
bool commit;
size_t idx = 0, batch = 0, cur_vcd_size = 0;
memset(n, 0, sizeof(int) * SDM845_MAX_VCD);
list_for_each_entry(bcm, bcm_list, list) {
commit = false;
cur_vcd_size++;
if ((list_is_last(&bcm->list, bcm_list)) ||
bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
commit = true;
cur_vcd_size = 0;
}
tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y,
bcm->addr, commit);
idx++;
n[batch]++;
/*
* Batch the BCMs so that BCMs sharing the same VCD are never split
* across multiple payloads. This ensures that every BCM is committed,
* since the commit bit is only set on the last BCM request of each VCD.
*/
if (n[batch] >= MAX_RPMH_PAYLOAD) {
if (!commit) {
n[batch] -= cur_vcd_size;
n[batch + 1] = cur_vcd_size;
}
batch++;
}
}
}
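/*
* Aggregate the node bandwidth requests into a pair of BCM votes:
*
*   vote_x = 1000 * max_i(sum_avg_i * width / (buswidth_i * channels_i)) / unit
*   vote_y = 1000 * max_i(max_peak_i * width / buswidth_i) / unit
*
* where width and unit come from the BCM aux data read from command db.
* Keepalive BCMs are floored at a vote of 1 so they are never turned off.
*/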
static void bcm_aggregate(struct qcom_icc_bcm *bcm)
{
size_t i;
u64 agg_avg = 0;
u64 agg_peak = 0;
u64 temp;
for (i = 0; i < bcm->num_nodes; i++) {
temp = bcm->nodes[i]->sum_avg * bcm->aux_data.width;
do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
agg_avg = max(agg_avg, temp);
temp = bcm->nodes[i]->max_peak * bcm->aux_data.width;
do_div(temp, bcm->nodes[i]->buswidth);
agg_peak = max(agg_peak, temp);
}
temp = agg_avg * 1000ULL;
do_div(temp, bcm->aux_data.unit);
bcm->vote_x = temp;
temp = agg_peak * 1000ULL;
do_div(temp, bcm->aux_data.unit);
bcm->vote_y = temp;
if (bcm->keepalive && bcm->vote_x == 0 && bcm->vote_y == 0) {
bcm->vote_x = 1;
bcm->vote_y = 1;
}
bcm->dirty = false;
}
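/*
* Provider ->aggregate() callback: sum the average bandwidth and take the
* maximum peak bandwidth over all requests on a node, then mark the BCMs
* attached to that node as dirty so they get re-voted on the next commit.
*/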
static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw,
u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
{
size_t i;
struct qcom_icc_node *qn;
qn = node->data;
*agg_avg += avg_bw;
*agg_peak = max_t(u32, *agg_peak, peak_bw);
qn->sum_avg = *agg_avg;
qn->max_peak = *agg_peak;
for (i = 0; i < qn->num_bcms; i++)
qn->bcms[i]->dirty = true;
return 0;
}
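/*
* Provider ->set() callback: re-aggregate all dirty BCMs, build the TCS
* command list for the active-only state and send it to RPMh as a batch
* after invalidating the previous requests.
*/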
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
struct icc_node *node;
struct tcs_cmd cmds[SDM845_MAX_BCMS];
struct list_head commit_list;
int commit_idx[SDM845_MAX_VCD];
int ret = 0, i;
if (!src)
node = dst;
else
node = src;
qp = to_qcom_provider(node->provider);
INIT_LIST_HEAD(&commit_list);
for (i = 0; i < qp->num_bcms; i++) {
if (qp->bcms[i]->dirty) {
bcm_aggregate(qp->bcms[i]);
list_add_tail(&qp->bcms[i]->list, &commit_list);
}
}
/*
* Construct the command list from the pre-sorted (by VCD) list of
* dirty BCMs.
*/
tcs_list_gen(&commit_list, cmds, commit_idx);
if (!commit_idx[0])
return ret;
ret = rpmh_invalidate(qp->dev);
if (ret) {
pr_err("Error invalidating RPMH client (%d)\n", ret);
return ret;
}
ret = rpmh_write_batch(qp->dev, RPMH_ACTIVE_ONLY_STATE,
cmds, commit_idx);
if (ret) {
pr_err("Error sending AMC RPMH requests (%d)\n", ret);
return ret;
}
return ret;
}
static int cmp_vcd(const void *_l, const void *_r)
{
const struct qcom_icc_bcm **l = (const struct qcom_icc_bcm **)_l;
const struct qcom_icc_bcm **r = (const struct qcom_icc_bcm **)_r;
if (l[0]->aux_data.vcd < r[0]->aux_data.vcd)
return -1;
else if (l[0]->aux_data.vcd == r[0]->aux_data.vcd)
return 0;
else
return 1;
}
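/*
* Probe: register an interconnect provider, create and link an icc_node for
* every qcom_icc_node in the matched descriptor, initialize the BCMs from
* command db and sort them by VCD for efficient command-list generation.
*/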
static int qnoc_probe(struct platform_device *pdev)
{
const struct qcom_icc_desc *desc;
struct icc_onecell_data *data;
struct icc_provider *provider;
struct qcom_icc_node **qnodes;
struct qcom_icc_provider *qp;
struct icc_node *node;
size_t num_nodes, i;
int ret;
desc = of_device_get_match_data(&pdev->dev);
if (!desc)
return -EINVAL;
qnodes = desc->nodes;
num_nodes = desc->num_nodes;
qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
if (!qp)
return -ENOMEM;
data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
if (!data)
return -ENOMEM;
provider = &qp->provider;
provider->dev = &pdev->dev;
provider->set = qcom_icc_set;
provider->aggregate = qcom_icc_aggregate;
provider->xlate = of_icc_xlate_onecell;
INIT_LIST_HEAD(&provider->nodes);
provider->data = data;
qp->dev = &pdev->dev;
qp->bcms = desc->bcms;
qp->num_bcms = desc->num_bcms;
ret = icc_provider_add(provider);
if (ret) {
dev_err(&pdev->dev, "error adding interconnect provider\n");
return ret;
}
for (i = 0; i < num_nodes; i++) {
size_t j;
node = icc_node_create(qnodes[i]->id);
if (IS_ERR(node)) {
ret = PTR_ERR(node);
goto err;
}
node->name = qnodes[i]->name;
node->data = qnodes[i];
icc_node_add(node, provider);
dev_dbg(&pdev->dev, "registered node %p %s %d\n", node,
qnodes[i]->name, node->id);
/* populate links */
for (j = 0; j < qnodes[i]->num_links; j++)
icc_link_create(node, qnodes[i]->links[j]);
data->nodes[i] = node;
}
data->num_nodes = num_nodes;
for (i = 0; i < qp->num_bcms; i++)
qcom_icc_bcm_init(qp->bcms[i], &pdev->dev);
/*
* Pre-sort the BCMs by VCD so that a command list grouping BCMs with the
* same VCD together is easy to generate. VCDs are numbered with the lowest
* being the most expensive time-wise, ensuring that those commands are
* sent earliest in the queue.
*/
sort(qp->bcms, qp->num_bcms, sizeof(*qp->bcms), cmp_vcd, NULL);
platform_set_drvdata(pdev, qp);
dev_dbg(&pdev->dev, "Registered SDM845 ICC\n");
return ret;
err:
list_for_each_entry(node, &provider->nodes, node_list) {
icc_node_del(node);
icc_node_destroy(node->id);
}
icc_provider_del(provider);
return ret;
}
static int qnoc_remove(struct platform_device *pdev)
{
struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
struct icc_provider *provider = &qp->provider;
struct icc_node *n;
list_for_each_entry(n, &provider->nodes, node_list) {
icc_node_del(n);
icc_node_destroy(n->id);
}
return icc_provider_del(provider);
}
static const struct of_device_id qnoc_of_match[] = {
{ .compatible = "qcom,sdm845-rsc-hlos", .data = &sdm845_rsc_hlos },
{ },
};
MODULE_DEVICE_TABLE(of, qnoc_of_match);
static struct platform_driver qnoc_driver = {
.probe = qnoc_probe,
.remove = qnoc_remove,
.driver = {
.name = "qnoc-sdm845",
.of_match_table = qnoc_of_match,
},
};
module_platform_driver(qnoc_driver);
MODULE_AUTHOR("David Dai <daidavid1@codeaurora.org>");
MODULE_DESCRIPTION("Qualcomm sdm845 NoC driver");
MODULE_LICENSE("GPL v2");

View File

@@ -569,6 +569,7 @@ cuda_interrupt(int irq, void *arg)
 	unsigned char ibuf[16];
 	int ibuf_len = 0;
 	int complete = 0;
+	bool full;
 
 	spin_lock_irqsave(&cuda_lock, flags);
@@ -656,12 +657,13 @@ idle_state:
 		break;
 
 	case reading:
-		if (reading_reply ? ARRAY_FULL(current_req->reply, reply_ptr)
-		    : ARRAY_FULL(cuda_rbuf, reply_ptr))
+		full = reading_reply ? ARRAY_FULL(current_req->reply, reply_ptr)
+				     : ARRAY_FULL(cuda_rbuf, reply_ptr);
+		if (full)
 			(void)in_8(&via[SR]);
 		else
 			*reply_ptr++ = in_8(&via[SR]);
-		if (!TREQ_asserted(status)) {
+		if (!TREQ_asserted(status) || full) {
 			if (mcu_is_egret)
 				assert_TACK();
 			/* that's all folks */

View File

@@ -295,6 +295,17 @@ config QCOM_COINCELL
 	  to maintain PMIC register and RTC state in the absence of
 	  external power.
 
+config QCOM_FASTRPC
+	tristate "Qualcomm FastRPC"
+	depends on ARCH_QCOM || COMPILE_TEST
+	depends on RPMSG
+	select DMA_SHARED_BUFFER
+	help
+	  Provides a communication mechanism that allows for clients to
+	  make remote method invocations across processor boundary to
+	  applications DSP processor. Say M if you want to enable this
+	  module.
+
 config SGI_GRU
 	tristate "SGI GRU driver"
 	depends on X86_UV && SMP
@@ -535,4 +546,5 @@ source "drivers/misc/echo/Kconfig"
 source "drivers/misc/cxl/Kconfig"
 source "drivers/misc/ocxl/Kconfig"
 source "drivers/misc/cardreader/Kconfig"
+source "drivers/misc/habanalabs/Kconfig"
 endmenu

View File

@@ -18,6 +18,7 @@ obj-$(CONFIG_TIFM_CORE) += tifm_core.o
 obj-$(CONFIG_TIFM_7XX1)		+= tifm_7xx1.o
 obj-$(CONFIG_PHANTOM)		+= phantom.o
 obj-$(CONFIG_QCOM_COINCELL)	+= qcom-coincell.o
+obj-$(CONFIG_QCOM_FASTRPC)	+= fastrpc.o
 obj-$(CONFIG_SENSORS_BH1770)	+= bh1770glc.o
 obj-$(CONFIG_SENSORS_APDS990X)	+= apds990x.o
 obj-$(CONFIG_SGI_IOC4)		+= ioc4.o
@@ -59,3 +60,4 @@ obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o
 obj-$(CONFIG_OCXL)		+= ocxl/
 obj-y				+= cardreader/
 obj-$(CONFIG_PVPANIC)		+= pvpanic.o
+obj-$(CONFIG_HABANA_AI)		+= habanalabs/

View File

@@ -205,9 +205,7 @@ static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg)
 		dpot_write_r8d8(dpot,
 				(DPOT_AD5270_1_2_4_READ_RDAC << 2), 0);
 
-		value = dpot_read_r8d16(dpot,
-					DPOT_AD5270_1_2_4_RDAC << 2);
-
+		value = dpot_read_r8d16(dpot, DPOT_AD5270_1_2_4_RDAC << 2);
 		if (value < 0)
 			return value;
 		/*

Some files were not shown because too many files have changed in this diff