Merge tag 'drm-misc-next-2019-08-19' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.4:

UAPI Changes:

Cross-subsystem Changes:

Core Changes:
- dma-buf: add reservation_object_fences helper, relax
  reservation_object_add_shared_fence, remove the reservation_object seq
  number (and then restored)
- dma-fence: shrink the dma_fence structure, merge dma_fence_signal and
  dma_fence_signal_locked, store the timestamp in struct dma_fence in a
  union with cb_list

Driver Changes:
- More dt-bindings YAML conversions
- More removal of drmP.h includes
- dw-hdmi: support get_eld and various i2s improvements
- gm12u320: few fixes
- meson: global cleanup
- panfrost: few refactors, support for GPU heap allocations
- sun4i: support for DDC enable GPIO
- New panels: TI nspire, NEC NL8048HL11, LG Philips LB035Q02, Sharp
  LS037V7DW01, Sony ACX565AKM, Toppoly TD028TTEC1, Toppoly TD043MTEA1

Signed-off-by: Dave Airlie <airlied@redhat.com>
[airlied: fixup dma_resv rename fallout]
From: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190819141923.7l2adietcr2pioct@flea
This commit is contained in:
commit 5f680625d9

@@ -1,119 +0,0 @@
Amlogic specific extensions to the Synopsys Designware HDMI Controller
======================================================================

The Amlogic Meson Synopsys Designware Integration is composed of:

- A Synopsys DesignWare HDMI Controller IP
- A TOP control block controlling the Clocks and PHY
- A custom HDMI PHY in order to convert video to TMDS signal

 ___________________________________
|            HDMI TOP               |<= HPD
|___________________________________|
|                  |                |
|   Synopsys HDMI  |    HDMI PHY    |=> TMDS
|    Controller    |________________|
|___________________________________|<=> DDC

The HDMI TOP block only supports HPD sensing.
The Synopsys HDMI Controller interrupt is routed through the
TOP Block interrupt.
Communication to the TOP Block and the Synopsys HDMI Controller is done
via a pair of dedicated addr+read/write registers.
The HDMI PHY is configured by registers in the HHI register block.

Pixel data arrives in 4:4:4 format from the VENC block and the VPU HDMI mux
selects either the ENCI encoder for the 576i or 480i formats or the ENCP
encoder for all the other formats including interlaced HD formats.

The VENC uses a DVI encoder on top of the ENCI or ENCP encoders to generate
DVI timings for the HDMI controller.

The Amlogic Meson GXBB, GXL and GXM SoC families embed the Synopsys DesignWare
HDMI TX IP version 2.01a with HDCP and I2C & S/PDIF
audio source interfaces.

Required properties:
- compatible: value should be different for each SoC family as:
	- GXBB (S905) : "amlogic,meson-gxbb-dw-hdmi"
	- GXL (S905X, S905D) : "amlogic,meson-gxl-dw-hdmi"
	- GXM (S912) : "amlogic,meson-gxm-dw-hdmi"
	followed by the common "amlogic,meson-gx-dw-hdmi"
	- G12A (S905X2, S905Y2, S905D2) : "amlogic,meson-g12a-dw-hdmi"
- reg: Physical base address and length of the controller's registers.
- interrupts: The HDMI interrupt number
- clocks, clock-names: must have the phandles to the HDMI iahb and isfr clocks,
  and the Amlogic Meson venci clocks as described in
  Documentation/devicetree/bindings/clock/clock-bindings.txt;
  the clocks are SoC specific, the clock-names should be "iahb", "isfr", "venci"
- resets, reset-names: must have the phandles to the HDMI apb, glue and phy
  resets as described in
  Documentation/devicetree/bindings/reset/reset.txt;
  the reset-names should be "hdmitx_apb", "hdmitx", "hdmitx_phy"

Optional properties:
- hdmi-supply: Optional phandle to an external 5V regulator to power the HDMI
  logic, as described in the file ../regulator/regulator.txt

Required nodes:

The connections to the HDMI ports are modeled using the OF graph
bindings specified in Documentation/devicetree/bindings/graph.txt.

The following table lists for each supported model the port number
corresponding to each HDMI output and input.

		Port 0		Port 1
-----------------------------------------
S905 (GXBB)	VENC Input	TMDS Output
S905X (GXL)	VENC Input	TMDS Output
S905D (GXL)	VENC Input	TMDS Output
S912 (GXM)	VENC Input	TMDS Output
S905X2 (G12A)	VENC Input	TMDS Output
S905Y2 (G12A)	VENC Input	TMDS Output
S905D2 (G12A)	VENC Input	TMDS Output

Example:

hdmi-connector {
	compatible = "hdmi-connector";
	type = "a";

	port {
		hdmi_connector_in: endpoint {
			remote-endpoint = <&hdmi_tx_tmds_out>;
		};
	};
};

hdmi_tx: hdmi-tx@c883a000 {
	compatible = "amlogic,meson-gxbb-dw-hdmi", "amlogic,meson-gx-dw-hdmi";
	reg = <0x0 0xc883a000 0x0 0x1c>;
	interrupts = <GIC_SPI 57 IRQ_TYPE_EDGE_RISING>;
	resets = <&reset RESET_HDMITX_CAPB3>,
		 <&reset RESET_HDMI_SYSTEM_RESET>,
		 <&reset RESET_HDMI_TX>;
	reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
	clocks = <&clkc CLKID_HDMI_PCLK>,
		 <&clkc CLKID_CLK81>,
		 <&clkc CLKID_GCLK_VENCI_INT0>;
	clock-names = "isfr", "iahb", "venci";
	#address-cells = <1>;
	#size-cells = <0>;

	/* VPU VENC Input */
	hdmi_tx_venc_port: port@0 {
		reg = <0>;

		hdmi_tx_in: endpoint {
			remote-endpoint = <&hdmi_tx_out>;
		};
	};

	/* TMDS Output */
	hdmi_tx_tmds_port: port@1 {
		reg = <1>;

		hdmi_tx_tmds_out: endpoint {
			remote-endpoint = <&hdmi_connector_in>;
		};
	};
};
@@ -0,0 +1,150 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/amlogic,meson-dw-hdmi.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic specific extensions to the Synopsys Designware HDMI Controller

maintainers:
  - Neil Armstrong <narmstrong@baylibre.com>

description: |
  The Amlogic Meson Synopsys Designware Integration is composed of
  - A Synopsys DesignWare HDMI Controller IP
  - A TOP control block controlling the Clocks and PHY
  - A custom HDMI PHY in order to convert video to TMDS signal
   ___________________________________
  |            HDMI TOP               |<= HPD
  |___________________________________|
  |                  |                |
  |   Synopsys HDMI  |    HDMI PHY    |=> TMDS
  |    Controller    |________________|
  |___________________________________|<=> DDC

  The HDMI TOP block only supports HPD sensing.
  The Synopsys HDMI Controller interrupt is routed through the
  TOP Block interrupt.
  Communication to the TOP Block and the Synopsys HDMI Controller is done
  via a pair of dedicated addr+read/write registers.
  The HDMI PHY is configured by registers in the HHI register block.

  Pixel data arrives in "4:4:4" format from the VENC block and the VPU HDMI mux
  selects either the ENCI encoder for the 576i or 480i formats or the ENCP
  encoder for all the other formats including interlaced HD formats.

  The VENC uses a DVI encoder on top of the ENCI or ENCP encoders to generate
  DVI timings for the HDMI controller.

  The Amlogic Meson GXBB, GXL and GXM SoC families embed the Synopsys DesignWare
  HDMI TX IP version 2.01a with HDCP and I2C & S/PDIF
  audio source interfaces.

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - amlogic,meson-gxbb-dw-hdmi # GXBB (S905)
              - amlogic,meson-gxl-dw-hdmi # GXL (S905X, S905D)
              - amlogic,meson-gxm-dw-hdmi # GXM (S912)
          - const: amlogic,meson-gx-dw-hdmi
      - enum:
          - amlogic,meson-g12a-dw-hdmi # G12A (S905X2, S905Y2, S905D2)

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  clocks:
    minItems: 3

  clock-names:
    items:
      - const: isfr
      - const: iahb
      - const: venci

  resets:
    minItems: 3

  reset-names:
    items:
      - const: hdmitx_apb
      - const: hdmitx
      - const: hdmitx_phy

  hdmi-supply:
    description: phandle to an external 5V regulator to power the HDMI logic
    allOf:
      - $ref: /schemas/types.yaml#/definitions/phandle

  port@0:
    type: object
    description:
      A port node pointing to the VENC Input port node.

  port@1:
    type: object
    description:
      A port node pointing to the TMDS Output port node.

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

  "#sound-dai-cells":
    const: 0

required:
  - compatible
  - reg
  - interrupts
  - clocks
  - clock-names
  - resets
  - reset-names
  - port@0
  - port@1
  - "#address-cells"
  - "#size-cells"

additionalProperties: false

examples:
  - |
    hdmi_tx: hdmi-tx@c883a000 {
        compatible = "amlogic,meson-gxbb-dw-hdmi", "amlogic,meson-gx-dw-hdmi";
        reg = <0xc883a000 0x1c>;
        interrupts = <57>;
        resets = <&reset_apb>, <&reset_hdmitx>, <&reset_hdmitx_phy>;
        reset-names = "hdmitx_apb", "hdmitx", "hdmitx_phy";
        clocks = <&clk_isfr>, <&clk_iahb>, <&clk_venci>;
        clock-names = "isfr", "iahb", "venci";
        #address-cells = <1>;
        #size-cells = <0>;

        /* VPU VENC Input */
        hdmi_tx_venc_port: port@0 {
            reg = <0>;

            hdmi_tx_in: endpoint {
                remote-endpoint = <&hdmi_tx_out>;
            };
        };

        /* TMDS Output */
        hdmi_tx_tmds_port: port@1 {
            reg = <1>;

            hdmi_tx_tmds_out: endpoint {
                remote-endpoint = <&hdmi_connector_in>;
            };
        };
    };
@@ -1,121 +0,0 @@
Amlogic Meson Display Controller
================================

The Amlogic Meson Display controller is composed of several components
that are going to be documented below:

DMC|---------------VPU (Video Processing Unit)----------------|------HHI------|
   | vd1   _______     _____________    _________________     |               |
D  |------|       |---|             |  |                 |    |   HDMI PLL    |
D  | vd2  |  VIU  |   | Video Post  |  | Video Encoders  |<---|-----VCLK      |
R  |------|       |---| Processing  |  |                 |    |               |
   | osd2 |       |   |             |--| Enci -----------|----|-----VDAC------|
R  |------|  CSC  |---|   Scalers   |  | Encp -----------|----|----HDMI-TX----|
A  | osd1 |       |   |  Blenders   |  | Encl -----------|----|---------------|
M  |------|_______|---|_____________|  |_________________|    |               |
___|__________________________________________________________|_______________|

VIU: Video Input Unit
---------------------

The Video Input Unit is in charge of the pixel scanout from the DDR memory.
It fetches the frames addresses, stride and parameters from the "Canvas" memory.
This part is also in charge of the CSC (Colorspace Conversion).
It can handle 2 OSD Planes and 2 Video Planes.

VPP: Video Post Processing
--------------------------

The Video Post Processing is in charge of the scaling and blending of the
various planes into a single pixel stream.
There is a special "pre-blending" used by the video planes with a dedicated
scaler and a "post-blending" to merge with the OSD Planes.
The OSD planes also have a dedicated scaler for one of the OSD.

VENC: Video Encoders
--------------------

The VENC is composed of the multiple pixel encoders:
- ENCI : Interlace Video encoder for CVBS and Interlace HDMI
- ENCP : Progressive Video Encoder for HDMI
- ENCL : LCD LVDS Encoder
The VENC Unit gets a Pixel Clock (VCLK) from a dedicated HDMI PLL and clock
tree and provides the scanout clock to the VPP and VIU.
The ENCI is connected to a single VDAC for Composite Output.
The ENCI and ENCP are connected to an on-chip HDMI Transceiver.

Device Tree Bindings:
---------------------

VPU: Video Processing Unit
--------------------------

Required properties:
- compatible: value should be different for each SoC family as:
	- GXBB (S905) : "amlogic,meson-gxbb-vpu"
	- GXL (S905X, S905D) : "amlogic,meson-gxl-vpu"
	- GXM (S912) : "amlogic,meson-gxm-vpu"
	followed by the common "amlogic,meson-gx-vpu"
	- G12A (S905X2, S905Y2, S905D2) : "amlogic,meson-g12a-vpu"
- reg: base address and size of the following memory-mapped regions:
	- vpu
	- hhi
- reg-names: should contain the names of the previous memory regions
- interrupts: should contain the VENC Vsync interrupt number
- amlogic,canvas: phandle to canvas provider node as described in the file
  ../soc/amlogic/amlogic,canvas.txt

Optional properties:
- power-domains: Optional phandle to associated power domain as described in
  the file ../power/power_domain.txt

Required nodes:

The connections to the VPU output video ports are modeled using the OF graph
bindings specified in Documentation/devicetree/bindings/graph.txt.

The following table lists for each supported model the port number
corresponding to each VPU output.

		Port 0		Port 1
-----------------------------------------
S905 (GXBB)	CVBS VDAC	HDMI-TX
S905X (GXL)	CVBS VDAC	HDMI-TX
S905D (GXL)	CVBS VDAC	HDMI-TX
S912 (GXM)	CVBS VDAC	HDMI-TX
S905X2 (G12A)	CVBS VDAC	HDMI-TX
S905Y2 (G12A)	CVBS VDAC	HDMI-TX
S905D2 (G12A)	CVBS VDAC	HDMI-TX

Example:

tv-connector {
	compatible = "composite-video-connector";

	port {
		tv_connector_in: endpoint {
			remote-endpoint = <&cvbs_vdac_out>;
		};
	};
};

vpu: vpu@d0100000 {
	compatible = "amlogic,meson-gxbb-vpu";
	reg = <0x0 0xd0100000 0x0 0x100000>,
	      <0x0 0xc883c000 0x0 0x1000>,
	      <0x0 0xc8838000 0x0 0x1000>;
	reg-names = "vpu", "hhi", "dmc";
	interrupts = <GIC_SPI 3 IRQ_TYPE_EDGE_RISING>;
	#address-cells = <1>;
	#size-cells = <0>;

	/* CVBS VDAC output port */
	port@0 {
		reg = <0>;

		cvbs_vdac_out: endpoint {
			remote-endpoint = <&tv_connector_in>;
		};
	};
};
@@ -0,0 +1,137 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
$id: "http://devicetree.org/schemas/display/amlogic,meson-vpu.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic Meson Display Controller

maintainers:
  - Neil Armstrong <narmstrong@baylibre.com>

description: |
  The Amlogic Meson Display controller is composed of several components
  that are going to be documented below

  DMC|---------------VPU (Video Processing Unit)----------------|------HHI------|
     | vd1   _______     _____________    _________________     |               |
  D  |------|       |---|             |  |                 |    |   HDMI PLL    |
  D  | vd2  |  VIU  |   | Video Post  |  | Video Encoders  |<---|-----VCLK      |
  R  |------|       |---| Processing  |  |                 |    |               |
     | osd2 |       |   |             |--| Enci -----------|----|-----VDAC------|
  R  |------|  CSC  |---|   Scalers   |  | Encp -----------|----|----HDMI-TX----|
  A  | osd1 |       |   |  Blenders   |  | Encl -----------|----|---------------|
  M  |------|_______|---|_____________|  |_________________|    |               |
  ___|__________________________________________________________|_______________|

  VIU: Video Input Unit
  ---------------------

  The Video Input Unit is in charge of the pixel scanout from the DDR memory.
  It fetches the frames addresses, stride and parameters from the "Canvas" memory.
  This part is also in charge of the CSC (Colorspace Conversion).
  It can handle 2 OSD Planes and 2 Video Planes.

  VPP: Video Post Processing
  --------------------------

  The Video Post Processing is in charge of the scaling and blending of the
  various planes into a single pixel stream.
  There is a special "pre-blending" used by the video planes with a dedicated
  scaler and a "post-blending" to merge with the OSD Planes.
  The OSD planes also have a dedicated scaler for one of the OSD.

  VENC: Video Encoders
  --------------------

  The VENC is composed of the multiple pixel encoders
  - ENCI : Interlace Video encoder for CVBS and Interlace HDMI
  - ENCP : Progressive Video Encoder for HDMI
  - ENCL : LCD LVDS Encoder
  The VENC Unit gets a Pixel Clock (VCLK) from a dedicated HDMI PLL and clock
  tree and provides the scanout clock to the VPP and VIU.
  The ENCI is connected to a single VDAC for Composite Output.
  The ENCI and ENCP are connected to an on-chip HDMI Transceiver.

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - amlogic,meson-gxbb-vpu # GXBB (S905)
              - amlogic,meson-gxl-vpu # GXL (S905X, S905D)
              - amlogic,meson-gxm-vpu # GXM (S912)
          - const: amlogic,meson-gx-vpu
      - enum:
          - amlogic,meson-g12a-vpu # G12A (S905X2, S905Y2, S905D2)

  reg:
    maxItems: 2

  reg-names:
    items:
      - const: vpu
      - const: hhi

  interrupts:
    maxItems: 1

  power-domains:
    maxItems: 1
    description: phandle to the associated power domain

  port@0:
    type: object
    description:
      A port node pointing to the CVBS VDAC port node.

  port@1:
    type: object
    description:
      A port node pointing to the HDMI-TX port node.

  "#address-cells":
    const: 1

  "#size-cells":
    const: 0

required:
  - compatible
  - reg
  - interrupts
  - port@0
  - port@1
  - "#address-cells"
  - "#size-cells"

examples:
  - |
    vpu: vpu@d0100000 {
        compatible = "amlogic,meson-gxbb-vpu", "amlogic,meson-gx-vpu";
        reg = <0xd0100000 0x100000>, <0xc883c000 0x1000>;
        reg-names = "vpu", "hhi";
        interrupts = <3>;
        #address-cells = <1>;
        #size-cells = <0>;

        /* CVBS VDAC output port */
        port@0 {
            reg = <0>;

            cvbs_vdac_out: endpoint {
                remote-endpoint = <&tv_connector_in>;
            };
        };

        /* HDMI TX output port */
        port@1 {
            reg = <1>;

            hdmi_tx_out: endpoint {
                remote-endpoint = <&hdmi_tx_in>;
            };
        };
    };
@@ -9,6 +9,7 @@ Optional properties:
 - label: a symbolic name for the connector
 - hpd-gpios: HPD GPIO number
 - ddc-i2c-bus: phandle link to the I2C controller used for DDC EDID probing
+- ddc-en-gpios: signal to enable DDC bus
 
 Required nodes:
 - Video port for HDMI input
@@ -0,0 +1,62 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/nec,nl8048hl11.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NEC NL8048HL11 4.1" WVGA TFT LCD panel

description:
  The NEC NL8048HL11 is a 4.1" WVGA TFT LCD panel with a 24-bit RGB parallel
  data interface and an SPI control interface.

maintainers:
  - Laurent Pinchart <laurent.pinchart@ideasonboard.com>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    const: nec,nl8048hl11

  label: true
  port: true
  reg: true
  reset-gpios: true

  spi-max-frequency:
    maximum: 10000000

required:
  - compatible
  - reg
  - reset-gpios
  - port

additionalProperties: false

examples:
  - |
    #include <dt-bindings/gpio/gpio.h>

    spi0 {
        #address-cells = <1>;
        #size-cells = <0>;

        lcd_panel: panel@0 {
            compatible = "nec,nl8048hl11";
            reg = <0>;
            spi-max-frequency = <10000000>;

            reset-gpios = <&gpio7 7 GPIO_ACTIVE_LOW>;

            port {
                lcd_in: endpoint {
                    remote-endpoint = <&dpi_out>;
                };
            };
        };
    };

...
@@ -0,0 +1,36 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/panel/ti,nspire.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Texas Instruments NSPIRE Display Panels

maintainers:
  - Linus Walleij <linus.walleij@linaro.org>

allOf:
  - $ref: panel-common.yaml#

properties:
  compatible:
    enum:
      - ti,nspire-cx-lcd-panel
      - ti,nspire-classic-lcd-panel
  port: true

required:
  - compatible

additionalProperties: false

examples:
  - |
    panel {
        compatible = "ti,nspire-cx-lcd-panel";
        port {
            panel_in: endpoint {
                remote-endpoint = <&pads>;
            };
        };
    };
@@ -511,6 +511,8 @@ patternProperties:
     description: Lenovo Group Ltd.
   "^lg,.*":
     description: LG Corporation
+  "^lgphilips,.*":
+    description: LG Display
   "^libretech,.*":
     description: Shenzhen Libre Technology Co., Ltd
   "^licheepi,.*":

@@ -933,6 +935,9 @@ patternProperties:
     description: Tecon Microprocessor Technologies, LLC.
   "^topeet,.*":
     description: Topeet
+  "^toppoly,.*":
+    description: TPO (deprecated, use tpo)
+    deprecated: true
   "^toradex,.*":
     description: Toradex AG
   "^toshiba,.*":
@@ -5334,8 +5334,8 @@ L: linux-amlogic@lists.infradead.org
 W:	http://linux-meson.com/
 S:	Supported
 F:	drivers/gpu/drm/meson/
-F:	Documentation/devicetree/bindings/display/amlogic,meson-vpu.txt
-F:	Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.txt
+F:	Documentation/devicetree/bindings/display/amlogic,meson-vpu.yaml
+F:	Documentation/devicetree/bindings/display/amlogic,meson-dw-hdmi.yaml
 F:	Documentation/gpu/meson.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
-	 reservation.o seqno-fence.o
+	 dma-resv.o seqno-fence.o
 obj-$(CONFIG_SYNC_FILE)	+= sync_file.o
 obj-$(CONFIG_SW_SYNC)	+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)	+= udmabuf.o
@@ -21,7 +21,7 @@
 #include <linux/module.h>
 #include <linux/seq_file.h>
 #include <linux/poll.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/mm.h>
 #include <linux/mount.h>
 #include <linux/pseudo_fs.h>

@@ -104,8 +104,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	list_del(&dmabuf->list_node);
 	mutex_unlock(&db_list.lock);
 
-	if (dmabuf->resv == (struct reservation_object *)&dmabuf[1])
-		reservation_object_fini(dmabuf->resv);
+	if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
+		dma_resv_fini(dmabuf->resv);
 
 	module_put(dmabuf->owner);
 	kfree(dmabuf);

@@ -165,7 +165,7 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
  * To support cross-device and cross-driver synchronization of buffer access
  * implicit fences (represented internally in the kernel with &struct fence) can
  * be attached to a &dma_buf. The glue for that and a few related things are
- * provided in the &reservation_object structure.
+ * provided in the &dma_resv structure.
  *
  * Userspace can query the state of these implicitly tracked fences using poll()
  * and related system calls:

@@ -195,8 +195,8 @@ static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
 static __poll_t dma_buf_poll(struct file *file, poll_table *poll)
 {
 	struct dma_buf *dmabuf;
-	struct reservation_object *resv;
-	struct reservation_object_list *fobj;
+	struct dma_resv *resv;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence_excl;
 	__poll_t events;
 	unsigned shared_count, seq;

@@ -506,13 +506,13 @@ err_alloc_file:
 struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 {
 	struct dma_buf *dmabuf;
-	struct reservation_object *resv = exp_info->resv;
+	struct dma_resv *resv = exp_info->resv;
 	struct file *file;
 	size_t alloc_size = sizeof(struct dma_buf);
 	int ret;
 
 	if (!exp_info->resv)
-		alloc_size += sizeof(struct reservation_object);
+		alloc_size += sizeof(struct dma_resv);
 	else
 		/* prevent &dma_buf[1] == dma_buf->resv */
 		alloc_size += 1;

@@ -544,8 +544,8 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 	dmabuf->cb_excl.active = dmabuf->cb_shared.active = 0;
 
 	if (!resv) {
-		resv = (struct reservation_object *)&dmabuf[1];
-		reservation_object_init(resv);
+		resv = (struct dma_resv *)&dmabuf[1];
+		dma_resv_init(resv);
 	}
 	dmabuf->resv = resv;

@@ -909,11 +909,11 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 {
 	bool write = (direction == DMA_BIDIRECTIONAL ||
 		      direction == DMA_TO_DEVICE);
-	struct reservation_object *resv = dmabuf->resv;
+	struct dma_resv *resv = dmabuf->resv;
 	long ret;
 
 	/* Wait on any implicit rendering fences */
-	ret = reservation_object_wait_timeout_rcu(resv, write, true,
-						  MAX_SCHEDULE_TIMEOUT);
+	ret = dma_resv_wait_timeout_rcu(resv, write, true,
+					MAX_SCHEDULE_TIMEOUT);
 	if (ret < 0)
 		return ret;

@@ -1154,8 +1154,8 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
 	int ret;
 	struct dma_buf *buf_obj;
 	struct dma_buf_attachment *attach_obj;
-	struct reservation_object *robj;
-	struct reservation_object_list *fobj;
+	struct dma_resv *robj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence;
 	unsigned seq;
 	int count = 0, attach_count, shared_count, i;
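The `dma_buf_export()` hunks above rely on a tail-allocation trick: when the caller supplies no reservation object, the `dma_resv` is carved out of the same allocation, directly after the `dma_buf` (`&dmabuf[1]`), so release needs only one `free`. The following is a standalone userspace sketch of that pattern with invented types (`struct buf`, `struct resv`, `buf_export`), not the kernel code itself:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for dma_buf / dma_resv. */
struct resv { int fence_count; };

struct buf {
	const char *name;
	struct resv *resv;
};

static struct buf *buf_export(struct resv *caller_resv)
{
	size_t alloc_size = sizeof(struct buf);

	if (!caller_resv)
		alloc_size += sizeof(struct resv); /* room for the embedded resv */

	struct buf *b = calloc(1, alloc_size);
	if (!b)
		return NULL;

	if (!caller_resv) {
		/* The embedded resv lives right after the buf in the same block */
		b->resv = (struct resv *)&b[1];
		b->resv->fence_count = 0;
	} else {
		b->resv = caller_resv;
	}
	return b;
}

static void buf_release(struct buf *b)
{
	/* One free() releases both the buf and its embedded resv */
	free(b);
}
```

The `resv == (struct resv *)&b[1]` comparison is exactly how the release path above decides whether the reservation object is owned by the buffer.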
@@ -13,6 +13,8 @@
 #include <linux/slab.h>
 #include <linux/dma-fence-array.h>
 
+#define PENDING_ERROR 1
+
 static const char *dma_fence_array_get_driver_name(struct dma_fence *fence)
 {
 	return "dma_fence_array";

@@ -23,10 +25,29 @@ static const char *dma_fence_array_get_timeline_name(struct dma_fence *fence)
 	return "unbound";
 }
 
+static void dma_fence_array_set_pending_error(struct dma_fence_array *array,
+					      int error)
+{
+	/*
+	 * Propagate the first error reported by any of our fences, but only
+	 * before we ourselves are signaled.
+	 */
+	if (error)
+		cmpxchg(&array->base.error, PENDING_ERROR, error);
+}
+
+static void dma_fence_array_clear_pending_error(struct dma_fence_array *array)
+{
+	/* Clear the error flag if not actually set. */
+	cmpxchg(&array->base.error, PENDING_ERROR, 0);
+}
+
 static void irq_dma_fence_array_work(struct irq_work *wrk)
 {
 	struct dma_fence_array *array = container_of(wrk, typeof(*array), work);
 
+	dma_fence_array_clear_pending_error(array);
+
 	dma_fence_signal(&array->base);
 	dma_fence_put(&array->base);
 }

@@ -38,6 +59,8 @@ static void dma_fence_array_cb_func(struct dma_fence *f,
 		container_of(cb, struct dma_fence_array_cb, cb);
 	struct dma_fence_array *array = array_cb->array;
 
+	dma_fence_array_set_pending_error(array, f->error);
+
 	if (atomic_dec_and_test(&array->num_pending))
 		irq_work_queue(&array->work);
 	else

@@ -63,9 +86,14 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
 		dma_fence_get(&array->base);
 		if (dma_fence_add_callback(array->fences[i], &cb[i].cb,
 					   dma_fence_array_cb_func)) {
+			int error = array->fences[i]->error;
+
+			dma_fence_array_set_pending_error(array, error);
 			dma_fence_put(&array->base);
-			if (atomic_dec_and_test(&array->num_pending))
+			if (atomic_dec_and_test(&array->num_pending)) {
+				dma_fence_array_clear_pending_error(array);
 				return false;
+			}
 		}
 	}

@@ -142,6 +170,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 	atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
 	array->fences = fences;
 
+	array->base.error = PENDING_ERROR;
+
 	return array;
 }
 EXPORT_SYMBOL(dma_fence_array_create);
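The `PENDING_ERROR` scheme above works because `cmpxchg` only succeeds while the error field still holds the sentinel: the first real error replaces it and later errors are ignored, and if no error ever arrived the sentinel is folded back to 0 just before the aggregate fence signals. A standalone userspace sketch of the same idea using C11 atomics (the type and function names here are invented, not the kernel API):

```c
#include <stdatomic.h>

#define PENDING_ERROR 1	/* sentinel: "no error reported yet" */

struct fence_array { _Atomic int error; };

static void array_init(struct fence_array *a)
{
	atomic_store(&a->error, PENDING_ERROR);
}

static void array_set_pending_error(struct fence_array *a, int error)
{
	int expected = PENDING_ERROR;

	/* Only the first reported error wins; later ones are ignored. */
	if (error)
		atomic_compare_exchange_strong(&a->error, &expected, error);
}

static void array_clear_pending_error(struct fence_array *a)
{
	int expected = PENDING_ERROR;

	/* If nothing reported an error, fold the sentinel back to 0. */
	atomic_compare_exchange_strong(&a->error, &expected, 0);
}
```

Using 1 as the sentinel is safe here because real errors are negative errno values, so a genuine error can never collide with `PENDING_ERROR`.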
@ -60,7 +60,7 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(1);
|
|||
*
|
||||
* - Then there's also implicit fencing, where the synchronization points are
|
||||
* implicitly passed around as part of shared &dma_buf instances. Such
|
||||
* implicit fences are stored in &struct reservation_object through the
|
||||
* implicit fences are stored in &struct dma_resv through the
|
||||
* &dma_buf.resv pointer.
|
||||
*/
|
||||
|
||||
|
@@ -129,31 +129,27 @@ EXPORT_SYMBOL(dma_fence_context_alloc);
 int dma_fence_signal_locked(struct dma_fence *fence)
 {
 	struct dma_fence_cb *cur, *tmp;
-	int ret = 0;
+	struct list_head cb_list;
 
 	lockdep_assert_held(fence->lock);
 
-	if (WARN_ON(!fence))
+	if (unlikely(test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+				      &fence->flags)))
 		return -EINVAL;
 
-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
-		ret = -EINVAL;
+	/* Stash the cb_list before replacing it with the timestamp */
+	list_replace(&fence->cb_list, &cb_list);
 
-		/*
-		 * we might have raced with the unlocked dma_fence_signal,
-		 * still run through all callbacks
-		 */
-	} else {
-		fence->timestamp = ktime_get();
-		set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-		trace_dma_fence_signaled(fence);
-	}
+	fence->timestamp = ktime_get();
+	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
+	trace_dma_fence_signaled(fence);
 
-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-		list_del_init(&cur->node);
+	list_for_each_entry_safe(cur, tmp, &cb_list, node) {
+		INIT_LIST_HEAD(&cur->node);
 		cur->func(fence, cur);
 	}
-	return ret;
+
+	return 0;
 }
 EXPORT_SYMBOL(dma_fence_signal_locked);
@@ -173,28 +169,16 @@ EXPORT_SYMBOL(dma_fence_signal_locked);
 int dma_fence_signal(struct dma_fence *fence)
 {
 	unsigned long flags;
+	int ret;
 
 	if (!fence)
 		return -EINVAL;
 
-	if (test_and_set_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
-		return -EINVAL;
+	spin_lock_irqsave(fence->lock, flags);
+	ret = dma_fence_signal_locked(fence);
+	spin_unlock_irqrestore(fence->lock, flags);
 
-	fence->timestamp = ktime_get();
-	set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
-	trace_dma_fence_signaled(fence);
-
-	if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
-		struct dma_fence_cb *cur, *tmp;
-
-		spin_lock_irqsave(fence->lock, flags);
-		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
-			list_del_init(&cur->node);
-			cur->func(fence, cur);
-		}
-		spin_unlock_irqrestore(fence->lock, flags);
-	}
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(dma_fence_signal);
@@ -248,7 +232,8 @@ void dma_fence_release(struct kref *kref)
 
 	trace_dma_fence_destroy(fence);
 
-	if (WARN(!list_empty(&fence->cb_list),
+	if (WARN(!list_empty(&fence->cb_list) &&
+		 !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags),
 		 "Fence %s:%s:%llx:%llx released with pending signals!\n",
 		 fence->ops->get_driver_name(fence),
 		 fence->ops->get_timeline_name(fence),
@@ -32,7 +32,7 @@
  * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
  */
 
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/export.h>
 
 /**

@@ -56,16 +56,15 @@ const char reservation_seqcount_string[] = "reservation_seqcount";
 EXPORT_SYMBOL(reservation_seqcount_string);
 
 /**
- * reservation_object_list_alloc - allocate fence list
+ * dma_resv_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
  *
- * Allocate a new reservation_object_list and make sure to correctly initialize
+ * Allocate a new dma_resv_list and make sure to correctly initialize
  * shared_max.
  */
-static struct reservation_object_list *
-reservation_object_list_alloc(unsigned int shared_max)
+static struct dma_resv_list *dma_resv_list_alloc(unsigned int shared_max)
 {
-	struct reservation_object_list *list;
+	struct dma_resv_list *list;
 
 	list = kmalloc(offsetof(typeof(*list), shared[shared_max]), GFP_KERNEL);
 	if (!list)

@@ -78,12 +77,12 @@ reservation_object_list_alloc(unsigned int shared_max)
 }
 
 /**
- * reservation_object_list_free - free fence list
+ * dma_resv_list_free - free fence list
  * @list: list to free
  *
- * Free a reservation_object_list and make sure to drop all references.
+ * Free a dma_resv_list and make sure to drop all references.
  */
-static void reservation_object_list_free(struct reservation_object_list *list)
+static void dma_resv_list_free(struct dma_resv_list *list)
 {
 	unsigned int i;
 

@@ -97,10 +96,10 @@ static void reservation_object_list_free(struct reservation_object_list *list)
 }
 
 /**
- * reservation_object_init - initialize a reservation object
+ * dma_resv_init - initialize a reservation object
  * @obj: the reservation object
  */
-void reservation_object_init(struct reservation_object *obj)
+void dma_resv_init(struct dma_resv *obj)
 {
 	ww_mutex_init(&obj->lock, &reservation_ww_class);
 

@@ -109,15 +108,15 @@ void reservation_object_init(struct reservation_object *obj)
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
 }
-EXPORT_SYMBOL(reservation_object_init);
+EXPORT_SYMBOL(dma_resv_init);
 
 /**
- * reservation_object_fini - destroys a reservation object
+ * dma_resv_fini - destroys a reservation object
  * @obj: the reservation object
  */
-void reservation_object_fini(struct reservation_object *obj)
+void dma_resv_fini(struct dma_resv *obj)
 {
-	struct reservation_object_list *fobj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *excl;
 
 	/*
@@ -129,32 +128,31 @@ void reservation_object_fini(struct reservation_object *obj)
 		dma_fence_put(excl);
 
 	fobj = rcu_dereference_protected(obj->fence, 1);
-	reservation_object_list_free(fobj);
+	dma_resv_list_free(fobj);
 	ww_mutex_destroy(&obj->lock);
 }
-EXPORT_SYMBOL(reservation_object_fini);
+EXPORT_SYMBOL(dma_resv_fini);
 
 /**
- * reservation_object_reserve_shared - Reserve space to add shared fences to
- * a reservation_object.
+ * dma_resv_reserve_shared - Reserve space to add shared fences to
+ * a dma_resv.
  * @obj: reservation object
  * @num_fences: number of fences we want to add
  *
- * Should be called before reservation_object_add_shared_fence(). Must
+ * Should be called before dma_resv_add_shared_fence(). Must
  * be called with obj->lock held.
  *
  * RETURNS
  * Zero for success, or -errno
  */
-int reservation_object_reserve_shared(struct reservation_object *obj,
-				      unsigned int num_fences)
+int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)
 {
-	struct reservation_object_list *old, *new;
+	struct dma_resv_list *old, *new;
 	unsigned int i, j, k, max;
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	old = reservation_object_get_list(obj);
+	old = dma_resv_get_list(obj);
 
 	if (old && old->shared_max) {
 		if ((old->shared_count + num_fences) <= old->shared_max)

@@ -166,7 +164,7 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		max = 4;
 	}
 
-	new = reservation_object_list_alloc(max);
+	new = dma_resv_list_alloc(max);
 	if (!new)
 		return -ENOMEM;
 

@@ -180,7 +178,7 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		struct dma_fence *fence;
 
 		fence = rcu_dereference_protected(old->shared[i],
-						  reservation_object_held(obj));
+						  dma_resv_held(obj));
 		if (dma_fence_is_signaled(fence))
 			RCU_INIT_POINTER(new->shared[--k], fence);
 		else

@@ -206,35 +204,34 @@ int reservation_object_reserve_shared(struct reservation_object *obj,
 		struct dma_fence *fence;
 
 		fence = rcu_dereference_protected(new->shared[i],
-						  reservation_object_held(obj));
+						  dma_resv_held(obj));
 		dma_fence_put(fence);
 	}
 	kfree_rcu(old, rcu);
 
 	return 0;
 }
-EXPORT_SYMBOL(reservation_object_reserve_shared);
+EXPORT_SYMBOL(dma_resv_reserve_shared);
 
 /**
- * reservation_object_add_shared_fence - Add a fence to a shared slot
+ * dma_resv_add_shared_fence - Add a fence to a shared slot
  * @obj: the reservation object
  * @fence: the shared fence to add
  *
 * Add a fence to a shared slot, obj->lock must be held, and
- * reservation_object_reserve_shared() has been called.
+ * dma_resv_reserve_shared() has been called.
 */
-void reservation_object_add_shared_fence(struct reservation_object *obj,
-					 struct dma_fence *fence)
+void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
-	struct reservation_object_list *fobj;
+	struct dma_resv_list *fobj;
 	struct dma_fence *old;
 	unsigned int i, count;
 
 	dma_fence_get(fence);
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	fobj = reservation_object_get_list(obj);
+	fobj = dma_resv_get_list(obj);
 	count = fobj->shared_count;
 
 	preempt_disable();

@@ -243,7 +240,7 @@ void reservation_object_add_shared_fence(struct reservation_object *obj,
 	for (i = 0; i < count; ++i) {
 
 		old = rcu_dereference_protected(fobj->shared[i],
-						reservation_object_held(obj));
+						dma_resv_held(obj));
 		if (old->context == fence->context ||
 		    dma_fence_is_signaled(old))
 			goto replace;

@@ -262,25 +259,24 @@ replace:
 	preempt_enable();
 	dma_fence_put(old);
 }
-EXPORT_SYMBOL(reservation_object_add_shared_fence);
+EXPORT_SYMBOL(dma_resv_add_shared_fence);
 
 /**
- * reservation_object_add_excl_fence - Add an exclusive fence.
+ * dma_resv_add_excl_fence - Add an exclusive fence.
  * @obj: the reservation object
  * @fence: the shared fence to add
  *
 * Add a fence to the exclusive slot. The obj->lock must be held.
 */
-void reservation_object_add_excl_fence(struct reservation_object *obj,
-				       struct dma_fence *fence)
+void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)
 {
-	struct dma_fence *old_fence = reservation_object_get_excl(obj);
-	struct reservation_object_list *old;
+	struct dma_fence *old_fence = dma_resv_get_excl(obj);
+	struct dma_resv_list *old;
 	u32 i = 0;
 
-	reservation_object_assert_held(obj);
+	dma_resv_assert_held(obj);
 
-	old = reservation_object_get_list(obj);
+	old = dma_resv_get_list(obj);
 	if (old)
 		i = old->shared_count;
 
@@ -299,27 +295,26 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
 	/* inplace update, no shared fences */
 	while (i--)
 		dma_fence_put(rcu_dereference_protected(old->shared[i],
-						reservation_object_held(obj)));
+						dma_resv_held(obj)));
 
 	dma_fence_put(old_fence);
 }
-EXPORT_SYMBOL(reservation_object_add_excl_fence);
+EXPORT_SYMBOL(dma_resv_add_excl_fence);
 
 /**
- * reservation_object_copy_fences - Copy all fences from src to dst.
+ * dma_resv_copy_fences - Copy all fences from src to dst.
  * @dst: the destination reservation object
  * @src: the source reservation object
  *
 * Copy all fences from src to dst. dst-lock must be held.
 */
-int reservation_object_copy_fences(struct reservation_object *dst,
-				   struct reservation_object *src)
+int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 {
-	struct reservation_object_list *src_list, *dst_list;
+	struct dma_resv_list *src_list, *dst_list;
 	struct dma_fence *old, *new;
 	unsigned i;
 
-	reservation_object_assert_held(dst);
+	dma_resv_assert_held(dst);
 
 	rcu_read_lock();
 	src_list = rcu_dereference(src->fence);

@@ -330,7 +325,7 @@ retry:
 
 	rcu_read_unlock();
 
-	dst_list = reservation_object_list_alloc(shared_count);
+	dst_list = dma_resv_list_alloc(shared_count);
 	if (!dst_list)
 		return -ENOMEM;
 

@@ -351,7 +346,7 @@ retry:
 			continue;
 
 		if (!dma_fence_get_rcu(fence)) {
-			reservation_object_list_free(dst_list);
+			dma_resv_list_free(dst_list);
 			src_list = rcu_dereference(src->fence);
 			goto retry;
 		}

@@ -370,8 +365,8 @@ retry:
 	new = dma_fence_get_rcu_safe(&src->fence_excl);
 	rcu_read_unlock();
 
-	src_list = reservation_object_get_list(dst);
-	old = reservation_object_get_excl(dst);
+	src_list = dma_resv_get_list(dst);
+	old = dma_resv_get_excl(dst);
 
 	preempt_disable();
 	write_seqcount_begin(&dst->seq);

@@ -381,15 +376,15 @@ retry:
 	write_seqcount_end(&dst->seq);
 	preempt_enable();
 
-	reservation_object_list_free(src_list);
+	dma_resv_list_free(src_list);
 	dma_fence_put(old);
 
 	return 0;
 }
-EXPORT_SYMBOL(reservation_object_copy_fences);
+EXPORT_SYMBOL(dma_resv_copy_fences);
 
 /**
- * reservation_object_get_fences_rcu - Get an object's shared and exclusive
+ * dma_resv_get_fences_rcu - Get an object's shared and exclusive
  * fences without update side lock held
  * @obj: the reservation object
  * @pfence_excl: the returned exclusive fence (or NULL)
@@ -401,10 +396,10 @@ EXPORT_SYMBOL(reservation_object_copy_fences);
  * exclusive fence is not specified the fence is put into the array of the
  * shared fences as well. Returns either zero or -ENOMEM.
  */
-int reservation_object_get_fences_rcu(struct reservation_object *obj,
-				      struct dma_fence **pfence_excl,
-				      unsigned *pshared_count,
-				      struct dma_fence ***pshared)
+int dma_resv_get_fences_rcu(struct dma_resv *obj,
+			    struct dma_fence **pfence_excl,
+			    unsigned *pshared_count,
+			    struct dma_fence ***pshared)
 {
 	struct dma_fence **shared = NULL;
 	struct dma_fence *fence_excl;

@@ -412,7 +407,7 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj,
 	int ret = 1;
 
 	do {
-		struct reservation_object_list *fobj;
+		struct dma_resv_list *fobj;
 		unsigned int i, seq;
 		size_t sz = 0;
 

@@ -487,10 +482,10 @@ unlock:
 	*pshared = shared;
 	return ret;
 }
-EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
 
 /**
- * reservation_object_wait_timeout_rcu - Wait on reservation's objects
+ * dma_resv_wait_timeout_rcu - Wait on reservation's objects
  * shared and/or exclusive fences.
  * @obj: the reservation object
  * @wait_all: if true, wait on all fences, else wait on just exclusive fence

@@ -501,9 +496,9 @@ EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu);
  * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
  * greater than zer on success.
  */
-long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
-					 bool wait_all, bool intr,
-					 unsigned long timeout)
+long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
+			       bool wait_all, bool intr,
+			       unsigned long timeout)
 {
 	struct dma_fence *fence;
 	unsigned seq, shared_count;

@@ -531,8 +526,7 @@ retry:
 	}
 
 	if (wait_all) {
-		struct reservation_object_list *fobj =
-						rcu_dereference(obj->fence);
+		struct dma_resv_list *fobj = rcu_dereference(obj->fence);
 
 		if (fobj)
 			shared_count = fobj->shared_count;

@@ -575,11 +569,10 @@ unlock_retry:
 	rcu_read_unlock();
 	goto retry;
 }
-EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
 
 
-static inline int
-reservation_object_test_signaled_single(struct dma_fence *passed_fence)
+static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
 {
 	struct dma_fence *fence, *lfence = passed_fence;
 	int ret = 1;

@@ -596,7 +589,7 @@ reservation_object_test_signaled_single(struct dma_fence *passed_fence)
 }
 
 /**
- * reservation_object_test_signaled_rcu - Test if a reservation object's
+ * dma_resv_test_signaled_rcu - Test if a reservation object's
  * fences have been signaled.
  * @obj: the reservation object
  * @test_all: if true, test all fences, otherwise only test the exclusive

@@ -605,8 +598,7 @@ reservation_object_test_signaled_single(struct dma_fence *passed_fence)
  * RETURNS
  * true if all fences signaled, else false
  */
-bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
-					  bool test_all)
+bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
 {
 	unsigned seq, shared_count;
 	int ret;

@@ -620,8 +612,7 @@ retry:
 	if (test_all) {
 		unsigned i;
 
-		struct reservation_object_list *fobj =
-						rcu_dereference(obj->fence);
+		struct dma_resv_list *fobj = rcu_dereference(obj->fence);
 
 		if (fobj)
 			shared_count = fobj->shared_count;

@@ -629,7 +620,7 @@ retry:
 		for (i = 0; i < shared_count; ++i) {
 			struct dma_fence *fence = rcu_dereference(fobj->shared[i]);
 
-			ret = reservation_object_test_signaled_single(fence);
+			ret = dma_resv_test_signaled_single(fence);
 			if (ret < 0)
 				goto retry;
 			else if (!ret)

@@ -644,8 +635,7 @@ retry:
 		struct dma_fence *fence_excl = rcu_dereference(obj->fence_excl);
 
 		if (fence_excl) {
-			ret = reservation_object_test_signaled_single(
-								fence_excl);
+			ret = dma_resv_test_signaled_single(fence_excl);
 			if (ret < 0)
 				goto retry;
 

@@ -657,4 +647,4 @@ retry:
 	rcu_read_unlock();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
@@ -132,17 +132,14 @@ static void timeline_fence_release(struct dma_fence *fence)
 {
 	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
 	struct sync_timeline *parent = dma_fence_parent(fence);
+	unsigned long flags;
 
+	spin_lock_irqsave(fence->lock, flags);
 	if (!list_empty(&pt->link)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(fence->lock, flags);
-		if (!list_empty(&pt->link)) {
-			list_del(&pt->link);
-			rb_erase(&pt->node, &parent->pt_tree);
-		}
-		spin_unlock_irqrestore(fence->lock, flags);
+		list_del(&pt->link);
+		rb_erase(&pt->node, &parent->pt_tree);
 	}
+	spin_unlock_irqrestore(fence->lock, flags);
 
 	sync_timeline_put(parent);
 	dma_fence_free(fence);

@@ -265,7 +262,8 @@ static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
 			p = &parent->rb_left;
 		} else {
 			if (dma_fence_get_rcu(&other->base)) {
-				dma_fence_put(&pt->base);
+				sync_timeline_put(obj);
+				kfree(pt);
 				pt = other;
 				goto unlock;
 			}
@@ -419,7 +419,7 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
 	 * info->num_fences.
 	 */
 	if (!info.num_fences) {
-		info.status = dma_fence_is_signaled(sync_file->fence);
+		info.status = dma_fence_get_status(sync_file->fence);
 		goto no_fences;
 	} else {
 		info.status = 1;
@@ -218,14 +218,14 @@ void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo)
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 					struct amdgpu_amdkfd_fence *ef)
 {
-	struct reservation_object *resv = bo->tbo.base.resv;
-	struct reservation_object_list *old, *new;
+	struct dma_resv *resv = bo->tbo.base.resv;
+	struct dma_resv_list *old, *new;
 	unsigned int i, j, k;
 
 	if (!ef)
 		return -EINVAL;
 
-	old = reservation_object_get_list(resv);
+	old = dma_resv_get_list(resv);
 	if (!old)
 		return 0;
 

@@ -241,7 +241,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 		struct dma_fence *f;
 
 		f = rcu_dereference_protected(old->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 
 		if (f->context == ef->base.context)
 			RCU_INIT_POINTER(new->shared[--j], f);

@@ -263,7 +263,7 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
 		struct dma_fence *f;
 
 		f = rcu_dereference_protected(new->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 		dma_fence_put(f);
 	}
 	kfree_rcu(old, rcu);

@@ -887,7 +887,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
 					  AMDGPU_FENCE_OWNER_KFD, false);
 	if (ret)
 		goto wait_pd_fail;
-	ret = reservation_object_reserve_shared(vm->root.base.bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_shared(vm->root.base.bo->tbo.base.resv, 1);
 	if (ret)
 		goto reserve_shared_fail;
 	amdgpu_bo_fence(vm->root.base.bo,

@@ -2133,7 +2133,7 @@ int amdgpu_amdkfd_add_gws_to_process(void *info, void *gws, struct kgd_mem **mem
 	 * Add process eviction fence to bo so they can
 	 * evict each other.
 	 */
-	ret = reservation_object_reserve_shared(gws_bo->tbo.base.resv, 1);
+	ret = dma_resv_reserve_shared(gws_bo->tbo.base.resv, 1);
 	if (ret)
 		goto reserve_shared_fail;
 	amdgpu_bo_fence(gws_bo, &process_info->eviction_fence->base, true);
@@ -730,7 +730,7 @@ static int amdgpu_cs_sync_rings(struct amdgpu_cs_parser *p)
 
 	list_for_each_entry(e, &p->validated, tv.head) {
 		struct amdgpu_bo *bo = ttm_to_amdgpu_bo(e->tv.bo);
-		struct reservation_object *resv = bo->tbo.base.resv;
+		struct dma_resv *resv = bo->tbo.base.resv;
 
 		r = amdgpu_sync_resv(p->adev, &p->job->sync, resv, p->filp,
 				     amdgpu_bo_explicit_sync(bo));

@@ -1727,7 +1727,7 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
 	*map = mapping;
 
 	/* Double check that the BO is reserved by this CS */
-	if (reservation_object_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
+	if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->ticket)
 		return -EINVAL;
 
 	if (!((*bo)->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)) {
@@ -205,7 +205,7 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 		goto unpin;
 	}
 
-	r = reservation_object_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
+	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
 					      &work->shared_count,
 					      &work->shared);
 	if (unlikely(r != 0)) {
@@ -137,23 +137,23 @@ int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 }
 
 static int
-__reservation_object_make_exclusive(struct reservation_object *obj)
+__dma_resv_make_exclusive(struct dma_resv *obj)
 {
 	struct dma_fence **fences;
 	unsigned int count;
 	int r;
 
-	if (!reservation_object_get_list(obj)) /* no shared fences to convert */
+	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
 		return 0;
 
-	r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);
+	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
 	if (r)
 		return r;
 
 	if (count == 0) {
 		/* Now that was unexpected. */
 	} else if (count == 1) {
-		reservation_object_add_excl_fence(obj, fences[0]);
+		dma_resv_add_excl_fence(obj, fences[0]);
 		dma_fence_put(fences[0]);
 		kfree(fences);
 	} else {

@@ -165,7 +165,7 @@ __reservation_object_make_exclusive(struct reservation_object *obj)
 		if (!array)
 			goto err_fences_put;
 
-		reservation_object_add_excl_fence(obj, &array->base);
+		dma_resv_add_excl_fence(obj, &array->base);
 		dma_fence_put(&array->base);
 	}
 

@@ -216,7 +216,7 @@ static int amdgpu_dma_buf_map_attach(struct dma_buf *dma_buf,
 	 * fences on the reservation object into a single exclusive
 	 * fence.
 	 */
-	r = __reservation_object_make_exclusive(bo->tbo.base.resv);
+	r = __dma_resv_make_exclusive(bo->tbo.base.resv);
 	if (r)
 		goto error_unreserve;
 }

@@ -367,7 +367,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 				 struct dma_buf_attachment *attach,
 				 struct sg_table *sg)
 {
-	struct reservation_object *resv = attach->dmabuf->resv;
+	struct dma_resv *resv = attach->dmabuf->resv;
 	struct amdgpu_device *adev = dev->dev_private;
 	struct amdgpu_bo *bo;
 	struct amdgpu_bo_param bp;

@@ -380,7 +380,7 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	bp.flags = 0;
 	bp.type = ttm_bo_type_sg;
 	bp.resv = resv;
-	reservation_object_lock(resv, NULL);
+	dma_resv_lock(resv, NULL);
 	ret = amdgpu_bo_create(adev, &bp, &bo);
 	if (ret)
 		goto error;

@@ -392,11 +392,11 @@ amdgpu_gem_prime_import_sg_table(struct drm_device *dev,
 	if (attach->dmabuf->ops != &amdgpu_dmabuf_ops)
 		bo->prime_shared_count = 1;
 
-	reservation_object_unlock(resv);
+	dma_resv_unlock(resv);
 	return &bo->tbo.base;
 
 error:
-	reservation_object_unlock(resv);
+	dma_resv_unlock(resv);
 	return ERR_PTR(ret);
 }
@@ -50,7 +50,7 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 			     int alignment, u32 initial_domain,
 			     u64 flags, enum ttm_bo_type type,
-			     struct reservation_object *resv,
+			     struct dma_resv *resv,
 			     struct drm_gem_object **obj)
 {
 	struct amdgpu_bo *bo;

@@ -215,7 +215,7 @@ int amdgpu_gem_create_ioctl(struct drm_device *dev, void *data,
 	union drm_amdgpu_gem_create *args = data;
 	uint64_t flags = args->in.domain_flags;
 	uint64_t size = args->in.bo_size;
-	struct reservation_object *resv = NULL;
+	struct dma_resv *resv = NULL;
 	struct drm_gem_object *gobj;
 	uint32_t handle;
 	int r;

@@ -433,7 +433,7 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 	}
 	robj = gem_to_amdgpu_bo(gobj);
-	ret = reservation_object_wait_timeout_rcu(robj->tbo.base.resv, true, true,
+	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
 						  timeout);
 
 	/* ret == 0 means not signaled,
@@ -47,7 +47,7 @@ void amdgpu_gem_force_release(struct amdgpu_device *adev);
 int amdgpu_gem_object_create(struct amdgpu_device *adev, unsigned long size,
 			     int alignment, u32 initial_domain,
 			     u64 flags, enum ttm_bo_type type,
-			     struct reservation_object *resv,
+			     struct dma_resv *resv,
 			     struct drm_gem_object **obj);
 
 int amdgpu_mode_dumb_create(struct drm_file *file_priv,
@@ -104,7 +104,7 @@ static void amdgpu_pasid_free_cb(struct dma_fence *fence,
  *
  * Free the pasid only after all the fences in resv are signaled.
  */
-void amdgpu_pasid_free_delayed(struct reservation_object *resv,
+void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 			       unsigned int pasid)
 {
 	struct dma_fence *fence, **fences;

@@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct reservation_object *resv,
 	unsigned count;
 	int r;
 
-	r = reservation_object_get_fences_rcu(resv, NULL, &count, &fences);
+	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
 	if (r)
 		goto fallback;
 

@@ -156,7 +156,7 @@ fallback:
 	/* Not enough memory for the delayed delete, as last resort
 	 * block for all the fences to complete.
 	 */
-	reservation_object_wait_timeout_rcu(resv, true, false,
+	dma_resv_wait_timeout_rcu(resv, true, false,
 					    MAX_SCHEDULE_TIMEOUT);
 	amdgpu_pasid_free(pasid);
 }
@@ -72,7 +72,7 @@ struct amdgpu_vmid_mgr {
 
 int amdgpu_pasid_alloc(unsigned int bits);
 void amdgpu_pasid_free(unsigned int pasid);
-void amdgpu_pasid_free_delayed(struct reservation_object *resv,
+void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 			       unsigned int pasid);
 
 bool amdgpu_vmid_had_gpu_reset(struct amdgpu_device *adev,
@@ -179,7 +179,7 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node,
 		if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start, end))
 			continue;
 
-		r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv,
+		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
 			true, false, MAX_SCHEDULE_TIMEOUT);
 		if (r <= 0)
 			DRM_ERROR("(%ld) failed to wait for user bo\n", r);
@@ -550,7 +550,7 @@ static int amdgpu_bo_do_create(struct amdgpu_device *adev,
 
 fail_unreserve:
 	if (!bp->resv)
-		reservation_object_unlock(bo->tbo.base.resv);
+		dma_resv_unlock(bo->tbo.base.resv);
 	amdgpu_bo_unref(&bo);
 	return r;
 }

@@ -612,13 +612,13 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
 
 	if ((flags & AMDGPU_GEM_CREATE_SHADOW) && !(adev->flags & AMD_IS_APU)) {
 		if (!bp->resv)
-			WARN_ON(reservation_object_lock((*bo_ptr)->tbo.base.resv,
-							NULL));
+			WARN_ON(dma_resv_lock((*bo_ptr)->tbo.base.resv,
+					      NULL));
 
 		r = amdgpu_bo_create_shadow(adev, bp->size, *bo_ptr);
 
 		if (!bp->resv)
-			reservation_object_unlock((*bo_ptr)->tbo.base.resv);
+			dma_resv_unlock((*bo_ptr)->tbo.base.resv);
 
 		if (r)
 			amdgpu_bo_unref(bo_ptr);

@@ -715,7 +715,7 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
 		return 0;
 	}
 
-	r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv, false, false,
+	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
 						MAX_SCHEDULE_TIMEOUT);
 	if (r < 0)
 		return r;

@@ -1093,7 +1093,7 @@ int amdgpu_bo_set_tiling_flags(struct amdgpu_bo *bo, u64 tiling_flags)
  */
 void amdgpu_bo_get_tiling_flags(struct amdgpu_bo *bo, u64 *tiling_flags)
 {
-	reservation_object_assert_held(bo->tbo.base.resv);
+	dma_resv_assert_held(bo->tbo.base.resv);
 
 	if (tiling_flags)
 		*tiling_flags = bo->tiling_flags;

@@ -1242,7 +1242,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	    !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
 		return;
 
-	reservation_object_lock(bo->base.resv, NULL);
+	dma_resv_lock(bo->base.resv, NULL);
 
 	r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
 	if (!WARN_ON(r)) {

@@ -1250,7 +1250,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 		dma_fence_put(fence);
 	}
 
-	reservation_object_unlock(bo->base.resv);
+	dma_resv_unlock(bo->base.resv);
 }
 
 /**

@@ -1325,12 +1325,12 @@ int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
 		     bool shared)
 {
-	struct reservation_object *resv = bo->tbo.base.resv;
+	struct dma_resv *resv = bo->tbo.base.resv;
 
 	if (shared)
-		reservation_object_add_shared_fence(resv, fence);
+		dma_resv_add_shared_fence(resv, fence);
 	else
-		reservation_object_add_excl_fence(resv, fence);
+		dma_resv_add_excl_fence(resv, fence);
 }
 
 /**

@@ -1370,7 +1370,7 @@ int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr)
 u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
 {
 	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
-	WARN_ON_ONCE(!reservation_object_is_locked(bo->tbo.base.resv) &&
+	WARN_ON_ONCE(!dma_resv_is_locked(bo->tbo.base.resv) &&
 		     !bo->pin_count && bo->tbo.type != ttm_bo_type_kernel);
 	WARN_ON_ONCE(bo->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET);
 	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_VRAM &&
@@ -41,7 +41,7 @@ struct amdgpu_bo_param {
 	u32				preferred_domain;
 	u64				flags;
 	enum ttm_bo_type		type;
-	struct reservation_object	*resv;
+	struct dma_resv			*resv;
 };

 /* bo virtual addresses in a vm */
@@ -190,10 +190,10 @@ int amdgpu_sync_fence(struct amdgpu_device *adev, struct amdgpu_sync *sync,
  */
 int amdgpu_sync_resv(struct amdgpu_device *adev,
		     struct amdgpu_sync *sync,
-		     struct reservation_object *resv,
+		     struct dma_resv *resv,
		     void *owner, bool explicit_sync)
 {
-	struct reservation_object_list *flist;
+	struct dma_resv_list *flist;
 	struct dma_fence *f;
 	void *fence_owner;
 	unsigned i;

@@ -203,16 +203,16 @@ int amdgpu_sync_resv(struct amdgpu_device *adev,
 		return -EINVAL;

 	/* always sync to the exclusive fence */
-	f = reservation_object_get_excl(resv);
+	f = dma_resv_get_excl(resv);
 	r = amdgpu_sync_fence(adev, sync, f, false);

-	flist = reservation_object_get_list(resv);
+	flist = dma_resv_get_list(resv);
 	if (!flist || r)
 		return r;

 	for (i = 0; i < flist->shared_count; ++i) {
 		f = rcu_dereference_protected(flist->shared[i],
-					      reservation_object_held(resv));
+					      dma_resv_held(resv));
 		/* We only want to trigger KFD eviction fences on
 		 * evict or move jobs. Skip KFD fences otherwise.
 		 */
@@ -27,7 +27,7 @@
 #include <linux/hashtable.h>

 struct dma_fence;
-struct reservation_object;
+struct dma_resv;
 struct amdgpu_device;
 struct amdgpu_ring;

@@ -44,7 +44,7 @@ int amdgpu_sync_fence(struct amdgpu_device *adev, struct amdgpu_sync *sync,
		      struct dma_fence *f, bool explicit);
 int amdgpu_sync_resv(struct amdgpu_device *adev,
		     struct amdgpu_sync *sync,
-		     struct reservation_object *resv,
+		     struct dma_resv *resv,
		     void *owner,
		     bool explicit_sync);
 struct dma_fence *amdgpu_sync_peek_fence(struct amdgpu_sync *sync,
@@ -303,7 +303,7 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
			       struct amdgpu_copy_mem *src,
			       struct amdgpu_copy_mem *dst,
			       uint64_t size,
-			       struct reservation_object *resv,
+			       struct dma_resv *resv,
			       struct dma_fence **f)
 {
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;

@@ -1486,7 +1486,7 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 {
 	unsigned long num_pages = bo->mem.num_pages;
 	struct drm_mm_node *node = bo->mem.mm_node;
-	struct reservation_object_list *flist;
+	struct dma_resv_list *flist;
 	struct dma_fence *f;
 	int i;

@@ -1494,18 +1494,18 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
 	 * cleanly handle page faults.
 	 */
 	if (bo->type == ttm_bo_type_kernel &&
-	    !reservation_object_test_signaled_rcu(bo->base.resv, true))
+	    !dma_resv_test_signaled_rcu(bo->base.resv, true))
 		return false;

 	/* If bo is a KFD BO, check if the bo belongs to the current process.
 	 * If true, then return false as any KFD process needs all its BOs to
 	 * be resident to run successfully
 	 */
-	flist = reservation_object_get_list(bo->base.resv);
+	flist = dma_resv_get_list(bo->base.resv);
 	if (flist) {
 		for (i = 0; i < flist->shared_count; ++i) {
 			f = rcu_dereference_protected(flist->shared[i],
-						      reservation_object_held(bo->base.resv));
+						      dma_resv_held(bo->base.resv));
 			if (amdkfd_fence_check_mm(f, current->mm))
 				return false;
 		}

@@ -2009,7 +2009,7 @@ error_free:

 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
		       uint64_t dst_offset, uint32_t byte_count,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
		       struct dma_fence **fence, bool direct_submit,
		       bool vm_needs_flush)
 {

@@ -2083,7 +2083,7 @@ error_free:

 int amdgpu_fill_buffer(struct amdgpu_bo *bo,
		       uint32_t src_data,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
		       struct dma_fence **fence)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);

@@ -85,18 +85,18 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,

 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
		       uint64_t dst_offset, uint32_t byte_count,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
		       struct dma_fence **fence, bool direct_submit,
		       bool vm_needs_flush);
 int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
			       struct amdgpu_copy_mem *src,
			       struct amdgpu_copy_mem *dst,
			       uint64_t size,
-			       struct reservation_object *resv,
+			       struct dma_resv *resv,
			       struct dma_fence **f);
 int amdgpu_fill_buffer(struct amdgpu_bo *bo,
		       uint32_t src_data,
-		       struct reservation_object *resv,
+		       struct dma_resv *resv,
		       struct dma_fence **fence);

 int amdgpu_mmap(struct file *filp, struct vm_area_struct *vma);
@@ -1073,7 +1073,7 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	ib->length_dw = 16;

 	if (direct) {
-		r = reservation_object_wait_timeout_rcu(bo->tbo.base.resv,
+		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
							true, false,
							msecs_to_jiffies(10));
 		if (r == 0)
@@ -1702,7 +1702,7 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
			ttm = container_of(bo->tbo.ttm, struct ttm_dma_tt, ttm);
			pages_addr = ttm->dma_address;
		}
-		exclusive = reservation_object_get_excl(bo->tbo.base.resv);
+		exclusive = dma_resv_get_excl(bo->tbo.base.resv);
	}

	if (bo) {

@@ -1879,18 +1879,18 @@ static void amdgpu_vm_free_mapping(struct amdgpu_device *adev,
  */
 static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 {
-	struct reservation_object *resv = vm->root.base.bo->tbo.base.resv;
+	struct dma_resv *resv = vm->root.base.bo->tbo.base.resv;
 	struct dma_fence *excl, **shared;
 	unsigned i, shared_count;
 	int r;

-	r = reservation_object_get_fences_rcu(resv, &excl,
+	r = dma_resv_get_fences_rcu(resv, &excl,
					      &shared_count, &shared);
 	if (r) {
 		/* Not enough memory to grab the fence list, as last resort
 		 * block for all the fences to complete.
 		 */
-		reservation_object_wait_timeout_rcu(resv, true, false,
+		dma_resv_wait_timeout_rcu(resv, true, false,
						    MAX_SCHEDULE_TIMEOUT);
 		return;
 	}

@@ -1978,7 +1978,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
			   struct amdgpu_vm *vm)
 {
 	struct amdgpu_bo_va *bo_va, *tmp;
-	struct reservation_object *resv;
+	struct dma_resv *resv;
 	bool clear;
 	int r;

@@ -1997,7 +1997,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
		spin_unlock(&vm->invalidated_lock);

		/* Try to reserve the BO to avoid clearing its ptes */
-		if (!amdgpu_vm_debug && reservation_object_trylock(resv))
+		if (!amdgpu_vm_debug && dma_resv_trylock(resv))
			clear = false;
		/* Somebody else is using the BO right now */
		else

@@ -2008,7 +2008,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
			return r;

		if (!clear)
-			reservation_object_unlock(resv);
+			dma_resv_unlock(resv);
		spin_lock(&vm->invalidated_lock);
	}
	spin_unlock(&vm->invalidated_lock);

@@ -2416,7 +2416,7 @@ void amdgpu_vm_bo_trace_cs(struct amdgpu_vm *vm, struct ww_acquire_ctx *ticket)
			struct amdgpu_bo *bo;

			bo = mapping->bo_va->base.bo;
-			if (reservation_object_locking_ctx(bo->tbo.base.resv) !=
+			if (dma_resv_locking_ctx(bo->tbo.base.resv) !=
			    ticket)
				continue;
		}

@@ -2649,7 +2649,7 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
  */
 long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
 {
-	return reservation_object_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
+	return dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
						   true, true, timeout);
 }

@@ -2724,7 +2724,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
	if (r)
		goto error_free_root;

-	r = reservation_object_reserve_shared(root->tbo.base.resv, 1);
+	r = dma_resv_reserve_shared(root->tbo.base.resv, 1);
	if (r)
		goto error_unreserve;
@@ -5695,7 +5695,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
		 * deadlock during GPU reset when this fence will not signal
		 * but we hold reservation lock for the BO.
		 */
-		r = reservation_object_wait_timeout_rcu(abo->tbo.base.resv, true,
+		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
							false,
							msecs_to_jiffies(5000));
		if (unlikely(r <= 0))
@@ -27,7 +27,7 @@ static void komeda_crtc_update_clock_ratio(struct komeda_crtc_state *kcrtc_st)
		return;
	}

-	pxlclk = kcrtc_st->base.adjusted_mode.crtc_clock * 1000;
+	pxlclk = kcrtc_st->base.adjusted_mode.crtc_clock * 1000ULL;
	aclk = komeda_crtc_get_aclk(kcrtc_st);

	kcrtc_st->clock_ratio = div64_u64(aclk << 32, pxlclk);
@@ -9,7 +9,12 @@
  * Implementation of a CRTC class for the HDLCD driver.
  */

-#include <drm/drmP.h>
+#include <linux/clk.h>
+#include <linux/of_graph.h>
+#include <linux/platform_data/simplefb.h>
+
+#include <video/videomode.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>

@@ -19,10 +24,7 @@
 #include <drm/drm_of.h>
 #include <drm/drm_plane_helper.h>
 #include <drm/drm_probe_helper.h>
-#include <linux/clk.h>
-#include <linux/of_graph.h>
-#include <linux/platform_data/simplefb.h>
-#include <video/videomode.h>
+#include <drm/drm_vblank.h>

 #include "hdlcd_drv.h"
 #include "hdlcd_regs.h"
@@ -14,21 +14,26 @@
 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/console.h>
+#include <linux/dma-mapping.h>
 #include <linux/list.h>
 #include <linux/of_graph.h>
 #include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
 #include <linux/pm_runtime.h>

-#include <drm/drmP.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_debugfs.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_irq.h>
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_of.h>
 #include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>

 #include "hdlcd_drv.h"
 #include "hdlcd_regs.h"
@@ -6,14 +6,17 @@
  * ARM Mali DP500/DP550/DP650 driver (crtc operations)
  */

-#include <drm/drmP.h>
+#include <linux/clk.h>
+#include <linux/pm_runtime.h>
+
+#include <video/videomode.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
+#include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
-#include <linux/clk.h>
-#include <linux/pm_runtime.h>
-#include <video/videomode.h>
+#include <drm/drm_vblank.h>

 #include "malidp_drv.h"
 #include "malidp_hw.h"
@@ -15,17 +15,19 @@
 #include <linux/pm_runtime.h>
 #include <linux/debugfs.h>

-#include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_probe_helper.h>
-#include <drm/drm_fb_helper.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_of.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>

 #include "malidp_drv.h"
 #include "malidp_mw.h"
@@ -9,12 +9,13 @@
 #ifndef __MALIDP_DRV_H__
 #define __MALIDP_DRV_H__

-#include <drm/drm_writeback.h>
-#include <drm/drm_encoder.h>
 #include <linux/mutex.h>
 #include <linux/wait.h>
 #include <linux/spinlock.h>
-#include <drm/drmP.h>
+
+#include <drm/drm_writeback.h>
+#include <drm/drm_encoder.h>
+
 #include "malidp_hw.h"

 #define MALIDP_CONFIG_VALID_INIT 0
@@ -9,12 +9,17 @@
  */

 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/types.h>
 #include <linux/io.h>
-#include <drm/drmP.h>
+
 #include <video/videomode.h>
 #include <video/display_timing.h>
+
+#include <drm/drm_fourcc.h>
+#include <drm/drm_vblank.h>
+#include <drm/drm_print.h>
+
 #include "malidp_drv.h"
 #include "malidp_hw.h"
 #include "malidp_mw.h"
@@ -5,13 +5,14 @@
 *
 * ARM Mali DP Writeback connector implementation
 */

 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_crtc.h>
-#include <drm/drm_probe_helper.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
-#include <drm/drmP.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_writeback.h>

 #include "malidp_drv.h"
@@ -7,11 +7,13 @@
 */

 #include <linux/iommu.h>
+#include <linux/platform_device.h>

-#include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_gem_cma_helper.h>
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_plane_helper.h>
@@ -3,15 +3,19 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */

 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/module.h>
-#include <drm/drmP.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
 #include <drm/drm_atomic.h>
-#include <drm/drm_probe_helper.h>
-#include <drm/drm_plane_helper.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_plane_helper.h>
+#include <drm/drm_probe_helper.h>
+#include <drm/drm_vblank.h>

 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"
@@ -3,11 +3,15 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
+
 #include <linux/ctype.h>
 #include <linux/debugfs.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
-#include <drm/drmP.h>
+#include <linux/uaccess.h>
+
+#include <drm/drm_debugfs.h>
+#include <drm/drm_file.h>
+
 #include "armada_crtc.h"
 #include "armada_drm.h"
-
@@ -8,11 +8,14 @@
 #include <linux/kfifo.h>
 #include <linux/io.h>
 #include <linux/workqueue.h>
-#include <drm/drmP.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_mm.h>

 struct armada_crtc;
 struct armada_gem_object;
 struct clk;
+struct drm_display_mode;
 struct drm_fb_helper;

 static inline void
@@ -2,14 +2,22 @@
 /*
 * Copyright (C) 2012 Russell King
 */

 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/module.h>
+#include <linux/of_graph.h>
+#include <linux/platform_device.h>
+
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_ioctl.h>
+#include <drm/drm_prime.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_fb_helper.h>
 #include <drm/drm_of.h>
+#include <drm/drm_vblank.h>

 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_gem.h"
@@ -2,9 +2,12 @@
 /*
 * Copyright (C) 2012 Russell King
 */

 #include <drm/drm_modeset_helper.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_gem_framebuffer_helper.h>

 #include "armada_drm.h"
 #include "armada_fb.h"
 #include "armada_gem.h"
@@ -3,11 +3,14 @@
 * Copyright (C) 2012 Russell King
 * Written from the i915 driver.
 */

+#include <linux/errno.h>
+#include <linux/kernel.h>
 #include <linux/module.h>

 #include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>

 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"
@@ -2,12 +2,17 @@
 /*
 * Copyright (C) 2012 Russell King
 */

 #include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/mman.h>
 #include <linux/shmem_fs.h>
+
+#include <drm/armada_drm.h>
+#include <drm/drm_prime.h>
+
 #include "armada_drm.h"
 #include "armada_gem.h"
-#include <drm/armada_drm.h>
 #include "armada_ioctlP.h"

 static vm_fault_t armada_gem_vm_fault(struct vm_fault *vmf)
@@ -3,12 +3,14 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
-#include <drm/drmP.h>
-#include <drm/drm_atomic.h>
-#include <drm/drm_atomic_uapi.h>
-#include <drm/drm_atomic_helper.h>
-#include <drm/drm_plane_helper.h>
+
+#include <drm/armada_drm.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_atomic_uapi.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_plane_helper.h>

 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"
@@ -3,10 +3,12 @@
 * Copyright (C) 2012 Russell King
 * Rewritten from the dovefb driver, and Armada510 manuals.
 */
-#include <drm/drmP.h>
+
 #include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_fourcc.h>
 #include <drm/drm_plane_helper.h>

 #include "armada_crtc.h"
 #include "armada_drm.h"
 #include "armada_fb.h"
@@ -3,7 +3,10 @@
 #define ARMADA_TRACE_H

 #include <linux/tracepoint.h>
-#include <drm/drmP.h>
+
+struct drm_crtc;
+struct drm_framebuffer;
+struct drm_plane;

 #undef TRACE_SYSTEM
 #define TRACE_SYSTEM armada
@@ -215,7 +215,7 @@ static void aspeed_gfx_disable_vblank(struct drm_simple_display_pipe *pipe)
	writel(reg | CRT_CTRL_VERTICAL_INTR_STS, priv->base + CRT_CTRL1);
 }

-static struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
+static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
	.enable		= aspeed_gfx_pipe_enable,
	.disable	= aspeed_gfx_pipe_disable,
	.update		= aspeed_gfx_pipe_update,
@@ -1780,8 +1780,7 @@ void analogix_dp_unbind(struct analogix_dp_device *dp)
	if (dp->plat_data->panel) {
		if (drm_panel_unprepare(dp->plat_data->panel))
			DRM_ERROR("failed to turnoff the panel\n");
-		if (drm_panel_detach(dp->plat_data->panel))
-			DRM_ERROR("failed to detach the panel\n");
+		drm_panel_detach(dp->plat_data->panel);
	}

	drm_dp_aux_unregister(&dp->aux);
@@ -42,7 +42,7 @@ static int dumb_vga_get_modes(struct drm_connector *connector)
	struct edid *edid;
	int ret;

-	if (IS_ERR(vga->ddc))
+	if (!vga->ddc)
		goto fallback;

	edid = drm_get_edid(connector, vga->ddc);

@@ -84,7 +84,7 @@ dumb_vga_connector_detect(struct drm_connector *connector, bool force)
	 * wire the DDC pins, or the I2C bus might not be working at
	 * all.
	 */
-	if (!IS_ERR(vga->ddc) && drm_probe_ddc(vga->ddc))
+	if (vga->ddc && drm_probe_ddc(vga->ddc))
		return connector_status_connected;

	return connector_status_unknown;

@@ -197,6 +197,7 @@ static int dumb_vga_probe(struct platform_device *pdev)
		if (PTR_ERR(vga->ddc) == -ENODEV) {
			dev_dbg(&pdev->dev,
				"No i2c bus specified. Disabling EDID readout\n");
+			vga->ddc = NULL;
		} else {
			dev_err(&pdev->dev, "Couldn't retrieve i2c bus\n");
			return PTR_ERR(vga->ddc);

@@ -218,7 +219,7 @@ static int dumb_vga_remove(struct platform_device *pdev)

	drm_bridge_remove(&vga->bridge);

-	if (!IS_ERR(vga->ddc))
+	if (vga->ddc)
		i2c_put_adapter(vga->ddc);

	return 0;
@@ -63,10 +63,6 @@ enum {
	HDMI_REVISION_ID = 0x0001,
	HDMI_IH_AHBDMAAUD_STAT0 = 0x0109,
	HDMI_IH_MUTE_AHBDMAAUD_STAT0 = 0x0189,
-	HDMI_FC_AUDICONF2 = 0x1027,
-	HDMI_FC_AUDSCONF = 0x1063,
-	HDMI_FC_AUDSCONF_LAYOUT1 = 1 << 0,
-	HDMI_FC_AUDSCONF_LAYOUT0 = 0 << 0,
	HDMI_AHB_DMA_CONF0 = 0x3600,
	HDMI_AHB_DMA_START = 0x3601,
	HDMI_AHB_DMA_STOP = 0x3602,

@@ -403,7 +399,7 @@ static int dw_hdmi_prepare(struct snd_pcm_substream *substream)
 {
	struct snd_pcm_runtime *runtime = substream->runtime;
	struct snd_dw_hdmi *dw = substream->private_data;
-	u8 threshold, conf0, conf1, layout, ca;
+	u8 threshold, conf0, conf1, ca;

	/* Setup as per 3.0.5 FSL 4.1.0 BSP */
	switch (dw->revision) {

@@ -434,20 +430,12 @@ static int dw_hdmi_prepare(struct snd_pcm_substream *substream)
	conf1 = default_hdmi_channel_config[runtime->channels - 2].conf1;
	ca = default_hdmi_channel_config[runtime->channels - 2].ca;

-	/*
-	 * For >2 channel PCM audio, we need to select layout 1
-	 * and set an appropriate channel map.
-	 */
-	if (runtime->channels > 2)
-		layout = HDMI_FC_AUDSCONF_LAYOUT1;
-	else
-		layout = HDMI_FC_AUDSCONF_LAYOUT0;
-
	writeb_relaxed(threshold, dw->data.base + HDMI_AHB_DMA_THRSLD);
	writeb_relaxed(conf0, dw->data.base + HDMI_AHB_DMA_CONF0);
	writeb_relaxed(conf1, dw->data.base + HDMI_AHB_DMA_CONF1);
-	writeb_relaxed(layout, dw->data.base + HDMI_FC_AUDSCONF);
-	writeb_relaxed(ca, dw->data.base + HDMI_FC_AUDICONF2);
+
+	dw_hdmi_set_channel_count(dw->data.hdmi, runtime->channels);
+	dw_hdmi_set_channel_allocation(dw->data.hdmi, ca);

	switch (runtime->format) {
	case SNDRV_PCM_FORMAT_IEC958_SUBFRAME_LE:
@@ -14,6 +14,7 @@ struct dw_hdmi_audio_data {

 struct dw_hdmi_i2s_audio_data {
	struct dw_hdmi *hdmi;
+	u8 *eld;

	void (*write)(struct dw_hdmi *hdmi, u8 val, int offset);
	u8 (*read)(struct dw_hdmi *hdmi, int offset);
@@ -10,6 +10,7 @@
 #include <linux/module.h>

 #include <drm/bridge/dw_hdmi.h>
+#include <drm/drm_crtc.h>

 #include <sound/hdmi-codec.h>

@@ -44,14 +45,30 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
	u8 inputclkfs = 0;

	/* it cares I2S only */
-	if ((fmt->fmt != HDMI_I2S) ||
-	    (fmt->bit_clk_master | fmt->frame_clk_master)) {
-		dev_err(dev, "unsupported format/settings\n");
+	if (fmt->bit_clk_master | fmt->frame_clk_master) {
+		dev_err(dev, "unsupported clock settings\n");
		return -EINVAL;
	}

+	/* Reset the FIFOs before applying new params */
+	hdmi_write(audio, HDMI_AUD_CONF0_SW_RESET, HDMI_AUD_CONF0);
+	hdmi_write(audio, (u8)~HDMI_MC_SWRSTZ_I2SSWRST_REQ, HDMI_MC_SWRSTZ);
+
	inputclkfs = HDMI_AUD_INPUTCLKFS_64FS;
-	conf0 = HDMI_AUD_CONF0_I2S_ALL_ENABLE;
+	conf0 = (HDMI_AUD_CONF0_I2S_SELECT | HDMI_AUD_CONF0_I2S_EN0);
+
+	/* Enable the required i2s lanes */
+	switch (hparms->channels) {
+	case 7 ... 8:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN3;
+		/* Fall-thru */
+	case 5 ... 6:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN2;
+		/* Fall-thru */
+	case 3 ... 4:
+		conf0 |= HDMI_AUD_CONF0_I2S_EN1;
+		/* Fall-thru */
+	}

	switch (hparms->sample_width) {
	case 16:

@@ -63,7 +80,30 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
		break;
	}

+	switch (fmt->fmt) {
+	case HDMI_I2S:
+		conf1 |= HDMI_AUD_CONF1_MODE_I2S;
+		break;
+	case HDMI_RIGHT_J:
+		conf1 |= HDMI_AUD_CONF1_MODE_RIGHT_J;
+		break;
+	case HDMI_LEFT_J:
+		conf1 |= HDMI_AUD_CONF1_MODE_LEFT_J;
+		break;
+	case HDMI_DSP_A:
+		conf1 |= HDMI_AUD_CONF1_MODE_BURST_1;
+		break;
+	case HDMI_DSP_B:
+		conf1 |= HDMI_AUD_CONF1_MODE_BURST_2;
+		break;
+	default:
+		dev_err(dev, "unsupported format\n");
+		return -EINVAL;
+	}
+
	dw_hdmi_set_sample_rate(hdmi, hparms->sample_rate);
+	dw_hdmi_set_channel_count(hdmi, hparms->channels);
+	dw_hdmi_set_channel_allocation(hdmi, hparms->cea.channel_allocation);

	hdmi_write(audio, inputclkfs, HDMI_AUD_INPUTCLKFS);
	hdmi_write(audio, conf0, HDMI_AUD_CONF0);

@@ -80,8 +120,15 @@ static void dw_hdmi_i2s_audio_shutdown(struct device *dev, void *data)
	struct dw_hdmi *hdmi = audio->hdmi;

	dw_hdmi_audio_disable(hdmi);
+}

-	hdmi_write(audio, HDMI_AUD_CONF0_SW_RESET, HDMI_AUD_CONF0);
+static int dw_hdmi_i2s_get_eld(struct device *dev, void *data, uint8_t *buf,
+			       size_t len)
+{
+	struct dw_hdmi_i2s_audio_data *audio = data;
+
+	memcpy(buf, audio->eld, min_t(size_t, MAX_ELD_BYTES, len));
+	return 0;
 }

 static int dw_hdmi_i2s_get_dai_id(struct snd_soc_component *component,

@@ -107,6 +154,7 @@ static int dw_hdmi_i2s_get_dai_id(struct snd_soc_component *component,
 static struct hdmi_codec_ops dw_hdmi_i2s_ops = {
	.hw_params	= dw_hdmi_i2s_hw_params,
	.audio_shutdown	= dw_hdmi_i2s_audio_shutdown,
+	.get_eld	= dw_hdmi_i2s_get_eld,
	.get_dai_id	= dw_hdmi_i2s_get_dai_id,
 };

@@ -119,7 +167,7 @@ static int snd_dw_hdmi_probe(struct platform_device *pdev)

	pdata.ops		= &dw_hdmi_i2s_ops;
	pdata.i2s		= 1;
-	pdata.max_i2s_channels	= 6;
+	pdata.max_i2s_channels	= 8;
	pdata.data		= audio;

	memset(&pdevinfo, 0, sizeof(pdevinfo));
@@ -645,6 +645,42 @@ void dw_hdmi_set_sample_rate(struct dw_hdmi *hdmi, unsigned int rate)
 }
 EXPORT_SYMBOL_GPL(dw_hdmi_set_sample_rate);

+void dw_hdmi_set_channel_count(struct dw_hdmi *hdmi, unsigned int cnt)
+{
+	u8 layout;
+
+	mutex_lock(&hdmi->audio_mutex);
+
+	/*
+	 * For >2 channel PCM audio, we need to select layout 1
+	 * and set an appropriate channel map.
+	 */
+	if (cnt > 2)
+		layout = HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_LAYOUT1;
+	else
+		layout = HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_LAYOUT0;
+
+	hdmi_modb(hdmi, layout, HDMI_FC_AUDSCONF_AUD_PACKET_LAYOUT_MASK,
+		  HDMI_FC_AUDSCONF);
+
+	/* Set the audio infoframes channel count */
+	hdmi_modb(hdmi, (cnt - 1) << HDMI_FC_AUDICONF0_CC_OFFSET,
+		  HDMI_FC_AUDICONF0_CC_MASK, HDMI_FC_AUDICONF0);
+
+	mutex_unlock(&hdmi->audio_mutex);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_count);
+
+void dw_hdmi_set_channel_allocation(struct dw_hdmi *hdmi, unsigned int ca)
+{
+	mutex_lock(&hdmi->audio_mutex);
+
+	hdmi_writeb(hdmi, ca, HDMI_FC_AUDICONF2);
+
+	mutex_unlock(&hdmi->audio_mutex);
+}
+EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_allocation);
+
 static void hdmi_enable_audio_clk(struct dw_hdmi *hdmi, bool enable)
 {
	if (enable)

@@ -2763,6 +2799,7 @@ __dw_hdmi_probe(struct platform_device *pdev,
		struct dw_hdmi_i2s_audio_data audio;

		audio.hdmi	= hdmi;
+		audio.eld	= hdmi->connector.eld;
		audio.write	= hdmi_writeb;
		audio.read	= hdmi_readb;
		hdmi->enable_audio = dw_hdmi_i2s_audio_enable;
@@ -865,12 +865,18 @@ enum {

 /* AUD_CONF0 field values */
	HDMI_AUD_CONF0_SW_RESET = 0x80,
-	HDMI_AUD_CONF0_I2S_ALL_ENABLE = 0x2F,
+	HDMI_AUD_CONF0_I2S_SELECT = 0x20,
+	HDMI_AUD_CONF0_I2S_EN3 = 0x08,
+	HDMI_AUD_CONF0_I2S_EN2 = 0x04,
+	HDMI_AUD_CONF0_I2S_EN1 = 0x02,
+	HDMI_AUD_CONF0_I2S_EN0 = 0x01,

 /* AUD_CONF1 field values */
	HDMI_AUD_CONF1_MODE_I2S = 0x00,
-	HDMI_AUD_CONF1_MODE_RIGHT_J = 0x02,
-	HDMI_AUD_CONF1_MODE_LEFT_J = 0x04,
+	HDMI_AUD_CONF1_MODE_RIGHT_J = 0x20,
+	HDMI_AUD_CONF1_MODE_LEFT_J = 0x40,
+	HDMI_AUD_CONF1_MODE_BURST_1 = 0x60,
+	HDMI_AUD_CONF1_MODE_BURST_2 = 0x80,
	HDMI_AUD_CONF1_WIDTH_16 = 0x10,
	HDMI_AUD_CONF1_WIDTH_24 = 0x18,

@@ -938,6 +944,7 @@ enum {
	HDMI_MC_CLKDIS_PIXELCLK_DISABLE = 0x1,

 /* MC_SWRSTZ field values */
+	HDMI_MC_SWRSTZ_I2SSWRST_REQ = 0x08,
	HDMI_MC_SWRSTZ_TMDSSWRST_REQ = 0x02,

 /* MC_FLOWCTRL field values */
@@ -1312,7 +1312,7 @@ static int tc_connector_get_modes(struct drm_connector *connector)
 {
	struct tc_data *tc = connector_to_tc(connector);
	struct edid *edid;
-	unsigned int count;
+	int count;
	int ret;

	ret = tc_get_display_props(tc);

@@ -1321,11 +1321,9 @@ static int tc_connector_get_modes(struct drm_connector *connector)
		return 0;
	}

-	if (tc->panel && tc->panel->funcs && tc->panel->funcs->get_modes) {
-		count = tc->panel->funcs->get_modes(tc->panel);
-		if (count > 0)
-			return count;
-	}
+	count = drm_panel_get_modes(tc->panel);
+	if (count > 0)
+		return count;

	edid = drm_get_edid(connector, &tc->aux.ddc);
@@ -1037,7 +1037,7 @@ int drm_atomic_set_property(struct drm_atomic_state *state,
  * As a contrast, with implicit fencing the kernel keeps track of any
  * ongoing rendering, and automatically ensures that the atomic update waits
  * for any pending rendering to complete. For shared buffers represented with
- * a &struct dma_buf this is tracked in &struct reservation_object.
+ * a &struct dma_buf this is tracked in &struct dma_resv.
  * Implicit syncing is how Linux traditionally worked (e.g. DRI2/3 on X.org),
  * whereas explicit fencing is what Android wants.
  *

@@ -986,12 +986,14 @@ static const struct drm_prop_enum_list hdmi_colorspaces[] = {
  * - Kernel sends uevent with the connector id and property id through
  *   @drm_hdcp_update_content_protection, upon below kernel triggered
  *   scenarios:
- *	DESIRED -> ENABLED (authentication success)
- *	ENABLED -> DESIRED (termination of authentication)
+ *
+ *	- DESIRED -> ENABLED (authentication success)
+ *	- ENABLED -> DESIRED (termination of authentication)
  * - Please note no uevents for userspace triggered property state changes,
  *   which can't fail such as
- *	DESIRED/ENABLED -> UNDESIRED
- *	UNDESIRED -> DESIRED
+ *
+ *	- DESIRED/ENABLED -> UNDESIRED
+ *	- UNDESIRED -> DESIRED
  * - Userspace is responsible for polling the property or listen to uevents
  *   to determine when the value transitions from ENABLED to DESIRED.
  *   This signifies the link is no longer protected and userspace should

@@ -159,7 +159,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	obj->handle_count = 0;
 	obj->size = size;
-	reservation_object_init(&obj->_resv);
+	dma_resv_init(&obj->_resv);
 	if (!obj->resv)
 		obj->resv = &obj->_resv;
 
@@ -633,6 +633,9 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 
 	pagevec_init(&pvec);
 	for (i = 0; i < npages; i++) {
+		if (!pages[i])
+			continue;
+
 		if (dirty)
 			set_page_dirty(pages[i]);
 
@@ -752,7 +755,7 @@ drm_gem_object_lookup(struct drm_file *filp, u32 handle)
 EXPORT_SYMBOL(drm_gem_object_lookup);
 
 /**
- * drm_gem_reservation_object_wait - Wait on GEM object's reservation's objects
+ * drm_gem_dma_resv_wait - Wait on GEM object's reservation's objects
  * shared and/or exclusive fences.
  * @filep: DRM file private date
  * @handle: userspace handle
@@ -764,7 +767,7 @@ EXPORT_SYMBOL(drm_gem_object_lookup);
 * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
 * greater than 0 on success.
 */
-long drm_gem_reservation_object_wait(struct drm_file *filep, u32 handle,
+long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
				    bool wait_all, unsigned long timeout)
 {
	long ret;
@@ -776,7 +779,7 @@ long drm_gem_reservation_object_wait(struct drm_file *filep, u32 handle,
 		return -EINVAL;
 	}
 
-	ret = reservation_object_wait_timeout_rcu(obj->resv, wait_all,
+	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
					true, timeout);
 	if (ret == 0)
 		ret = -ETIME;
@@ -787,7 +790,7 @@ long drm_gem_reservation_object_wait(struct drm_file *filep, u32 handle,
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_reservation_object_wait);
+EXPORT_SYMBOL(drm_gem_dma_resv_wait);
 
 /**
  * drm_gem_close_ioctl - implementation of the GEM_CLOSE ioctl
@@ -953,7 +956,7 @@ drm_gem_object_release(struct drm_gem_object *obj)
 	if (obj->filp)
 		fput(obj->filp);
 
-	reservation_object_fini(&obj->_resv);
+	dma_resv_fini(&obj->_resv);
 	drm_gem_free_mmap_offset(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_release);
@@ -1288,7 +1291,7 @@ retry:
 	if (contended != -1) {
 		struct drm_gem_object *obj = objs[contended];
 
-		ret = reservation_object_lock_slow_interruptible(obj->resv,
+		ret = dma_resv_lock_slow_interruptible(obj->resv,
								 acquire_ctx);
 		if (ret) {
 			ww_acquire_done(acquire_ctx);
@@ -1300,16 +1303,16 @@ retry:
 		if (i == contended)
 			continue;
 
-		ret = reservation_object_lock_interruptible(objs[i]->resv,
+		ret = dma_resv_lock_interruptible(objs[i]->resv,
							    acquire_ctx);
 		if (ret) {
 			int j;
 
 			for (j = 0; j < i; j++)
-				reservation_object_unlock(objs[j]->resv);
+				dma_resv_unlock(objs[j]->resv);
 
 			if (contended != -1 && contended >= i)
-				reservation_object_unlock(objs[contended]->resv);
+				dma_resv_unlock(objs[contended]->resv);
 
 			if (ret == -EDEADLK) {
 				contended = i;
@@ -1334,7 +1337,7 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
 	int i;
 
 	for (i = 0; i < count; i++)
-		reservation_object_unlock(objs[i]->resv);
+		dma_resv_unlock(objs[i]->resv);
 
 	ww_acquire_fini(acquire_ctx);
 }
@@ -1410,12 +1413,12 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
 
 	if (!write) {
 		struct dma_fence *fence =
-			reservation_object_get_excl_rcu(obj->resv);
+			dma_resv_get_excl_rcu(obj->resv);
 
 		return drm_gem_fence_array_add(fence_array, fence);
 	}
 
-	ret = reservation_object_get_fences_rcu(obj->resv, NULL,
+	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
						&fence_count, &fences);
 	if (ret || !fence_count)
 		return ret;

@@ -7,7 +7,7 @@
 
 #include <linux/dma-buf.h>
 #include <linux/dma-fence.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/slab.h>
 
 #include <drm/drm_atomic.h>
@@ -294,7 +294,7 @@ int drm_gem_fb_prepare_fb(struct drm_plane *plane,
 		return 0;
 
 	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = reservation_object_get_excl_rcu(obj->resv);
+	fence = dma_resv_get_excl_rcu(obj->resv);
 	drm_atomic_set_fence_for_plane(state, fence);
 
 	return 0;

@@ -75,6 +75,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 	shmem = to_drm_gem_shmem_obj(obj);
 	mutex_init(&shmem->pages_lock);
 	mutex_init(&shmem->vmap_lock);
+	INIT_LIST_HEAD(&shmem->madv_list);
 
 	/*
 	 * Our buffers are kept pinned, so allocating them
@@ -118,11 +119,11 @@ void drm_gem_shmem_free_object(struct drm_gem_object *obj)
 	if (shmem->sgt) {
 		dma_unmap_sg(obj->dev->dev, shmem->sgt->sgl,
 			     shmem->sgt->nents, DMA_BIDIRECTIONAL);
-
-		drm_gem_shmem_put_pages(shmem);
 		sg_free_table(shmem->sgt);
 		kfree(shmem->sgt);
 	}
+	if (shmem->pages)
+		drm_gem_shmem_put_pages(shmem);
 }
 
 WARN_ON(shmem->pages_use_count);
@@ -362,6 +363,62 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 }
 EXPORT_SYMBOL(drm_gem_shmem_create_with_handle);
 
+/* Update madvise status, returns true if not purged, else
+ * false or -errno.
+ */
+int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	mutex_lock(&shmem->pages_lock);
+
+	if (shmem->madv >= 0)
+		shmem->madv = madv;
+
+	madv = shmem->madv;
+
+	mutex_unlock(&shmem->pages_lock);
+
+	return (madv >= 0);
+}
+EXPORT_SYMBOL(drm_gem_shmem_madvise);
+
+void drm_gem_shmem_purge_locked(struct drm_gem_object *obj)
+{
+	struct drm_device *dev = obj->dev;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	WARN_ON(!drm_gem_shmem_is_purgeable(shmem));
+
+	drm_gem_shmem_put_pages_locked(shmem);
+
+	shmem->madv = -1;
+
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+	drm_gem_free_mmap_offset(obj);
+
+	/* Our goal here is to return as much of the memory as
+	 * is possible back to the system as we are called from OOM.
+	 * To do this we must instruct the shmfs to drop all of its
+	 * backing pages, *now*.
+	 */
+	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
+				 0, (loff_t)-1);
+}
+EXPORT_SYMBOL(drm_gem_shmem_purge_locked);
+
+void drm_gem_shmem_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	mutex_lock(&shmem->pages_lock);
+	drm_gem_shmem_purge_locked(obj);
+	mutex_unlock(&shmem->pages_lock);
+}
+EXPORT_SYMBOL(drm_gem_shmem_purge);
+
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for

@@ -123,18 +123,110 @@ EXPORT_SYMBOL(drm_panel_attach);
  *
  * This function should not be called by the panel device itself. It
  * is only for the drm device that called drm_panel_attach().
- *
- * Return: 0 on success or a negative error code on failure.
  */
-int drm_panel_detach(struct drm_panel *panel)
+void drm_panel_detach(struct drm_panel *panel)
 {
 	panel->connector = NULL;
 	panel->drm = NULL;
-
-	return 0;
 }
 EXPORT_SYMBOL(drm_panel_detach);
 
+/**
+ * drm_panel_prepare - power on a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will enable power and deassert any reset signals to
+ * the panel. After this has completed it is possible to communicate with any
+ * integrated circuitry via a command bus.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_prepare(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->prepare)
+		return panel->funcs->prepare(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_prepare);
+
+/**
+ * drm_panel_unprepare - power off a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will completely power off a panel (assert the panel's
+ * reset, turn off power supplies, ...). After this function has completed, it
+ * is usually no longer possible to communicate with the panel until another
+ * call to drm_panel_prepare().
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_unprepare(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->unprepare)
+		return panel->funcs->unprepare(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_unprepare);
+
+/**
+ * drm_panel_enable - enable a panel
+ * @panel: DRM panel
+ *
+ * Calling this function will cause the panel display drivers to be turned on
+ * and the backlight to be enabled. Content will be visible on screen after
+ * this call completes.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_enable(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->enable)
+		return panel->funcs->enable(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_enable);
+
+/**
+ * drm_panel_disable - disable a panel
+ * @panel: DRM panel
+ *
+ * This will typically turn off the panel's backlight or disable the display
+ * drivers. For smart panels it should still be possible to communicate with
+ * the integrated circuitry via any command bus after this call.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int drm_panel_disable(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->disable)
+		return panel->funcs->disable(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_disable);
+
+/**
+ * drm_panel_get_modes - probe the available display modes of a panel
+ * @panel: DRM panel
+ *
+ * The modes probed from the panel are automatically added to the connector
+ * that the panel is attached to.
+ *
+ * Return: The number of modes available from the panel on success or a
+ * negative error code on failure.
+ */
+int drm_panel_get_modes(struct drm_panel *panel)
+{
+	if (panel && panel->funcs && panel->funcs->get_modes)
+		return panel->funcs->get_modes(panel);
+
+	return panel ? -ENOSYS : -EINVAL;
+}
+EXPORT_SYMBOL(drm_panel_get_modes);
+
 #ifdef CONFIG_OF
 /**
  * of_drm_find_panel - look up a panel using a device tree node

|
|||
/**
|
||||
* DOC: Overview
|
||||
*
|
||||
* DRM synchronisation objects (syncobj, see struct &drm_syncobj) are
|
||||
* persistent objects that contain an optional fence. The fence can be updated
|
||||
* with a new fence, or be NULL.
|
||||
*
|
||||
* syncobj's can be waited upon, where it will wait for the underlying
|
||||
* fence.
|
||||
*
|
||||
* syncobj's can be export to fd's and back, these fd's are opaque and
|
||||
* have no other use case, except passing the syncobj between processes.
|
||||
*
|
||||
* DRM synchronisation objects (syncobj, see struct &drm_syncobj) provide a
|
||||
* container for a synchronization primitive which can be used by userspace
|
||||
* to explicitly synchronize GPU commands, can be shared between userspace
|
||||
* processes, and can be shared between different DRM drivers.
|
||||
* Their primary use-case is to implement Vulkan fences and semaphores.
|
||||
* The syncobj userspace API provides ioctls for several operations:
|
||||
*
|
||||
* syncobj have a kref reference count, but also have an optional file.
|
||||
* The file is only created once the syncobj is exported.
|
||||
* The file takes a reference on the kref.
|
||||
* - Creation and destruction of syncobjs
|
||||
* - Import and export of syncobjs to/from a syncobj file descriptor
|
||||
* - Import and export a syncobj's underlying fence to/from a sync file
|
||||
* - Reset a syncobj (set its fence to NULL)
|
||||
* - Signal a syncobj (set a trivially signaled fence)
|
||||
* - Wait for a syncobj's fence to appear and be signaled
|
||||
*
|
||||
* At it's core, a syncobj is simply a wrapper around a pointer to a struct
|
||||
* &dma_fence which may be NULL.
|
||||
* When a syncobj is first created, its pointer is either NULL or a pointer
|
||||
* to an already signaled fence depending on whether the
|
||||
* &DRM_SYNCOBJ_CREATE_SIGNALED flag is passed to
|
||||
* &DRM_IOCTL_SYNCOBJ_CREATE.
|
||||
* When GPU work which signals a syncobj is enqueued in a DRM driver,
|
||||
* the syncobj fence is replaced with a fence which will be signaled by the
|
||||
* completion of that work.
|
||||
* When GPU work which waits on a syncobj is enqueued in a DRM driver, the
|
||||
* driver retrieves syncobj's current fence at the time the work is enqueued
|
||||
* waits on that fence before submitting the work to hardware.
|
||||
* If the syncobj's fence is NULL, the enqueue operation is expected to fail.
|
||||
* All manipulation of the syncobjs's fence happens in terms of the current
|
||||
* fence at the time the ioctl is called by userspace regardless of whether
|
||||
* that operation is an immediate host-side operation (signal or reset) or
|
||||
* or an operation which is enqueued in some driver queue.
|
||||
* &DRM_IOCTL_SYNCOBJ_RESET and &DRM_IOCTL_SYNCOBJ_SIGNAL can be used to
|
||||
* manipulate a syncobj from the host by resetting its pointer to NULL or
|
||||
* setting its pointer to a fence which is already signaled.
|
||||
*
|
||||
*
|
||||
* Host-side wait on syncobjs
|
||||
* --------------------------
|
||||
*
|
||||
* &DRM_IOCTL_SYNCOBJ_WAIT takes an array of syncobj handles and does a
|
||||
* host-side wait on all of the syncobj fences simultaneously.
|
||||
* If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL is set, the wait ioctl will wait on
|
||||
* all of the syncobj fences to be signaled before it returns.
|
||||
* Otherwise, it returns once at least one syncobj fence has been signaled
|
||||
* and the index of a signaled fence is written back to the client.
|
||||
*
|
||||
* Unlike the enqueued GPU work dependencies which fail if they see a NULL
|
||||
* fence in a syncobj, if &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is set,
|
||||
* the host-side wait will first wait for the syncobj to receive a non-NULL
|
||||
* fence and then wait on that fence.
|
||||
* If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is not set and any one of the
|
||||
* syncobjs in the array has a NULL fence, -EINVAL will be returned.
|
||||
* Assuming the syncobj starts off with a NULL fence, this allows a client
|
||||
* to do a host wait in one thread (or process) which waits on GPU work
|
||||
* submitted in another thread (or process) without having to manually
|
||||
* synchronize between the two.
|
||||
* This requirement is inherited from the Vulkan fence API.
|
||||
*
|
||||
*
|
||||
* Import/export of syncobjs
|
||||
* -------------------------
|
||||
*
|
||||
* &DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE and &DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD
|
||||
* provide two mechanisms for import/export of syncobjs.
|
||||
*
|
||||
* The first lets the client import or export an entire syncobj to a file
|
||||
* descriptor.
|
||||
* These fd's are opaque and have no other use case, except passing the
|
||||
* syncobj between processes.
|
||||
* All exported file descriptors and any syncobj handles created as a
|
||||
* result of importing those file descriptors own a reference to the
|
||||
* same underlying struct &drm_syncobj and the syncobj can be used
|
||||
* persistently across all the processes with which it is shared.
|
||||
* The syncobj is freed only once the last reference is dropped.
|
||||
* Unlike dma-buf, importing a syncobj creates a new handle (with its own
|
||||
* reference) for every import instead of de-duplicating.
|
||||
* The primary use-case of this persistent import/export is for shared
|
||||
* Vulkan fences and semaphores.
|
||||
*
|
||||
* The second import/export mechanism, which is indicated by
|
||||
* &DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE or
|
||||
* &DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE lets the client
|
||||
* import/export the syncobj's current fence from/to a &sync_file.
|
||||
* When a syncobj is exported to a sync file, that sync file wraps the
|
||||
* sycnobj's fence at the time of export and any later signal or reset
|
||||
* operations on the syncobj will not affect the exported sync file.
|
||||
* When a sync file is imported into a syncobj, the syncobj's fence is set
|
||||
* to the fence wrapped by that sync file.
|
||||
* Because sync files are immutable, resetting or signaling the syncobj
|
||||
* will not affect any sync files whose fences have been imported into the
|
||||
* syncobj.
|
||||
*/
|
||||
|
||||
#include <linux/anon_inodes.h>
|
||||
|
|
|
@@ -397,13 +397,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
 	}
 
 	if (op & ETNA_PREP_NOSYNC) {
-		if (!reservation_object_test_signaled_rcu(obj->resv,
+		if (!dma_resv_test_signaled_rcu(obj->resv,
							  write))
 			return -EBUSY;
 	} else {
 		unsigned long remain = etnaviv_timeout_to_jiffies(timeout);
 
-		ret = reservation_object_wait_timeout_rcu(obj->resv,
+		ret = dma_resv_wait_timeout_rcu(obj->resv,
							  write, true, remain);
 		if (ret <= 0)
 			return ret == 0 ? -ETIMEDOUT : ret;
@@ -459,8 +459,8 @@ static void etnaviv_gem_describe_fence(struct dma_fence *fence,
 static void etnaviv_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 {
 	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
-	struct reservation_object *robj = obj->resv;
-	struct reservation_object_list *fobj;
+	struct dma_resv *robj = obj->resv;
+	struct dma_resv_list *fobj;
 	struct dma_fence *fence;
 	unsigned long off = drm_vma_node_start(&obj->vma_node);

@@ -6,7 +6,7 @@
 #ifndef __ETNAVIV_GEM_H__
 #define __ETNAVIV_GEM_H__
 
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include "etnaviv_cmdbuf.h"
 #include "etnaviv_drv.h"

@@ -4,7 +4,7 @@
 */
 
 #include <linux/dma-fence-array.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/sync_file.h>
 #include "etnaviv_cmdbuf.h"
 #include "etnaviv_drv.h"
@@ -165,10 +165,10 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 
 	for (i = 0; i < submit->nr_bos; i++) {
 		struct etnaviv_gem_submit_bo *bo = &submit->bos[i];
-		struct reservation_object *robj = bo->obj->base.resv;
+		struct dma_resv *robj = bo->obj->base.resv;
 
 		if (!(bo->flags & ETNA_SUBMIT_BO_WRITE)) {
-			ret = reservation_object_reserve_shared(robj, 1);
+			ret = dma_resv_reserve_shared(robj, 1);
 			if (ret)
 				return ret;
 		}
@@ -177,13 +177,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 			continue;
 
 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = reservation_object_get_fences_rcu(robj, &bo->excl,
+			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
								&bo->nr_shared,
								&bo->shared);
 			if (ret)
 				return ret;
 		} else {
-			bo->excl = reservation_object_get_excl_rcu(robj);
+			bo->excl = dma_resv_get_excl_rcu(robj);
 		}
 
 	}
@@ -199,10 +199,10 @@ static void submit_attach_object_fences(struct etnaviv_gem_submit *submit)
 		struct drm_gem_object *obj = &submit->bos[i].obj->base;
 
 		if (submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE)
-			reservation_object_add_excl_fence(obj->resv,
+			dma_resv_add_excl_fence(obj->resv,
							  submit->out_fence);
 		else
-			reservation_object_add_shared_fence(obj->resv,
+			dma_resv_add_shared_fence(obj->resv,
							    submit->out_fence);
 
 		submit_unlock_object(submit, i);

@@ -65,17 +65,9 @@ static const struct drm_connector_funcs fsl_dcu_drm_connector_funcs = {
 static int fsl_dcu_drm_connector_get_modes(struct drm_connector *connector)
 {
 	struct fsl_dcu_drm_connector *fsl_connector;
-	int (*get_modes)(struct drm_panel *panel);
-	int num_modes = 0;
 
 	fsl_connector = to_fsl_dcu_connector(connector);
-	if (fsl_connector->panel && fsl_connector->panel->funcs &&
-	    fsl_connector->panel->funcs->get_modes) {
-		get_modes = fsl_connector->panel->funcs->get_modes;
-		num_modes = get_modes(fsl_connector->panel);
-	}
 
-	return num_modes;
+	return drm_panel_get_modes(fsl_connector->panel);
 }
 
 static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector,

@@ -13,10 +13,10 @@
 #include <sound/asoundef.h>
 #include <sound/hdmi-codec.h>
 
-#include <drm/drmP.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_edid.h>
 #include <drm/drm_of.h>
+#include <drm/drm_print.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/i2c/tda998x.h>

@@ -29,7 +29,7 @@
 #include <linux/intel-iommu.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/slab.h>
 #include <linux/vgaarb.h>
 
@@ -14431,7 +14431,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
 	if (ret < 0)
 		return ret;
 
-	fence = reservation_object_get_excl_rcu(obj->base.resv);
+	fence = dma_resv_get_excl_rcu(obj->base.resv);
 	if (fence) {
 		add_rps_boost_after_vblank(new_state->crtc, fence);
 		dma_fence_put(fence);

@@ -82,7 +82,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 {
 	struct drm_i915_gem_busy *args = data;
 	struct drm_i915_gem_object *obj;
-	struct reservation_object_list *list;
+	struct dma_resv_list *list;
 	unsigned int seq;
 	int err;
 
@@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	 * Alternatively, we can trade that extra information on read/write
 	 * activity with
 	 *	args->busy =
-	 *		!reservation_object_test_signaled_rcu(obj->resv, true);
+	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
 	 * to report the overall busyness. This is what the wait-ioctl does.
 	 *
 	 */

@@ -147,7 +147,7 @@ bool i915_gem_clflush_object(struct drm_i915_gem_object *obj,
						true, I915_FENCE_TIMEOUT,
						I915_FENCE_GFP);
 
-	reservation_object_add_excl_fence(obj->base.resv,
+	dma_resv_add_excl_fence(obj->base.resv,
					  &clflush->dma);
 
 	i915_sw_fence_commit(&clflush->wait);

@@ -287,7 +287,7 @@ int i915_gem_schedule_fill_pages_blt(struct drm_i915_gem_object *obj,
 	if (err < 0) {
 		dma_fence_set_error(&work->dma, err);
 	} else {
-		reservation_object_add_excl_fence(obj->base.resv, &work->dma);
+		dma_resv_add_excl_fence(obj->base.resv, &work->dma);
 		err = 0;
 	}
 	i915_gem_object_unlock(obj);

@@ -6,7 +6,7 @@
 
 #include <linux/dma-buf.h>
 #include <linux/highmem.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 
 #include "i915_drv.h"
 #include "i915_gem_object.h"

@@ -5,7 +5,7 @@
 */
 
 #include <linux/intel-iommu.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/sync_file.h>
 #include <linux/uaccess.h>
 
@@ -1242,7 +1242,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
 		goto skip_request;
 
 	i915_vma_lock(batch);
-	GEM_BUG_ON(!reservation_object_test_signaled_rcu(batch->resv, true));
+	GEM_BUG_ON(!dma_resv_test_signaled_rcu(batch->resv, true));
 	err = i915_vma_move_to_active(batch, rq, 0);
 	i915_vma_unlock(batch);
 	if (err)
@@ -1313,7 +1313,7 @@ relocate_entry(struct i915_vma *vma,
 
 	if (!eb->reloc_cache.vaddr &&
 	    (DBG_FORCE_RELOC == FORCE_GPU_RELOC ||
-	     !reservation_object_test_signaled_rcu(vma->resv, true))) {
+	     !dma_resv_test_signaled_rcu(vma->resv, true))) {
 		const unsigned int gen = eb->reloc_cache.gen;
 		unsigned int len;
 		u32 *batch;

@@ -78,7 +78,7 @@ i915_gem_object_lock_fence(struct drm_i915_gem_object *obj)
		       I915_FENCE_GFP) < 0)
 		goto err;
 
-	reservation_object_add_excl_fence(obj->base.resv, &stub->dma);
+	dma_resv_add_excl_fence(obj->base.resv, &stub->dma);
 
 	return &stub->dma;
 

@@ -152,7 +152,7 @@ static void __i915_gem_free_object_rcu(struct rcu_head *head)
 		container_of(head, typeof(*obj), rcu);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 
-	reservation_object_fini(&obj->base._resv);
+	dma_resv_fini(&obj->base._resv);
 	i915_gem_object_free(obj);
 
 	GEM_BUG_ON(!atomic_read(&i915->mm.free_count));

@@ -99,22 +99,22 @@ i915_gem_object_put(struct drm_i915_gem_object *obj)
 	__drm_gem_object_put(&obj->base);
 }
 
-#define assert_object_held(obj) reservation_object_assert_held((obj)->base.resv)
+#define assert_object_held(obj) dma_resv_assert_held((obj)->base.resv)
 
 static inline void i915_gem_object_lock(struct drm_i915_gem_object *obj)
 {
-	reservation_object_lock(obj->base.resv, NULL);
+	dma_resv_lock(obj->base.resv, NULL);
 }
 
 static inline int
 i915_gem_object_lock_interruptible(struct drm_i915_gem_object *obj)
 {
-	return reservation_object_lock_interruptible(obj->base.resv, NULL);
+	return dma_resv_lock_interruptible(obj->base.resv, NULL);
 }
 
 static inline void i915_gem_object_unlock(struct drm_i915_gem_object *obj)
 {
-	reservation_object_unlock(obj->base.resv);
+	dma_resv_unlock(obj->base.resv);
 }
 
 struct dma_fence *
@@ -367,7 +367,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = reservation_object_get_excl_rcu(obj->base.resv);
+	fence = dma_resv_get_excl_rcu(obj->base.resv);
 	rcu_read_unlock();
 
 	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))

@@ -31,7 +31,7 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
 }
 
 static long
-i915_gem_object_wait_reservation(struct reservation_object *resv,
+i915_gem_object_wait_reservation(struct dma_resv *resv,
				 unsigned int flags,
				 long timeout)
 {
@@ -43,7 +43,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
 		unsigned int count, i;
 		int ret;
 
-		ret = reservation_object_get_fences_rcu(resv,
+		ret = dma_resv_get_fences_rcu(resv,
							&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -72,7 +72,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
 		 */
 		prune_fences = count && timeout >= 0;
 	} else {
-		excl = reservation_object_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_rcu(resv);
 	}
 
 	if (excl && timeout >= 0)
@@ -84,10 +84,10 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
 	 * Opportunistically prune the fences iff we know they have *all* been
 	 * signaled.
 	 */
-	if (prune_fences && reservation_object_trylock(resv)) {
-		if (reservation_object_test_signaled_rcu(resv, true))
-			reservation_object_add_excl_fence(resv, NULL);
-		reservation_object_unlock(resv);
+	if (prune_fences && dma_resv_trylock(resv)) {
+		if (dma_resv_test_signaled_rcu(resv, true))
+			dma_resv_add_excl_fence(resv, NULL);
+		dma_resv_unlock(resv);
 	}
 
 	return timeout;
@@ -140,7 +140,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		unsigned int count, i;
 		int ret;
 
-		ret = reservation_object_get_fences_rcu(obj->base.resv,
+		ret = dma_resv_get_fences_rcu(obj->base.resv,
							&excl, &count, &shared);
 		if (ret)
 			return ret;
@@ -152,7 +152,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_rcu(obj->base.resv);
 	}
 
 	if (excl) {

@@ -112,18 +112,18 @@ __dma_fence_signal__timestamp(struct dma_fence *fence, ktime_t timestamp)
 }
 
 static void
-__dma_fence_signal__notify(struct dma_fence *fence)
+__dma_fence_signal__notify(struct dma_fence *fence,
+			   const struct list_head *list)
 {
 	struct dma_fence_cb *cur, *tmp;
 
 	lockdep_assert_held(fence->lock);
 	lockdep_assert_irqs_disabled();
 
-	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
+	list_for_each_entry_safe(cur, tmp, list, node) {
 		INIT_LIST_HEAD(&cur->node);
 		cur->func(fence, cur);
 	}
-	INIT_LIST_HEAD(&fence->cb_list);
 }
 
 void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
@@ -185,11 +185,12 @@ void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
 	list_for_each_safe(pos, next, &signal) {
 		struct i915_request *rq =
			list_entry(pos, typeof(*rq), signal_link);
-
-		__dma_fence_signal__timestamp(&rq->fence, timestamp);
+		struct list_head cb_list;
 
 		spin_lock(&rq->lock);
-		__dma_fence_signal__notify(&rq->fence);
+		list_replace(&rq->fence.cb_list, &cb_list);
+		__dma_fence_signal__timestamp(&rq->fence, timestamp);
+		__dma_fence_signal__notify(&rq->fence, &cb_list);
 		spin_unlock(&rq->lock);
 
 		i915_request_put(rq);

@@ -43,7 +43,7 @@
 #include <linux/mm_types.h>
 #include <linux/perf_event.h>
 #include <linux/pm_qos.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/shmem_fs.h>
 #include <linux/stackdepot.h>
@@ -29,7 +29,7 @@
 #include <drm/i915_drm.h>
 #include <linux/dma-fence-array.h>
 #include <linux/kthread.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/stop_machine.h>
@@ -94,10 +94,10 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 	list = &pool->cache_list[n];

 	list_for_each_entry(obj, list, batch_pool_link) {
-		struct reservation_object *resv = obj->base.resv;
+		struct dma_resv *resv = obj->base.resv;

 		/* The batches are strictly LRU ordered */
-		if (!reservation_object_test_signaled_rcu(resv, true))
+		if (!dma_resv_test_signaled_rcu(resv, true))
 			break;

 		/*

@@ -109,9 +109,9 @@ i915_gem_batch_pool_get(struct i915_gem_batch_pool *pool,
 		 * than replace the existing fence.
 		 */
 		if (rcu_access_pointer(resv->fence)) {
-			reservation_object_lock(resv, NULL);
-			reservation_object_add_excl_fence(resv, NULL);
-			reservation_object_unlock(resv);
+			dma_resv_lock(resv, NULL);
+			dma_resv_add_excl_fence(resv, NULL);
+			dma_resv_unlock(resv);
 		}

 		if (obj->base.size >= size)
@@ -1038,7 +1038,7 @@ i915_request_await_object(struct i915_request *to,
 		struct dma_fence **shared;
 		unsigned int count, i;

-		ret = reservation_object_get_fences_rcu(obj->base.resv,
+		ret = dma_resv_get_fences_rcu(obj->base.resv,
 						&excl, &count, &shared);
 		if (ret)
 			return ret;

@@ -1055,7 +1055,7 @@ i915_request_await_object(struct i915_request *to,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_rcu(obj->base.resv);
 	}

 	if (excl) {
@@ -7,7 +7,7 @@
 #include <linux/slab.h>
 #include <linux/dma-fence.h>
 #include <linux/irq_work.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>

 #include "i915_sw_fence.h"
 #include "i915_selftest.h"

@@ -510,7 +510,7 @@ int __i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
 }

 int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
-				    struct reservation_object *resv,
+				    struct dma_resv *resv,
 				    const struct dma_fence_ops *exclude,
 				    bool write,
 				    unsigned long timeout,

@@ -526,7 +526,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 		struct dma_fence **shared;
 		unsigned int count, i;

-		ret = reservation_object_get_fences_rcu(resv,
+		ret = dma_resv_get_fences_rcu(resv,
 						&excl, &count, &shared);
 		if (ret)
 			return ret;

@@ -551,7 +551,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = reservation_object_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_rcu(resv);
 	}

 	if (ret >= 0 && excl && excl->ops != exclude) {
@@ -16,7 +16,7 @@
 #include <linux/wait.h>

 struct completion;
-struct reservation_object;
+struct dma_resv;

 struct i915_sw_fence {
 	wait_queue_head_t wait;

@@ -82,7 +82,7 @@ int i915_sw_fence_await_dma_fence(struct i915_sw_fence *fence,
 				  gfp_t gfp);

 int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
-				    struct reservation_object *resv,
+				    struct dma_resv *resv,
 				    const struct dma_fence_ops *exclude,
 				    bool write,
 				    unsigned long timeout,
@@ -890,7 +890,7 @@ static void export_fence(struct i915_vma *vma,
 			 struct i915_request *rq,
 			 unsigned int flags)
 {
-	struct reservation_object *resv = vma->resv;
+	struct dma_resv *resv = vma->resv;

 	/*
 	 * Ignore errors from failing to allocate the new fence, we can't

@@ -898,9 +898,9 @@ static void export_fence(struct i915_vma *vma,
 	 * synchronisation leading to rendering corruption.
 	 */
 	if (flags & EXEC_OBJECT_WRITE)
-		reservation_object_add_excl_fence(resv, &rq->fence);
-	else if (reservation_object_reserve_shared(resv, 1) == 0)
-		reservation_object_add_shared_fence(resv, &rq->fence);
+		dma_resv_add_excl_fence(resv, &rq->fence);
+	else if (dma_resv_reserve_shared(resv, 1) == 0)
+		dma_resv_add_shared_fence(resv, &rq->fence);
 }

 int i915_vma_move_to_active(struct i915_vma *vma,
@@ -55,7 +55,7 @@ struct i915_vma {
 	struct i915_address_space *vm;
 	const struct i915_vma_ops *ops;
 	struct i915_fence_reg *fence;
-	struct reservation_object *resv; /** Alias of obj->resv */
+	struct dma_resv *resv; /** Alias of obj->resv */
 	struct sg_table *pages;
 	void __iomem *iomap;
 	void *private; /* owned by creator */

@@ -299,16 +299,16 @@ void i915_vma_close(struct i915_vma *vma);
 void i915_vma_reopen(struct i915_vma *vma);
 void i915_vma_destroy(struct i915_vma *vma);

-#define assert_vma_held(vma)	reservation_object_assert_held((vma)->resv)
+#define assert_vma_held(vma)	dma_resv_assert_held((vma)->resv)

 static inline void i915_vma_lock(struct i915_vma *vma)
 {
-	reservation_object_lock(vma->resv, NULL);
+	dma_resv_lock(vma->resv, NULL);
 }

 static inline void i915_vma_unlock(struct i915_vma *vma)
 {
-	reservation_object_unlock(vma->resv);
+	dma_resv_unlock(vma->resv);
 }

 int __i915_vma_do_pin(struct i915_vma *vma,
@@ -124,14 +124,11 @@ static void imx_ldb_ch_set_bus_format(struct imx_ldb_channel *imx_ldb_ch,
 static int imx_ldb_connector_get_modes(struct drm_connector *connector)
 {
 	struct imx_ldb_channel *imx_ldb_ch = con_to_imx_ldb_ch(connector);
-	int num_modes = 0;
+	int num_modes;

-	if (imx_ldb_ch->panel && imx_ldb_ch->panel->funcs &&
-	    imx_ldb_ch->panel->funcs->get_modes) {
-		num_modes = imx_ldb_ch->panel->funcs->get_modes(imx_ldb_ch->panel);
-		if (num_modes > 0)
-			return num_modes;
-	}
+	num_modes = drm_panel_get_modes(imx_ldb_ch->panel);
+	if (num_modes > 0)
+		return num_modes;

 	if (!imx_ldb_ch->edid && imx_ldb_ch->ddc)
 		imx_ldb_ch->edid = drm_get_edid(connector, imx_ldb_ch->ddc);
@@ -47,14 +47,11 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
 {
 	struct imx_parallel_display *imxpd = con_to_imxpd(connector);
 	struct device_node *np = imxpd->dev->of_node;
-	int num_modes = 0;
+	int num_modes;

-	if (imxpd->panel && imxpd->panel->funcs &&
-	    imxpd->panel->funcs->get_modes) {
-		num_modes = imxpd->panel->funcs->get_modes(imxpd->panel);
-		if (num_modes > 0)
-			return num_modes;
-	}
+	num_modes = drm_panel_get_modes(imxpd->panel);
+	if (num_modes > 0)
+		return num_modes;

 	if (imxpd->edid) {
 		drm_connector_update_edid_property(connector, imxpd->edid);
@@ -136,7 +136,7 @@ static int lima_gem_sync_bo(struct lima_sched_task *task, struct lima_bo *bo,
 	int err = 0;

 	if (!write) {
-		err = reservation_object_reserve_shared(bo->gem.resv, 1);
+		err = dma_resv_reserve_shared(bo->gem.resv, 1);
 		if (err)
 			return err;
 	}

@@ -296,9 +296,9 @@ int lima_gem_submit(struct drm_file *file, struct lima_submit *submit)

 	for (i = 0; i < submit->nr_bos; i++) {
 		if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE)
-			reservation_object_add_excl_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_excl_fence(bos[i]->gem.resv, fence);
 		else
-			reservation_object_add_shared_fence(bos[i]->gem.resv, fence);
+			dma_resv_add_shared_fence(bos[i]->gem.resv, fence);
 	}

 	lima_gem_unlock_bos(bos, submit->nr_bos, &ctx);

@@ -341,7 +341,7 @@ int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns)

 	timeout = drm_timeout_abs_to_jiffies(timeout_ns);

-	ret = drm_gem_reservation_object_wait(file, handle, write, timeout);
+	ret = drm_gem_dma_resv_wait(file, handle, write, timeout);
 	if (ret == 0)
 		ret = timeout ? -ETIMEDOUT : -EBUSY;
@@ -4,7 +4,7 @@
  */

 #include <linux/dma-buf.h>
-#include <linux/reservation.h>
+#include <linux/dma-resv.h>

 #include <drm/drm_modeset_helper.h>
 #include <drm/drm_fb_helper.h>
@@ -265,11 +265,11 @@ static void meson_crtc_enable_vd1(struct meson_drm *priv)

 static void meson_g12a_crtc_enable_vd1(struct meson_drm *priv)
 {
-	writel_relaxed(((1 << 16) | /* post bld premult*/
-			(1 << 8) | /* post src */
-			(1 << 4) | /* pre bld premult*/
-			(1 << 0)),
-			priv->io_base + _REG(VD1_BLEND_SRC_CTRL));
+	writel_relaxed(VD_BLEND_PREBLD_SRC_VD1 |
+		       VD_BLEND_PREBLD_PREMULT_EN |
+		       VD_BLEND_POSTBLD_SRC_VD1 |
+		       VD_BLEND_POSTBLD_PREMULT_EN,
+		       priv->io_base + _REG(VD1_BLEND_SRC_CTRL));
 }

 void meson_crtc_irq(struct meson_drm *priv)

@@ -487,7 +487,12 @@ void meson_crtc_irq(struct meson_drm *priv)
 		writel_relaxed(priv->viu.vd1_range_map_cr,
 			       priv->io_base + meson_crtc->viu_offset +
 			       _REG(VD1_IF0_RANGE_MAP_CR));
-		writel_relaxed(0x78404,
+		writel_relaxed(VPP_VSC_BANK_LENGTH(4) |
+			       VPP_HSC_BANK_LENGTH(4) |
+			       VPP_SC_VD_EN_ENABLE |
+			       VPP_SC_TOP_EN_ENABLE |
+			       VPP_SC_HSC_EN_ENABLE |
+			       VPP_SC_VSC_EN_ENABLE,
 			       priv->io_base + _REG(VPP_SC_MISC));
 		writel_relaxed(priv->viu.vpp_pic_in_height,
 			       priv->io_base + _REG(VPP_PIC_IN_HEIGHT));
@@ -140,10 +140,28 @@ static struct regmap_config meson_regmap_config = {

 static void meson_vpu_init(struct meson_drm *priv)
 {
-	writel_relaxed(0x210000, priv->io_base + _REG(VPU_RDARB_MODE_L1C1));
-	writel_relaxed(0x10000, priv->io_base + _REG(VPU_RDARB_MODE_L1C2));
-	writel_relaxed(0x900000, priv->io_base + _REG(VPU_RDARB_MODE_L2C1));
-	writel_relaxed(0x20000, priv->io_base + _REG(VPU_WRARB_MODE_L2C1));
+	u32 value;
+
+	/*
+	 * Slave dc0 and dc5 connected to master port 1.
+	 * By default other slaves are connected to master port 0.
+	 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(0, 1) |
+		VPU_RDARB_SLAVE_TO_MASTER_PORT(5, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L1C1));
+
+	/* Slave dc0 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(0, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L1C2));
+
+	/* Slave dc4 and dc7 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(4, 1) |
+		VPU_RDARB_SLAVE_TO_MASTER_PORT(7, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_RDARB_MODE_L2C1));
+
+	/* Slave dc1 connected to master port 1 */
+	value = VPU_RDARB_SLAVE_TO_MASTER_PORT(1, 1);
+	writel_relaxed(value, priv->io_base + _REG(VPU_WRARB_MODE_L2C1));
 }

 static void meson_remove_framebuffers(void)
@@ -429,6 +429,8 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data,
 	/* Enable internal pixclk, tmds_clk, spdif_clk, i2s_clk, cecclk */
 	dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL,
 			       0x3, 0x3);
+
+	/* Enable cec_clk and hdcp22_tmdsclk_en */
+	dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL,
+			       0x3 << 4, 0x3 << 4);